
Install Windows Server 2012 with GUI on a Vastspace SSD Cloud Server in just under 4 minutes

If the article on “Cloud Server with SSD vs Cloud Server with spinning drives” isn’t enough to convince you of the superior read and write performance an SSD Cloud server offers, check out this video.

Installing Windows Server 2012 with a GUI in just under 4 minutes is near impossible on conventional spinning hard drives.

Cloud Server with SSD vs Cloud Server with spinning drives

We have been talking a lot about our new Cloud server with SSD and its performance. Today, we want to benchmark and compare cloud servers with spinning drives against those with SSDs.

Vastspace SSD Cloud Server nodes use only enterprise SSD drives, ensuring fast and consistent command response times as well as protection against data loss and corruption.

We have previously run read and write tests comparing our Cloud SSD VPS against a popular SSD VPS. Today, we are testing two otherwise identical Cloud servers, one with SSDs and one with RAID 10 15,000 rpm SAS drives.
Each test Cloud Server comes with 2 CPU cores, 2 GB of memory and 20 GB of disk space.

Both test servers are installed with CentOS 6.5 x64 and hosted in the Vastspace Singapore Data Center.

The result is clear: the SSD Cloud server beat the Cloud server with spinning drives hands down; even in RAID 10, the 15K rpm SAS drives are still slower than the solid state drives, especially in write speed.

Cloud server vs SSD Cloud Server
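If you want to get a rough feel for the difference on your own server, a simple sequential read and write test can be run with dd. This is only a quick sketch, not the exact benchmark we used; the file path and sizes are examples, and the read test needs root to drop the page cache:

# sequential write test, 1 GB, bypassing the page cache
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct

# drop caches, then sequential read test
echo 3 > /proc/sys/vm/drop_caches
dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct

# clean up the test file
rm -f /tmp/ddtest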





Corrupted or blank emails on SmarterMail

I have been dealing with SmarterMail for many years, probably since v3. The most common issue I have experienced with SmarterMail is blank and corrupted emails. I’m sure I’m not the only one, and if you Google it you will find many similar reports.

I do not have a definitive explanation, but I have observed that this happens on IMAP accounts and only on “busy” machines; it is less likely to happen on more powerful servers with good I/O and on less populated servers. Nonetheless, if this is happening to you, try the following.

The blank items indicate that the user’s mailbox.cfg file for their inbox has become corrupt. This can be corrected by performing the following steps:

1. Stop the SmarterMail service and ensure mailservice.exe terminates successfully.
2. Navigate to the user’s inbox folder, for example C:\SmarterMail\Domains\domain.com\Users\UserID\Mail\Inbox
3. Remove the mailbox.cfg file.
4. Start the SmarterMail service.
5. Sign in as the user, and there should no longer be any blank items.
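For reference, steps 1 to 4 can also be done from an elevated command prompt. This is just a sketch; the service display name and the mail path are assumptions based on a default installation, so verify them on your own server first:

REM stop the SmarterMail service (name assumed; list services with: sc query state= all)
net stop "SmarterMail Service"

REM remove the corrupted mailbox.cfg for the affected user's Inbox
del "C:\SmarterMail\Domains\domain.com\Users\UserID\Mail\Inbox\mailbox.cfg"

REM start the service again
net start "SmarterMail Service"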


VPS with Ploop

To understand the benefits of having ploop on an OpenVZ container (Linux VPS), we first need to know the limitations of the traditional file system approach on a VPS.

  • Since containers live on one and the same file system, they all share common properties of that file system (its type, block size, and other options). That means we cannot configure the above properties on a per-container basis.
  • One such property that deserves a special mention in this list is the file system journal. While a journal is a good thing to have, because it helps maintain file system integrity and improves reboot times (by eliminating fsck in many cases), it is also a bottleneck for containers. If one container fills up the in-memory journal (with lots of small operations leading to file metadata updates, e.g. file truncates), all the other containers’ I/O will block waiting for the journal to be written to disk. In some extreme cases we saw up to 15 seconds of such blockage.
  • Since many containers share the same file system with limited space, in order to limit containers’ disk space we had to develop per-directory disk quotas (i.e. vzquota).
  • Since many containers share the same file system, and the number of inodes on a file system is limited [for most file systems], vzquota should also be able to limit inodes on a per container (per directory) basis.
  • In order for in-container (aka second-level) disk quota (i.e. standard per-user and per-group UNIX disk quota) to work, we had to provide a dummy file system called simfs. Its sole purpose is to have a superblock which is needed for disk quota to work.
  • When doing a live migration without some sort of shared storage (like NAS or SAN), we sync the files to a destination system using rsync, which does an exact copy of all files, except that their i-node numbers on disk will change. If there are some apps that rely on files’ i-node numbers being constant (which is normally the case), those apps will not survive the migration.
  • Finally, a container backup or snapshot is harder to do because there are a lot of small files that need to be copied.


In order to address the above problems, OpenVZ decided to implement a container-in-a-file technology, not unlike what various VM products use, but working as efficiently as all the other container bits and pieces in OpenVZ.

The main idea of ploop is to have an image file, use it as a block device, and create and use a file system on that device. Some readers will recognize that this is exactly what the Linux loop device does! Right, the only difference is that the loop device is very inefficient (for example, using it leads to double caching of data in memory) and its functionality is very limited.
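As a quick illustration of that idea (outside of any container), the ploop userspace tool can create an image file, expose it as a block device and mount a file system from it. A minimal sketch, with example sizes and paths, assuming the ploop tools are installed:

# create a 2 GB ploop image with an ext4 file system inside
mkdir -p /vz/ploop-test && cd /vz/ploop-test
ploop init -s 2g -t ext4 image.hdd

# mount it via the generated DiskDescriptor.xml
mkdir -p /mnt/ploop-test
ploop mount -m /mnt/ploop-test DiskDescriptor.xml

# ... use /mnt/ploop-test like any other file system, then unmount
ploop umount DiskDescriptor.xml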


Having a separate file system per container, as ploop provides, addresses the above problems:

  • The file system journal is no longer a bottleneck
  • Large-size image files I/O instead of lots of small-size files I/O on management operations
  • Disk space quota can be implemented based on virtual device sizes; no need for per-directory quotas
  • Number of inodes doesn’t have to be limited because this is not a shared resource anymore (each CT has its own file system)
  • Live backup is easy and consistent
  • Live migration is reliable and efficient
  • Different containers may use file systems of different types and properties

In addition:

  • Efficient container creation
  • [Potential] support for QCOW2 and other image formats
  • Support for different storage types


This article is extracted from: https://openvz.org/Ploop/Why

Outlook 2013 IMAP Sync Troubles

Recently, I have been experiencing IMAP sync troubles with my Outlook 2013. Many articles point to updates KB2837618 and KB2837643, released by Microsoft on 12 Nov 2013, as the likely cause of the hiccups. However, I don’t recommend uninstalling these updates, as they add some IMAP improvements and you’ll have a better IMAP experience with them installed.

I personally recommend these methods.
The first, and I reckon the most effective, method is to set Inbox as the IMAP root folder.
To do this, go to File, Account Settings and double click on your affected IMAP account. Click the “More Settings” button, then switch to the “Advanced” tab.
In the Root folder path field, type Inbox. Exit the dialog and perform a send and receive.

However, there are 2 things you need to take note of:
a. If the Inbox is not your root folder, setting Inbox as the root won’t work, as it will hide all of the other folders.
b. Rules and alerts that reference specific folders will be disabled with errors, because those locations no longer exist.

If that doesn’t work, try disabling the option to show only subscribed IMAP folders.
Right click on the IMAP Inbox folder and choose IMAP Folders. At the bottom of the dialog is a checkbox for “When displaying hierarchy in Outlook, show only the subscribed folders”. Remove the check, then close the dialog to return to Outlook and click Send and Receive.

Is it necessary to run fsck after every 180 days or 38 mounts?

By default, a fsck is forced after 38 mounts or 180 days.

To avoid issues such as this, we recommend scheduling fsck to run a basic weekly check on your server to identify and flag errors. Doing so can prevent an unwanted, forced fsck from running at an inconvenient time. You can then plan for a time at which a full system fsck is run.

This filesystem will be automatically checked every 38 mounts or
 180 days, whichever comes first. Use tune2fs -c or -i to override.
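If you would rather keep automatic checks but change how often they run, tune2fs can inspect and adjust these thresholds. The device below is the one from the fstab example further down; substitute your own:

# show the current mount count and check interval
tune2fs -l /dev/mapper/vg_centos-lv_root | grep -iE 'mount count|check'

# check every 60 mounts or every 6 months, whichever comes first
tune2fs -c 60 -i 6m /dev/mapper/vg_centos-lv_root

# or disable both count-based and time-based checks entirely
tune2fs -c 0 -i 0 /dev/mapper/vg_centos-lv_root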

Alternatively, you can stop fsck from being triggered automatically by editing /etc/fstab and changing the last digit (the fsck pass number) for the mounted volume from 1 to zero “0”:

/dev/mapper/vg_centos-lv_root /       ext4    defaults        1 0

Why is iSCSI Multipath on Cloud Server and Storage important?

By now, you’ve heard from a hundred different sources that moving your operations to the cloud is better, saves you money and enables all the incredible things that can be done in the cloud but not on conventional hosting infrastructure. Frankly, many web hosts jumped on the cloud bandwagon with a small investment and little knowledge.

Getting a cloud infrastructure connected to the internet and having a resilient cloud infrastructure are entirely different matters. I’ve heard of many cases of VMs failing due to missing drives and data corruption, which has been identified as one of the common shortfalls in connectivity redundancy between servers and storage.

The main purpose of multipath connectivity is to provide redundant access to the storage devices when one or more of the components in a path fail. Another advantage of multipathing is increased throughput by way of load balancing. A common example of the use of multipathing is an iSCSI SAN connected storage device. You get redundancy and maximum performance, and it is an especially important feature to deploy in a more heavily populated cloud environment.
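On a typical Linux initiator this is handled by open-iscsi together with device-mapper multipath. A minimal sketch for CentOS/RHEL, assuming the storage array exposes the target on two portal addresses (the IPs below are placeholders):

# discover and log in to the target over both storage paths
iscsiadm -m discovery -t sendtargets -p 10.0.1.10
iscsiadm -m discovery -t sendtargets -p 10.0.2.10
iscsiadm -m node --login

# enable device-mapper multipath and verify both paths are active
mpathconf --enable --with_multipathd y
multipath -ll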

How To Set Up Master Slave Replication in MySQL

Setting up Master and Slave replication might sound complicated, but it’s pretty straightforward. You first need two running MySQL instances and to decide which is your master and which is your slave.

On your Master, open the MySQL configuration file, vi /etc/my.cnf, and add:
 server-id = 1
 log-bin = mysql-bin
 log-bin-index = mysql-bin.index
 expire-logs-days = 10
 max-binlog-size = 100M
 binlog-do-db = newdatabase

Restart MySQL: service mysqld restart

A new SQL user needs to be created on the master:

create user 'mysqlslave'@'%';
 grant replication slave on *.* to 'mysqlslave'@'%' identified by 'NEWPASSWORD';
 flush privileges;

Next, we need to extract some information on the Master MySQL with this command:
show master status\G

You will probably see something similar
*************************** 1. row ***************************
File: mysql-bin.000001
Position: 199
Binlog_Do_DB: newdatabase
1 row in set (0.00 sec)

Go to the Slave MySQL server.
Open up my.cnf and insert the slave settings:
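A minimal slave configuration, assuming you are replicating the same database as on the master, would typically look like this; the only hard requirement is that server-id differs from the master’s:

 server-id = 2
 relay-log = mysql-relay-bin
 relay-log-index = mysql-relay-bin.index
 replicate-do-db = newdatabase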


Save, then restart MySQL: service mysqld restart

Now enter MySQL on the slave as root and run the following (master_host is a placeholder for your master’s address; the log file and position come from the show master status output above):

change master to master_host='MASTER_SERVER_IP', master_user='mysqlslave', master_password='NEWPASSWORD', master_log_file='mysql-bin.000001', master_log_pos=199;
 start slave;

Now, check status with
 show slave status\G

Look for errors, if any. The most common mistake is a wrong master_log_pos if you have cut and pasted my command from this tutorial 😛
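In a healthy setup, the key fields of the show slave status output typically look like this (exact values will differ on your servers):

 Slave_IO_Running: Yes
 Slave_SQL_Running: Yes
 Seconds_Behind_Master: 0
 Last_Error: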

Congratulations, you have completed the Master and Slave MySQL replication setup.

“Screen” is very useful for working with multiple shell windows on a single SSH session

The first thing I would tell a junior engineer is: learn how to use “screen”. It is an extremely important tool that lets us run scripts or commands in their own virtual windows within the terminal, essentially giving us a multi-tasking terminal environment where we can switch between windows, or between users, at will. These are useful features that may help you in your daily administration tasks.

  • Use multiple shell windows from a single SSH session.
  • Keep a shell active even through network disruptions.
  • Disconnect and re-connect to a shell session from multiple locations.
  • Run a long running process without maintaining an active shell session.


If you do not have screen, then you can install it easily from an RPM or the package file for your system. For example, on CentOS you can install screen with yum: yum install screen

Basic but useful commands with screen
List a particular user’s screen sessions:

screen -list username/

(it’s important to include the trailing forward slash)

List your own active screen sessions:

 screen -ls

Re-attach to a user’s shared screen session:

screen -x username/shared-session

Start a screen session and give it a unique name:

screen -S desired_name

Detach from a running screen session leaving it running in the background:
Hit the key combination: Ctrl + a, then d
Re-attach to a specific screen you’ve named:

 screen -R "the screen name to be re-attached"

Power detach a screen that you are logged into from another location:
This is helpful if you’ve been accidentally disconnected from ssh while in a remote screen session and it’s still attached.

screen -D "the screen name to be detached"
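Putting it together, a typical workflow looks something like this; the session name and the long-running script are just examples:

# start a named session and kick off a long job inside it
screen -S nightly-backup
./run-backup.sh

# detach with Ctrl-a d, log out, then later re-attach by name from anywhere
screen -R nightly-backup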