For SmarterMail users, upgrade to Version 13.3

If you are running an outdated copy of SmarterMail, then in view of two of the vulnerabilities found in earlier versions I would suggest getting the latest copy and moving up to 13.3.5535. You can download the latest build here: http://smartertools.com/smartermail/mail-server-download.aspx.

Just in case you have forgotten the steps on how to “properly” upgrade SmarterMail, here they are (a command-line sketch of the service steps follows the list). Please make sure you have a backup before proceeding.

  1. Stop the IIS WWW Publishing Service or the SmarterMail web service.
  2. Uninstall SmarterMail without removing the existing folders or files.
  3. Install the latest copy of SmarterMail.
  4. Once the installation completes, start the SmarterMail web service or the IIS WWW Publishing Service.
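For those who prefer doing this from an elevated command prompt, here is a minimal sketch of the stop/start portion of the steps above. W3SVC is the standard service name for the IIS WWW Publishing Service; confirm it in services.msc before running anything, and skip it entirely if SmarterMail runs on its own built-in web server:

rem stop the IIS WWW Publishing Service (skip if SmarterMail uses its own web server)
net stop W3SVC
rem ... uninstall SmarterMail without removing folders/files, then install the new build ...
rem start IIS again once the installation has completed
net start W3SVC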

 

Wait for a minute or so, then sign in to the admin portal to make sure everything is working. Sometimes it can take a little longer to start up if you have a slower server and many mailboxes. Just be patient; do not attempt to restart your SmarterMail service unless it has stopped for some reason.

 

  • ADDED: Updated administrative logging to include the friendly name of the event that was fired in addition to its ID number.
  • FIXED: A temporary disk error when reading an account’s userConfig.xml file will no longer result in the user’s settings being reset to the defaults, including a blank password.
  • FIXED: A user with read-only control of a shared calendar can no longer delete instances of a recurring event.
  • FIXED: A zero byte fileStore.xml file will no longer prevent SmarterMail from starting properly.
  • FIXED: Adding a calendar event using Android’s default calendar app with Exchange ActiveSync now syncs correctly.
  • FIXED: Adding a recurring event that occurs on a specific week of each month now syncs correctly using Exchange ActiveSync.
  • FIXED: Adding a task using Outlook 2013 with Exchange ActiveSync now syncs correctly.
  • FIXED: Adding duplicate entries to trusted senders is no longer allowed.
  • FIXED: Availability conflicts are now calculated correctly when adding or editing a new calendar event in webmail.
  • FIXED: Birth dates set on iOS devices using Exchange ActiveSync now sync correctly.
  • FIXED: Changing an event’s start time that includes a domain resource now properly updates the availability of that domain resource.
  • FIXED: Contacts imported from a CSV file that include only white space in certain imported fields are now saved properly, such that they can be successfully synced with Exchange ActiveSync.
  • FIXED: Creating a calendar and immediately deleting an event using the Mac OSX calendar app with Exchange Web Services now syncs correctly.
  • FIXED: Declude spam weights now save correctly.
  • FIXED: Domain resource availability is now calculated properly when determining scheduling conflicts.
  • FIXED: Editing a password brute force or denial of service abuse detection rule for XMPP now correctly sets the service field to XMPP.
  • FIXED: Email folders that contain special characters are now sorted correctly in webmail.
  • FIXED: Exchange ActiveSync responses will no longer send an empty Exceptions tag, which would cause Outlook 2013 to crash.
  • FIXED: Folders with special characters in their name now sync correctly using Exchange ActiveSync.
  • FIXED: Made changes to how folder renaming is handled to prevent a scenario that could cause mailbox corruption.
  • FIXED: Renaming a folder that contains special characters using Exchange ActiveSync no longer causes an error in webmail when trying to view that folder.
  • FIXED: Setting a contact’s birth date on a client synced using CardDAV will no longer save as one day off for users in time zones with positive offsets from GMT.
  • FIXED: Temporary files created during Exchange ActiveSync SmartForward, SmartReply and other email attachment operations are now immediately cleaned up when no longer needed.
  • FIXED: The number of items sent back per Exchange ActiveSync response is now correctly determined using the WindowSize specified by the client.
  • SECURITY: Resolved an XSS vulnerability related to replying to an email.
  • SECURITY: Resolved an XSS vulnerability related to viewing email.

Speed Up WordPress with Cloudflare

One weakness of WordPress is that it is usually very slow. Vastspace’s website is built with WordPress and has many plugins installed that rely on jQuery files and CSS style sheets, which hurt loading time. The result is poor website performance grades with test tools like the Pingdom Website Speed Test and Google PageSpeed Insights.

We could end up with a very sluggish site that will not only be a hassle for repeat visitors, but will almost certainly lose you subscribers and customers due to the impatient nature of web users. Also, don’t forget that customers visit you from different geographical locations.

Think about this: someone just gave you a good referral with a link, and yet you are doing both of you a disservice by having a slow-loading site that nobody wants to wait around for. If your site takes longer than 10 seconds to load, most people will leave, lost before you even had the chance to convince them to stick around and give your website a glance.

On top of that, many SEO experts claim that a site’s speed affects its rankings in search engines. If your site is slow, you are not only losing visitors to impatience, but you are also losing them through reduced rankings in search engines.

On WordPress we have tried plugins like WP Super Cache and W3 Total Cache; load time improved, but the result was still below satisfactory. We barely passed the 50/100 mark with both the Pingdom Website Speed Test and Google PageSpeed Insights. The load time was also much longer because Vastspace’s server is located in a Singapore data center, quite a distance from the test locations.

 Cloudflare makes your site faster

Unlike a traditional CDN, CloudFlare is a combination of a web application firewall, a distributed proxy server, and a content delivery network (CDN). It optimizes your website by acting as a proxy between visitors and your server, which also helps protect your website against DDoS attacks.

Unlike many CDN services, CloudFlare does not charge for bandwidth usage, on the basis that if your site suddenly gets popular or suffers an attack, you shouldn’t have to dread your bandwidth bill. According to CloudFlare, on average a website using the CDN will load twice as fast, use 60 per cent less bandwidth, have 65 per cent fewer requests, and be more secure thanks to the web application firewall. CloudFlare operates out of 28 data centers around the world and uses a technology called Anycast to route your visitors to the nearest data center.

And most importantly, Cloudflare is free (https://www.cloudflare.com/plans). However, Vastspace uses Cloudflare PRO for real-time statistics and additional page rules.

With Cloudflare, Vastspace’s website scores 85/100 on the Pingdom speed test from 6 different locations and 87/100 on Google PageSpeed Insights. Despite the slower load time caused by the Revolution Slider plugin on the front page, we are extremely happy with the result.

 

 

Cloud Server with SSD vs spinning drives

We have been talking a lot about our new Cloud Server with SSD and its performance. Today, we want to compare and benchmark cloud servers with spinning drives against those with SSDs.

Vastspace SSD Cloud Server nodes use only enterprise SSD drives, ensuring fast and consistent command response times as well as protection against data loss and corruption.

We have previously run read and write tests pitting our Cloud SSD VPS against a popular SSD VPS. Today, we are testing two otherwise identical Cloud Servers, one with SSDs and one with RAID 10 15,000 rpm SAS drives.
The test Cloud Servers come with 2 CPU cores, 2 GB of memory and 20 GB of disk space.

Both test servers are installed with CentOS 6.5 x64 and hosted in Vastspace Singapore Data Center.
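For readers who want to reproduce a similar comparison, here is a minimal sketch of a simple sequential write and read check of the kind used for such comparisons; the test-file location and device path are examples, and it assumes the standard dd and hdparm tools are available:

# sequential write: 1 GB of zeros, flushed to disk before dd reports the rate
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
# buffered and cached read speeds of the underlying block device
hdparm -tT /dev/sda
# remove the test file afterwards
rm -f /tmp/ddtest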

The result is clear: the SSD Cloud Server beat the Cloud Server with spinning drives hands down; even a RAID 10 array of 15K rpm SAS drives is still slower in write speed than the solid-state drives.

Corrupted or blank emails on SmarterMail

I have been dealing with SmarterMail for many years, probably since V3. A common issue I have experienced with SmarterMail is blank and corrupted emails. I’m sure I’m not the only one; if you Google it, there are many similar reports.

I do not have an obvious explanation, but I have observed that this happens on IMAP accounts and only on “busy” machines; it is less likely on more powerful servers with good I/O and on less populated servers. Nonetheless, if this is happening to you, try the following.

The blank items indicate that the user’s mailbox.cfg file for their inbox has become corrupt. This can be corrected by performing the following steps (a command-line sketch follows the list):

1. Stop the SmarterMail service and ensure mailservice.exe terminates successfully.
2. Navigate to the user’s inbox folder, for example C:\SmarterMail\Domains\domain.com\Users\UserID\Mail\Inbox
3. Remove the mailbox.cfg file.
4. Start the SmarterMail service.
5. Sign in as the user, and there should no longer be any blank messages.
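The same fix from an elevated command prompt, as a minimal sketch. The service name “SmarterMail Service” and the example path are assumptions based on a default install; adjust them to your actual domain, user and installation path:

rem stop the mail service and make sure mailservice.exe has exited (check Task Manager)
net stop "SmarterMail Service"
rem delete the corrupted mailbox index for the affected inbox
del "C:\SmarterMail\Domains\domain.com\Users\UserID\Mail\Inbox\mailbox.cfg"
rem start the mail service again
net start "SmarterMail Service"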

 

VPS with Ploop

To understand the benefits of having ploop on an OpenVZ container (Linux VPS), we need to know the limitations of the traditional file system layout on a VPS.

  • Since containers live on one and the same file system, they all share the common properties of that file system (its type, block size, and other options). That means we cannot configure these properties on a per-container basis.
  • One such property that deserves a special item in this list is the file system journal. While a journal is a good thing to have, because it helps maintain file system integrity and improves reboot times (by eliminating fsck in many cases), it is also a bottleneck for containers. If one container fills up the in-memory journal (with lots of small operations leading to file metadata updates, e.g. file truncates), all the other containers’ I/O will block waiting for the journal to be written to disk. In some extreme cases we saw up to 15 seconds of such blockage.
  • Since many containers share the same file system with limited space, in order to limit each container’s disk space we had to develop per-directory disk quotas (i.e. vzquota).
  • Since many containers share the same file system, and the number of inodes on a file system is limited (for most file systems), vzquota must also be able to limit inodes on a per-container (per-directory) basis.
  • In order for in-container (aka second-level) disk quotas (i.e. standard per-user and per-group UNIX disk quotas) to work, we had to provide a dummy file system called simfs. Its sole purpose is to have a superblock, which is needed for disk quota to work.
  • When doing a live migration without some sort of shared storage (like NAS or SAN), we sync the files to the destination system using rsync, which makes an exact copy of all files, except that their i-node numbers on disk will change. If some apps rely on files’ i-node numbers staying constant (which is normally the case), those apps will not survive the migration.
  • Finally, a container backup or snapshot is harder to do because there are a lot of small files that need to be copied.

 

In order to address the above problems, OpenVZ decided to implement a container-in-a-file technology, not unlike what various VM products use, but working as efficiently as all the other container bits and pieces in OpenVZ.

The main idea of ploop is to have an image file, use it as a block device, and create and use a file system on that device. Some readers will recognize that this is exactly what the Linux loop device does! Right; the only issue is that the loop device is very inefficient (for example, using it leads to double caching of data in memory) and its functionality is very limited.

Benefits

  • File system journal is no longer a bottleneck
  • Large-size image files I/O instead of lots of small-size files I/O on management operations
  • Disk space quota can be implemented based on virtual device sizes; no need for per-directory quotas
  • Number of inodes doesn’t have to be limited because this is not a shared resource anymore (each CT has its own file system)
  • Live backup is easy and consistent
  • Live migration is reliable and efficient
  • Different containers may use file systems of different types and properties

In addition:

  • Efficient container creation
  • [Potential] support for QCOW2 and other image formats
  • Support for different storage types
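To make this concrete, here is a minimal sketch of creating and resizing a ploop-backed container with vzctl. The container ID, OS template name and disk sizes are examples, and it assumes a ploop-capable OpenVZ kernel and tools:

# create a container whose root file system lives inside a ploop image file
vzctl create 101 --ostemplate centos-6-x86_64 --layout ploop --diskspace 20G
vzctl start 101
# grow this container's virtual disk later without affecting any other container
vzctl set 101 --diskspace 30G --save
# reclaim unused blocks inside the ploop image
vzctl compact 101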

 

This article is extracted from: https://openvz.org/Ploop/Why

Outlook 2013 IMAP Sync Troubles

Recently, I have been experiencing IMAP sync troubles with my Outlook 2013. Many articles point to updates KB2837618 and KB2837643, released by Microsoft on 12 Nov 2013, as the likely cause of the hiccups. However, I don’t recommend uninstalling these updates, as they add some IMAP improvements and you’ll have a better IMAP experience with them installed.

I personally recommend these methods.
The first, and I reckon the most effective, method is to add Inbox to the IMAP root folder path.
To do this, go to File, Account Settings and double-click your affected IMAP account. Click the “More Settings” button, then switch to the “Advanced” tab.
In the “Root folder path” field, type Inbox. Exit the dialog and perform a send and receive.

However, there are two things you need to take note of:
a. If the Inbox is not your root folder, setting Inbox as the root won’t work, as it will hide all of the other folders.
b. Rules and alerts that reference specific folders will be disabled with errors, because those locations no longer exist.

If that doesn’t work, try disabling the option to show only subscribed IMAP folders.
Right-click the IMAP Inbox folder and choose IMAP Folders. At the bottom of the dialog is a checkbox labelled “When displaying hierarchy in Outlook, show only the subscribed folders.” Remove the check, then close the dialog to return to Outlook. Click Send and Receive.

Is it necessary to run fsck?

By default, a fsck is forced after 38 mounts or 180 days.

To avoid such issues, we recommend scheduling fsck to run a basic weekly check on your server to identify and flag errors. Doing so can prevent an unwanted, forced fsck from running at an inconvenient time. You can then plan a time at which a full system fsck is run.

This filesystem will be automatically checked every 38 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

Alternatively, you can stop fsck from being triggered automatically at boot by editing /etc/fstab and changing the last digit of the mounted volume’s entry to zero “0” instead of 1.

/dev/mapper/vg_centos-lv_root /       ext4    defaults        1 0
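Instead of (or in addition to) the fstab change, the check thresholds themselves can be adjusted with tune2fs. A minimal sketch for an ext3/ext4 volume; the device path follows the fstab example above:

# show the current mount-count and check-interval settings
tune2fs -l /dev/mapper/vg_centos-lv_root | grep -iE 'mount count|check'
# disable the mount-count and time-based automatic checks entirely
tune2fs -c 0 -i 0 /dev/mapper/vg_centos-lv_root
# or set your own thresholds, e.g. every 60 mounts or 6 months
tune2fs -c 60 -i 6m /dev/mapper/vg_centos-lv_root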

Why ISCSI Multipath?

By now, you’ve heard from a hundred different sources that moving your operations to the cloud is better, saves you money, and enables all sorts of incredible things that conventional hosting infrastructure cannot do. Frankly, many web hosts jumped on the cloud bandwagon with a small investment and little knowledge.

Getting a cloud infrastructure connected to the internet and having a resilient cloud infrastructure are entirely different matters. I’ve heard of many cases of VMs failing due to missing drives and data corruption, and a lack of redundancy in the connectivity from servers to storage has been identified as one of the common shortfalls.

The main purpose of multipath connectivity is to provide redundant access to storage devices when one or more components in a path fail. Another advantage of multipathing is increased throughput by way of load balancing. A common example of multipathing in use is an iSCSI SAN-connected storage device. You get both redundancy and maximum performance, which is an especially important feature to deploy in a more heavily populated cloud environment.
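As an illustration, here is a minimal sketch of bringing up two iSCSI paths to the same LUN on a Linux host and verifying that the multipath layer sees both. The portal IP addresses are examples, and it assumes the standard iscsi-initiator-utils and device-mapper-multipath packages are installed and multipathd is running:

# discover the target through each storage network portal (one per fabric/VLAN)
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m discovery -t sendtargets -p 10.0.1.10
# log in to every discovered node, creating one session per path
iscsiadm -m node --login
# confirm that multipathd has grouped both paths under a single multipath device
multipath -ll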