Which type of VPS?


Many customers have asked which type of VPS to choose and what the differences are. The purpose of this post is to help you make the right choice. It is not that Vastspace dropped OpenVZ and then started praising KVM: we dropped OpenVZ based on demand, and because it is difficult to manage two types of virtualization at the same time, we chose KVM instead.

We are not saying OpenVZ is bad. Honestly, it has many advantages for hosting vendors like us. Providers can put more instances on one node compared to KVM, because OpenVZ shares the host's files and kernel. Theoretically you use less space, since space is allocated dynamically: even if you are given 40 GB, OpenVZ only counts the space you actually consume, not what has been allocated.

Because OpenVZ shares the node's kernel, you cannot run or update a kernel of your own. In other words, you cannot apply a kernel bug fix or security patch yourself; you have to wait until the virtualization distribution releases an update and the whole node is patched, rather than updating individually.

OpenVZ can let a container use memory that does not belong to it and has not been allocated to it. That is to say, if you are allocated 1 GB of RAM you might be able to use more. Looked at from another angle, you are stealing someone else's RAM, and what happens when someone steals from you? This only happens with unused RAM, but on many occasions so much RAM is taken from the node that the entire server freezes from over-allocation. Every VPS hosted on that server is then affected by this poor management, and this does not happen with KVM. That said, virtualization has improved a great deal, and burstable memory can now also be offered on a KVM VPS.

Some applications require a non-shared kernel. For example, a real-time anti-virus, which is essential in today's cyber world, can only be installed on a KVM VPS and not on an OpenVZ VPS because of the shared kernel.

In the real world, CPUs are shared on OpenVZ: CPU time is dynamically shared among the client machines. You could call it burstable, unlike KVM, where you can only use what is allocated to you. This is what allows more instances to be hosted on a node; in other words, it is cheaper per instance on OpenVZ.
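
If you are unsure which type of VPS you are already running on, a quick check from inside the guest usually tells you. Below is a minimal sketch in Python, assuming a Linux guest; the /proc path and the systemd-detect-virt tool are common but may not exist on every distribution.

    import os
    import subprocess

    def detect_virtualization():
        # OpenVZ containers expose host-managed resource counters under /proc
        if os.path.exists("/proc/user_beancounters"):
            return "openvz"
        # On systemd-based distributions this prints "kvm", "lxc", "none", etc.
        try:
            result = subprocess.run(["systemd-detect-virt"],
                                    capture_output=True, text=True)
            return result.stdout.strip() or "unknown"
        except FileNotFoundError:
            return "unknown"

    print(detect_virtualization())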

This explains the differences between OpenVZ and KVM virtualization. We hope the article helps you make a better choice when choosing a VPS.

cPanel 11.52 on LXC


Today, I’ve installed cPanel 11.52 on LXC. LXC, known as Linux Containers, is a lightweight virtualization technology. Containers are more like an enhanced chroot than full virtualization such as QEMU or VMware: they do not emulate hardware, and they share the host's operating system kernel. Linux-VServer and OpenVZ are two pre-existing, independently developed implementations of container-like functionality for Linux.

Vastspace has no plan to launch LXC any time soon, in spite of the benefits and performance gains over OpenVZ. In case you want to try it yourself, these are the recommendations from cPanel.

To run cPanel & WHM inside an LXC container, cPanel strongly recommends that you use the following settings:

Host

We strongly recommend that you use Red Hat® Enterprise Linux (RHEL) 7, CloudLinux™ 7, or CentOS 7 as your LXC host. This ensures the best compatibility with cPanel & WHM. While other Linux distributions may work, they require that the system administrator performs additional steps, which we do not support.

Guest

We strongly recommend that your LXC containers use CentOS, RHEL, or CloudLinux 6 as the guest. A CentOS, RHEL, or CloudLinux 7 installation requires additional steps to use it as the guest.

Privileged vs unprivileged containers

cPanel & WHM functions in both privileged and unprivileged containers. We strongly recommend that you run cPanel & WHM in a privileged container, because it expects unrestricted access to the system.

The following limitations are inherent to an unprivileged container:

  • The host operating system treats the root user as a non-root user.
  • You cannot raise the hard limit of a process if you previously lowered it. This action could cause EasyApache 3 to fail.
  • Subtle behavior differences may occur.

Required changes for CentOS 7, RHEL 7, or CloudLinux 7

You must make the following configuration changes to run cPanel & WHM inside an LXC container:

  1. After you create the LXC container, change the lxc.include line in the lxc.conf file to the following line:
    lxc.include = /usr/share/lxc/config/fedora.common.conf
  2. Edit the lxc.conf file so that it does not drop the setfcap and setpcap capabilities. To do this, comment out the following lines:
    # lxc.cap.drop = setpcap
    # lxc.cap.drop = setfcap
  3. If your system uses AppArmor, you must uncomment the following line in the lxc.conf file:
    lxc.aa_profile = unconfined

 

[Screenshot: cPanel running on a Linux Container]

 

VPS ran out of space, are you informed?

Many users are tied up in their day-to-day routines. It is difficult for them to find time to check disk usage on their VPS every day, until one day they realize the server has stopped working, the website is down and emails are not being sent.

Putting the customer at the heart of our business is our objective at Vastspace. To help customers save time and ensure good uptime for their VPS, our monitoring system collects daily disk usage statistics from each VPS. Engineers identify any VPS that has consumed 90% of its total disk storage and inform the customer in a timely manner.
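
If you want to run the same kind of check yourself between our alerts, the following is a minimal sketch in Python 3 (the 90% threshold matches the figure above; the path to monitor is an example).

    import shutil

    THRESHOLD = 0.90  # alert once 90% of the volume is used

    def check_disk(path="/"):
        usage = shutil.disk_usage(path)
        used_ratio = usage.used / usage.total
        if used_ratio >= THRESHOLD:
            print(f"WARNING: {path} is {used_ratio:.0%} full "
                  f"({usage.used // 2**30} GiB of {usage.total // 2**30} GiB)")
        return used_ratio

    check_disk("/")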

SmarterMail 14.x is here

I urge all SmarterMail users to upgrade their current SmarterMail server to version 14.0.5637. One of the significant improvements is the newly compiled 64-bit ClamAV, which is more efficient and has lower system overhead, replacing the outdated and less efficient 32-bit ClamAV.

 

Version 14.0.5637 (2015-06-08)

Key Features
  • Users can now generate a temporary address with a short life span that operates as an alias to their account. This allows sign-ups to external services without giving out the account’s real email address.
  • Message Sniffer is now available as an antispam add-on.
  • Multiple calendars can now be added to a single account.
  • Option to mark a domain as external and have messages to that domain sent either to the domain’s MX record or to a specified host address.
  • System administrators can now add customized HTML and change the login header text through settings on the General Settings page. They may also allow domain administrators to override these customizations.
  • Deleting email folders in either an email client or webmail will now have their associated folder on disk removed as well. Orphaned folders from earlier releases of SmarterMail will be cleaned up automatically.
Changes
  • IMPORTANT: SmarterMail 14 now requires Microsoft .NET 4.5. This prevents SmarterMail from running on Windows Server 2003.
  • Added: An instance of a recurring calendar event can now be deleted from the context menu.
  • Added: An option for external domains on whether messages should deliver locally or remotely if the user account exists locally.
  • Added: An option to toggle between overlaying multiple selected calendars, contacts, tasks or notes collections in a combined view versus displaying one at a time.
  • Added: CalDAV now supports syncing multiple calendars.
  • Added: Contacts, Tasks and Notes now allow multiple collections to be viewed at the same time.
  • Added: Domain conference rooms can now be selected to view on the calendar page.
  • Added: Dropbox is now available as a connected service allowing links to Dropbox files in email messages.
  • Added: Editing a recurring event now displays the series instance’s start and end dates.
  • Added: Exchange Web Services now supports syncing multiple calendars.
  • Added: Grids now support multi-selection on Apple Mac browsers by holding down the command key.
  • Added: IMAP authentication now supports Cram-MD5.
  • Added: Microsoft OneDrive is now available as a connected service allowing links to OneDrive files in email messages.
  • Added: Migrating Google calendars now migrates all calendars from a Google account.
  • Added: Multiple calendars can now be synced using Exchange ActiveSync.
  • Added: Multiple calendars can now be viewed together in a combined view, which color codes events from the different calendars.
  • Added: SMTP Accounts has been added to the Features tab in domain settings, allowing them to be enabled or disabled per domain.
  • Added: System administrators can now customize the messages sent for certain automated emails.
  • Added: Tasks can now be imported from Gmail.
  • Added: The My Today Page now displays appointments for all calendars in a user’s account.
  • Added: The reminders popup now displays items for all calendars in a user’s account.
  • Added: Unsubscribe links for mailing lists can now be given friendly text instead of just displaying the unsubscribe URL.
  • Changed: Blocked senders will now block on the From address in the header of the message, in addition to the Mail From address given in the SMTP session. Previously it only blocked on the Mail From address of the SMTP session.
  • Changed: Content filtering now decodes base64 and quoted-printable encoded text parts in email messages before applying filters.
  • Changed: Improved the ClamAV definitions update process, including 64 bit support and ClamSup.
  • Changed: Migrating the same Google calendar twice will now overwrite the previously migrated events instead of creating duplicates.
  • Changed: Time zone information now utilizes the built-in system registry time zone information instead of an external file.
  • Efficiency: Deleting a large number of items from the IP blacklist or whitelist is now much faster.
  • Efficiency: The load time of the monthly calendar view is now much faster.
  • Fixed: A recipient address formatted with a quoted username containing a certain sequence of characters will no longer cause high CPU during the SMTP session.
  • Fixed: Changed how recurring calendars with a recurrence count of zero are transmitted via Exchange Web Services to work around an error when syncing with emClient.
  • Fixed: Gmail email migration now functions correctly when one or more Gmail labels contain characters that are not allowed in Windows folder names.
  • Fixed: IMAP search now handles search commands with multiple levels of parenthesized lists correctly.
  • Fixed: Messages displayed in the mobile interface now wrap text when the length of a line exceeds the width of the display.
  • Fixed: Migrating contacts from Gmail now functions correctly.
  • Fixed: The action “Send VCard(s)” now functions correctly for the Global Address List.
  • Fixed: The date and time input fields can no longer be edited when viewing a read-only appointment.
  • Fixed: Two scenarios within the mobile interface where downloading an attachment could fail.
  • Removed: All day appointments no longer display times in webmail.
  • Removed: The five-ten RBL check is no longer included as part of the default RBL checks for either spam filtering or server blacklist checks.
What is IP Reputation Protection?


IP Reputation Protection monitors your IP reputation and DNSBL status. DNSBLs and RBLs are generally used on mail servers to reject or flag messages sent from sites that have been blacklisted. If your mail server has been added to a DNSBL's database, emails you send are likely to be rejected or identified as SPAM.

Our IP Reputation Protection system queries the major DNS-based Blackhole List databases and SenderBase, one of the world's largest email and web traffic monitoring networks, and processes the results to send alerts to our support team for immediate action. We help customers identify the root cause, contact the various DNSBL agencies to request removal, and mitigate the impact of emails being returned to sender because of a blacklisting.
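
For illustration, a DNSBL lookup is simply a DNS query built from the reversed IP address. The sketch below uses zen.spamhaus.org as an example zone (our system queries many databases, not just this one); 127.0.0.2 is the standard test address that every DNSBL lists.

    import socket

    def is_blacklisted(ip, zone="zen.spamhaus.org"):
        # DNSBLs are queried by reversing the IPv4 octets and appending the zone
        reversed_ip = ".".join(reversed(ip.split(".")))
        query = f"{reversed_ip}.{zone}"
        try:
            socket.gethostbyname(query)   # an A record answer means "listed"
            return True
        except socket.gaierror:           # NXDOMAIN means "not listed"
            return False

    print(is_blacklisted("127.0.0.2"))    # the standard DNSBL test entry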

In the event your mail server is blacklisted, we usually take less than an hour to restore your mail service with IP Reputation Protection.

Install Windows Server 2012 with GUI on a Vastspace SSD Cloud Server in just under 4 minutes

If the article on "Cloud Server with SSD vs Cloud Server with spinning drives" isn't enough to convince you of the superior read and write performance an SSD Cloud server offers, check out this video.

Installing Windows Server 2012 with GUI in just under 4 minutes is near impossible with conventional spinning hard drives.

Cloud Server with SSD vs Cloud Server with spinning drives


We have been talking a lot about our new Cloud server with SSD and its performance. Today we want to benchmark cloud servers with SSDs against cloud servers with spinning drives.

Vastspace SSD Cloud Server nodes use only enterprise SSD drives, ensuring fast and consistent command response times as well as protection against data loss and corruption.

We have previously run read and write tests comparing our Cloud SSD VPS against a popular SSD VPS. Today we are testing two otherwise identical Cloud servers, one backed by SSDs and the other by RAID 10 15,000 rpm SAS drives. Each test Cloud Server has 2 CPU cores, 2 GB of memory and 20 GB of disk space.

Both test servers are installed with CentOS 6.5 x64 and hosted in Vastspace's Singapore data center.
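
The exact tool used for the tests is not stated here, so the following is only a rough sketch, in Python, of a sequential write test of the kind involved; a real benchmark would normally use a dedicated tool.

    import os
    import time

    def write_test(path="testfile.bin", size_mb=512, block_kb=1024):
        block = b"\0" * (block_kb * 1024)
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(size_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())      # make sure the data actually reaches the disk
        elapsed = time.time() - start
        os.remove(path)
        return size_mb / elapsed      # MB/s

    print(f"Sequential write: {write_test():.1f} MB/s")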

The result is obvious: the SSD Cloud server beats the Cloud server with spinning drives hands down; even in RAID 10, the 15K rpm SAS drives are still slower than the solid state drives in write speed.

[Benchmark chart: Cloud server vs SSD Cloud Server]

VPS with Ploop

To understand the benefits of having ploop on an OpenVZ container (Linux VPS), we need to know the limitations of the traditional file system approach on a VPS.

  • Since containers live on the same file system, they all share the common properties of that file system (its type, block size, and other options). That means these properties cannot be configured on a per-container basis.
  • One such property that deserves a special item in this list is the file system journal. While a journal is a good thing to have, because it helps maintain file system integrity and improves reboot times (by eliminating fsck in many cases), it is also a bottleneck for containers. If one container fills up the in-memory journal (with lots of small operations leading to file metadata updates, e.g. file truncates), all the other containers' I/O will block waiting for the journal to be written to disk. In some extreme cases we saw up to 15 seconds of such blockage.
  • Since many containers share the same file system with limited space, per-directory disk quotas (i.e. vzquota) had to be developed in order to limit each container's disk space.
  • Since many containers share the same file system, and the number of inodes on a file system is limited (for most file systems), vzquota must also be able to limit inodes on a per-container (per-directory) basis.
  • In order for in-container (aka second-level) disk quota (i.e. standard per-user and per-group UNIX disk quota) to work, a dummy file system called simfs had to be provided. Its sole purpose is to have a superblock, which is needed for disk quota to work.
  • When doing a live migration without some sort of shared storage (like NAS or SAN), the files are synced to the destination system using rsync, which makes an exact copy of all files, except that their inode numbers on disk will change. Apps that rely on files' inode numbers being constant (which is normally the case) will not survive the migration.
  • Finally, a container backup or snapshot is harder to do because there are a lot of small files that need to be copied.

 

In order to address the above problems, OpenVZ decided to implement a container-in-a-file technology, not unlike what various VM products use, but working as efficiently as all the other container bits and pieces in OpenVZ.

The main idea of ploop is to have an image file, use it as a block device, and create and use a file system on that device. Some readers will recognize that this is exactly what the Linux loop device does! Right, except that the loop device is very inefficient (for example, using it leads to double caching of data in memory) and its functionality is very limited.
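
As a rough illustration of the "file system inside an image file" idea, the sketch below walks through the plain loop-device version of it (ploop itself uses its own kernel driver and image format and is not set up this way). It needs root, and the device name and paths are only examples.

    import os
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    os.makedirs("/mnt/ct", exist_ok=True)
    run(["dd", "if=/dev/zero", "of=/var/tmp/ct.img", "bs=1M", "count=1024"])  # 1 GB image file
    run(["losetup", "/dev/loop0", "/var/tmp/ct.img"])  # expose the file as a block device
    run(["mkfs.ext4", "/dev/loop0"])                   # create a file system on that device
    run(["mount", "/dev/loop0", "/mnt/ct"])            # mount it like an ordinary disk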

Benefits

  • The file system journal is no longer a bottleneck
  • Large-size image files I/O instead of lots of small-size files I/O on management operations
  • Disk space quota can be implemented based on virtual device sizes; no need for per-directory quotas
  • Number of inodes doesn’t have to be limited because this is not a shared resource anymore (each CT has its own file system)
  • Live backup is easy and consistent
  • Live migration is reliable and efficient
  • Different containers may use file systems of different types and properties

In addition:

  • Efficient container creation
  • [Potential] support for QCOW2 and other image formats
  • Support for different storage types

 

This article is extracted from: https://openvz.org/Ploop/Why

Is it necessary to run fsck after every 180 days or 38 mounts?

By default, a fsck is forced after 38 mounts or 180 days.

To avoid this, we recommend scheduling a basic weekly fsck check on your server to identify and flag errors. Doing so can prevent an unwanted, forced fsck from running at an inconvenient time, and you can then plan a maintenance window in which to run a full system fsck.

This filesystem will be automatically checked every 38 mounts or
 180 days, whichever comes first. Use tune2fs -c or -i to override.
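
To see the current counters before deciding, something along these lines can be used (a minimal sketch; it needs root, and the device name is only an example):

    import subprocess

    def fsck_schedule(device="/dev/sda1"):
        # tune2fs -l prints the superblock fields, including the fsck schedule
        output = subprocess.run(["tune2fs", "-l", device],
                                capture_output=True, text=True, check=True).stdout
        for line in output.splitlines():
            if line.startswith(("Mount count", "Maximum mount count", "Check interval")):
                print(line)

    fsck_schedule("/dev/sda1")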

Alternatively, you can stop fsck from being triggered automatically at boot by editing fstab and changing the last field of the volume's entry to zero ("0") instead of 1.

/dev/mapper/vg_centos-lv_root /       ext4    defaults        1 0

Why is iSCSI Multipath on Cloud Server and Storage important?

By now, you’ve heard from a hundred different sources that moving your operations to the cloud is better, saves you money, and enables incredible things that cannot be done on conventional hosting infrastructure. Frankly, many web hosts have jumped on the cloud bandwagon with a small investment and little knowledge.

Getting a cloud infrastructure connected to the internet and having a resilient cloud infrastructure are entirely different matters. I've heard of many cases of VMs failing due to missing drives and data corruption, which is commonly traced to a shortfall in connectivity redundancy between the servers and the storage.

The main purpose of multipath connectivity is to provide redundant access to the storage devices when one or more components in a path fail. Another advantage of multipathing is increased throughput by way of load balancing. A common example of multipathing is an iSCSI SAN-connected storage device. You get both redundancy and maximum performance, an especially important feature to deploy in a more heavily populated cloud environment.
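
As a simple sanity check that more than one path to a LUN is actually active, the sketch below counts the path lines reported by the multipath tools (a rough sketch; it assumes device-mapper-multipath is installed and needs root).

    import subprocess

    def count_paths():
        # "multipath -ll" prints the topology; each path appears as an sdX device line
        output = subprocess.run(["multipath", "-ll"],
                                capture_output=True, text=True).stdout
        return sum(1 for line in output.splitlines() if " sd" in line)

    print(f"{count_paths()} path(s) reported; fewer than 2 means no redundancy")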