cPanel 11.52 on LXC

Today, I’ve installed cPanel 11.52 on LXC. LXC, short for Linux Containers, is a lightweight virtualization technology. It is closer to an enhanced chroot than to full virtualization like QEMU or VMware: containers do not emulate hardware, and they share the host’s operating system kernel. Linux-VServer and OpenVZ are two pre-existing, independently developed implementations of container-like functionality for Linux.
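If you have never worked with LXC, the basic lifecycle looks something like the sketch below. This is a minimal example assuming the LXC userspace tools and the centos template are installed; the container name cpanel01 is just an illustration:

    lxc-create -n cpanel01 -t centos   # create a CentOS container from the template
    lxc-start -n cpanel01 -d           # start the container in the background
    lxc-attach -n cpanel01             # open a shell inside the running container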

Vastspace has no plans to launch LXC any time soon, in spite of its benefits and performance gains over OpenVZ. In case you want to try it out yourself, here is the recommendation from cPanel.

To run cPanel & WHM inside an LXC container, cPanel strongly recommends the following settings:


We strongly recommend that you use Red Hat® Enterprise Linux (RHEL) 7, CloudLinux™ 7, or CentOS 7 as your LXC host. This ensures the best compatibility with cPanel & WHM. While other Linux distributions may work, they require that the system administrator perform additional steps, which we do not support.


We strongly recommend that your LXC containers use CentOS, RHEL, or CloudLinux 6 as the guest. A CentOS, RHEL, or CloudLinux 7 installation requires additional steps to use it as the guest.

Privileged vs unprivileged containers

cPanel & WHM functions in both privileged and unprivileged containers. We strongly recommend that you run cPanel & WHM in a privileged container, because it expects unrestricted access to the system.

The following limitations are inherent to an unprivileged container:

  • The host operating system treats the root user as a non-root user.
  • You cannot raise the hard limit of a process if you previously lowered it. This action could cause EasyApache 3 to fail.
  • Subtle behavior differences may occur.
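For context, the main mechanical difference is user-namespace ID mapping: an unprivileged container maps root inside the container to an unprivileged UID range on the host, which is why root is treated as a non-root user. A minimal sketch of what that mapping looks like in lxc.conf (the 100000/65536 range is a common convention, not a required value):

    # Present in an unprivileged container's config; a privileged container omits these.
    # Maps container UIDs/GIDs 0-65535 to host IDs 100000-165535.
    lxc.id_map = u 0 100000 65536
    lxc.id_map = g 0 100000 65536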

Required changes for CentOS 7, RHEL 7, or CloudLinux 7

You must make the following configuration changes to run cPanel & WHM inside an LXC container:

  1. After you create the LXC container, change the lxc.include line in the lxc.conf file to the following line:
    lxc.include = /usr/share/lxc/config/fedora.common.conf
  2. Edit the lxc.conf file so that it does not drop the setfcap and setpcap capabilities. To do this, comment out the following lines:
    # lxc.cap.drop = setpcap
    # lxc.cap.drop = setfcap
  3. If your system uses AppArmor, you must uncomment the following line in the lxc.conf file:
    lxc.aa_profile = unconfined
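Putting the three changes together, the relevant portion of the container’s lxc.conf would look roughly like this (values taken from the steps above; the rest of the file is unchanged):

    lxc.include = /usr/share/lxc/config/fedora.common.conf
    # lxc.cap.drop = setpcap
    # lxc.cap.drop = setfcap
    lxc.aa_profile = unconfined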


[Screenshot: cPanel running in a Linux container]


VPS with Ploop

To understand the benefits of having ploop on an OpenVZ container (Linux VPS), we need to know the limitations of the traditional file system approach on a VPS.

  • Since all containers live on the same file system, they share that file system’s properties (its type, block size, and other options). That means we cannot configure these properties on a per-container basis.
  • One such property that deserves special mention is the file system journal. While a journal is a good thing to have, because it helps maintain file system integrity and improves reboot times (by eliminating fsck in many cases), it is also a bottleneck for containers. If one container fills up the in-memory journal (with lots of small operations leading to file metadata updates, e.g. file truncates), all the other containers’ I/O will block waiting for the journal to be written to disk. In some extreme cases we saw up to 15 seconds of such blockage.
  • Since many containers share the same file system with limited space, in order to limit each container’s disk space we had to develop per-directory disk quotas (i.e. vzquota).
  • Since many containers share the same file system, and the number of inodes on a file system is limited (for most file systems), vzquota should also be able to limit inodes on a per-container (per-directory) basis.
  • In order for in-container (aka second-level) disk quotas (i.e. standard per-user and per-group UNIX disk quotas) to work, we had to provide a dummy file system called simfs. Its sole purpose is to have a superblock, which is needed for disk quota to work.
  • When doing a live migration without some sort of shared storage (like NAS or SAN), we sync the files to the destination system using rsync, which makes an exact copy of all files, except that their inode numbers on disk will change. If some apps rely on files’ inode numbers being constant (which is normally the case), those apps will not survive the migration.
  • Finally, a container backup or snapshot is harder to do because there are a lot of small files that need to be copied.


In order to address the above problems, OpenVZ decided to implement a container-in-a-file technology, not unlike what various VM products use, but working as efficiently as all the other container bits and pieces in OpenVZ.

The main idea of ploop is to have an image file, use it as a block device, and create and use a file system on that device. Some readers will recognize that this is exactly what the Linux loop device does! Right, except that the loop device is very inefficient (for example, using it leads to double caching of data in memory) and its functionality is very limited.
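To make this concrete, here is a rough sketch of driving ploop by hand with the ploop userspace tool. The image path and size are illustrative; on a real OpenVZ host, vzctl normally performs these steps for you:

    ploop init -s 10g -t ext4 /vz/ct101/root.hds             # create the image, DiskDescriptor.xml and an ext4 file system
    ploop mount -m /mnt/ct101 /vz/ct101/DiskDescriptor.xml   # attach the image as a block device and mount it
    ploop umount /vz/ct101/DiskDescriptor.xml                # unmount and detach when done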


Using ploop addresses the problems described above:

  • The file system journal is no longer a bottleneck
  • Management operations perform I/O on a few large image files instead of lots of small files
  • Disk space quota can be implemented based on virtual device sizes; no need for per-directory quotas
  • Number of inodes doesn’t have to be limited because this is not a shared resource anymore (each CT has its own file system)
  • Live backup is easy and consistent
  • Live migration is reliable and efficient
  • Different containers may use file systems of different types and properties

In addition:

  • Efficient container creation
  • [Potential] support for QCOW2 and other image formats
  • Support for different storage types
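On an OpenVZ host you rarely drive ploop directly; vzctl manages the image for you. A minimal sketch (container ID 101 and the sizes are arbitrary examples):

    vzctl create 101 --layout ploop --diskspace 10G   # create a ploop-backed container
    vzctl set 101 --diskspace 20G --save              # grow the virtual disk, even while the CT is running
    vzctl snapshot 101                                # take a consistent snapshot of the container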


This article is extracted from: https://openvz.org/Ploop/Why