LXC vs. OpenVZ

LXC (Linux Containers) is the new flagship of container-based virtualization on Linux. Although it has been around for quite some time, LXC is still not feature-complete, which leads many people to keep using OpenVZ. Mostly based on Michael Renner’s work, here is an overview of the features that are still missing and those that are already complete. The data presented here takes Ubuntu 12.04 as its basis, as it is the distribution that supports LXC best.

Fresh update: read my LXC tutorial!

Feature                          OpenVZ   LXC
Comes with Linux kernel          No       Yes
Limiting memory usage            Yes      Yes
Limiting kernel memory usage     Yes      No
Limiting CPU usage               Yes      Yes
Limiting disk usage              Yes      Partial/Workaround
Limiting disk IO                 No       Yes
Checkpointing                    Yes      No
Live migration                   Yes      No/Workaround
Container lockdown (security)    Yes      Partial

Comes with Linux kernel

LXC comes with the Linux kernel whereas OpenVZ does not, although many of the container facilities now present in the kernel are there in no small part thanks to OpenVZ’s contributions. Installing LXC is much easier, because you don’t need to build a custom kernel. If you are however using Red Hat, SuSE or the likes (CentOS, etc.), you can use the prebuilt OpenVZ kernel images for your project. Be aware that OpenVZ patches only apply to quite old kernels, so there definitely is a security risk, and you may not be able to get them to work. Newer kernels are not supported.
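To give a sense of how little work the LXC route takes, here is a minimal sketch for Ubuntu 12.04 (package names as shipped there; adjust for your distribution):

    # Install the LXC userspace tools; container support is already in the
    # stock Ubuntu kernel, so no custom kernel build is needed.
    sudo apt-get install lxc

    # Check which container-related kernel features are enabled.
    lxc-checkconfig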

On a side note, OpenVZ is now in the process of porting much of their userspace utilities to the LXC base in the Linux kernel, so we will definitely see some improvement there in the future. As Kir Kolyshkin points out in the comments, vzctl 4.3 supports non-OpenVZ containers and even live migration through CRIU.

Limiting memory usage

Limiting userspace memory usage works in both LXC and OpenVZ. In LXC you can use two cgroup settings in the VM’s configuration file to adjust the guest’s allowed memory. The most important settings are lxc.cgroup.memory.limit_in_bytes for memory and lxc.cgroup.memory.memsw.limit_in_bytes for the combined memory plus swap size. Depending on your exact kernel version there are also some other settings worth exploring. You can list them by looking at /sys/fs/cgroup/memory/lxc/your-vm-name.
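A minimal sketch of what this could look like in a container’s configuration file (the container name and the values are placeholders):

    # /var/lib/lxc/your-vm-name/config
    # Cap userspace memory at 512 MB and memory plus swap at 1 GB.
    lxc.cgroup.memory.limit_in_bytes = 512M
    lxc.cgroup.memory.memsw.limit_in_bytes = 1G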

Limiting kernel memory

Kernel memory can be used by applications while they interact with the kernel. When using OpenVZ, that memory can be limited per VE. As Kir Kolyshkin points out in the comments, with LXC you are a bit out of luck: that feature is something we can only expect in the future.
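For reference, OpenVZ exposes this through the kmemsize user beancounter; a rough sketch (the container ID and values are placeholders, and the syntax is barrier:limit in bytes):

    # OpenVZ: limit kernel memory for container 101.
    vzctl set 101 --kmemsize 67108864:73400320 --save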

Limiting CPU usage

Limiting CPU usage again works in both LXC and OpenVZ. CPU bandwidth is limited by assigning shares: the more shares a VM has, the more likely it is to get CPU time. You can set a VM’s CPU shares by adjusting the lxc.cgroup.cpu.shares setting. You can also pin a VM to specific CPU cores using lxc.cgroup.cpuset.cpus.
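A hypothetical example (the values are placeholders; 1024 is the usual default share value, so 512 gives the guest roughly half the weight of an unrestricted one):

    # Give this container a lower CPU weight and pin it to the first two cores.
    lxc.cgroup.cpu.shares = 512
    lxc.cgroup.cpuset.cpus = 0,1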

Limiting disk usage

Unfortunately there is no central place for disk quotas in Linux, therefore LXC doesn’t support disk quotas. Limits can be placed on VMs by putting them on LVM volumes, but this costs some IO performance. Depending on your use case this may not be ideal.
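One possible sketch of the LVM workaround (the volume group, size and container name are placeholders):

    # Create a fixed-size logical volume and use it as the container's rootfs;
    # the volume size acts as the effective disk limit.
    lvcreate -L 10G -n your-vm-name vg0
    mkfs.ext4 /dev/vg0/your-vm-name
    mount /dev/vg0/your-vm-name /var/lib/lxc/your-vm-name/rootfs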

Limiting disk IO

Limiting disk IO is important in scenarios where you have IO-hungry applications running next to each other. Quite surprisingly this feature hasn’t made it into OpenVZ and is only available in the commercial Virtuozzo product. LXC however has this feature via the aforementioned cgroups. The settings for IO limiting are located under lxc.cgroup.blkio. For details please look into /sys/fs/cgroup/blkio/lxc.
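Two illustrative settings (the values and device numbers are placeholders; the weight is relative, roughly in the 100 to 1000 range, while the throttle is an absolute cap):

    # Relative IO weight for this container.
    lxc.cgroup.blkio.weight = 250
    # Hard cap: limit reads from device 8:0 (typically /dev/sda) to 10 MB/s.
    lxc.cgroup.blkio.throttle.read_bps_device = 8:0 10485760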

Checkpointing

Checkpointing is a feature very close to traditional hibernation. The state of the VM is saved into a file and can be reloaded later. This feature will be available through the lxc-checkpoint command, but it is not implemented at the time of writing.
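For comparison, OpenVZ can already do this through vzctl; roughly like this (the container ID and dump file are placeholders, and flags may differ between vzctl versions):

    # Freeze container 101 and dump its state to a file...
    vzctl chkpnt 101 --dumpfile /tmp/ct101.dump
    # ...then bring it back later from that dump.
    vzctl restore 101 --dumpfile /tmp/ct101.dump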

Live migration

Live migration means that a VM can be migrated to another physical host without actually shutting down the VM itself. In other words, the memory contents are also moved to the new host. While OpenVZ has had this feature for quite a long time, LXC is still missing parts of it, as Michael indicates. Personally I hope that we will see this feature in the near future.

As a workaround you can use CRIU, a project that implements checkpoint/restore, and thus live migration, in user space.
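On the OpenVZ side the whole procedure is a single command; a rough sketch (the destination host and container ID are placeholders):

    # Live-migrate container 101 to another OpenVZ host without shutting it down.
    vzmigrate --online destination-host.example.com 101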

Container lockdown (security)

Isolation is an important part of virtualization. OpenVZ has done quite a good job at this, but LXC still has issues here. Even with AppArmor enabled, in Ubuntu you still have access to dmesg from the guests, and /proc/kcore and /proc/sysrq-trigger are still accessible, so a root user in a guest VM could easily restart the host machine. Improvements are planned for Ubuntu 13.04.
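To illustrate the problem and one partial mitigation (illustrative only; dropping capabilities does not close every hole described above):

    # From a root shell inside a guest, this reboots the *host* if
    # /proc/sysrq-trigger is writable. Do not try this on a live system.
    echo b > /proc/sysrq-trigger

    # Partial hardening in the container's config: drop some dangerous capabilities.
    lxc.cap.drop = sys_module sys_time sys_boot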

Conclusions

LXC is well on its way, but still not there. The most painful missing features are the ones for locking down the guest properly. Ubuntu has these features planned for 13.04, which is here in a few months, but as it’s not an LTS version, it isn’t going to make sysadmins much happier.

To sum it all up, LXC is good if you want to use it for flexibility, but it is not quite adequate for hosting foreign VMs yet. If you are in that business, you should really wait another year or so.

Sources

31 thoughts on “LXC vs. OpenVZ”

  1. Antho

    Thanks for this overview, it helps me to synthesize the pros and cons.
    LXC has a great potential, outside of personal computers.
    Wait and see the next improvements.

  2. Jake

    Thank you for the detailed comparison. We’re using PVC (commercial version of openvz) because the performance is essentially native. (and for the excellent support, and the fact that the commercial version has a few really nice features that openvz does not)

    We’d love to look at moving some of our workloads to lxc to get past problems with big vendors like EMC who will only support native linux kernels. That is really the only problem we’ve ever had with VZ, and solving that issue is a driver for our interest in lxc.

    But the container lockdown is critical, and live migration is something we don’t want to give up either. Hopefully soon.

    1. János Pásztor Post author

      Hi Jake, thank you for your response.

      LXC is still a long way from being a full-blown, commercially supportable solution. It’s not only the missing features, but also all the little quirks and hacks you need to employ to use it. It’s cool, and it’s useful if you are into taking stuff apart, but it’s still too unrefined and rough around the edges to be an enterprise virtualization solution a company could sell support for.

      In your case I’d recommend waiting for Parallels or the OpenVZ crowd to adapt their userland tools to LXC, which will make it immensely more admin-friendly.

    1. János Pásztor Post author

      Sune Beck: like with Ubuntu, your only choice is to use the yet-incomplete LXC or to stay with an older version of Debian.

      I have tried to hack an OpenVZ kernel into Ubuntu, but it just wouldn’t work, probably because of some udev-related problem. It may be doable, but I quite frankly lack the free time to get to the bottom of it.

  3. Kir Kolyshkin

    Be aware that OpenVZ patches only apply to quite old kernels, so there definitely is a security risk. Newer kernels are not supported.

    It’s a pity to see such a piece of FUD here :(

    We at OpenVZ care a lot about security. OpenVZ kernels are based on Red Hat Enterprise Linux (RHEL) kernels. We chose RHEL as a “donor kernel” for its stability, security and good maintenance record. The fact that RHEL6 comes with 2.6.32 does not mean it’s an old kernel. It was 2.6.32 a few years ago, then Red Hat applied a lot of updates and fixes, and they are still doing that. So now the only resemblance to 2.6.32 is the version number.

    So, because of that, the kernel we provide is very stable, but it is also very secure and up to date (in terms of drivers as well). So if you look for stability, security, quality — the RHEL6 kernel (or the OpenVZ kernel, for that matter) is one of the best options available. But if you only look for a 3.x version number — yes, you will be disappointed.

    1. hron84

      And does the OpenVZ team not plan to support other kernels than the RHEL6 one? Even if RHEL is the most stable kernel, some 3.x versions have a good reputation in the community, especially the Ubuntu LTS ones. But 2.6.32 and 3.x differ enough that porting the patches would not be trivial. I think that if there were a 3.x kernel patchset (even one marked as experimental), then support from other distros would be better… And who knows OpenVZ better than its development team? :-)

      1. János Pásztor Post author

        The 3.x “patchset” is partially already in the mainline kernel, because OpenVZ contributed some of it in the first place. It’s the stuff LXC is made of.

        1. hron84

          “partially” – does that mean some hacks are required to use OpenVZ with 3.x, or is it usable flawlessly?

          1. János Pásztor Post author

            Please actually read the blog post above; it explains what’s done in LXC/the 3.x kernel line and what’s not. You can use vzctl as a frontend for the features that are in the 3.x kernel, but you will lack some features that OpenVZ has, like live migration, limiting kernel memory, etc. (That’s detailed in the post.)

  4. Kir Kolyshkin

    On a side note, OpenVZ is now in the process of porting much of their userspace utilities to the LXC base in the Linux kernel, so we will definitely see some improvement there in the future.

    vzctl 4.0, which can work with a non-OpenVZ/upstream 3.x kernel, was available in September 2012. Now, in vzctl 4.3, we even support container live migration on a 3.9 or so kernel (through the OpenVZ sub-project CRIU, http://criu.org). I suggest you give vzctl 4.3.1 a test drive and report your experience.

  5. Kir Kolyshkin

    Limiting memory usage works both in LXC and OpenVZ

    Kernel memory still can’t be properly limited in the upstream kernel. But we are working on that.

    While at it, I want to point out that about half of the upstream kernel functionality for containers/LXC support comes directly from the OpenVZ team. PID and net namespaces are the two most prominent examples, but it’s not limited to those. Overall, we have 1500+ patches in the kernel, mostly for container support and resource management, although some are pure fixes in not-directly-related areas like file systems, memory management, etc.

  6. armand

    Is it possible to downgrade the kernel in LXC? I.e., can a container use an older kernel version than its host?

    1. János Pásztor Post author

      Neither OpenVZ nor LXC has a so-called guest kernel, hence the name container virtualization. In other words, the guest uses the host’s kernel; running a separate kernel version is not possible.

  8. compevo

    A well done article. It seems, if anything, that OpenVZ should be included in the Linux kernel.

    LXC’s lack of basic security will make most people hesitant to ever trust it to be secure in the future, though.

    1. János Pásztor Post author

      OpenVZ is built on top of the latest Red Hat kernel; feel free to use it if you’re in the Red Hat line. However, if you are using Debian derivatives, an OpenVZ kernel isn’t really an option, since they follow the vanilla kernel a lot more closely.

      Also, LXC is not lacking security per se, it’s just that it’s incomplete. Once the user namespaces are done in the kernel, it will be a lot better. (Not that it’s totally insecure; breaking it would still require quite a bit of work even in its current state.)

      One thing I’ll grant you: it’s not a toolkit to be hosting a VPS service with.

      1. Alex

        You do know that OpenVZ supports Debian directly through their own releases, yes? This includes a fully functional OpenVZ-built kernel. It’s been like that for quite a while.

        http://openvz.org/Installation_on_Debian

        Also, you state that “you may not be able to get them to work” with regards to the “older” kernels that are pushed out. I suspect you don’t completely understand how the RHEL release cycle works :) Given that the RHEL kernel is the base for the OpenVZ kernel, you will find that a lot of hardware is supported – Red Hat backports a lot of the newer features, drivers and security fixes into their kernels. I’m yet to have a server of mine have issues due to hardware newer than is supported. See http://openvz.livejournal.com/45647.html – as it happens, Debian gives me more issues hardware-wise than RHEL, even with their “newer” kernels.

        1. János Pásztor Post author

          Hello Alex, I do understand the release cycle of RHEL, but you must understand that even though the kernels are patched and updated, the APIs are not. This means that the userspace utilities will have problems that need to be addressed. In my experience, working with RHEL kernels on Debian is just a pain. On the setups I see, the people responsible for them have been switching to CentOS because the Debian path was just a pain to maintain.

  9. Guest

    But now LXC has a powerful tool available to make it more attractive: Docker. What are your thoughts on that?

    1. János Pásztor Post author

      Although I haven’t tried Docker myself, I have done some considerable testing on the underlying filesystem (AUFS). As I reckon it, AUFS is an awesome tool for creating flexible setups, but it may be a serious IO bottleneck. Keeping that in mind, I think Docker is more of a tool for less IO-intensive setups like development and testing environments; for high-performance servers something else should be used.
