OpenVZ on Funtoo Linux


Introduction

OpenVZ (see wiki.openvz.org) is an OS-level server virtualization solution, built on Linux. OpenVZ allows the creation of isolated, secure virtual Linux containers (called "VE"s) on a single physical server. Each container has its own local uptime, power state, network interfaces, resource limits and isolated portion of the host's filesystem. OpenVZ is often described as "chroot on steroids."

Funtoo supports OpenVZ in the following ways:

  • Building of OpenVZ templates using Metro, our distribution build tool.
  • Improvement of vzctl, by developing an improved/patched version hosted on GitHub.
  • Integration of Funtoo Linux Networking support into vzctl (these patches have been accepted upstream by the OpenVZ project).
  • Improvement of vzctl startup scripts to do things like properly initialize veth and vzeventd.
  • Integrating additional patches into openvz-rhel6-stable and openvz-rhel5-stable ebuilds in order to ensure production-quality OpenVZ functionality.
  • Maintaining compatibility with production RHEL5-based OpenVZ kernels, as well as instructions on how to get Funtoo Linux set up for these kernels in our RHEL5 Kernel HOWTO. (Note: openvz-rhel6-kernel RHEL6-based kernel is now the recommended kernel for deploying OpenVZ.)

In addition, Daniel is currently employed at Zenoss and is the author and maintainer of the Zenoss OpenVZ ZenPack, which is hosted on GitHub.

Recommended Versions

For setting up OpenVZ on Funtoo Linux so that you can create Linux-based containers, a 64-bit (x86-64) version of Funtoo Linux is strongly recommended. The openvz-rhel6-stable ebuild is the recommended kernel to use. If you emerge this kernel with the binary USE flag enabled, it will build a binary kernel and initrd using the default Red Hat configuration, which should boot on nearly all hardware. After emerging, you will need to edit /etc/boot.conf, run boot-update, and reboot into the new OpenVZ kernel.
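A minimal sketch of these steps follows; the package atom (assumed here to be sys-kernel/openvz-rhel6-stable), the assumption that /etc/portage/package.use is a file rather than a directory, and the boot entry you add to /etc/boot.conf may all differ on your system:

root # echo "sys-kernel/openvz-rhel6-stable binary" >> /etc/portage/package.use
root # emerge sys-kernel/openvz-rhel6-stable
root # nano /etc/boot.conf
root # boot-update
root # reboot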

Alternatively, you could emerge openvz-rhel5-stable with the binary USE flag enabled to use the older RHEL5-based OpenVZ kernel. This requires additional steps which are covered in the RHEL5 Kernel HOWTO.

You will also need to emerge vzctl, which provides the OpenVZ userspace tools.

Configuration

After booting into an OpenVZ-enabled kernel, OpenVZ can be enabled as follows:

root # emerge vzctl
root # rc-update add vz default
root # rc

Funtoo Linux OpenVZ Templates

The Funtoo Linux stage directory also contains Funtoo Linux OpenVZ templates in the openvz/ directory. These can be used as follows:

root # cd /vz/template/cache
root # wget http://ftp.osuosl.org/pub/funtoo/funtoo-current/openvz/x86-64bit/funtoo-openvz-core2_64-funtoo-current-2011-12-31.tar.xz
root # vzctl create 100 --ostemplate funtoo-openvz-core2_64-funtoo-current-2011-12-31
Creating container private area (funtoo-openvz-core2_64-funtoo-current-2011-12-31)
Performing postcreate actions
Container private area was created

If you are not using Funtoo Linux, you may need to convert the .xz template to a .gz template for this to work.
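One way to do the conversion is simply to recompress the archive with gzip, for example (using the filename from above):

root # cd /vz/template/cache
root # xz -d funtoo-openvz-core2_64-funtoo-current-2011-12-31.tar.xz
root # gzip funtoo-openvz-core2_64-funtoo-current-2011-12-31.tar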

Resource Limits

If you do not need to enforce any resource limits for the VE, then on a Funtoo Linux host you can apply the unlimited configuration as follows:

ninja1 ~ # vzctl set 100 --applyconfig unlimited --save

Starting the Container

Here's how to start the container:

ninja1 ~ # vzctl start 100
Starting container ...
Container is mounted
Setting CPU units: 1000
Container start in progress...
ninja1 ~ # 

Networking

veth networking

OpenVZ has two types of networking. The first is called "veth", which provides the VE with a virtual Ethernet interface. This allows the VE to do things like broadcasting and multicasting, which means that DHCP can be used. The best way to set up veth networking is to use a bridge on the physical host machine. For the purposes of this example, we'll assume your server has a wired eth0 interface that provides Internet connectivity - it does not need to have an IP address. To configure a bridge, we will create a network interface called "br0", a bridge device, and assign your static IP to br0 rather than eth0. Then, we will configure eth0 to come up without an IP and add it as a "slave" of bridge br0. Once br0 is configured, we can add other network interfaces (each configured to use a unique static IP address) as slaves of bridge br0, and these devices will be able to communicate out over your Ethernet link.

Let's see how this works.

Network - Before

Before the bridge is configured, we probably have an /etc/conf.d/netif.eth0 that looks like this:

template="interface"
ipaddr="10.0.1.200/24"
gateway="10.0.1.1"
nameservers="10.0.1.1"
domain="funtoo.org"

Network - After

To get the bridge-based network configured, first connect to a physical terminal or management console, as eth0 will be going down for a bit as we make these changes.

We are now going to set up a bridge with eth0's IP address, and add eth0 to the bridge with no IP. Then we can add container interfaces to the bridge, and they can all communicate out using eth0.

We will rename netif.eth0 to netif.br0 (see the commands below), and then edit the file so it looks like this (first line modified, new line added at the end):

template="bridge"
ipaddr="10.0.1.200/24"
gateway="10.0.1.1"
nameservers="10.0.1.1"
domain="funtoo.org"
slaves="netif.eth0"
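
For reference, the rename itself is just a file move in /etc/conf.d:

root # cd /etc/conf.d
root # mv netif.eth0 netif.br0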

If you want to bridge a wlan0 device, you will also need the additional wpa_supplicant flag -b br0. In most cases, however, it is much better to use routing with NAT for wlan0:

root # iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o wlan0 -j SNAT --to-source your_host_ip_address
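
If you do choose to bridge wlan0 instead, the wpa_supplicant invocation might look something like this (the interface name and configuration file path are just examples, not Funtoo defaults):

root # wpa_supplicant -B -i wlan0 -b br0 -c /etc/wpa_supplicant/wpa_supplicant.conf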

Now it's time to create a new /etc/conf.d/netif.eth0, but this time we won't associate an IP address with it. The config file consists of a single line:

template="interface-noip"

Now, we need to create the necessary symlink in /etc/init.d and add our bridge to the default runlevel:

root # cd /etc/init.d
root # ln -s netif.tmpl netif.br0
root # rc-update add netif.br0 default

Now, let's enable our new network interfaces:

root # /etc/init.d/netif.eth0 stop
root # rc

The result of these changes is that you now have initscripts to create a "br0" interface (with static IP), with "eth0" as its slave (with no IP). Networking should still work as before, but now you are ready to provide bridged connectivity to your virtual containers since you can add their "veth" interfaces to "br0" and they will be bridged to your existing network.

Using The Bridge

To add a veth "eth0" interface to your VE, type the following:

root # vzctl stop 100
root # vzctl set 100 --netif_add eth0,,,,br0 --save
root # vzctl start 100

Once the VE is started, the network interface inside the VE will be called "eth0", and the corresponding network interface on the host system will be named "veth100.0". Because we specified "br0" after the four commas, vzctl will automatically add the new "veth100.0" interface to bridge br0 for us. We can confirm this by running "brctl show" after starting the VE:

root # brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.0026b92c72f5       no              eth0
                                                        veth100.0

VE Configuration

You will also need to manually configure the VE to acquire/use a valid IP address - DHCP or static assignment will both work. Typically, this is done by starting the VE with "vzctl start 100" and then typing "vzctl enter 100", which gives you a root shell inside the VE. Once you have configured the network, you can ensure that the VE is accessible remotely via SSH. Note that once inside the VE, you configure its network interface as you would on a regular Linux distribution - the VE is bridged into your LAN, so it can talk to your DHCP server and can use either an address acquired via DHCP or a static address.
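
As an illustration, setting up DHCP inside a Funtoo Linux VE might look something like the following (a sketch that assumes the Funtoo networking scripts and their dhcpcd template are present in the template you deployed; the remaining commands run inside the VE):

root # vzctl enter 100
root # cd /etc/init.d
root # ln -s netif.tmpl netif.eth0
root # echo 'template="dhcpcd"' > /etc/conf.d/netif.eth0
root # rc-update add netif.eth0 default
root # rc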

venet networking

"venet" is OpenVZ's other form of host networking. It can be easier to configure than veth, but does not allow the use of broadcast or multicast, so DHCP is not possible on the VE side. For this reason, an IP address must be statically assigned to the VE, as follows:

root # vzctl set 100 --ipadd 10.0.1.201 --save
root # vzctl set 100 --nameserver 8.8.4.4 --save    # Google public DNS server
root # vzctl set 100 --hostname foobar --save

With a venet configuration, some additional steps are required if your Internet connection uses PPPoE. We will use iptables to get networking working in all VEs.

root # echo 1 > /proc/sys/net/ipv4/ip_forward

or, alternatively, set it in /etc/sysctl.conf to enable IP forwarding at boot:

root # echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
root # sysctl -p

Add an iptables rule (replace ppp0 with your outbound interface as needed), then save and start the firewall:

root # iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
root # /etc/init.d/iptables save
root # rc-update add iptables default
root # rc

All VEs now have a network connection through the HN (hardware node).

When using venet, OpenVZ will handle the process of ensuring the VE has its network properly configured at boot. As of vzctl-3.0.24.2-r4 in Funtoo Linux, Funtoo Linux VEs should be properly auto-configured when using venet.

With venet, there is no need to add any interfaces to a bridge - OpenVZ treats venet interfaces as virtual point-to-point interfaces so that traffic is automatically routed properly from the VE to the host system, out the default route of the host system if necessary.