First-Generation Compute Infrastructure
This page documents the setup of our first-generation OpenVZ-based compute infrastructure.
Reference Hardware
The hardware that the Funtoo Project has used for its last two server deployments is documented below:
Component | Description | Cost | Alternatives |
---|---|---|---|
HP Proliant DL 160 Generation 6 (G6), with 48GB RAM and two 6-core Intel Xeon x5650 processors | 1U HP server, Intel Westmere CPUs, 24 CPU threads total. | $750 USD (used, off-lease, eBay) | HP DL 360 G7 |
Crucial MX200 1TB SSD | Root filesystem and container storage | Approx $330 USD | Consider a 256GB SSD for boot, root, swap and a second 1TB SSD for dedicated OpenVZ container use |
The above hardware allows you to build a 1U, 12-core, 24-thread, 48GB compute platform with 1TB of SSD storage for right around $1100.
Once you receive the off-lease server, it's recommended that you remove the CPU heat sinks, clean them and the CPU contact surfaces with alcohol cleaning pads, and re-apply high-quality thermal grease. In my experience, the OEM thermal grease on off-lease servers is often in need of re-application, and doing so will help keep core temperatures well within a safe range when the server is deployed.
Hardware Deployment and Initial Setup
Place the 1TB SSD in drive bay 1 as a single ATA disk, and install the Intel64-westmere build of Funtoo Linux following our Install instructions, using the following recommended configuration:
Configuration overview:
- I typically allocate 1GB for the /boot filesystem
- It's a good idea to have 24-48GB for swap, for emergencies
- Use ext4 for the root filesystem. OpenVZ is optimized for and tested on ext4. Don't use any other filesystem for container-related applications.
- Rather than using debian-sources, use the openvz-rhel6-stable kernel with the binary USE flag set.
- Emerge sys-cluster/vzctl and add the vz service to the default runlevel (this is covered below.)
- net.eth0 will be configured using Funtoo Networking as interface-noip, and will be connected to a WAN switch.
- net.brwan will have net.eth0 as slave, and will be configured with a routable IPv4 address.
- net.eth1 will be configured using Funtoo Networking as interface-noip, and will be connected to a fast private LAN switch.
- net.brlan will have net.eth1 as slave, and will be configured with a non-routable static IPv4 address.
The network and initial server configuration will be covered in more detail below.
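To make the disk layout above concrete, here is a rough sketch of the resulting /etc/fstab, assuming the 1TB SSD appears as /dev/sda with /boot on sda1, swap on sda2 and the ext4 root on sda3 (these device names, sizes and the /boot filesystem type are illustrative assumptions -- adjust them to match your actual partitioning):
/etc/fstab
/dev/sda1    /boot    ext2    noauto,noatime    1 2
/dev/sda2    none     swap    sw                0 0
/dev/sda3    /        ext4    noatime           0 1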
Kernel Setup Steps
To set up the kernel, perform the following steps from the initial chroot during install:
chroot # epro mix-ins +openvz-host
chroot # emerge -av openvz-rhel6-stable
After emerging boot-update, ensure that your /etc/boot.conf references the specific version of openvz-rhel6-stable that you installed above, such as in this example:
/etc/boot.conf
boot {
    generate grub
    default "kernel-openvz-rhel6-stable-x86_64-2.6.32-042stab111.12"
    timeout 3
}

"Funtoo Linux genkernel" {
    kernel kernel[-v]
    initrd initramfs[-v]
    params += real_root=auto rootfstype=auto
}
Then go through the normal process of grub-install and boot-update as detailed in the installation instructions.
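For reference, on a typical BIOS/MBR setup with the SSD at /dev/sda (an assumption -- substitute your actual boot disk), those two steps look like this:
chroot # grub-install /dev/sda
chroot # boot-update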
Initial Ebuilds
You will need to ensure that OpenVZ's userspace tools are installed, and enabled at startup:
chroot # emerge -av vzctl
chroot # rc-update add vz default
Similarly, we will use SheerDNS to update DNS entries. In Funtoo master DNS, there is an NS record pointing host.funtoo.org to each of our OpenVZ hosts. This allows container DNS entries to be managed locally by SheerDNS, and containers to have foo.host.funtoo.org domain names:
chroot # emerge -av sheerdns
chroot # rc-update add sheerdns default
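For illustration, the delegation for a hypothetical OpenVZ host named ovz1 could look like the following in the master funtoo.org zone (the host name and glue address here are placeholders, not the actual Funtoo records):
; delegation in the funtoo.org master zone (hypothetical host "ovz1")
ovz1.funtoo.org.       IN NS    ovz1.funtoo.org.
ovz1.funtoo.org.       IN A     1.2.3.4
SheerDNS on the host then answers authoritatively for its own subdomain, so a container can be reached as foo.ovz1.funtoo.org.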
Initial Network Configuration
Now, to configure the network. As described in our summary above, we are going to create two bridges -- one for outgoing Internet traffic, and one for internal traffic. In this example, we are going to assign a routable and a private IP address to each bridge, respectively. This will allow you to reach the OpenVZ host directly via the Internet as well as via a private IP when connecting to the LAN. Note that containers will use veth networking, which is more flexible than OpenVZ's typical venet networking. However, it also allows containers to make full use of the network interface, so it's recommended that each container's interface is "locked down" so that it can only use the container's assigned IP, for example. We will cover the iptables rules for this later.
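As a very rough preview of that kind of lock-down (the veth interface name and container address below are hypothetical placeholders, and this assumes net.bridge.bridge-nf-call-iptables is enabled so that bridged traffic traverses iptables), the physdev match can drop any traffic entering the bridge from a container's veth interface with a source address other than the one assigned to it:
root # iptables -A FORWARD -m physdev --physdev-in veth101.0 ! -s 1.2.3.10 -j DROP
For now, let's just get the interfaces set up.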
In the example configuration below, net.brwan is going to be our WAN, or "Internet-connected" bridge. We will use the physical interface eth0 as our WAN interface, which will get plugged into a WAN router. Set the bridge up as follows:
chroot # cd /etc/init.d
chroot # ln -s netif.tmpl net.brwan
chroot # ln -s netif.tmpl net.eth0
chroot # rc-update add net.brwan default
In /etc/conf.d/net.eth0, put the following:
/etc/conf.d/net.eth0
template=interface-noip
In /etc/conf.d/net.brwan, put the following, using your own IPv4 address, of course:
/etc/conf.d/net.brwan
template=bridge
ipaddr="1.2.3.4/24"
gateway="1.2.3.1"
nameservers="8.8.8.8 8.8.4.4"
domain="mydomain.com"
slaves=net.eth0
Follow the same steps to set up net.brlan, except use eth1 as your physical link, specify a non-routeable IPv4 address in /etc/conf.d/net.brlan, and do not specify a gateway at all. And of course, plug your physical eth1 interface into a private LAN switch.
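For illustration, assuming a 10.0.1.0/24 private LAN (a placeholder -- substitute your own internal addressing), /etc/conf.d/net.brlan would end up looking something like this:
/etc/conf.d/net.brlan
template=bridge
ipaddr="10.0.1.2/24"
slaves=net.eth1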
Congratulations! You've now completed all the install steps that differ from a standard Funtoo installation. Complete the remaining official install steps that were not covered above, remember to set a root password, reboot, and then continue setup below.
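After the reboot, it's worth sanity-checking that both bridges came up with their slave interfaces attached and the expected addresses (brctl is provided by bridge-utils; the bridge interface names follow the init script names, i.e. brwan and brlan):
root # brctl show
root # ip addr show brwan
root # ip addr show brlan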
Recommended Ebuilds
The following ebuilds are recommended as part of a Funtoo server deployment. First, haveged is recommended. For what purpose? Well, the Linux kernel maintains its own internal entropy (randomness) source, which is an essential component for encryption. This entropy source is kept viable by injecting it with a lot of random timing information from user input -- but on a headless server, this entropy injection doesn't happen nearly as much as it needs to. In addition, we are going to potentially be running hundreds of OpenSSH daemons and other entropy-hungry apps. The solution is to run haveged, which will boost the available entropy on our headless server:
root # emerge -av sys-apps/haveged
root # rc-update add haveged default
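Once haveged is running, you can verify that the kernel entropy pool is staying topped up (with haveged active, this value typically sits in the thousands):
root # cat /proc/sys/kernel/random/entropy_avail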
Mcelog is essential for detecting ECC memory failure conditions. Any such conditions will be logged to /var/log/mcelog:
root # emerge -av app-admin/mcelog
root # rc-update add mcelog default
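Any machine-check events that have been recorded can then be reviewed in the log; with the daemon running, recent mcelog versions can also be queried directly via mcelog --client:
root # tail /var/log/mcelog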
Smartmontools should be configured to provide pre-emptive warning of disk failure for all of your disks:
root # emerge -av sys-apps/smartmontools
root # rc-update add smartd default
Ensure lines similar to the following appear in your /etc/smartd.conf:
/etc/smartd.conf
# -M test also ensures that a test alert email is sent when smartd is started or restarted, in addition to regular monitoring
DEVICESCAN -M test -m me@email.com
# Remember to put a valid email address, above ^^
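In addition to the ongoing smartd monitoring, an occasional manual health check of each disk is worthwhile, for example (substitute your actual device node for /dev/sda):
root # smartctl -H /dev/sda
root # smartctl -a /dev/sda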