{{Warning|1=
This page documents our first-generation, OpenVZ-based compute infrastructure and is in need of updates. See [[LXD]] for a more up-to-date page on containers on Funtoo.}}

For many years, the Funtoo project has been using Funtoo Linux for its entire infrastructure. A few years ago, we began to allow Funtoo Linux users to use our original [http://openvz.org OpenVZ container-based] infrastructure for hosting, development and other projects. For our second-generation compute infrastructure, we utilized LXD and ZFS and explored the use of Intel Optane. Our third-generation infrastructure now uses LXD combined with a hybrid BTRFS storage solution. If you would like to learn about how to get a container on our infrastructure, please see [[Funtoo Hosting]].

The '''Funtoo Compute Initiative''' is an effort to document how Funtoo sets up servers and its container infrastructure, including everything from ordering of bare metal, to deployment, operation and maintenance. In short, it's our effort to share all our tricks with you, so you can use Funtoo Linux to quickly and inexpensively deploy very powerful hosting and container-based compute solutions.

=== Bare-Metal Setup ===

This section documents how we typically configure our servers from bare metal, by infrastructure generation:

* [[First-Generation Compute Infrastructure]] - legacy
* [[Second-Generation Compute Infrastructure]] - currently in use
* [[Third-Generation Compute Infrastructure]] - in development

=== Reference Hardware ===

The hardware that the Funtoo Project used for its first-generation server deployments is documented below:

{{TableStart}}
<tr><th>Component</th><th>Description</th><th>Cost</th><th>Alternatives</th></tr>
<tr><td>HP ProLiant DL160 Generation 6 (G6), with 48GB RAM and two 6-core [http://ark.intel.com/products/47922/Intel-Xeon-Processor-X5650-12M-Cache-2_66-GHz-6_40-GTs-Intel-QPI Intel Xeon X5650] processors</td><td>1U HP server, Intel Westmere CPUs, 24 CPU threads total.</td><td>'''$750 USD''' (used, off-lease, eBay)</td><td>HP ProLiant DL360 G7</td></tr>
<tr><td>Crucial MX200 1TB SSD</td><td>Root filesystem and container storage</td><td>Approx. '''$330 USD'''</td><td>Consider a 256GB SSD for boot, root and swap, plus a second 1TB SSD dedicated to OpenVZ container storage</td></tr>
{{TableEnd}}

The above hardware allows you to build a 1U, 12-core, 24-thread, 48GB compute platform with 1TB of SSD storage for right around $1100.

{{Important|Once you receive the off-lease server, it's recommended that you remove the CPU heat sinks, clean them and the CPU contact surfaces with alcohol cleaning pads, and re-apply high-quality thermal grease. In my experience, the OEM thermal grease on off-lease servers is often in need of re-application, and doing so will help keep core temperatures well within a safe range once the server is deployed.}}

=== Hardware Deployment and Initial Setup ===

Place the 1TB SSD in drive bay 1 as a single ATA disk, and install the [[Intel64-westmere]] build of Funtoo Linux following our [[Install|Install instructions]], using the following recommended configuration:

'''Configuration overview:'''

# I typically allocate 1GB for the /boot filesystem.
# It's a good idea to have 24-48GB of swap, for emergencies.
# Use '''ext4''' for the root filesystem. OpenVZ is optimized for and tested on ext4, so don't use any other filesystem for container-related applications. (A filesystem creation sketch appears after this list.)
# Rather than using debian-sources, use the {{c|openvz-rhel6-stable}} kernel with the {{c|binary}} USE flag set.
# Emerge {{c|sys-cluster/vzctl}} and add the {{c|vz}} service to the default runlevel (this is covered below).
# {{c|net.eth0}} will be configured using [[Networking|Funtoo Networking]] as {{c|interface-noip}}, and will be connected to a WAN switch.
# {{c|net.brwan}} will have {{c|net.eth0}} as slave, and will be configured with a routable IPv4 address.
# {{c|net.eth1}} will be configured using [[Networking|Funtoo Networking]] as {{c|interface-noip}}, and will be connected to a fast private LAN switch.
# {{c|net.brlan}} will have {{c|net.eth1}} as slave, and will be configured with a non-routable static IPv4 address.

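As a quick illustration of items 1-3 above, here is a minimal sketch of the filesystem creation steps, assuming a hypothetical partition layout in which {{c|/dev/sda1}} is /boot, {{c|/dev/sda2}} is swap, and {{c|/dev/sda3}} is the root filesystem -- adjust the device names to match your actual partitioning:

{{console|body=
# ##i##mkfs.ext2 /dev/sda1
# ##i##mkswap /dev/sda2
# ##i##mkfs.ext4 /dev/sda3
}}
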
The network and initial server configuration will be covered in more detail below.

==== Kernel Setup Steps ====

To set up the kernel, perform the following steps from the initial chroot during install:

{{console|body=
chroot # ##i##epro mix-ins +openvz-host
chroot # ##i##emerge -av openvz-rhel6-stable
}}

After emerging {{c|boot-update}}, ensure that your {{f|/etc/boot.conf}} references the specific version of {{c|openvz-rhel6-stable}} that you installed above, as in this example:

{{file|name=/etc/boot.conf|body=
boot {
	generate grub
	default "kernel-openvz-rhel6-stable-x86_64-2.6.32-042stab111.12"
	timeout 3
}

"Funtoo Linux genkernel" {
	kernel kernel[-v]
	initrd initramfs[-v]
	params += real_root=auto rootfstype=auto
}
}}

Then go through the normal process of {{c|grub-install}} and {{c|boot-update}} as detailed in the installation instructions.

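For example, assuming a BIOS/MBR setup with GRUB being installed to the master boot record of {{c|/dev/sda}} (substitute your actual boot device), this would look something like:

{{console|body=
chroot # ##i##grub-install /dev/sda
chroot # ##i##boot-update
}}
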
==== Initial Ebuilds ====

You will need to ensure that OpenVZ's userspace tools are installed, and enabled at startup:

{{console|body=
chroot # ##i##emerge -av vzctl
chroot # ##i##rc-update add vz default
}}

Similarly, we will use {{Package|SheerDNS}} to update DNS entries. In Funtoo master DNS, there is an NS record pointing {{f|host.funtoo.org}} to each of our OpenVZ hosts. This allows container DNS entries to be managed locally by SheerDNS, and containers to have {{f|foo.host.funtoo.org}} domain names:

{{console|body=
chroot # ##i##emerge -av sheerdns
chroot # ##i##rc-update add sheerdns default
}}

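To illustrate the delegation described above, here is a hypothetical excerpt from a BIND-style master zone file ({{c|host1.funtoo.org}}, the {{f|/etc/bind/funtoo.org.zone}} path and the IP address are example values only), handing authority for a host's subdomain to the SheerDNS instance running on that host:

{{file|name=/etc/bind/funtoo.org.zone|body=
; Delegate host1.funtoo.org and everything under it to the OpenVZ host,
; where sheerdns answers queries for its containers (e.g. foo.host1.funtoo.org):
host1.funtoo.org.     IN  NS  ns.host1.funtoo.org.
ns.host1.funtoo.org.  IN  A   1.2.3.4
}}
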
==== Initial Network Configuration ====

Now, to configure the network. As described in our summary above, we are going to create two bridges -- one for outgoing Internet traffic, and one for internal traffic. In this example, we are going to assign a routable and a private IP address to each bridge, respectively. This will allow you to directly reach the OpenVZ host via the Internet, as well as via a private IP when connecting to the LAN. Note that containers will use {{f|veth}} networking, which is more flexible than OpenVZ's typical {{f|venet}} networking. However, it also allows containers to make full use of the network interface, so it's recommended that each interface be "locked down" to only be able to use the container's assigned IP. We will cover the iptables rules for this later; a sketch of the general idea appears below. For now, let's just get the interfaces set up.

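As a preview, a lock-down rule of this kind might look like the following hypothetical example, which uses the iptables {{c|physdev}} match (this requires bridge netfilter support in the kernel) to drop forwarded traffic entering from a container's host-side veth device with a source address other than the container's assigned IP -- {{c|veth101.0}} and {{c|1.2.3.10}} are example values:

{{console|body=
# ##i##iptables -A FORWARD -m physdev --physdev-in veth101.0 ! -s 1.2.3.10/32 -j DROP
}}
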
In the example configuration below, {{c|net.brwan}} is going to be our WAN, or "Internet-connected", bridge. We will use the physical interface {{c|eth0}} as our WAN interface, which will get plugged into a WAN router. Set the bridge up as follows:

{{console|body=
chroot # ##i##cd /etc/init.d
chroot # ##i##ln -s netif.tmpl net.brwan
chroot # ##i##ln -s netif.tmpl net.eth0
chroot # ##i##rc-update add net.brwan default
}}

In {{f|/etc/conf.d/net.eth0}}, put the following:

{{file|name=/etc/conf.d/net.eth0|body=
template=interface-noip
}}

In {{f|/etc/conf.d/net.brwan}}, put the following, using your own IPv4 address, of course:

{{file|name=/etc/conf.d/net.brwan|body=
template=bridge
ipaddr="1.2.3.4/24"
gateway="1.2.3.1"
nameservers="8.8.8.8 8.8.4.4"
domain="mydomain.com"
slaves=net.eth0
}}

Follow the same steps to set up {{c|net.brlan}}, except use {{c|eth1}} as your physical link, specify a non-routable IPv4 address in {{f|/etc/conf.d/net.brlan}}, and do not specify a {{c|gateway}} at all; a sketch follows below. And of course, plug your physical {{c|eth1}} interface into a private LAN switch.

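For example, a minimal {{c|net.brlan}} setup might look like this, assuming the example RFC 1918 address {{c|10.0.1.2/24}} -- substitute your own private addressing:

{{console|body=
chroot # ##i##cd /etc/init.d
chroot # ##i##ln -s netif.tmpl net.brlan
chroot # ##i##ln -s netif.tmpl net.eth1
chroot # ##i##rc-update add net.brlan default
}}

{{file|name=/etc/conf.d/net.brlan|body=
template=bridge
ipaddr="10.0.1.2/24"
slaves=net.eth1
}}
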
'''Congratulations!''' You've now completed all the install steps that differ from a standard installation. Continue with any official install steps that were not covered above. Remember to set a root password, reboot, and then continue setup below.

=== Recommended Ebuilds ===

The following ebuilds are recommended as part of a Funtoo server deployment. First, {{Package|sys-apps/haveged}} is recommended. For what purpose? Well, the Linux kernel maintains its own internal entropy (randomness) source, which is an essential component for encryption. This entropy source is kept viable by injecting it with a lot of random timing information from user input -- but on a headless server, this entropy injection doesn't happen nearly as much as it needs to. In addition, we are going to potentially be running hundreds of OpenSSH daemons and other entropy-hungry apps. The solution is to run [http://www.issihosts.com/haveged/ haveged], which will boost the available entropy on our headless server:

{{console|body=
# ##i##emerge -av sys-apps/haveged
# ##i##rc-update add haveged default
}}

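To verify that haveged is doing its job, you can check the kernel's available entropy estimate before and after starting the service; a headless server without haveged will often hover at a few hundred bits, while haveged should keep it comfortably higher:

{{console|body=
# ##i##cat /proc/sys/kernel/random/entropy_avail
}}
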
Mcelog is essential for detecting ECC memory failure conditions. Any such conditions will be logged to {{f|/var/log/mcelog}}:

{{console|body=
# ##i##emerge -av app-admin/mcelog
# ##i##rc-update add mcelog default
}}

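Once the service is running, you can periodically review the log for recorded machine-check events -- an empty log is good news:

{{console|body=
# ##i##tail /var/log/mcelog
}}
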
Smartmontools should be configured to monitor all of your disks, so that signs of impending disk failure are caught pre-emptively:

{{console|body=
# ##i##emerge -av sys-apps/smartmontools
# ##i##rc-update add smartd default
}}

Ensure lines similar to the following appear in your {{f|/etc/smartd.conf}}:

{{file|name=/etc/smartd.conf|body=
# -M test also ensures that a test alert email is sent when smartd is started or restarted, in addition to regular monitoring
DEVICESCAN -M test -m me@email.com
# Remember to put a valid email address, above ^^
}}

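You can also run a one-off health check by hand; for example, to query the overall SMART health status and the full attribute report for the first disk (substitute your actual device node):

{{console|body=
# ##i##smartctl -H /dev/sda
# ##i##smartctl -a /dev/sda
}}
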
[[Category:Official Documentation]]
[[Category:Compute Initiative]]