Nginx


nginx (pronounced "engine-x") is a Web and reverse proxy server for the HTTP, SMTP, POP3 and IMAP protocols. It focuses on high concurrency, high performance and low memory usage. nginx delivers static content quickly and with efficient use of system resources, while dynamic content can be served using FastCGI or SCGI handlers for scripts, uWSGI application servers, or the Phusion Passenger module (currently broken in Funtoo's nginx, but working under www-servers/tengine). It can also act as a very capable software load balancer. nginx uses an asynchronous, event-driven approach to handling requests, which provides more predictable performance under load than the Apache HTTP server's threaded or process-oriented model. nginx is released under a BSD-like license and runs on Unix, Linux, BSD variants, Mac OS X, Solaris, AIX and Microsoft Windows.

Emerging nginx

Prior to emerging nginx, be sure to do a world update, particularly if you have a new Funtoo system. This will force openssl to rebuild without the bindist option, which is necessary for nginx to work properly with SSL. nginx doesn't have dependency info to enforce this currently, so it is a manual process:

root # emerge -auDN @world

Once openssl is updated and rebuilt, you are ready to install nginx:

root # emerge -av nginx

Configuring with SSL

Since SSL is commonplace now, let's look at how to configure a site with SSL. Below is an ideal SSL configuration that should give you an A+ SSL rating under most tests:

   /etc/nginx/sites-available/www.mydomain.com - ideal SSL configuration
server {
    listen 80;
    server_name www.mydomain.com;
    access_log off;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 default ssl;
    server_name www.mydomain.com;
    ssl on;
    # we will create these with certbot:
    ssl_certificate /etc/letsencrypt/live/www.mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.mydomain.com/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off; # Requires nginx >= 1.5.9
    ssl_stapling on; # Requires nginx >= 1.3.7
    ssl_stapling_verify on; # Requires nginx >= 1.3.7
    # this tells nginx to use google for DNS:
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Content-Type-Options nosniff;

    # we will generate this file later
    ssl_dhparam /etc/nginx/dhparams.pem;

    root /home/myuser/public_html;
    index index.html index.php;
    access_log      /var/log/nginx/www.mydomain.com.access_log main;
    error_log       /var/log/nginx/www.mydomain.com.error_log info;

}

To generate dhparams.pem, required for the above nginx configuration, use the following commands:

root # cd /etc/nginx
root # openssl dhparam -out dhparams.pem 2048

To generate SSL certificates, we are going to use Let's Encrypt and certbot. To install certbot, do:

root # emerge certbot

Once installed, we will run certbot certonly to start the process of creating the certificate (an example invocation is shown after the list below). You must make sure that the following are true:

  1. nginx is not running
  2. you have updated mydomain.com DNS so that www.mydomain.com points to your server's IP address
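
With those conditions met, the invocation looks something like this (a sketch; --standalone uses certbot's built-in Web server, and www.mydomain.com is the example domain used throughout this page):

root # certbot certonly --standalone -d www.mydomain.com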

You will want to use certbot's built-in Web server (the standalone authenticator), since nginx is stopped while the certificate is issued. We will also want to attempt a renewal of our certificate at least once a day. To do this, emerge fcron, which will allow us to run a renewal script every 24 hours:

root # emerge fcron

Now create a script in /root called letsencrypt.sh with the following contents:

   /root/letsencrypt.sh
#!/bin/bash
/etc/init.d/nginx stop
/usr/bin/certbot renew
/etc/init.d/nginx start

This script stops nginx, runs certbot renew (which starts a temporary Web server and renews our SSL certificate if it is close to expiring), and then starts nginx again. The whole process typically takes only a few seconds, so it does not have a significant impact on your site's uptime, but it should still be scheduled to run during off hours. So, let's perform the following steps to make the script executable and then schedule it to run at 3 AM:

root # rc-update add fcron default
root # rc-update add nginx default
root # rc
root # chmod +x /root/letsencrypt.sh
root # fcrontab -e

This will start an editor. Now add the following line to cron:

   
0 3 * * * /root/letsencrypt.sh

Save the file. Our script is now scheduled to run at 3 AM.

Let's enable our site:

root # cd /etc/nginx/sites-enabled
root # ln -s ../sites-available/www.mydomain.com www.mydomain.com
root # rm localhost
root # /etc/init.d/nginx restart
root # su myuser
user $ cd
user $ mkdir public_html
user $  echo "hello world!" > public_html/index.html
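
To verify that the site is being served, you can do a quick check from the shell (a sketch, assuming curl is installed and DNS already points at this server):

user $ curl -I https://www.mydomain.com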

Advanced Topics

USE Expanded flags

You can select which nginx HTTP modules are built by setting the NGINX_MODULES_HTTP variable in /etc/portage/make.conf, for example NGINX_MODULES_HTTP="fastcgi gzip proxy ssl".

nginx USE flags go into /etc/portage/package.use (or a file such as /etc/portage/package.use/nginx), while the HTTP and MAIL module selections go into /etc/portage/make.conf as NGINX_MODULES_HTTP and NGINX_MODULES_MAIL. Since you will usually serve not only static HTML files but also PHP scripts, you should also install PHP with the fpm USE flag enabled, plus xcache for opcode caching, which makes your nginx setup considerably faster (see the sketch at the end of this section). For xcache you need to set PHP_TARGETS="php5-3" in /etc/portage/make.conf.

Example:

root # echo "www-servers/nginx USE-FLAG-List" >> /etc/portage/package.use/nginx

This configuration strips nginx down to SSL termination and load balancing: modules for rendering HTML, directory browsing and similar features are left out, while gzip and spdy are enabled for content delivery, since we are going to have tengine do the heavy lifting.

   /etc/portage/package.use/nginx - ssl/load balance only use flags
www-servers/nginx threads
   /etc/portage/make.conf - ssl/load balance only use flags
NGINX_MODULES_HTTP="access browser charset empty_gif fastcgi gzip limit_conn limit_req map proxy realip referer scgi split_clients secure_link spdy ssi ssl upstream_hash upstream_ip_hash upstream_keepalive upstream_least_conn userid uwsgi"
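
As mentioned earlier in this section, you will usually want PHP with FPM and xcache alongside nginx. A minimal sketch of the corresponding Portage setup follows; the atoms (dev-lang/php, dev-php/xcache) and the fpm USE flag are the usual Gentoo/Funtoo names, but verify them against your own tree:

   /etc/portage/package.use/php - hypothetical example: build PHP with the FastCGI process manager
dev-lang/php fpm

root # emerge -av dev-lang/php dev-php/xcache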

proxy_pass

This configuration proxies requests to other web servers. In this example we have WEBrick running on port 3000 behind nginx, producing the live link http://localhost/rails

   /etc/nginx/sites-available/localhost - rails or python configurations
server {
    ...
    location /rails/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:3000/;   # for Ruby on Rails WEBrick
        #proxy_pass http://127.0.0.1:8000/;  # for python -m http.server
        #proxy_pass http://127.0.0.1:8080/;  # for other web servers such as apache, lighttpd, tengine, cherokee, etc.
    }
    ...
}
Load Balancing
   /etc/nginx/sites-available/localhost - set up a backend node pool, using host3 three times as much as the others. We also set X-Forwarded-* headers so that backend servers see the external IP addresses of clients, not localhost or the IP of the load balancer.
upstream backend_nodes {
    server host1.example.com;
    server host2.example.com;
    server host3.example.com weight=3;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_nodes;
    }
}
Passing Requests by Socket
   /etc/nginx/sites-available/localhost - Make www-servers/tengine do the html rendering work.
upstream backend_nodes {
    server unix:/var/run/tengine.sock;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_nodes;
    }
}
Proxy Pass Buffering
   /etc/nginx/sites-available/localhost - buffer proxied responses so that slow client connections release the backend node connection quickly.
...
proxy_buffering on;
proxy_buffer_size 10k;
proxy_buffers 24 16k;
proxy_busy_buffers_size 16k;
proxy_max_temp_file_size 2048m;
proxy_temp_file_write_size 32k;

location / {
    proxy_pass http://backend_nodes;
}
...
Proxy Pass Caching
   /etc/nginx/sites-available/localhost - proxy pass cache configuration
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=proxycache:8m max_size=50m;
proxy_cache_key "$scheme$request_method$host$request_uri$is_args$args";
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 10m;
server {

location / {
    proxy_cache proxycache;
    proxy_cache_bypass $http_cache_control;
    add_header X-Proxy-Cache $upstream_cache_status;

    proxy_pass http://backend_nodes;
}
}

php-fpm

nginx does not natively support PHP, so we delegate that responsibility to php-fpm.

   /etc/nginx/sites-available/localhost - fpm configuration
server {
    ...
    index index.php index.cgi index.htm index.html;
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi.conf;
    }
    ...
}
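
For this configuration to work, php-fpm must actually be running and listening on 127.0.0.1:9000. A sketch of enabling and starting it under OpenRC (the init script name is an assumption and may differ depending on your PHP version/slot):

root # rc-update add php-fpm default
root # rc-service php-fpm start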

PHP Caching

   /etc/nginx/sites-available/localhost - fpm cache configuration
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
    ...
    location ~ \.php$ {
        ...
        fastcgi_cache MYAPP;
        fastcgi_cache_valid 200 60m;
        ...
    }
}

See https://www.digitalocean.com/community/tutorials/how-to-setup-fastcgi-caching-with-nginx-on-your-vps for more information on PHP caching.

Location Processing Order

One often confusing aspect of nginx configuration is the order in which it processes location directives. This section is intended to clarify the confusion and help you to write secure nginx location directives.

Two basic types of Location directives

There are two basic types of location directives. The first is called a "conventional string", and looks something like this:

location /foo { deny all; }

The second basic type of location directive is a regex, or regular expression block. In its most basic form, it looks like this, with a "~" and then a regular expression that is matched against the request path. "^" can be used to match the beginning of the request path, and "$" can be used to match the end of the request path. If you need to match a ".", you must escape it as "\." as per regular expression matching rules:

location ~ \.php$ { blah; }

The basic algorithm

Nginx uses a special algorithm to find the proper location string to match the incoming request. The basic concept to remember is that conventional string directives are placed in one "bucket", and then regular expression strings are placed in another "bucket". Nginx will use the first regular expression match that it finds, when scanning the file from top to bottom. If no matching regular expression is found, nginx will look in its "conventional string" bucket, and try to find a match. In the case of the conventional string matches, the most specific match will be used, in other words, the one will be used that matches the greatest number of characters in the request path.
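
As an illustration of this algorithm, consider the following two blocks (a minimal sketch; the paths are arbitrary):

location /images/ { ... }
location ~ \.png$ { ... }

A request for /images/logo.png matches both, but the regex block is used, because nginx checks the regex bucket first and only falls back to the best conventional string match when no regex matches.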

This is the foundation for nginx location processing, so always use these rules as a starting point for understanding location matching order. Nginx then provides various sub-types of location directives which modify this default behavior in a number of ways. This will be covered in the next section.

Advanced Location Processing

Always use the location processing logic described in the previous section as the foundation for understanding how nginx finds a matching location directive, and then once you are comfortable with how this works, read about these more advanced directives and understand how they fit into nginx's overall logic.

= (equals) Location

One advanced location directive is the "=" location, which can be considered a variant of a "conventional string" directive. "=" directives are searched before all other directives, and if a match is found, the corresponding location block is used. A "=" location must match the requested path exactly and completely. For example, the following location block will match only the request /foo/bar, but not /foo/bar/oni.html:

location = /foo/bar { deny all; }

~* (case-insensitive regex) Location

A "~*" regex match is just like a regular "~" regex match, except matches will be performed in a case-insensitive manner. "~*" location directives, being regex directives, fall into the regex "bucket" and are processed along other regex directives. This means that they are processed in the order they appear in your configuration file and the first match will be used -- assuming no "=" directives match.

^~ (short-circuit conventional string) Location

You may think that a "^~" location is a regex location, but it is not. It is a variant of a conventional string location. Recall that nginx searches all conventional string locations to find the most specific match, and would then normally go on to check the regex locations as well. The "^~" modifier short-circuits that second step: if the most specific conventional string match is a "^~" location, nginx applies it immediately and skips the regular expression checks entirely.
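
For example (a sketch), with the following two blocks a request for /static/app.php is handled entirely by the "^~" block as a static file, and the php regex is never consulted:

location ^~ /static/ { root /var/www; }
location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; }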

Ebuild Update Protocol

To work on a new version of the ebuild, perform the following steps.

First, temporarily set the following settings in /etc/portage/make.conf:

NGINX_MODULES_HTTP="*"
NGINX_MODULES_MAIL="*"

This will enable all available modules for nginx.

Now, create a new version of the ebuild in your overlay, and look at all the modules listed at the top of the ebuild. Visit the URLs in the comments above each one and ensure that the latest versions of each are included. Now run ebuild nginx-x.y.ebuild clean install to ensure that all modules patch/build properly. Basic build testing is now complete.
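
For example, from the directory containing the new ebuild in your overlay (keeping the placeholder version number from the text):

root # ebuild nginx-x.y.ebuild clean install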

Troubleshooting

502

A 502 Bad Gateway error means that nginx is running but php-fpm is not -- either php-fpm was never started or it has crashed.
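
A quick way to check and recover (a sketch, assuming OpenRC; the php-fpm init script name may differ depending on your PHP version/slot):

root # rc-service php-fpm status
root # rc-service php-fpm restart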