By Jeff Cleverley, Alibaba Cloud Tech Share Author
In this series of tutorials, we will set up a horizontally scalable server cluster suitable for high traffic web applications and enterprise business sites. It will consist of 3 Web Application Servers and 1 Load Balancing Server. Although we will be setting up and installing WordPress on the cluster, the cluster configuration detailed here is suitable for almost any PHP-based web application. Each server will be running a LEMP Stack (Linux, Nginx, MySQL, PHP).
To complete this tutorial, you will need to have completed the first tutorial in the series. In the first tutorial, we provisioned 3 node servers and a server for load balancing. On the node servers we configured database and web application filesystem replication. We used Percona XtraDB Cluster as a drop-in replacement for MySQL to provide real-time database synchronization between the servers, and we set up a GlusterFS distributed filesystem for web application file replication and synchronization between servers.
In this tutorial we will complete the installation of our LEMP stack by installing PHP7 and Nginx. We will then configure Nginx on each of our nodes and on our Load Balancer, and issue a Let's Encrypt SSL certificate on the Load Balancer for our domain, before finally installing WordPress to work across the distributed cluster.
By the end of this tutorial we will have the following Cluster Architecture:
<Equally balanced three Node Server Cluster with Load Balancer>
In the final tutorial we will look at more advanced cluster architecture configurations involving Nginx caching, creating specialized nodes in the load balancer configuration for administration and for public site access, and finally hardening our database cluster and distributed filesystem.
Throughout the series, I will be using the root user. If you are using your superuser instead, please remember to add the sudo command before commands where necessary. I will also be using a test domain, yet-another-example.com; remember to replace this with your own domain when issuing commands.
In the commands I will also be using my servers' private and public IP addresses; please remember to use your own when following along.
As this tutorial directly follows the first, the sequence of steps is numbered accordingly. Steps 1 to 3 are in the first tutorial. This tutorial begins at Step 4.
Step 4: Install Nginx and PHP
Install Nginx on each node and the load balancer
On every node run the following command to install Nginx:
# apt-get install nginx
Now log in to your load balancer:
$ ssh root@load_balancers_ip_address
Then install Nginx on your load balancer too:
# apt-get update
# apt-get install nginx
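If you want to confirm the installation before moving on, an optional sanity check is to print the Nginx version and service status (standard Nginx and Ubuntu commands; the exact version reported may differ):
# nginx -v
# service nginx status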
Install PHP on each node
On each node, install PHP and the most common packages required to run WordPress:
# apt-get install php-fpm php-mysql
# apt-get install php7.0-curl php7.0-gd php7.0-intl php7.0-mysql php-memcached php7.0-mbstring php7.0-zip php7.0-xml php7.0-mcrypt
# apt-get install unzip
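Before moving on, it can be worth confirming on each node that PHP installed correctly and that the PHP-FPM service is running. This is just an optional sanity check; the version string you see may differ slightly:
# php -v
# service php7.0-fpm status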
Step 5: Download WordPress files
Since we have all our web application root directories mounted as part of the Gluster volume, we only need to install the WordPress files on one node and they will be replicated across the entire cluster.
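If you want to double-check that the Gluster volume from the first tutorial is still mounted before downloading anything, you can inspect the mount point on any node. I am assuming the volume is mounted at the web root we created previously; adjust the path if yours differs:
# df -hT /var/www/yet-another-example.com
You should see a filesystem of type fuse.glusterfs listed for that path.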
Since it is always useful to have WP-CLI available on a system, I will install it and use WP-CLI commands to download the latest version of WordPress into the mounted directory.
Install WP-CLI
On node1 run the following commands to install WP-CLI.
Download the PHP Archive:
# curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
Make it executable and move it into your ‘PATH’:
# chmod +x wp-cli.phar
# mv wp-cli.phar /usr/local/bin/wp
Test it is working:
# wp --info
You should now see an output in your terminal showing details of your WP-CLI installation:
<Install WP-CLI and test it is working>
If you want to have WP-CLI available on every node, you can repeat the above on each of them. By the end of this series, node1 will be set up as the administration node, so it is only really important for me to have WP-CLI set up on this node.
Download WordPress files
On node1, change directory into the mounted directory that will be used as your web application root directory, and download the WordPress core files.
Remember to use the '--allow-root' flag to run WP-CLI as root. Execute the following commands:
# cd /var/www/yet-another-example.com
# wp core download --locale=en_GB --allow-root
WP-CLI will download all the core files and unzip them into the directory:
<Download WordPress Core files with WP-CLI>
But if you check the ownership of the directory and files with 'ls -l', you will see that there is an ownership problem; we need to change their ownership to the 'www-data' web server user and group.
Do that with:
# chown -R www-data:www-data /var/www/yet-another-example.com
Now if we check the directory and its contents we can see that it has the correct ownership:
<Give ownership of the Web App directory to the Web Server>
<Check Web App Directory Ownership>
On node2 or node3 we can check that the WordPress files have been replicated:
# cd /var/www/yet-another-example.com
# ls
<The WordPress files have been replicated across the Glustervolume>
Step 6: Configure Nginx
Configure Nginx on each node to serve the WordPress site
On each node, create a Virtual Host Nginx configuration file for the WordPress web application:
# nano /etc/nginx/sites-available/yet-another-example.com
Configure the file as follows:
server {
    listen 80;
    listen [::]:80;

    root /var/www/yet-another-example.com;
    index index.php index.htm index.html;

    server_name _;

    access_log /var/log/nginx/yetanotherexample_access.log;
    error_log /var/log/nginx/yetanotherexample_error.log;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; allow all; }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
<Web Application’s (WordPress) Virtual Host Nginx Configuration file>
Save and close the file, then symlink it into the /etc/nginx/sites-enabled/ directory:
# ln -s /etc/nginx/sites-available/yet-another-example.com /etc/nginx/sites-enabled
If you were to change directory into the 'sites-enabled' directory and list its contents, you would see this configuration file's symlink:
<Symlink the Web Applications Configuration file into site-enabled>
Since we have made changes to the Nginx configuration, we should check the files for syntax errors:
# nginx -t
You may see a warning about a conflicting server name '_'. This is because the configuration files we created are domain-name independent and use '_' as the server_name.
Don't worry about that warning; restart Nginx:
# service nginx restart
<Ignore the Nginx syntax warning and reload Nginx>
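At this point each node should be serving the (as yet unconfigured) WordPress files over plain HTTP. As an optional check, you can request the response headers from one node using its private IP address. The IP below is a placeholder for one of your node's private IPs, and you may need to install curl first with apt-get if it is not already present:
# curl -I http://node_1_private_ip/
Since WordPress has no wp-config.php yet, you will most likely see a redirect to its setup page rather than a finished site; any HTTP response confirms that Nginx and PHP-FPM are working together on that node.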
Configure Nginx on the Load Balancer
So that we can use the Let's Encrypt Certbot with its Nginx plugin, we need to create a Virtual Host configuration file for the web application on the Load Balancer.
In the previous section, we created configuration files on the nodes that had 'root' directives but used the catch-all '_' instead of a real 'server_name'.
On the Load Balancer our Virtual Host configuration file will be the opposite: it will have a proper 'server_name' directive, but no 'root' directive.
Create and open the Nginx Virtual Host configuration file we need:
# nano /etc/nginx/sites-available/yet-another-example.com
Configure the file as follows, replacing the server IP addresses with the private IP addresses of your node servers:
upstream clusternodes {
    ip_hash;
    server 172.20.62.56;
    server 172.20.213.159;
    server 172.20.213.160;
}

server {
    listen 80;

    server_name yet-another-example.com www.yet-another-example.com;

    location / {
        proxy_pass http://clusternodes;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
<The Load Balancers Virtual Host Nginx Configuration>
Save and close the file, then Symlink it into the ‘/etc/nginx/sites-enabled/’ directory:
# ln -s /etc/nginx/sites-available/yet-another-example.com /etc/nginx/sites-enabled
Now delete the ‘default’ virtual host from the ‘/etc/nginx/sites-enabled/’ directory:
# rm /etc/nginx/sites-enabled/default
Now as we have been making changes to Nginx, we should always check our syntax before restarting the service:
# nginx -t
# service nginx restart
<Symlink your configuration file and check the Nginx Syntax>
This has now configured Nginx to serve our site. The Load Balancer will listen on HTTP port 80 and proxy traffic to the upstream 'clusternodes' group.
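Before adding SSL, you can optionally verify from the Load Balancer itself that requests are being proxied to the nodes. The Host header below matches the server_name we configured; install curl with apt-get if it is not already present:
# curl -I -H "Host: yet-another-example.com" http://127.0.0.1/
Getting a response back, most likely a redirect to the WordPress setup page at this stage, shows that the 'clusternodes' upstream group is reachable.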
We don't want to be serving our web application over plain, unencrypted HTTP though, so we will fix that next.
Step 7: Install Let’s Encrypt SSL on the Load Balancer
Install Certbot
On the Load Balancer, install the package that will allow us to add external package repositories to the 'apt' package manager:
# apt-get install -y software-properties-common
Then add the Let's Encrypt external package repository for Certbot:
# add-apt-repository ppa:certbot/certbot
Now you can install ‘certbot’:
# apt-get update
# apt-get install python-certbot-nginx
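You can confirm the Certbot installation before requesting a certificate (an optional check; the exact version reported will depend on when you install it):
# certbot --version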
Implement an SSL with Certbot
Normally we would now just install our certificate with the following command:
# certbot --nginx -d domain.com -d www.domain.com
However, a security issue reported on 21st January 2018 means this command has been temporarily disabled. I am sure the situation will soon be remedied, but I'm including the workaround instructions below.
For now, we need to issue a slightly longer command that temporarily stops the Nginx server while the certificate is being obtained, and then restarts it again afterwards. Do so with the following command:
# certbot --authenticator standalone --installer nginx -d yet-another-example.com -d www.yet-another-example.com --pre-hook "service nginx stop" --post-hook "service nginx start"
Your certificate will be issued after you submit your email address and agree to the terms of service, and you will need to choose whether to implement a redirect on the server to only allow HTTPS:
<Issue your Let’s Encrypt SSL on the Load Balancer>
Now if you reopen your Load Balancer Nginx Virtual Host Configuration file for the domain again:
# nano /etc/nginx/sites-available/yet-another-example.com
You will see that Certbot has configured the server blocks automagically for you to serve the sites over HTTPS via port 443:
<Certbot Automagically configures your Nginx Virtual Host File>
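Let's Encrypt certificates are only valid for 90 days, so it is worth checking that automatic renewal will work. A dry run is a safe, optional way to test this; the pre and post hooks we passed when issuing the certificate should be recorded in the renewal configuration, so Nginx should be stopped and started during renewal as well:
# certbot renew --dry-run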
Step 8: Install & Configure WordPress
Create the WordPress database and user
We only need to do this on one node. On node1, connect to MySQL:
# mysql -u root -p
Create the WordPress database and user, and grant the necessary privileges:
CREATE DATABASE wordpress_cluster DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
GRANT ALL ON wordpress_cluster.* TO 'new_user'@'localhost' IDENTIFIED BY 'new_users_password';
Then flush privileges and exit:
FLUSH PRIVILEGES;
EXIT;
Your terminal should look like:
<Create a WordPress Database and User>
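Because the databases are replicated by the Percona XtraDB Cluster we configured in the first tutorial, the new database and user should already exist on the other nodes. As an optional check, you can log in to MySQL on node2 or node3 and list the databases and grants (the names below match the ones created above; use your own if you changed them):
# mysql -u root -p -e "SHOW DATABASES;"
# mysql -u root -p -e "SHOW GRANTS FOR 'new_user'@'localhost';"
You should see wordpress_cluster listed on every node.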
Configure WordPress
Visit your domain to go through the ‘famous’ 5 minute WordPress installation procedure:
https://yet-another-example.com
You will notice that, at the moment, none of the CSS files are loading. Don't worry; just complete the first step so that the wp-config.php file is created.
Enter your database name, database user, password, database host, and table prefix, then submit; the installer will create the wp-config.php file we need.
<Enter the Database details the WordPress installer requires>
On node1 open the newly created wp-config.php file:
# nano /var/www/yet-another-example.com/wp-config.php
Towards the end of the configuration file, just before the require_once line that loads wp-settings.php, add the following few lines:
/* SSL Settings */
define( 'FORCE_SSL_ADMIN', true );
define( 'WP_HOME', 'https://yet-another-example.com' );
define( 'WP_SITEURL', 'https://yet-another-example.com' );
if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] ) && strpos( $_SERVER['HTTP_X_FORWARDED_PROTO'], 'https' ) !== false ) {
    $_SERVER['HTTPS'] = 'on';
}

/* Disable WP-Cron */
define( 'DISABLE_WP_CRON', true );
In your terminal, it will look like so:
<Add SSL settings and Disable WP_Cron in the WordPress Config file>
The SSL settings will fix the CSS problems we have been having. They force admin access over SSL, set the site home and site URL to be served over HTTPS, and ensure that when our Load Balancer forwards traffic to the nodes over plain HTTP, WordPress still knows the original request was made over HTTPS and serves its static files accordingly.
Notice we also disabled WP-Cron, and for good reason. WordPress cron jobs are not true cron jobs; they rely on visits to the site to trigger scheduled tasks, which is both unreliable and wasteful on a busy site.
Instead, we will schedule the WordPress cron tasks using the Ubuntu system crontab, and we will run them only on the administration node, node1.
On node1, execute:
# crontab -e
Now add an extra line at the bottom of your crontab. Note that cron runs commands with /bin/sh, so we use portable output redirection and tell wget to discard the downloaded file rather than saving a copy every minute:
* * * * * wget -O /dev/null "http://yourdomain.com:9443/wp-cron.php?doing_cron" > /dev/null 2>&1
Your crontab will look similar to this:
<Create a system cron job to run WordPress scheduled tasks>
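You can confirm the new entry was saved with:
# crontab -l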
Now revisit your URL:
https://yet-another-example.com
And you can complete the installation process with CSS intact:
<Complete the installation process>
<WordPress site on a Cluster>
Success!
Well done! We have completed the installation of our LEMP stack by installing PHP7 and Nginx. We have configured the Nginx Virtual Hosts on each of our node servers, set up the Load Balancer's Nginx configuration and its SSL certificate, and installed WordPress.
We now have a fully working, equally load-balanced server cluster running WordPress and served over HTTPS, following the cluster architecture illustrated at the start of this tutorial.
In the next and final tutorial we will reconfigure this cluster architecture so that Node1 is reserved for Web Application Administration duties while Nodes 2 and 3 are used for site traffic.
The final cluster architecture we will build is illustrated below:
<Three Node Cluster with Load Balancer redirecting Admin traffic and Site traffic>
In the final tutorial we will also add Nginx FastCGI caching to the mix, and harden our database cluster and distributed filesystem.
See you then.