Building a server for static web content with Nginx
A complete guide to deploying a server, serving static files on the web with Nginx, and automating the deployment flow with GitHub Actions.
Sometimes we don’t want to go to Amazon S3, Heroku or another platform to deploy a simple static website; sometimes we just want a plain VM instance where we can do whatever we want. This guide is about exactly that, and how to proceed with the setup.
What are we deploying?
We have a Hugo blog that generates the static content we want to serve over the web. A hello world page is enough to follow along: no backend server logic, just plain HTML/CSS.
Requirements
- The Go language, which you will have to download and install (Hugo is built on it)
- Hugo framework for static site generation
- Git version control
Follow the Hugo quickstart guide, and you should have a starter theme with something to show. If you haven’t been through it yet, a minimal sketch of those steps looks something like this (the site name and theme are just examples):
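# Create a new site and add an example theme (names are placeholders)
hugo new site myblog
cd myblog
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
# On older Hugo versions the config file may be config.toml instead
echo "theme = 'ananke'" >> hugo.toml
hugo new posts/hello-world.md
Then generate the final build, which we will use later for deploying: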
hugo --minify
This will generate the website in the public/ directory. You can preview it locally with the hugo server command.
Setting up the VM instance
We will use an Amazon EC2 instance and select the following:
- A t2.micro instance (1 vCPU, 1GB RAM). It’s usually best to pick an Ubuntu image: it has the most tooling preinstalled, the biggest community, and much less chance of something not working or going wrong
- Name it test-web-server and check the checkboxes to allow HTTP and HTTPS traffic from the internet
- Configure storage to something between 20GB and 30GB so we have some space
- Create a new login key pair like test-web-key and use the default options
- Launch the instance
Don’t forget to download test-web-key and keep it safe; we will use it to connect to the instance via ssh.
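If you prefer the command line over the console, the same launch can be sketched with the AWS CLI; the AMI and security-group IDs below are placeholders you would look up for your region:
# Create the key pair and save the private key locally
aws ec2 create-key-pair --key-name test-web-key \
  --query 'KeyMaterial' --output text > ~/Downloads/test-web-key.pem
# Launch a t2.micro Ubuntu instance with 30GB of storage;
# sg-xxxxxxxx must allow inbound HTTP, HTTPS and SSH
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 \
  --instance-type t2.micro --key-name test-web-key \
  --security-group-ids sg-xxxxxxxx \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":30}}]'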
# Let's move the key to the .ssh directory first
mv ~/Downloads/test-web-key.pem ~/.ssh
# Let's secure the key access
chmod 400 ~/.ssh/test-web-key.pem
# Now we can connect to our instance
ssh -i ~/.ssh/test-web-key.pem ubuntu@your-public-ip
Then, once connected, we can run the initial update:
sudo apt update
That’s it, our new instance is ready to be used!
If you find yourself connecting to the server multiple times from the same terminal window, or running many ssh-based commands such as copying files to the server, you can use an ssh agent so you don’t have to pass the key every time.
eval `ssh-agent`
ssh-add ~/.ssh/test-web-key.pem
# ... executing any command against the server
Nginx server
We will need to install Nginx and configure it to serve a basic html page for starters.
Setup
# Install nginx
sudo apt install -y nginx
# Create a directory to serve our site from; index.html and everything required will live here
sudo mkdir /var/www/mysite
Now you should be able to see the Nginx hello world page on your public IP. It’s also good practice to enable Nginx on system startup, so it comes back after a reboot:
sudo systemctl enable nginx
And the command to start nginx if it isn’t running:
sudo systemctl start nginx
Now that we have verified Nginx is able to deliver content, we can clean up the default site and its config:
sudo rm /etc/nginx/sites-available/default
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t
sudo nginx -s reload
In case you need to check the nginx processes and their users:
ps aux | grep nginx
Output
root 22041 0.0 0.2 4808 684 ? Ss 08:14 0:00 nginx: master process /usr/sbin/nginx
www-data 22042 0.0 0.4 4976 1044 ? S 08:14 0:00 nginx: worker process
root 22044 0.0 0.3 3344 812 pts/1 S+ 08:14 0:00 grep --color=auto nginx
Website configuration
Now that Nginx is ready on our instance, we need to prepare it for our website. Since Nginx can host multiple sites based on domains and/or ports, we add configuration specific to our site as mysite.conf.
sudo vim /etc/nginx/conf.d/mysite.conf
Then paste the following configuration
server {
    # Once you set your domain
    # server_name mysite.com;
    root /var/www/mysite;
    autoindex on;
    access_log off;
    index index.html;
    proxy_intercept_errors on;
    error_page 404 /404.html;
    # Redirect extensionless URLs to their trailing-slash form
    rewrite ^([^.]*[^/])$ $1/ permanent;
    location = /404.html {
        internal;
    }
    location / {
        # Serve the file or directory if it exists, otherwise return 404
        try_files $uri $uri/ =404;
    }
}
Note that whenever you change or add Nginx configuration you have to reload it so it picks up the changes:
# Validates configuration
sudo nginx -t
# Reloads
sudo nginx -s reload
Now let’s copy the site we built with our hugo command to our web server:
# From the project directory we first copy to the ubuntu home directory, because writing to /var/www requires sudo
scp -r ./public ubuntu@your-public-ip:/home/ubuntu
Then, on the server, move the files to the correct directory:
sudo mv public/* /var/www/mysite/
Now if you go to your website’s IP you will see the home page, and if you append a nonexistent path like /random to the URL, you will be shown a proper 404 page. 😎
A really powerful optimization you can add is caching, by adding this rule to mysite.conf:
location ~* \.(jpg|jpeg|png|gif|ico|webp|css)$ {
    # Combined into one directive; two add_header calls for the same
    # header would emit the Cache-Control header twice
    add_header Cache-Control "public, no-transform, max-age=3600";
}
Files ending with those extensions will now be served with a cache lifetime of 3600 seconds (one hour).
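You can verify the header is being set with curl; the image path here is just an example from your own site:
curl -I http://your-public-ip/images/logo.png
# Look for: Cache-Control: public, no-transform, max-age=3600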
Done! Or are we?
Technically we are finished, no? We have the server up on our VM instance serving our site from the public IP, and we can copy the files over whenever we want to update the website from our machine.
Well, if you have a keen eye, you’ve noticed a couple of problems:
- We don’t have TLS enabled, and we are not using a domain like mysite.com at all, just a random IP that we got
- We have to do all of this manual work every time
- What if we are on a different machine and want to deploy? Do we want duplicate .pem keys for access?
Let’s see about that in the next parts.
Domain, certificate and TLS
Since we started with Amazon EC2 we might as well continue and use Route 53, though you can use whichever other provider you want.
I won’t go into detail about creating a hosted zone there, it’s quite easy; the one part I will note as important is: you need to create a record! For your site to be reachable via your domain, you need to create an A record pointing mysite.com to your instance’s public IP. Once you set it, wait a little before trying to open the website URL; it should become reachable relatively fast.
To get information about the domain you can use the dig command:
# Get full info
dig mysite.com
# Short info
dig +short mysite.com
Setting the certificate up on your server is an easy process, as Certbot will do all the work for us. Before starting with it, ensure that you have added your web domain name to your mysite.conf configuration file:
# This part should not be commented
server_name mysite.com;
Since we are using Ubuntu and Nginx, just follow the official Certbot instructions for that combination, and note that Ubuntu images usually already have snapd installed.
After finishing the Certbot instructions, it should have updated and reloaded the Nginx config for us. You can check the final configuration via:
cat /etc/nginx/conf.d/mysite.conf
It should look something like this
server {
    server_name mysite.com;
    root /var/www/mysite;
    autoindex on;
    access_log off;
    index index.html;
    proxy_intercept_errors on;
    error_page 404 /404.html;
    rewrite ^([^.]*[^/])$ $1/ permanent;
    location = /404.html {
        internal;
    }
    location ~* \.(jpg|jpeg|png|gif|ico|webp|css)$ {
        add_header Cache-Control "public, no-transform, max-age=3600";
    }
    location / {
        try_files $uri $uri/ =404;
    }
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = mysite.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name mysite.com;
    listen 80;
    listen [::]:80;
    return 404; # managed by Certbot
}
In the last server block you can see that Certbot also set up the redirect from HTTP to HTTPS.
To confirm that your site is set up properly, visit https://mysite.com/ in your browser and look for the lock icon in the URL bar.
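Besides the browser check, you can inspect the served certificate from the command line; a quick sketch with openssl:
# Print the certificate's subject and validity dates
openssl s_client -connect mysite.com:443 -servername mysite.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates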
Automating deployment
As mentioned, building and deploying manually is fine and dandy if you update rarely, but if you do it often it becomes tedious. For that reason we are going to host our code on GitHub and use GitHub Actions to trigger deployments of our code to the server, just like we did manually in the steps above.
Acceptance criteria
- 🚀 Deploy our changes when we merge a PR (sometimes we just want to update README.md, so the merged PR is the trigger)
- 🕝 Run a daily scheduled trigger for when we have articles scheduled in the future
- 📝 Maintain Nginx conf via repository
First, let’s create a git repository to host the project. I will leave this part to you; overall our whole Hugo project needs to be there, with the public/ directory excluded via .gitignore, and the mysite.conf file added so we can also manage our Nginx site configuration from the repository. Feel free to copy the entire content of mysite.conf from the server to the root of the project.
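A minimal .gitignore for this could look like the following; the resources/ entry is a common Hugo addition, adjust it to your project:
# Build output, regenerated on every deploy
public/
# Hugo's generated asset cache
resources/_gen/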
But before we continue setting up the GitHub Actions workflow, we first need to create a separate user to be used only by our CI/CD system.
Secure bot user
It’s important that we create a new user for our CI/CD runner: if we used the main key we generated at the start and someone obtained it, that person would get complete access to our VM instance. By creating a new user (mysite_cicd_bot) with limited privileges, we limit the impact to the few things needed to run this site.
# On the server (as the ubuntu user)
sudo useradd -m -d /home/mysite_cicd_bot -s /bin/bash mysite_cicd_bot
# Remove password access for the user
sudo passwd -d mysite_cicd_bot
# Create the .ssh directory, generate a key (no passphrase) and authorize its public half
sudo mkdir -p /home/mysite_cicd_bot/.ssh
sudo chmod 700 /home/mysite_cicd_bot/.ssh
sudo ssh-keygen -t rsa -b 4096 -f /home/mysite_cicd_bot/.ssh/mysite_cicd_bot.key -C "Key of CI/CD bot"
sudo sh -c 'cat /home/mysite_cicd_bot/.ssh/mysite_cicd_bot.key.pub >> /home/mysite_cicd_bot/.ssh/authorized_keys'
# Assign ownership and permissions for that user
sudo chown -R mysite_cicd_bot /home/mysite_cicd_bot/.ssh
sudo chmod 600 /home/mysite_cicd_bot/.ssh/mysite_cicd_bot.key
Now we can copy the private key to our local machine and use it to test the ssh connection:
ssh -i ~/.ssh/mysite_cicd_bot.key mysite_cicd_bot@my-server-ip
Perfect 🎉, we got a connection! You can execute pwd to see that you are in the bot’s own home directory, and whoami to confirm the user.
Next, we need to allow the user to reload nginx; currently, if you try to execute the command
sudo nginx -s reload
you will get:
mysite_cicd_bot is not in the sudoers file.
We can give the user that access by executing sudo visudo and adding our user’s privilege at the end of the file:
mysite_cicd_bot ALL=(ALL) NOPASSWD: /usr/sbin/nginx -s reload
Now if you run the above nginx reload command it should show you:
[notice] 41223#41223: signal process started
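You can also list the bot’s sudo privileges from your admin user, to double-check that only the reload command is allowed:
sudo -l -U mysite_cicd_bot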
Change the ownership of the site directory so that our bot can read and write:
sudo chown -R mysite_cicd_bot /var/www/mysite
# Same for nginx configuration
sudo chown -R mysite_cicd_bot /etc/nginx/conf.d/mysite.conf
Log in as the bot user and try to delete the index.html file now.
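If you’d rather not delete real content, a non-destructive write check works just as well:
# As mysite_cicd_bot: create and remove a scratch file in the site directory
touch /var/www/mysite/.write-test && rm /var/www/mysite/.write-test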
Now that everything is set up, we are able to upload and overwrite files and reload the nginx server with the mysite.conf config:
# Stream the public/ directory as a tarball over ssh, extract it on the
# server, sync it into the web root and reload nginx
tar czf - public | ssh -o StrictHostKeyChecking=no mysite_cicd_bot@your-server-ip '
tar xvzf -
rsync -r -P /home/mysite_cicd_bot/public/* /var/www/mysite/
sudo nginx -s reload
'
GitHub workflow
For GitHub Actions we will need to create a .github/workflows directory and add a build-deploy.yaml file inside it:
name: build-and-deploy

on:
  pull_request_target:
    branches:
      - main
    types:
      - closed
  schedule:
    # * is a special character in YAML, so you have to quote this string
    # Every day at 05:35am based on UTC time zone
    - cron: '35 5 * * *'

jobs:
  build-and-deploy:
    # Only trigger on merge of PR or if scheduled event
    if: github.event.pull_request.merged == true || github.event.schedule
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true  # Fetch Hugo themes (true OR recursive)
          fetch-depth: 0    # Fetch all history for .GitInfo and .Lastmod

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: '0.119.0'
          extended: true

      - name: Build
        run: hugo --minify

      - name: Deploy
        env:
          SSH_USER: ${{ vars.SSH_USER }}
          EC2_HOST: ${{ vars.EC2_HOST }}
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        # When deploying the files we pack them into a tar archive to make the
        # transfer drastically faster.
        run: |
          echo "$SSH_PRIVATE_KEY" > key.pem
          chmod 600 key.pem
          eval `ssh-agent`
          ssh-add key.pem
          scp -o StrictHostKeyChecking=no -r mysite.conf ${SSH_USER}@${EC2_HOST}:/etc/nginx/conf.d/mysite.conf
          tar czf - public | ssh -o StrictHostKeyChecking=no ${SSH_USER}@${EC2_HOST} '
          tar xvzf -
          rsync -r -P /home/mysite_cicd_bot/public/* /var/www/mysite/
          sudo nginx -s reload
          '

      - name: Liveness test
        continue-on-error: true  # Optional, will not fail action
        run: |
          curl -I https://mysite.com
Now you have probably noticed that we have some variables here:
Variables
- SSH_USER
- EC2_HOST
Secrets
- SSH_PRIVATE_KEY
These can be set through the repository settings via Settings -> Secrets and variables -> Actions. Variables aren’t that sensitive, so they are visible, while the private key we created before for our CI/CD bot should be hidden for security reasons (you can even hide all of them for more security, but in reality hiding the key is enough).
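If you prefer the GitHub CLI over the web UI (assuming a reasonably recent gh version), the same values can be set from the terminal:
# Plain variables, visible in the repository settings
gh variable set SSH_USER --body "mysite_cicd_bot"
gh variable set EC2_HOST --body "your-public-ip"
# The private key goes in as an encrypted secret
gh secret set SSH_PRIVATE_KEY < ~/.ssh/mysite_cicd_bot.key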
Another thing that is different is the way we upload files: we pack them into an archive before sending them over the network and unpack them on delivery.
tar czf - public | ssh -o StrictHostKeyChecking=no ${SSH_USER}@${EC2_HOST} '
tar xvzf -
'
# Other commands after 'tar xvzf -' are removed for clarity
This avoids sending file by file; with a lot of files that takes quite a while, and streaming one archive reduces the transfer time drastically.
At the end we have a liveness check step:
- name: Liveness test
  continue-on-error: true  # Optional, will not fail action
  run: |
    curl -I https://mysite.com
This isn’t strictly necessary and won’t fail the flow if it fails, but it’s there as a safety check to see whether the server responds with a 2xx status.
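If you would rather have the pipeline fail on a bad response, one option is to drop continue-on-error and let curl exit non-zero on HTTP errors:
# -f fails on HTTP errors, -s silences progress, -S still shows errors, -I sends a HEAD request
curl -fsSI https://mysite.com > /dev/null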
That’s it, we are completely done! 🎉
Now the following should work:
- When PR is merged to main -> deploy ✅
- When code is pushed to main -> deploy ❌
- When set schedule is triggered -> deploy ✅
Bonus PR workflow
If you want, you can add a sanity check test workflow that will check if the site is buildable for PRs by
adding the following file .github/workflows/pr-test.yaml
.
name: build

on:
  # Triggers the workflow when a pull request is opened or reopened
  pull_request:
    types: [opened, reopened]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true  # Fetch Hugo themes (true OR recursive)
          fetch-depth: 0    # Fetch all history for .GitInfo and .Lastmod

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: '0.119.0'
          extended: true

      - name: Build
        run: hugo --minify
Summary
We are now able to host our site on our own server, behind a TLS-secured domain name, with full flexibility to put whatever we want there. Everything is automated, so we don’t need to deal with the server: just merge the code to the branch and have it deployed automatically (with secure CI/CD access) 😎
Another good thing if you are using Hugo: you can set a post’s publish date in the future, let’s say 2 days from today. The scheduled trigger 🕝 will build and deploy every day at the selected time, but Hugo won’t include the post in the build until its date and time have passed! 📅 This means you can create, for example, 4 posts, set their release dates, then sit back and enjoy as they are automatically deployed once those dates pass 🚀
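A minimal sketch of such front matter (the title and date are just examples); by default Hugo skips future-dated pages unless you build with --buildFuture, which is exactly what makes this scheduling trick work:
---
title: "My scheduled post"
date: 2030-01-15T09:00:00+00:00
draft: false
---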