Using queues and Envoyer to manage Statamic sites on load balanced servers
Statamic's flat file approach has a huge number of benefits over database-driven marketing sites, but sometimes you can run into downsides to its content-as-files methodology. Luckily, there's usually a way to make it work (and often it's better than the database alternative).
At my day job, we use Envoyer.io to give us zero-downtime deployments for all of our marketing sites. This means that if something ever goes wrong with a deployment, we can instantly roll back until we figure out what went wrong.
On top of this, our marketing sites are deployed to multiple servers around the world, with a load balancer sending the user to the best server for them.
While this might seem like overkill, we’re a software company that the world’s largest companies rely on, and first impressions count. A broken marketing website – or worse, one that is down – could result in a client not trusting us with their own critical infrastructure.
The flat files and multiple servers conundrum
We are currently in the process of migrating all of our websites over to Statamic from WordPress. A big reason for this is that all of the content is stored as flat files. In my personal experience, whenever a WordPress site goes down or experiences an error, it almost always involves the database in some way.
By using a flat file CMS, we can remove almost all of these headaches, and even benefit from a small performance gain by removing the database from the equation – at least as far as visitors are concerned.
However, there is a slight issue with using a flat file CMS like Statamic on load balanced servers. Someone logging into the control panel on the customer-facing site would only end up changing the content on one of the servers; anyone visiting the site and being directed to a different server by the load balancer wouldn't see those changes.
We could use a cloning script to copy content across servers – and we do currently with the WordPress sites – but there have been occasions where it's caused issues for the marketing teams, so I was keen to avoid that approach.
The solution
To get around this issue, we have an additional server which serves https://cms.[sitename].com and is locked down by IP via firewall rules. This server is pointed at a cms branch, which is our production branch. Any edits to the content are automatically committed and pushed to this branch by Spock. However, because the content is stored in files and only exists in the repo at this point, none of the changes appear on the live sites.
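Locking the CMS server down by IP doesn't need anything exotic – a few firewall rules will do. A sketch using ufw, assuming an office IP range (203.0.113.0/24 is a documentation range standing in for the real one):

```shell
# Deny everything inbound by default.
ufw default deny incoming

# Allow HTTPS and SSH only from the office network.
# 203.0.113.0/24 is a placeholder -- substitute your own range.
ufw allow from 203.0.113.0/24 to any port 443 proto tcp
ufw allow from 203.0.113.0/24 to any port 22 proto tcp

ufw enable
```

The same effect can be achieved with your cloud provider's security groups if you'd rather not manage firewall rules on the box itself.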
Once a member of the marketing team is happy with the changes they've made, they click a link in the control panel, which triggers a queued job (backed by a database queue driver). That queued job pings the deployment URL in Envoyer and triggers a deployment. Envoyer then runs through its tasks and deploys a new version of the website with the changes to the live sites.
This approach has a number of benefits:
– No one interacts directly with the live servers, so we can lock them down and disable access to the control panel.
– Marketing can use the CMS server as a proofing environment, allowing them to get sign-off before pushing changes to the live servers.
– If something goes wrong, we can quickly roll back the servers.
If you’re interested in the deployment addon, let me know. We won’t be making it publicly available, but I am happy to talk through the approach we took to help you out with this problem if you run into it yourself.