If you have a self-hosted Ghost blog like mine and you're trying to find out how to back it up, I might be able to help. Having done some Googling and looked at various approaches, I thought I should share and outline the approach I took.
Prerequisites
- A self-hosted Ghost blog on Linux using MySQL.
- Familiarity with Linux and basic scripting with Bash.
- Familiarity with Crontab. Not a must, as I will provide a quick intro guide.
The Rough Guide
To provide some context, I created this blog using DigitalOcean’s Ghost droplet. Setup was straightforward and you can get up and running in minutes.
I also wanted a method to back up my blog and content, in the event something catastrophic happened to my DigitalOcean droplet. I’ve heard too many horror stories from other bloggers who have lost their content. The thought that all the hard work put into writing content could be obliterated just fills me with fear.
Unfortunately, Ghost out of the box does not support this. Hopefully, in the future there will be some kind of support. For now, it’s down to us.
In order to do this, the short version is that I need to back up the database that Ghost uses and also any images and themes uploaded.
To break it down…
- Obtain Ghost MySQL database credentials.
- Establish a method of backing up the Ghost database.
- Establish a method of backing up Ghost content.
- Write a bash script to perform the backups.
- Upload the backups to off-site storage such as AWS S3.
- Automate it using Crontab.
This may sound overwhelming, but it’s not as bad as it sounds.
All scripts are located in my GitHub repo, so feel free to check it out.
Obtain Ghost MySQL details
If you’re like me and didn’t make a note of Ghost’s database login details during setup, the good news is that they can be recovered.
Log into the Linux machine where Ghost is currently hosted.
1. Switch logins to the default Ghost user.
sudo -i -u ghost-mgr
2. Navigate to Ghost installation folder.
Navigate to where Ghost is installed. By default it should be installed in /var/www/ghost
cd /var/www/ghost/
3. Confirm Ghost is running on MySQL.
Within the /var/www/ghost/ directory, run this command to obtain the database type that Ghost is currently running on.
ghost config get database.client
You should see a response such as mysql. Something like this:
ghost-mgr@ghost-v2:/var/www/ghost$ ghost config get database.client
mysql
4. Obtain the database username. Run this command to obtain the username:
ghost config get database.connection.user
5. Obtain the database password. Run this command to obtain the password:
ghost config get database.connection.password
6. Obtain the database name. Run this command to get the database name:
ghost config get database.connection.database
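If you plan to script any of this later, the same values can be captured into shell variables in one go. A minimal sketch, assuming it is run from the Ghost installation folder (the variable names are my own):
cd /var/www/ghost/
db_user=$(ghost config get database.connection.user)
db_pwd=$(ghost config get database.connection.password)
db_name=$(ghost config get database.connection.database)
echo "user=$db_user database=$db_name"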
Now that we have the database credentials, we can verify the connection.
Confirm MySQL Connection
To verify that we can connect to Ghost’s database, try logging in using the following command:
mysql --user={username} --password={password} {database}
Example:
mysql --user=ghost --password=your-pwd-here ghost_production
For more options:
mysql --help
If successful, you should be dropped into the mysql prompt.
Now we can run show tables. This should bring back a list of tables created by Ghost.
mysql> show tables;
+----------------------------+
| Tables_in_ghost_production |
+----------------------------+
| accesstokens |
| app_fields |
| app_settings |
| apps |
| brute |
| client_trusted_domains |
| clients |
| invites |
| migrations |
| migrations_lock |
| permissions |
| permissions_apps |
| permissions_roles |
| permissions_users |
| posts |
| posts_authors |
| posts_tags |
| refreshtokens |
| roles |
| roles_users |
| settings |
| subscribers |
| tags |
| users |
| webhooks |
+----------------------------+
25 rows in set (0.00 sec)
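If you prefer a quick non-interactive check, which comes in handy later when scripting, the same query can be run in a single line. A sketch using the example credentials from above:
mysql --user=ghost --password=your-pwd-here ghost_production -e "SHOW TABLES;"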
Backup Ghost’s MySQL Database
Now that it’s possible to obtain a database connection, we can use the mysqldump command to back up the database. The benefit of this command is that we do not need to stop Ghost.
mysqldump --user=ghost --password=your-pwd-here --databases ghost_production --single-transaction | gzip > /usr/local/backups/ghost_production_backup_`date +%Y_%m_%d_%H%M`.sql.gz
The command above performs a backup of the ghost database using the credentials we obtained earlier. It also compresses the output using gzip and saves it to /usr/local/backups.
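As a side note, restoring from one of these dumps is just as simple. Because --databases was used, the dump contains the CREATE DATABASE and USE statements, so it can be piped straight back into MySQL. A sketch, with a made-up backup filename:
gunzip -c /usr/local/backups/ghost_production_backup_2019_01_01_0600.sql.gz | mysql --user=ghost --password=your-pwd-here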
Backup Ghost’s images and content
Any images uploaded as part of blog posts are saved in /var/www/ghost/content/images. Any uploaded themes are stored in /var/www/ghost/content/themes.
So, it’s probably a good idea to back these up as well. I decided to back up the entire /var/www/ghost/content folder, just to make things easier.
For this, I simply used the tar command.
tar -czf /usr/local/backups/ghost_blog_content_`date +%Y_%m_%d_%H%M`.tar.gz /var/www/ghost/content
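It’s worth knowing how to get the content back out again. tar strips the leading / when archiving, so the archive can be inspected and restored like this (the filename is just an example):
# list the archive contents to verify the backup
tar -tzf /usr/local/backups/ghost_blog_content_2019_01_01_0600.tar.gz | head
# restore in place by extracting the relative paths from /
tar -xzf /usr/local/backups/ghost_blog_content_2019_01_01_0600.tar.gz -C /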
Uploading to AWS S3
I discovered a very handy script from http://blog.0xpebbles.org which handles uploading to AWS S3. Full credit to them, and thank you for sharing! Feel free to obtain the script directly from the blog. Alternatively, you can obtain a copy from my GitHub repository.
Usage:
./upload_to_s3.sh {aws_key} {aws_secret} {aws_bucket_name}@{aws_region} {path_to_source_file} {path_to_s3} private
N.B. The private argument states that the AWS bucket is private and does not allow public access.
Example: Let’s say my details were:
- AWS Key = FJSIOWJDKSJKJK
- AWS Secret = 80ziK8du2KW4x1y6bpGu
- AWS Bucket name = ghost_backups
- AWS Region = eu-west-1
- filename = /usr/local/backups/ghost_db.sql.gz
- AWS path location = archive/ghost_db.sql.gz
Then, the command will look something like this:
./upload_to_s3.sh FJSIOWJDKSJKJK 80ziK8du2KW4x1y6bpGu ghost_backups@eu-west-1 /usr/local/backups/ghost_db.sql.gz archive/ghost_db.sql.gz private
To help with identifying your AWS region locations, this link from AWS details region names.
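As an aside, if you already have the AWS CLI installed and configured with credentials, the same upload can be done with aws s3 cp instead of the script. A sketch using the example details above:
aws s3 cp /usr/local/backups/ghost_db.sql.gz s3://ghost_backups/archive/ghost_db.sql.gz --region eu-west-1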
Bringing it all together with a bash script
Now that we have established how to back up Ghost’s database and its content, and how to upload files to AWS S3, we can combine it all into a bash script.
Below is a bash script which I have concocted. You can also obtain the script from my GitHub repository.
The script
The script may appear to be rather long, but I’ve tried to keep it very basic. The first half sets parameters and the second half performs the backups and uploads them to AWS S3.
#!/bin/bash
# timestamp to keep track of files.
date_stamp=$(date +%Y_%m_%d_%H%M)
# path of where to save backups locally. Make sure this directory exists.
path=/usr/local/working/backups/
# path of the upload_to_s3.sh script
aws_upload_script=/usr/local/working/upload_to_s3.sh
# the output filename of the database backup.
filename=ghost_prod_db_$date_stamp.sql.gz
# the output filename of the compressed ghost content folder.
content_filename=ghost_blog_content_$date_stamp.tar.gz
# ghost details to access ghosts mysql database.
ghost_mysql_usr=ghost
ghost_mysql_pwd=ghost-db-password-here
ghost_mysql_db_name=ghost_production
# aws credentials
aws_secret=aws-secret-here
aws_key=aws-key-here
aws_bucket_name=aws-bucket-name
aws_region=eu-west-1
##############################################################################
echo "backing ghost db. Filename: $path$filename"
mysqldump --user=$ghost_mysql_usr --password=$ghost_mysql_pwd --databases $ghost_mysql_db_name --single-transaction | gzip > $path$filename
echo "ghost db backed up complete."
##############################################################################
echo "compressing ghost blog content."
tar -czf $path$content_filename /var/www/ghost/content
echo "compression complete."
##############################################################################
echo "uploading db to s3."
$aws_upload_script $aws_key $aws_secret $aws_bucket_name@$aws_region $path$filename dbs/$filename private
echo "upload db to s3 complete."
##############################################################################
echo "uploading content to s3."
$aws_upload_script $aws_key $aws_secret $aws_bucket_name@$aws_region $path$content_filename dbs/$content_filename private
echo "upload content to s3 complete."
Now, if you’re using the script file provided in my GitHub, make sure the bash script is executable by applying chmod +x backup_ghost.sh.
The script can be tested by simply executing it.
./backup_ghost.sh
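After a test run, it’s worth confirming that the backup files actually landed in the local backup folder before relying on the schedule:
ls -lh /usr/local/working/backups/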
Automate it with Crontab
Now that we have the scripts, all that’s needed is to create a scheduled job using Crontab.
Quick Intro to Crontab
If you’re new to Crontab and Linux: in a nutshell, it’s a built-in tool that can run scripts at scheduled times. The jobs scheduled by Crontab are generally called “cron jobs”. The scheduled times at which the jobs run are generally called “cron expressions”.
You can list the cron jobs that are currently enabled and running on a schedule with:
crontab -l
To create a new schedule, also known as a “cron job”, we can run the following:
crontab -e
This should launch an editor where you can add your cron job.
The Cron command
A cron instruction or command is generally made up of 3 parts.
*/5 * * * * /usr/local/working/myscript.sh > /dev/null 2>&1
The general syntax is that the first five fields on the left make up the cron schedule, or cron expression. In the example above, */5 * * * * specifies to run every 5 minutes. How does that work? Well, there is a very useful tool called crontab.guru that helps generate these expressions. Also, the cron wiki page will be able to assist with the syntax.
The next part, /usr/local/working/myscript.sh, is the script to run. The third and final part, > /dev/null 2>&1, is where to send the output of the result. For now, this just means not to send it anywhere.
In my example, I am running my backup job daily at 6 am.
# everyday at 06:00
0 6 * * * /usr/local/backups/backup_ghost.sh > /dev/null 2>&1
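If you would rather keep a record of each run instead of discarding the output, you can redirect it to a log file; the log path below is just an example:
# everyday at 06:00, appending output to a log file
0 6 * * * /usr/local/backups/backup_ghost.sh >> /var/log/ghost_backup.log 2>&1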
To exit the Crontab editor, press Ctrl + X and enter Y to save changes.
Tidying up
You might be thinking: if we continue to perform this backup process on a daily basis, eventually we are going to run out of disk space.
To address this, we will need to delete old backups from two locations: 1) the server itself and 2) AWS S3.
Fortunately, a script to delete old files is relatively simple.
#!/bin/bash
# root directory to clean up and the minimum age of files to remove
directory=/usr/local/backups
days=+30
# delete regular files older than 30 days
find $directory -mtime $days -type f -delete
The directory=/usr/local/backups line specifies the root directory from which to delete old files.
The days=+30 setting matches any files older than 30 days.
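Before enabling the delete, it’s worth doing a dry run by swapping -delete for -print, so you can see exactly which files would be removed:
# dry run: list files older than 30 days without deleting them
find /usr/local/backups -mtime +30 -type f -print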
In order to delete files from AWS S3, fortunately AWS has something called a lifecycle policy. This allows you to specify conditions on what to do with certain files after a period of time. Be sure to check the lifecycle policy guide.
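If you prefer doing this from the command line rather than the AWS console, the AWS CLI can apply a lifecycle rule as well. A rough sketch, assuming the example bucket from earlier and that the backups live under the dbs/ prefix:
# write the lifecycle rule to a file, then apply it to the bucket
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Filter": { "Prefix": "dbs/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket ghost_backups --lifecycle-configuration file://lifecycle.json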
Summary
In this post, I demonstrated how it’s possible to perform regular backups of a Ghost blog. I also ensured that the backups were uploaded to AWS S3, so that in the event something happens to the server where Ghost is hosted, the backups remain intact.
I’ll admit, it’s not one of those tasks that I enjoy doing, as it falls under maintenance and administrative work. However, it’s one of those things that is well worth the investment upfront, and I suspect you will be grateful you did it.