Whether you’re an Ubuntu newbie, an Arch veteran, or dabbling in the abstruse world of Gentoo, backups are a topic that you should give at least occasional thought to.

Because, even if you’re sticking to Long Term Support (LTS) releases, Linux installations are often at greater risk than Windows machines of suddenly and spectacularly going out of commission.

Why, in so many cases, is this so?

  • Hardware compatibility, including for essential components like GPUs, remains a significant challenge, with many vendors still not supporting Linux and leaving it to the community to create workarounds;
  • Open source’s financial model doesn’t incentivize, much less require, thorough QA processes;
  • And for those keeping up with bleeding-edge releases, fundamental changes to package management tools have a nasty habit of sometimes bricking the system by opening up an irreparable Pandora’s Box of dependency errors. Repairing these, even when possible, can involve going down days-long rabbit holes. What might seem like a good learning experience for a first-time user can become a deal-breaking frustration for a veteran user on the verge of jumping ship to Windows.

And Linux’s stability issue has enraged plenty of users. Browse the user-in-distress threads on AskUbuntu.com and you’ll come across plenty of frustrated posters who have tried everything and ultimately concluded that the only way forward is to reinstall from scratch.

While doing this can initially be a learning process of sorts, encouraging users to periodically rethink how they can make their system leaner and streamline the recovery process, after a while it becomes nothing more than a big, time-draining nuisance. Sooner or later, even the most advanced power users will begin to crave stability.

I’ve been using Linux as my day-to-day OS for more than 10 years and have gone through my fair share of unwanted clean installations. So many, in fact, that I promised myself that my most recent re-installation would be my last. Since then, I’ve developed the following methodology. And it has kept my Lubuntu system running as well as the day I installed it, without a re-installation since. Here’s what I do.

Considerations: What Do You Need To Back Up?

Before deciding upon a backup strategy, you need to figure out some fundamentals:

  • What do you need to back up? Do you need to back up the full partition/volume or just your home directory?
  • Will an incremental backup strategy suffice for your use case? Or do you need to take full backups?
  • Does the backup need to be encrypted?
  • How easy do you need the restore process to be?

My backup system is based on a mixture of methodologies.


I use Timeshift as my primary backup system, which takes incremental snapshots. And I keep a full disk backup on site that excludes directories that do not contain user data (a minimal rsync sketch follows the list below). Relative to the system root, these are:

  • /dev
  • /proc
  • /sys
  • /tmp
  • /run
  • /mnt
  • /media
  • /lost+found
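
For the on-site full-disk copy, a single rsync command along these lines captures the idea. This is only a minimal sketch: /mnt/backup-drive is a placeholder for wherever the backup disk is mounted, and the exclude list simply mirrors the directories above.

```bash
# Hedged sketch: mirror the root filesystem to a mounted backup drive,
# skipping the pseudo-filesystems and mount points listed above.
# /mnt/backup-drive is a placeholder path.
sudo rsync -aAXv / /mnt/backup-drive/ \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"}
```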

Finally, I keep two more backups. One of these is a (real) full system partition-to-image backup using a Clonezilla live USB. Clonezilla packages a series of low-level tools for replicating installations. And the second is an offsite full system backup that I upload to AWS S3 about once a year, whenever I have a good upload link at my disposal.

Backup Tool Options

These days, the selection of tools you can use is large.

It includes:

  • Well-known CLIs such as rsync, which can be scripted and called as a cron job or run manually (see the sketch after this list)
  • Programs like Déjà Dup, Duplicity, and Bacula that let you create and automate backup plans to local or off-site destinations, including servers operated by common cloud providers
  • And tools that interface with paid cloud services, like CrashPlan, SpiderOak One, and CloudBerry. This last category includes services that provide cheap cloud storage themselves, so the offering is end to end.
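
To illustrate the first option, a small script plus a crontab entry is all it takes. A rough, hypothetical sketch (both paths and the schedule are placeholders, not a recommendation):

```bash
#!/usr/bin/env bash
# Hypothetical example script, e.g. saved as /usr/local/bin/home-backup.sh.
# Incrementally mirrors the home directory to a second internal drive;
# both paths below are placeholders.
set -euo pipefail

rsync -a --delete /home/daniel/ /mnt/backup-hdd/home/

# A matching crontab entry (added with `crontab -e`) to run it nightly at 01:30:
# 30 1 * * * /usr/local/bin/home-backup.sh
```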

The 3-2-1 Rule


I’m going to give a quick overview of the tools I’m currently using on my main machine.

Although I’ve written some Bash scripts to copy essential config files into my main cloud storage, which I use for day-to-day files, the essential component of my backup plan simply backs up the entire machine, including virtual machines and system files that a more nuanced approach would leave out or back up separately.
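
For reference, the config-file side of that can be a script of just a few lines. A minimal sketch, assuming a cloud-synced folder inside the home directory; the folder name and file list here are illustrative rather than my exact setup:

```bash
#!/usr/bin/env bash
# Hedged sketch: archive a handful of config files into a folder that a
# cloud-storage client keeps synced. Destination and file list are placeholders.
set -euo pipefail

DEST="$HOME/CloudDrive/config-backups"
mkdir -p "$DEST"

tar -czf "$DEST/configs-$(date +%F).tar.gz" -C "$HOME" \
    .bashrc .profile .config/autostart .ssh/config
```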

Its central premise is adherence to the 3-2-1 backup rule. This approach should keep your data — including your main OS — safe in almost any failure scenario.

The Rule states that you should keep:

  • 3 copies of your data. I always say that this is a bit of a misnomer because it actually means that you should keep your primary data source and two backups. I would simply refer to this as “two backups”.
  • These two backup copies should be kept on different storage media. Let’s bring this back to simple home computing terms. You could write a simple rsync script that (incrementally) copies your main SSD onto another attached storage medium, let’s say an HDD attached to the next SATA port on your motherboard. But what happens if your computer catches fire or your house is robbed? You would be left without your primary data source and have no backup. Instead, you could back up your primary disk to a Network Attached Storage (NAS) device or simply use Clonezilla to write it to an external hard drive (a sketch of the NAS variant follows this list).
  • One of the two backup copies should be stored offsite. Offsite backups are vital because, in the event of a catastrophic natural event such as flooding, your entire house could be destroyed. Less dramatically, a major power surge could fry all connected electronics in a house, or all those on a particular circuit (this is why keeping one of the onsite backups disconnected from a power supply makes sense – a simple external HDD/SSD, for example). Technically, “offsite” is anywhere in a remote location. So you could use Clonezilla to remotely write an image of your operating system to your work PC, or a drive attached to it, over the internet. These days, cloud storage is cheap enough to affordably store even full drive images. For that reason, I back up my system in full, once a year, to an Amazon S3 bucket. Using AWS also gives you massive additional redundancy.
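
As a concrete illustration of the “different media” point, here is a minimal rsync-over-SSH sketch that pushes the home directory to a NAS. The hostname, user, and share path are assumptions, not a prescription:

```bash
#!/usr/bin/env bash
# Hedged sketch: incremental copy of the home directory to a NAS over SSH.
# Hostname, user, and destination path are placeholders.
set -euo pipefail

SRC="$HOME/"
DEST="backupuser@nas.local:/volume1/backups/desktop-home/"

# -a preserves permissions and timestamps; --delete mirrors deletions so
# the NAS copy tracks the source rather than growing indefinitely.
rsync -a --delete --info=progress2 "$SRC" "$DEST"
```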

My Backup Implementation


My approach to backups is based on a few simple policies:

  • I want to keep things as simple as possible;
  • I want to give myself the most redundancy that I can reasonably achieve;
  • I want to, at a minimum, follow the 3-2-1 rule.

So I do as follows.

  • I keep an additional drive in my desktop that is solely used to house Timeshift restore points. Because I dedicate a whole disk to it, I have quite a lot of room to play around with. I keep a daily, a weekly, and a monthly backup. So far, Timeshift is all that I have needed to roll the system back a few days to a point before something, like a new package, had an adverse impact on other parts of the system. Even if you can’t get past GRUB, Timeshift can be used as a CLI with root privileges to repair the system (see the CLI sketch after this list). It’s an amazingly versatile and useful tool. This is a first on-site copy.
  • I keep an additional drive in my desktop that is solely used for housing Clonezilla images of my main drive. Because these images would only really be useful to me in the event that Timeshift failed, I only take these once every three to six months. This is a second on-site copy.
  • Using Clonezilla, I create an additional hard drive copy that I keep at home, external to the PC. For this drive, though, I use a device-to-device backup rather than a device-to-image backup as in the previous copy, so that it would be good to go instantly if my primary drive were bricked. If I were to recover from the internal Clonezilla backup drive, by contrast, I would first need to follow a restore process. Assuming the other system components are in good working order following a hard drive failure, I would theoretically only need to connect this drive to the motherboard to begin using it. This is a third on-site copy.
  • Finally, once every six months or so, I upload a Clonezilla-generated image of my system to AWS S3. Needless to say, this is a long multipart upload and needs to be undertaken from an internet connection with a good upload link (the AWS CLI sketch after this list shows the idea).
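
For reference, the Timeshift operations described above map onto a handful of CLI commands. A minimal sketch (the snapshot comment is illustrative):

```bash
# Create an on-demand snapshot with a descriptive comment.
sudo timeshift --create --comments "before kernel upgrade"

# List existing snapshots.
sudo timeshift --list

# Restore interactively, e.g. from a live session when the installed
# system will not boot past GRUB.
sudo timeshift --restore
```

And the off-site upload can be handled by the AWS CLI, which splits large files into multipart uploads automatically. The bucket name and local path below are placeholders:

```bash
# Hedged sketch: push a Clonezilla-generated image directory to S3.
# A colder storage class (e.g. --storage-class DEEP_ARCHIVE) can cut costs
# for an archive that is rarely, if ever, retrieved.
aws s3 cp /mnt/backup-hdd/clonezilla-images/desktop-latest \
  s3://my-backup-bucket/desktop/ --recursive
```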

Altogether, my system involves three on-site copies and one off-site copy of my main desktop.

Main Takeaways

  • All Linux users should have robust backup strategies in place.
  • The 3-2-1 backup rule is a good yardstick for ensuring that your data is safe in virtually all circumstances.
  • I use a combination of Timeshift and Clonezilla to create my backups, although there are plenty of other options, including paid ones, on the market. For cloud storage, I use a simple AWS S3 bucket, although again, there are integrated services that include both software and storage tools.

About the author


Daniel Rosehill

Daniel Rosehill has been using Linux as his day-to-day operating system for more than 10 years. He is a lightweight distro purist, using Lubuntu for most of that time and then Ubuntu LXDE after the Lubuntu project adopted LXQt. By day, Daniel is a ghostwriter working primarily with B2B technology clients. He is currently studying for his first LPIC certification.