My two favorite performances in the Disney+ version of Hamilton are Renée Goldsberry’s Angelica Schuyler and Daveed Diggs’ twin roles as Marquis de Lafayette and Thomas Jefferson. But perhaps a close third is Jonathan Groff’s turn as King George III. Ever since we watched the show a few weeks ago, my kids and I have been walking around the house breaking into song and talking a lot about what was so mesmerizing about Groff’s performance.
So, I was excited to run across this video of musical theatre coach Marc Daniel Patrick breaking down Groff’s performance of “You’ll Be Back”. Yes, he covered the spitting, but I found the discussion of how Groff held specific members of the audience with his gaze for extended periods much more relevant to understanding what made the performance work. I particularly loved how still Groff held his body and the rest of his head as his lower jaw pistoned up and down like a ventriloquist dummy’s during the “da da da dat” parts. The audio is funny enough, but his possessed mandible makes me laugh every time I see it.
I look at a lot of articles on the web. And by “look at” I mean “skim distractedly without actually reading”. What happens is that I click a link and sort of scan the article until becoming distracted or interrupted by something else on the screen. I waste a lot of time this way, with little gain.
Mike’s approach to reading articles makes sense to me, so I’ve adopted a similar process and it’s working well.
I no longer try to read longer-form articles right away. I instead send them to Instapaper and, after a day or two, review the inbox, delete the ones I no longer care about, and print the ones I do. For printing, I use Mike’s user stylesheet for Firefox reader mode. The print layout is compact and readable and I can mark them up with a pencil and highlighter while away from the distractions of a screen.
I keep recent articles scattered around my desk until I feel I’ve gotten what I need out of them. I then scan the marked up versions into DEVONthink and manually enter the highlights from the most important articles into Roam.
This print-first process is a good way for me to actually learn from things I find on the web.
In Episode 4 of my Deep Questions podcast (posted Monday), a reader named Jessica asked my opinion about the future of social media. I have a lot of thoughts on this issue, but in my response I focused on one point in particular that I’ve been toying with recently: Facebook may have accidentally developed a fatal flaw.
To understand this claim, we have to rewind to the early days of this social platform. The original pitch for Facebook was that it made it easier to connect online with people you knew. The content model was simple: you set up a profile, people you knew set up profiles, and everyone could then check each other’s vacation pictures and relationship statuses.
For this model to be valuable, the people you knew had to also use the service. This is why Mark Zuckerberg focused at first on college campuses. These were closed communities in which it was easy to build up enough critical user mass to make Facebook fun.
Once Facebook moved into the range of hundreds of millions of users, competition became difficult. The value of a network with a hundred million users was exponentially larger than one with a million, as the former was much more likely to connect you with the people you cared about. It was on the strength of this model that Facebook emerged as a powerful social internet monopoly.
The problem, however, was that they weren’t making enough money.
As their IPO loomed, Facebook executives feared that the appeal of checking the profiles of friends and family wasn’t strong enough to get people to use the service all day long. It was an activity you would occasionally do when bored; they needed to find a way to make their platform stickier.
So Facebook did something radical: it blew up its original content model and replaced it with something novel: the bottomless scrolling newsfeed. Instead of checking the profiles of friends and family, you now encounter a stream of articles sourced from all over the network, handpicked by optimized statistical algorithms to push your buttons and stoke the fires of the elusive quality known as engagement.
Facebook shifted from connection to distraction; an entertainment giant built on content its users produced for free.
This shift was massively profitable because it significantly increased the time Facebook’s gigantic user base spent on the platform each day. Tapping that blue and gray icon on your phone now promised instant satisfaction, and our days are filled with endless moments where such appeasement is welcome.
The thought that keeps capturing my attention, however, is that perhaps in making this short term move toward increased profit, Facebook set itself up for long term trouble.
When this platform shifted from connection to distraction it abdicated its greatest advantage: network effects. If Facebook’s main pitch is that it’s entertaining, it must then compete with everything else that’s entertaining. This includes podcasts, and YouTube, and streaming video services, not to mention niche long tail social media platforms that can’t offer you access to your old roommate, but can connect you with a small number of people who are interested in the same things as you. Meanwhile the social interactions that used to occur on these platforms have moved to more flexible and simpler mediums, like group text messages. Facebook used to be the place where grandparents sought new baby pictures. Today, these images are just as likely to be spread in a nondescript iMessage thread, with no creepy data mining or malicious attention engineering required.
I’m not so sure that a newsfeed made up of posts and links generated by random social media users can compete with this increasingly optimized world of targeted entertainment and streamlined digital socialization. Facebook found a way to grow to a market capitalization of $600 billion, but may have accidentally crippled itself in the process.
Or not. But one thing I know for sure is that it would be myopic to believe that the future of social media is going to look just like it does today.
No, it isn’t. Nextcloud is a way of syncing your data, but it’s not a backup. Think about it: if you delete a file from computer A, that deletion is immediately synced everywhere via Nextcloud. There are protections in place, such as the trash bin and version control, but Nextcloud is not a backup solution.
Since building my own server I have come up with a pretty decent way of backing up my data that follows the 3-2-1 principle of backing data up.
At least 3 copies of your data, on 2 different storage media, 1 of which needs to be off-site.
— The 3-2-1 backup rule
In order to back up Nextcloud effectively, there are a few pieces of hardware and software involved. There is an initial cost to the hardware, but it isn’t significant.
To back up Nextcloud you will need:
An Ubuntu based server running the Nextcloud Snap
A USB hard drive that is at least double the size of the data you’re backing up (I’d recommend getting the biggest you can afford)
Duplicati backup software installed on your Nextcloud server
At this point I will assume that you have connected and mounted your USB hard drive to the server. If you haven’t done that yet, take a look at my guide on how to mount a partition in Ubuntu.
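If you want to double-check the drive is actually mounted before going further, a quick look at the block devices will confirm it; the /mnt/usb path below is just an example, so substitute your own mount point:

```shell
# List block devices and where they are mounted
lsblk -f
# Show free space on the mount point (example path; substitute your own)
df -h /mnt/usb
```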
Note: this process is designed around the Nextcloud Snap installation, not the manual installation.
Following this post, you will be able to do the following:
Automatically back up your entire Nextcloud instance (including your database) every day
Create a log file so you can see if the backup worked
Sync the backup to B2 cloud storage (it will be encrypted before transmission)
Delete old backups so your hard drive doesn’t fill up
Receive email alerts once the backup completes
I would recommend using a dedicated user for backing up. This keeps the backup routine separate from the normal user account you use, making the setup more secure.
In this guide, I will be using ncbackup as the user account. You can use whatever username you feel is appropriate. Let’s start by creating the user and the directories we will need to store our backups.
# Create new user
sudo adduser ncbackup
# Switch to new user account
su - ncbackup
# Make directories for backups and logs
mkdir -p Backups/Logs
# Logout to switch back to normal user
exit
Now that we have the directories set up, let’s create the script that will run our backups. In this example I’m using nano, but feel free to use any text editor you like.
We’re using the /usr/sbin directory because it is reserved for system binaries that require elevated privileges. You can store your script wherever you like, but /usr/sbin is good practice.
sudo nano /usr/sbin/ncbackup.sh
Populate the file with the following, ensuring you change the username and paths to the appropriate values for your setup.
#!/bin/bash
# Output to a logfile
exec &> /home/ncbackup/Backups/Logs/"$(date '+%Y-%m-%d').txt"
echo "Starting Nextcloud export..."
# Run a Nextcloud backup using the snap's export command
nextcloud.export
echo "Export complete"
echo "Compressing backup..."
# Compress backed up folder
tar -zcf /home/ncbackup/Backups/"$(date '+%Y-%m-%d').tar.gz" /var/snap/nextcloud/common/backups/*
echo "Nextcloud backup successfully compressed to /home/ncbackup/Backups"
# Remove uncompressed backup data
rm -rf /var/snap/nextcloud/common/backups/*
echo "Removing backups older than 14 days..."
# Remove backups and logs older than 14 days
find /home/ncbackup/Backups -mtime +14 -type f -delete
find /home/ncbackup/Backups/Logs -mtime +14 -type f -delete
echo "Nextcloud backup completed successfully."
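Before trusting the retention step with real backups, it’s worth sanity-checking the `find -mtime +14 -type f -delete` expression in a throwaway directory. This sketch assumes GNU `touch` (standard on Ubuntu), and the file names are made up:

```shell
# Sanity-check the retention rule in a temporary directory
tmp=$(mktemp -d)
touch -d "20 days ago" "$tmp/old.tar.gz"  # simulates a 20-day-old backup
touch "$tmp/new.tar.gz"                   # simulates today's backup
find "$tmp" -mtime +14 -type f -delete    # same expression as the script
ls "$tmp"                                 # only new.tar.gz should remain
rm -rf "$tmp"
```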
Now we need to make our backup script executable:
sudo chmod +x /usr/sbin/ncbackup.sh
A lot of the commands in our script will require sudo access, but we don’t want to give full sudo access to our ncbackup user, as it doesn’t need elevated rights globally. However, we do want to be able to run the backup script with sudo rights, and we want to do it without requiring a password.
To accomplish this, we need to use visudo. We can configure visudo to allow the ncbackup user to run the backup script as sudo, without a password. Crucially, the ncbackup user will not be able to run anything else as sudo.
# Open visudo
sudo visudo
# Allow ncbackup to run the backup script as sudo (add this line)
ncbackup ALL=(ALL) NOPASSWD: /usr/sbin/ncbackup.sh
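A quick way to confirm the sudoers entry took effect is to list the ncbackup user’s sudo privileges; the backup script should be the only command shown:

```shell
# List the sudo rules that apply to ncbackup
# (should show only NOPASSWD: /usr/sbin/ncbackup.sh)
sudo -l -U ncbackup
```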
Enabling sudo access for the backup script introduces another potential security risk: since the ncbackup user can run the backup script as sudo without a password, a threat actor who could edit the script could run any command as root. However, we saved the script in /usr/sbin, which is owned by root, so the ncbackup user cannot edit ncbackup.sh. That closes this particular hole.
As an extra layer of security, we will stop the ncbackup user from being able to login to the server at all:
sudo usermod -s /sbin/nologin ncbackup
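You can verify the shell change took effect by looking at the user’s entry in the passwd database:

```shell
# The seventh field is the login shell; it should now be /sbin/nologin
getent passwd ncbackup | cut -d: -f7
```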
If at a later date you need to be able to log in using the ncbackup user, you can revert this change by running the following command:
sudo usermod -s /bin/bash ncbackup
Now that we have the backup script set up, we need to schedule it to run automatically; for this, we will use Cron.
Run the following command to enter the Cron settings for the ncbackup user:
sudo crontab -u ncbackup -e
Once you’re in the crontab, add the following line to the bottom of the file; it runs the backup script at 02:00 every day, matching the schedule described later in this post:
0 2 * * * sudo /usr/sbin/ncbackup.sh
That’s most of the setup complete at this point. The next thing to do is to wait 24 hours for your backup to complete automatically (or you could run the script manually yourself).
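If you’d rather not wait for cron, you can trigger the script by hand and then check the log it writes; this assumes the script path and log directory from earlier in the guide:

```shell
# Run the backup script manually
sudo /usr/sbin/ncbackup.sh
# Inspect today's log file
cat /home/ncbackup/Backups/Logs/"$(date '+%Y-%m-%d')".txt
```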
Once the script has run, you should see a tar.gz file within your backup folder with a name that corresponds to the date the backup ran:
kev@server:~$ ls /home/ncbackup/Backups/
Within the Logs folder, you should also see a <date>.txt file that corresponds to the backup. You can open this to see how your backup went:
kev@server:~$ cat /home/ncbackup/Backups/Logs/2020-06-10.txt
Starting Nextcloud export...
WARNING: This functionality is still experimental and under
development, use at your own risk. Note that the CLI interface is unstable, so beware if using from within scripts.
Enabling maintenance mode...
0 100% 0.00kB/s 0:00:00 (xfr#0, to-chk=0/1)
15.90M 100% 109.87MB/s 0:00:00 (xfr#105, to-chk=0/139)
Successfully exported /var/snap/nextcloud/common/backups/20190703-130201
Disabling maintenance mode...
tar: Removing leading `/' from member names
Nextcloud backup successfully compressed to /home/ncbackup/Backups
Removing backups older than 14 days...
find: ‘./home/ncbackup/Backups/’: No such file or directory
Nextcloud backup completed successfully.
The echo statements we put in the script let you see at what point the backup failed, if it does fail.
Note: there are masses of improvements that can be added to this script, but this satisfies my needs. If you do add improvements, please let me know and I’ll post an update.
You now have a single layer of backups for Nextcloud. However, if you want to abide by the 3-2-1 rule of backups (which I highly recommend), we need to use Duplicati to add additional layers to our backup routine.
To install Duplicati, go to this link and right click ‘Copy Link Location’ on the Ubuntu DEB. Then amend the commands below as appropriate.
# Download Duplicati DEB (paste the link you copied)
wget [link-you-copied]
# Install Duplicati
sudo dpkg -i duplicati_[version].deb
# If you get a dependency error, run the following
sudo apt --fix-broken install
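To confirm the package installed cleanly after fixing any dependencies, you can query dpkg:

```shell
# Should report "install ok installed"
dpkg -s duplicati | grep -i '^status'
```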
We now need to enable the Systemd service for Duplicati so it runs automatically on boot:
# Enable Duplicati service
sudo systemctl enable duplicati
# Start the Duplicati service
sudo systemctl start duplicati
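It’s worth checking that the service actually started and is set to come up on boot:

```shell
# Should print "enabled" and "active" respectively
systemctl is-enabled duplicati
systemctl is-active duplicati
```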
By default the Duplicati service will only listen on localhost, so if you try to access the IP of the server from another device, you won’t get the Duplicati webGUI.
To fix this, edit the DAEMON_OPTS option within the Duplicati config so the web service listens on all interfaces:
# Open Duplicati config
sudo nano /etc/default/duplicati
# Additional options that are passed to the Daemon.
DAEMON_OPTS="--webservice-interface=any"
Restart Duplicati so the config changes take effect:
sudo systemctl restart duplicati
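You can confirm the web service is now bound to all interfaces rather than just localhost:

```shell
# Look for 0.0.0.0:8200 (all interfaces) rather than 127.0.0.1:8200
sudo ss -tlnp | grep 8200
```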
You should now be able to access the Duplicati web interface by going to http://server-ip:8200. You will be asked to set a password for Duplicati when you first log in; make sure this is a strong one!
Security Note: My server is hosted at home, and I don’t expose port 8200 to the internet. If your server is not at home, then I would strongly suggest you configure something like iptables, or the DigitalOcean firewall, to restrict access to port 8200.
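As a sketch of what that restriction could look like, here’s how you might do it with ufw, Ubuntu’s default firewall front end; the 192.168.1.0/24 subnet is an example, so substitute your own network range:

```shell
# Allow the Duplicati webGUI only from the local subnet (example range)
sudo ufw allow from 192.168.1.0/24 to any port 8200 proto tcp
# Enable the firewall if it isn't already
sudo ufw enable
```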
Configure Duplicati Backups
Now it’s time to configure our backups in Duplicati. We will configure two backup routines: one to USB and another to Backblaze B2 for off-site storage.
Let’s do the USB backup first. Within the Duplicati webGUI, click on the Add Backup button to the left of the screen.
This is a very straightforward process where you choose the destination (our USB drive), the source (the output from our backup script) and the schedule.
When creating your backup routines in Duplicati, always ensure you encrypt your backups and use a strong passphrase.
Also, always make sure your Duplicati backups run at different times to your other backups. Personally, I go for the following setup:
02:00 – Local Nextcloud backup script runs via Cron
03:00 – Duplicati backs up to USB
04:00 – Duplicati backs up to Backblaze B2
I always leave the Backblaze backup to run last, as it then has up to 22 hrs to complete the upload before the next backup starts, so they shouldn’t interfere with one another.
When it comes to configuring your Backblaze backups, change the destination from Local to B2 Cloud Storage. You will need your B2 bucket information and application keys to complete the config.
Once you have entered your Backblaze Bucket information, click Test Connection to make sure Duplicati can write to your B2 bucket correctly.
Important note: You will need to add payment information to your Backblaze account before backing up, otherwise your backups will fail.
To give you an idea of what Backblaze costs, I’m currently backing up around 150GB of data to my Buckets, and I’m charged less than $1/month.
Personally, I only keep 7 days of backups on Backblaze, as I only have it for disaster recovery, for when all my local backups have failed. I don’t need long data retention in the cloud; that’s what my USB drive is for.
Duplicati Email Notifications
You can configure email notifications for Duplicati backups; this way you will always know if your backups are working.
To do this, head into the Duplicati webGUI, click the Settings option to the left of the screen, and scroll all the way down to the bottom where it says Default options. Click the option that says Edit as text, then paste the following into the field:
# Change these as needed; at a minimum, --send-mail-to and --send-mail-url
# (your SMTP server) are required for mail to actually be sent
--send-mail-to=firstname.lastname@example.org
--send-mail-url=smtps://smtp.example.org:465
--send-mail-username=your-smtp-username
--send-mail-password=your-smtp-password
--send-mail-level=all
--send-mail-subject=Duplicati %PARSEDRESULT%, %OPERATIONNAME% report for %backup-name%
--send-mail-from=Backup Mailer <firstname.lastname@example.org>
I personally use Amazon SES for this, but you should be able to use any SMTP server.
You’re done. That’s it. Finito. You now know how to back up Nextcloud in a way that abides by the cardinal 3-2-1 backup rule, and that lets you know when your backups have run.
TEST YOUR BACKUPS!
I can’t stress this enough. Once your backups have been running for a few days, make sure you run a test restore (not on your live system) to make sure you can get your data back. After all, there’s no point in having backups if you can’t restore from them!
To restore the backups you have made of Nextcloud into a vanilla Nextcloud snap installation, you need to decompress your backup to /var/snap/nextcloud/common, then use the nextcloud.import command to restore it:
# Decompress your backup
tar -xvzf /path/to/nextcloud/backup.tar.gz -C /var/snap/nextcloud/common
# Restore your Nextcloud backup
sudo nextcloud.import /var/snap/nextcloud/common/backup-to-restore
Yes, restoring your Nextcloud snap from backup really is that simple!
This is by no means the perfect way to back up Nextcloud, but it does work, and it has worked for me for quite some time now. You may have a different or better way of backing up; if you do, please leave a comment below, or get in touch with me.
Finally, I’d like to thank my friend Thomas from work, who helped improve my script a little and gave me a couple of ideas to improve the security.