Anatomy of a Cloud, pt. II
Getting fancy
By the end of part I, I had deployed a cloud, but it was still missing some of the bells and whistles that came with the OneDrive subscription I got tired of paying for. In this part we’ll complete the bona fide cloud experience: I explain how to set up a web front end and how to sync your Android photos to the cloud. Plus I’ll go over some lessons learned. I want you to be so prepared that when you do this for someone else, they’ll think you’re a certified tech GOD. You don’t even need to give me credit; it’s a little service I provide.
Why build when you can buy?
Before we get started, let’s address the elephant in the room: NASs already exist; you don’t need to build one.
First off, get out of here with that nonsense. I’m a builder, and if you’re reading this you’re probably a builder too. Beyond that obvious reason, I’ve used a couple of NASs and I hate their UIs, I hate all the features they offer that I don’t use, and it’s still proprietary bullshit that I can’t control. And you still have to provide your own disks, so buying a NAS doesn’t offer any cost or hardware advantage either. I vote my own, home-brewed cloud for the win.
Deploy File Browser
Let’s set up File Browser first and verify it works as expected before we put it on the actual internet. I put the web front end on my remote NAS, not the local one. Since that is simply a remote copy of my data (not intended to be changed), I felt safer putting File Browser there, knowing that if someone did manage to get in and delete everything, the original data is preserved by my syncing policies. I don’t need to manage files and folders, create users, share files with others, etc. I’m a simple man, I like simple things. Like setting up my own cloud.
- Create a File Browser user, then check the account and make note of the UID and GID:

```
sudo adduser filebrowser
id filebrowser
```
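For reference, the `id` output looks something like this; the UID and GID numbers on your system will almost certainly differ:

```
uid=1001(filebrowser) gid=1001(filebrowser) groups=1001(filebrowser)
```

Those are the numbers you’ll plug into the compose file below.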
- Set up the container: Inside the existing `docker` dir from part I, make a new directory for File Browser:

```
cd /mnt/syncthing/docker/
sudo mkdir filebrowser
cd filebrowser
```
Make a Docker compose file:

```
sudo nano compose.yaml
```
Add this, substituting the PUID and PGID values with the UID and GID you noted earlier, then save and exit:

```
services:
  filebrowser:
    image: filebrowser/filebrowser:latest
    container_name: filebrowser
    restart: always
    volumes:
      - /mnt/syncthing/pimpdata:/srv
    environment:
      - PUID=filebrowseruser-uid
      - PGID=filebrowseruser-gid
    ports:
      - 80:80
      - 443:443
```
The `volumes` variable will be the root directory when you log into File Browser. Since my files are stored in `pimpdata` and that’s what I want to be available, that’s the path I provided.
- Start the container by running `docker compose up -d`. Open a web browser to http://[remote-NAS-IP] to reach the login page on your very own cloud. The default login credentials are `admin`, `admin`. It should dump you to the root of your own pimp data.
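If the login page doesn’t come up, a couple of standard Docker commands (nothing specific to this setup) help narrow it down; run the second one from the `filebrowser` directory:

```
sudo docker ps                          # the filebrowser container should show as 'Up'
sudo docker compose logs filebrowser    # look for errors if it isn't
```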
- Configure the app: You definitely want to edit some of the default options before you put this on the internet, starting with the default login password. Click Settings on the left side and you’ll see ‘Change Password’ is one of the first options. Explore the other options and do what feels right. Personally, I set a really strong admin password, then turned off all the default permissions for new users. For extra security, I also created a read-only user, which is the one I’ll use to access the site. There are also branding options you can use to customize the site.
The web front end is finished! You can stop here if you don’t want to put this on the actual internet. You can access it from the local network, or remotely if you’re connected to a Twingate network (which mimics the experience of having it on the internet). Check it out, poke around, it’s pretty sweet. But if you want to take this to the next level, let’s keep going.
Buy a website
That’s right, you’re about to be a web admin. Go ahead, it’s not scary. As of the time of this writing, you can get a domain for $10 USD on Cloudflare Registrar.
During registration you have to provide your personal information, but that info is withheld from the public WHOIS record so it isn’t exposed. Yes, I purchased a domain just to show you how easy it is. As promised, none of my personal data shows up in the public record; that info stays with Cloudflare Registrar.
Why did I choose Cloudflare to buy my domain? Because you can use your free Cloudflare account to create tunnels that allow secure web connections to your home servers. The tunnel, the domain, and the DNS records all live in the same place, which makes the whole thing easier to manage. This is the setup for the rest of this guide.
Deploy Cloudflare tunnel
The final step to putting your new web app online is allowing internet traffic into your server. Since your home server is on your LAN, there is a default barrier protecting you from the iNtErNeT: your home router and a crazy little thing called NAT. Basically, nobody on the internet knows how to talk to your devices at home, and if you haven’t allowed any ports through your router, nothing from the outside can get in. This makes hosting a web server at home impossible without some additional networking business.
A Cloudflare tunnel is a secure tunnel between your self-hosted server and Cloudflare, which lets web traffic in, but with Cloudflare’s built-in security in front of it.
- Log into the Cloudflare Zero Trust dashboard and navigate to Networks > Tunnels. ‘Your tunnels’ will display no tunnels, unless you’ve done this before. Click the Create a tunnel button.
- Select the Cloudflared option, give your tunnel a name, then click Save tunnel.
- You can install the tunnel right now by selecting the appropriate connector for your server. Since I’m using a Raspberry Pi and I’m not using Docker for the connector, I selected Debian and the arm64 architecture. That prints an install command to the screen, which automagically deploys a Cloudflare tunnel on your device, logged in with your credentials. If you run the deploy commands now, the connector will show up on this screen; you can also come back later to deploy it. Click Next when you’re finished.
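For a sense of what you’re agreeing to run, the dashboard’s command amounts to something like the sketch below. The exact package URL and the long tunnel token are specific to your account and tunnel, so always copy the real command from the dashboard rather than from here:

```
# Download and install the cloudflared package (Debian, arm64)
curl -L --output cloudflared.deb \
  https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
sudo dpkg -i cloudflared.deb

# Register cloudflared as a system service, authenticated by your tunnel token
sudo cloudflared service install <your-tunnel-token>
```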
- The last step is to route web requests destined for bananasandcoconuts.com to your new tunnel via DNS. Rather than editing the actual DNS config, you can let the tunnel config make that change for you. Click the Domain drop-down and select your domain, then select service type HTTP and add [IP of the NAS device]:80.
Click Save tunnel and that’s it. If your File Browser container is running, it should now be available from the regular ol’ internet at your new domain. You just set up a cloud website, son!
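Side note: the dashboard stores this routing remotely, but if you ever run a locally-managed tunnel instead, the same routing lives in a `config.yml` on the server. A minimal sketch, assuming a tunnel ID and credentials file produced by `cloudflared tunnel create`:

```
tunnel: <tunnel-id>
credentials-file: /home/pi/.cloudflared/<tunnel-id>.json
ingress:
  # Send requests for your domain to File Browser on the NAS
  - hostname: bananasandcoconuts.com
    service: http://<IP-of-the-NAS-device>:80
  # Everything else gets a 404
  - service: http_status:404
```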
Set up Android photo sync
When I rolled out my cloud, there were three key functionalities I needed:
- Resiliency and redundancy
- Access to my files from anywhere over the internet
- Automatic sync location for my phone photos
Almost there! Soon I can stop writing this godforsaken blog. Anyways, Syncthing retired the official Android app, and it doesn’t sound like there’s an official iPhone app either. But thanks to the open source community, a few alternatives exist that work with the Syncthing operation we have spent so much time lovingly crafting. I have an Android phone, so I’ll be using Syncthing-Fork, a still-maintained wrapper around the official Syncthing. For iPhones, you can download Möbius Sync. Or at least that’s what the internet told me.
- Install Syncthing-Fork from the Google Play Store.
- Add your local NAS device: When you open the app, you’ll be met with the familiar Syncthing interface you’ve come to know and love. Tap the Device tab, then tap the Add Device button in the upper right. Because my phone is connected to my home network via wifi, the app detected my local NAS along with its server and sync information. I populated the name and left the folders blank, then hit the check in the corner. Next, accept the device add request from the local NAS Syncthing interface.
- Add your pictures folder: Back in the Syncthing app, tap the Folders tab, then the Add Folder button. Here you specify the local path to your pictures. On my phone the path is `/storage/emulated/0/DCIM/Camera`; your results may vary. You can also share the folder with the device you just added. When you hit the check button, the files will be ready to start syncing once you accept the folder invitation from your local Syncthing. Since I wanted my phone to be the source of truth, I set up a one-way sync: phone > cloud. Make sure to set your sync settings appropriately.
- Sync to the rest of the cloud: By now we have a local NAS, a remote NAS, and (in my case) a local Windows computer that make up our ‘cloud’. The folder of photos from your phone needs to be replicated in the same fashion as the rest of the data, for that added resiliency. From the local NAS Syncthing interface, click the folder being shared from your phone to open the options, select the two sync devices, and click Save. On those devices, accept the folder sync invitation, but be sure to specify where the synced directory will live on the local device. I’ll say it again: if you forget to specify the directory, Syncthing will likely put it in the local user directory or some other place on the OS disk, which is bad news if the new files take up more storage than that disk has. I also set the share to be `receive-only`, as in the sketch below. Files should start syncing when you accept.
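If you’d rather flip folder types from a shell than from the web UI, Syncthing’s config REST API can do it. A minimal sketch, assuming Syncthing is listening on localhost:8384, and using your instance’s API key (Actions > Settings in the web UI) and the folder’s ID:

```
# Mark the phone-photos folder receive-only on this device
curl -X PATCH \
  -H "X-API-Key: <your-api-key>" \
  -d '{"type": "receiveonly"}' \
  http://localhost:8384/rest/config/folders/<folder-id>
```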
- Add the files to File Browser: I wanted my phone pictures available from my cloud browser just the same as my other data, so I had to edit the File Browser Docker compose file to include them. Before you start, copy the path to the local directory where the phone pics are being synced. Mine is in the root dir of Syncthing, the path to which on the remote NAS is `/mnt/syncthing/pimp-phone`. Next, open the File Browser compose file in an editor:
```
sudo nano /mnt/syncthing/docker/filebrowser/compose.yaml
```
Add this additional line to the `volumes` variables, then save and continue:

```
    volumes:
      - /mnt/syncthing/pimpdata:/srv
      - /mnt/syncthing/pimp-phone:/srv/pimp-phone
```
Subsequent folders can be added in this way, where each additional folder is mapped under the root `/srv` directory of the container, thus making it available in the web interface.
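One gotcha worth calling out: editing the compose file doesn’t touch the running container. Re-running compose from the same directory recreates it with the new mount; that’s standard Docker behavior, nothing File Browser-specific:

```
cd /mnt/syncthing/docker/filebrowser
sudo docker compose up -d   # recreates the container so the new volume appears
```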
Limitations/caveats/lessons learned
This… has been a process. I’m extremely proud of setting up an actual cloud, and it’s performing exactly how I designed it, but the road to get here was fraught with mistakes and lessons learned. And since my cloud isn’t sitting in a massive datacenter, there are some caveats and limitations to this specific solution.
There isn’t any disk fault tolerance at any site
To keep costs down, I only purchased enough disk space to host my current usage plus growth, not enough to survive a single disk failure at any one location. Two of the three sync locations only have one disk, for crying out loud. If I wanted fault tolerance, I would have to set up my disk arrays in RAID 1 rather than RAID 0, which means buying twice the amount of storage for the same capacity. I’m leaning heavily on having three copies of the data, but I have considered setting up a fourth copy at a second remote site.
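For reference, if your arrays are mdadm-based (a common choice on a Pi; the device names here are made up for the example), the difference is a single flag:

```
# RAID 1 (mirror): survives one drive failure, costs half the raw capacity
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# RAID 0 (stripe): full capacity, zero fault tolerance
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
```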
Recovering from a failure kinda sucks
During the deployment and while writing part I, the local NAS that I reported having issues with (and later thought was okay) suffered a drive failure that ruined my disk array. Turns out, this specific brand of NVMe drive just isn’t compatible with a RAID setup on my Pis. My remote NAS still has two NVMe drives in RAID, but they are a different brand and perform fabulously.
I replaced the two drives with a single NVMe SSD on the local NAS, but the array was broken, so I couldn’t recover any data. Not to mention, I had deployed the container on the array, so when it failed it took all the container data with it: device info, sharing info, UI settings, etc. Long story short, I basically had to set it all up again. I set aside the two previous SSDs for repurposing; they are completely functional otherwise.
Syncing data over the internet is slooooooow
After I set up Syncthing again, I wasn’t able to cleanly sync the files with how I had set things up. Since my Syncthing instance was technically new, my two other instances didn’t believe it was the same device I was telling them it was. There are ways to clean up Syncthing configurations, but I would’ve needed to do that on all the Syncthing instances, so I took the easy way out: I loaded the primary data again, wiped out the remote data, and re-synced 2 terabytes of data over the internet lol. Over the Twingate tunnel it took about 4.5 days total at a blistering upload speed of ~5 MB/s. This is otherwise fine for a remote receive-only cloud site. I hope Twingate isn’t reading this.
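Sanity check on that figure: 2 TB at ~5 MB/s is 2,000,000 MB ÷ 5 MB/s = 400,000 seconds, or roughly 4.6 days of continuous transfer. The math holds up.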
Additional security safeguards
With my stuff on the internet, I have taken additional steps to make sure any potential data breach is not catastrophic. I absolutely do not store personal, health, financial, or any other type of sensitive record on the cloud without first encrypting it. I use a third-party tool to create a secure drive (which is just an encrypted file) to save my secret pimpdata. You can do this for as many of your files as helps you sleep at night.
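I won’t name my tool of choice, but for illustration, here’s one way to build the same kind of file-backed encrypted drive on Linux with cryptsetup/LUKS; the file name, size, and mount point are all made up for the example:

```
# Create a 10 GB file to back the encrypted volume
dd if=/dev/zero of=secret.img bs=1M count=10240

# Format it as a LUKS container and open it (you'll set a passphrase)
sudo cryptsetup luksFormat secret.img
sudo cryptsetup open secret.img pimpvault

# Put a filesystem on it and mount it
sudo mkfs.ext4 /dev/mapper/pimpvault
sudo mkdir -p /mnt/pimpvault
sudo mount /dev/mapper/pimpvault /mnt/pimpvault

# ...copy your sensitive files in, then lock it back up
sudo umount /mnt/pimpvault
sudo cryptsetup close pimpvault
```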