  • I’d say that the most important takeaway of this approach is to stop all the containers before the backup. Some applications (like databases) are extremely sensitive to data corruption: if you simply `cp` while they are running, you may copy files of the same program at different points in time and get a corrupted backup. It is also worth mentioning that a backup is only good if you verify that you can restore it. There are so many issues you can discover the first time you restore a backup; you want to be sure you discover them while you still have the original data.
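
    A minimal sketch of that order of operations (paths and the compose project name are made up):

    ```bash
    #!/bin/sh
    # Stop every writer, copy the data while nothing is running, then restart.
    set -e
    cd /srv/mystack                              # hypothetical compose project
    docker compose stop                          # quiesce databases and apps first
    rsync -a --delete data/ /mnt/backup/mystack/ # consistent copy, nothing is writing
    docker compose start
    ```

    And schedule a periodic test restore into a scratch directory, so that the first time you rely on a backup is not the first time you try one.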



  • The decryption key is more than 20 random characters, so getting only half of it is no biggie, and it doesn’t look like anything interesting.

    It is on the internet mostly because I don’t have anything else to host it on locally. But I see some benefit: I wanted the server to be available immediately after a power failure. If it fetches the key from the internet I just need the router to be online; if it fetched it from the local network I would need another server running with an unencrypted disk.



  • Second reason: it may run your VPN, and with the server down you cannot connect to it to provide the decryption key unless you are connected to the same network.

    There are some good answers around where the server can easily decrypt automatically as long as it is connected at your home, but will likely fail at a thief’s home. These are a much safer setup than keeping the data unencrypted, even if they are not bulletproof.



  • I’ve configured something similar. The /boot partition is the only unencrypted one. In the initramfs there is a script that downloads half of the decryption key over HTTP, while the other half is stored in the script itself. The script retries automatically until it can fetch the key and decrypt the root partition.

    My attack model here is that, as soon as I realize someone stole my NAS, I can shut down the server hosting half of the decryption key, making my data safe. There is a window where the attacker could connect the NAS to a network and decrypt the data, but that is made harder by the static network configuration: they would need a default gateway with the same IP address as mine.

    On my TODO list I also have to implement some sort of notification, to get an alert when the decryption key is fetched from the internet.
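
    A rough sketch of what such an initramfs script can look like (the URL, device and baked-in half below are placeholders, not my real setup):

    ```sh
    #!/bin/sh
    # Half the key lives in this script, the other half behind an HTTP endpoint I control.
    LOCAL_HALF="0123456789"                      # placeholder
    KEY_URL="http://key.example.com/other-half"  # placeholder

    # Retry forever: after a power failure the router may come up later than the NAS.
    until REMOTE_HALF="$(wget -q -O - "$KEY_URL")"; do
        sleep 5
    done

    # Concatenate the halves and unlock the root partition, feeding the key via stdin.
    printf '%s%s' "$LOCAL_HALF" "$REMOTE_HALF" \
        | cryptsetup open /dev/sda2 cryptroot --key-file=-
    ```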


  • They also say that installing a different OS will void the warranty. But their x86 models (I wasn’t aware of the ARM ones) literally ship with a USB drive connected to an internal USB port, which starts the setup of their custom Linux if it detects no OS on the internal drives. You just swap that pendrive and install whatever you want. I cannot say it works for all models, but I did a little research before buying mine, and it has run Debian for more than one year without any compatibility issue.





  • lorentz@feddit.it to Selfhosted@lemmy.world · Testing vs Prod

    I don’t have a testing environment, but essentially all my services are on Docker, saving their data in a directory mounted on the local filesystem. The compose file reads each image’s SHA digest from an env file. I have a shell script (sketched after the list) which:

    1. Triggers a new btrfs snapshot of the volume containing everything
    2. Pulls the new docker images and stores their hashes in the env file
    3. Restarts all the containers.
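
    Roughly like this, under the assumption that docker-compose.yml references its images as `${APP_IMAGE}` etc. from `.env` (all names and paths here are made up):

    ```bash
    #!/bin/bash
    set -euo pipefail
    cd /srv/stack   # hypothetical compose project on a btrfs subvolume

    # 1. read-only snapshot of the compose file, .env and all the data directories
    btrfs subvolume snapshot -r . "/srv/snapshots/stack-$(date +%F-%H%M)"

    # 2. pull the current tag and pin its digest in the env file
    docker pull ghcr.io/example/app:latest
    digest="$(docker image inspect --format '{{index .RepoDigests 0}}' ghcr.io/example/app:latest)"
    sed -i "s|^APP_IMAGE=.*|APP_IMAGE=${digest}|" .env

    # 3. recreate the containers from the pinned digests
    docker compose up -d
    ```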

    If a new image version is broken, rolling back is as simple as copying the old digest back into the env file and recreating the container. If data gets corrupted I can just copy the last working state from an old snapshot.

    The whole OS is on a btrfs volume which is snapshotted regularly, so ideally if an update fucks it up beyond recovery I can always boot from a rescue image and restore an old snapshot. But I honestly feel this is extra precaution: in the years I’ve run Debian on all my computers, it has never reached the point of being unbootable.
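
    The rescue-image path would look roughly like this, assuming an Ubuntu-style `@` root subvolume layout (device and snapshot names are made up):

    ```bash
    # From a live/rescue system: mount the btrfs top level, park the broken
    # root subvolume and promote a read-write copy of a known-good snapshot.
    mount -o subvolid=5 /dev/sda2 /mnt
    mv /mnt/@ /mnt/@broken
    btrfs subvolume snapshot /mnt/snapshots/root-2024-01-01 /mnt/@
    ```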


  • > My Synology has an auto block feature that from my understanding is essentially fail2ban; what I don’t know is whether such a feature works for all my exposed services or only Synology’s.

    I’d be surprised if it worked for custom services. Fail2ban has to know what’s running and has to have access to its log files to recognize a failed authentication request. The best you can do without log access is to rate limit new TCP connections, but you still have to know which service is behind the port: 5 new SSH sessions per minute per IP can be reasonable, while 5 new HTTP/1.0 connections per minute likely cannot even load a single HTML page.
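
    For the generic rate-limit option, something like this per-IP limit with iptables’ hashlimit match (port and numbers are just the example from above; tune them per service):

    ```bash
    # Accept at most 5 new SSH connections per minute per source IP, drop the rest.
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m hashlimit --hashlimit-name ssh --hashlimit-mode srcip \
        --hashlimit-upto 5/minute --hashlimit-burst 5 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j DROP
    ```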


  • If you want to encrypt only the data partition you can use an approach like https://michael.stapelberg.ch/posts/2023-10-25-my-all-flash-zfs-network-storage-build/#encrypted-zfs to unlock it at boot.

    TL;DR: store half of the decryption key on the computer and the other half online, and write a script that fetches the second half at boot and decrypts the drive. There is a time window where a thief could decrypt your data before you remove the key, if they connect your computer to the network, but depending on your threat model that can be acceptable. You can also decrypt the root partition with a similar approach, but you need to store the script in the initramfs, and it is not trivial.
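
    On Debian-like systems the data-partition variant can hang off crypttab with a keyscript; a hypothetical entry (UUID and script path made up; note that keyscript is a Debian feature which plain systemd cryptsetup ignores):

    ```
    # /etc/crypttab — unlock the data partition with a custom script at boot
    data  UUID=aaaa-bbbb  none  luks,keyscript=/usr/local/sbin/fetch-split-key
    ```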

    Another option I’ve seen suggested is storing the decryption key on a USB pendrive and connecting it to the server with a long extension cord. The assumption is that a thief would unplug all the cables before stealing your server.



  • As others mentioned, an advantage is that it blocks ads in phone apps too. My other use case is adding extra DNS entries to name devices on my local network. Finally, after using Pi-hole for a while I switched to blocky. It has similar features, but it lacks the UI and the DHCP server; in exchange it uses far fewer resources. Since I didn’t use either of those, it sounded like a good trade to me.
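
    The local-names part is just a few lines in blocky’s config.yml (names and addresses below are examples):

    ```yaml
    customDNS:
      mapping:
        nas.lan: 192.168.1.10
        printer.lan: 192.168.1.20
    ```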


  • I started using headscale (the open-source reimplementation of the Tailscale server) on a private VPS. It is incredibly better than plain WireGuard. I regret waiting so long before switching.

    Something that really made my life easier: WireGuard is poor at roaming. Switching to and from my wifi created issues, because the server wasn’t reachable anymore from its public IP and WireGuard didn’t bother to query DNS again to pick up the new address. Also, configuration is dead simple because it takes care of iptables for you (especially good when you enable forwarding to a node).

    Since the server only exchanges small control-plane messages and all the traffic flows p2p between the devices, the smallest VPS with the cheapest connectivity is more than enough to handle it.
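
    If it helps, the bootstrap is roughly this (server URL and user are placeholders, and exact subcommands vary a bit between headscale versions):

    ```bash
    # on the VPS
    headscale users create alice
    headscale preauthkeys create --user alice

    # on each device, the stock tailscale client just points at your server
    tailscale up --login-server https://headscale.example.com --authkey <key-from-above>
    ```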



  • Nginx for my intranet because configuration is fully manual and I have complete control over it.

    Caddy for the public services on my vps because it handles cert renewal automatically and most of its configuration is magic which just works.

    It is unbelievable how much shorter the Caddy configuration is (see the comparison after the list), but on my intranet:

    1. I don’t want my reverse proxy to dial out to the internet to fetch new SSL certs. I know it can be disabled, but that is the default.
    2. I like to learn how stuff works. Nginx forces you to know more details, but it is full of good documentation, so it is not too painful compared to Caddy.
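
    To give an idea of the size difference, here is a public reverse proxy in each (hostname and upstream are made up; the nginx version also assumes the certs were already obtained, e.g. via certbot):

    ```
    # Caddyfile: TLS issuance, renewal and proxying all included
    example.com {
        reverse_proxy 127.0.0.1:8080
    }
    ```

    ```nginx
    # Rough nginx equivalent
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }
    ```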