I stopped reading after this line.
Raspberry Pi won’t do unfortunately, unless you run up to 4 lightweight containers.
Does the author know how much compute power a Raspberry Pi 5 has? If software that just hosts personal data can’t run on a Raspberry Pi 5, it must be terrible software. For most people and their families, an RPi5 is enough to host anything they would ever need.
Well, I run an NTP stratum 1 server handling 2,800 requests a second on average (3.6 Mbit/s total average traffic), a Flightradar24 reporting station, plus some other rarely used services.
The fan only comes on during boot; I’ve never heard it in normal operation. Load averages 0.3-0.5, and most of that is FR24. Chrony usually takes <5% of a single core.
It’s pretty capable.
I’m in the ntppool.org pool for the UK. It randomly assigns servers, which could be any stratum really (but there is quality control on the time provided). I also have stratum 2 servers in .fi and .fr (which are dedicated servers I also use for other things, rather than a Raspberry Pi).
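For anyone wondering what the chrony side of a setup like that looks like: a minimal sketch, assuming a GPS HAT feeding gpsd (SHM segment 0) plus a PPS line on /dev/pps0, on a Debian-style layout. The device names, offsets and file path here are assumptions, not the commenter's actual config.

sudo tee -a /etc/chrony/chrony.conf <<'EOF'
# Coarse time from gpsd via shared memory (NMEA alone is only good to ~100 ms)
refclock SHM 0 refid NMEA offset 0.2 delay 0.5
# Precise pulse-per-second signal, locked to the NMEA source
refclock PPS /dev/pps0 lock NMEA refid PPS
# No subnet given = answer NTP queries from anyone (needed for pool membership)
allow
EOF
sudo systemctl restart chrony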
I’ve run multiple containers on a Pi 3 before “upgrading” to a Pi 4. Yes, not even a Pi 5. Sure, it’s not rapid and drags its heels at times, but for the most part it’s great for hosting stuff for my household.
Home Assistant, Plex, Syncthing, WireGuard, AdGuard, nginx, Nginx Proxy Manager, DuckDNS, MongoDB and the UniFi Network application. I was also running Jellyfin alongside Plex, but it keeps causing the Pi to lock up.
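To give a sense of scale, one of those containers is typically a single command. A hedged example for AdGuard Home (the host paths are assumptions, not the parent's actual setup):

docker run -d --name adguardhome \
  -v /srv/adguard/work:/opt/adguardhome/work \
  -v /srv/adguard/conf:/opt/adguardhome/conf \
  -p 53:53/tcp -p 53:53/udp -p 3000:3000/tcp \
  --restart unless-stopped \
  adguard/adguardhome

Port 3000 serves the first-run setup wizard; you'd also publish whatever port you pick there for the regular web UI.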
Perhaps this was written much earlier than v5.
May 27th 2024? O.o
I self-host mail/SMTP (OpenSMTPD) + IMAP (Dovecot), znc (IRC bouncer), ssh, VPN (IPsec/IKEv2), www/HTTP (httpd), git (git-daemon), and gotweb, all very easily on an extremely cheap ($2 a month, 512 MB RAM, 10 GB storage) VPS running OpenBSD. With all these servers I’m using an immense 178M of my 512M of available memory.
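For context on why that fits in so little RAM: most of those daemons ship in the OpenBSD base system and are toggled with rcctl, so the footprint really is just the daemons themselves. A rough sketch (Dovecot comes from packages, the rest is base; the actual config files are omitted):

pkg_add dovecot
rcctl enable smtpd dovecot httpd
rcctl start smtpd dovecot httpd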
buyvm/frantech
I agree, and I think there’s some reliability arguments for certain services, too.
I’ve been using self-hosted Bitwarden. That’s something I really want to be reliable wherever I happen to be. I don’t want to rely on my home Internet connection always being up and dynamic DNS always matching. An AWS instance or something like that which can handle Bitwarden would be around $20/month (it’s kinda heavy on RAM). Bitwarden’s own hosting is only $3.33/month for a family plan.
Yes, Bitwarden can work with its local cache only, but I don’t like not being able to sync everything. It’s potentially too important to leave to a residential-level Internet connection.
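One hedged note on the RAM point: the heavyweight part is the official Bitwarden server stack; the community Vaultwarden reimplementation (API-compatible, but not necessarily what the parent runs) is famously light and fits on the cheapest VPS tiers. A minimal sketch, with the host path and port as assumptions:

docker run -d --name vaultwarden \
  -v /srv/vaultwarden:/data \
  -p 127.0.0.1:8080:80 \
  --restart unless-stopped \
  vaultwarden/server:latest

You'd still want a reverse proxy with HTTPS in front of it before pointing any clients at it.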
Is your home connection down that much? I’d think that even syncing once every day or so would populate everything fine, and if you’re at home it should update over wifi.
I might just be spoiled because I’m the only one using mine and only for a handful of devices.
Not really, I just have trust issues with my ISP, and I’m willing to spend three bucks a month to work around them.
…Happy cake day?
I wasn’t aware it was on Lemmy too.
I’d agree, but then you can expand this quite widely. Do you think they won’t need their pictures anymore if you host something like Immich/PhotoPrism? If you host movies, series, or games, they may not strictly need them, but it would still be noticeable if they were no longer accessible.
Not that I’m saying you’re wrong, or that I know what a good way of doing it would be. I don’t know myself.
Ideally you want something that gracefully degrades.
So, my media library is hosted by Plex/Jellyfin and a bunch of complex firewall and reverse proxy stuff… And it’s replicated using Syncthing (a sketch of that piece follows below this comment). But at the end of the day it’s on an external HDD that they can plug into a regular old laptop and browse on pretty much any OS.
Same story for old family photos (Photoprism, indexing a directory tree on a Synology NAS) and regular files (mostly just direct SMB mounts on the same NAS).
Backups are a bit more complex, but I also have fairly detailed disaster recovery plans that explain how to decrypt/restore backups and access admin functions, if I’m not available (in the grim scenario, dead - but also maybe just overseas or otherwise indisposed) when something bad happens.
Aside from that, I always make sure that all the self-hosting stuff in my family home is entirely separate from the network infra. No DNS, DHCP or anything else ever runs on my hosting infra.
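The Syncthing piece of the setup described above can be as small as one container per machine; a rough sketch using the official image (the host path is an assumption, and shared folders/devices are then paired up in the web UI on port 8384):

docker run -d --name syncthing \
  -v /srv/syncthing:/var/syncthing \
  -p 8384:8384 -p 22000:22000/tcp -p 22000:22000/udp -p 21027:21027/udp \
  --restart unless-stopped \
  syncthing/syncthing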
I do self-host some services, but it bugs me that a lot of articles that talk about costs do not factor in a lot of additional costs. Drives for a NAS need replacement. Running NUCs means quite an energy draw compared to most ARM-based SBCs.
And it dismisses the time component of self hosting. It’s not going to be zero.
I recently decided to get more serious about self hosting and gotta say,
use TrueNAS Scale, just do it, literally everything is one click… While it can be complicated, it is most definitely worth it, not just to stick it to big tech, but because some of the self-hosted apps genuinely provide a better experience than centralized alternatives. Nextcloud surprised me especially with how genuinely nice it is. Installed it, got an SSL certificate and replaced Google services almost entirely in a few hours of work. I’ve still got a few things I wanna do which look very complicated… Stuff like a mail server and pfSense (the stuff of nightmares) are among the first on my list…
OPNsense is generally pretty easy, more powerful, and more open than pfSense. I started with pfSense but went to OPNsense and have loved it!
I am very much into the nitty-gritty of Linux (I use Alpine, FYI); the problem is, pfSense/OPNsense aren’t based on Linux…
And I also don’t really know how to set them up… You know, as routers, mainly because my internet comes in through PPPoE and I just cannot for the life of me figure out how to pass that through to a VM. I bound the VM to its own NIC, did everything, and it did not work…
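One pattern that usually works for PPPoE into a VM, offered as a hedged sketch since the exact hypervisor isn't stated: give the WAN NIC to a plain Linux bridge with no IP address on the host, attach the VM's virtual NIC to that bridge, and let OPNsense run the PPPoE client itself (PPPoE is layer 2, so it survives bridging). Interface names here are assumptions:

# On the host: dedicate the WAN NIC (eth1 here) to a bridge the host itself doesn't use
ip link add br-wan type bridge
ip link set eth1 master br-wan
ip link set eth1 up
ip link set br-wan up
# Then attach the OPNsense VM's WAN interface to br-wan and configure
# PPPoE (with the ISP credentials) on that interface inside OPNsense.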
Honestly, I found it really easy. I don’t have a background in IT or anything either.
What did you find difficult? Setting custom firewall rules is harder to understand, but the general functionality of setting up a NAT and even installing and configuring ZenArmor were super super easy.
Old ThinkPad with Win 10 Pro, Plex, Plexamp, and several 14TB drives so I can stream my home media library on the go.
Why Win 10?
It’s the OS I know how to use. The ThinkPad is a P50 with a Xeon processor and lots of RAM, so it runs it easily.
It really bugs me in general how often the term “home lab” is conflated with “home server”, but in the context of what this article is trying to communicate, it’s only going to turn off the more casually technical people it’s trying to appeal to.
For many people, their home lab can also function as a server for self hosting things that aren’t meant to be permanent, but that’s not what a home lab is or is for. A home lab is a collection of hardware for experimenting and prototyping different processes and technologies. It’s not meant to be a permanent home for services and data. If the server in your house can’t be shut down and wiped at any given time without any disruption to or loss of data that’s important to you, then you don’t have a home lab.
Only if nothing on it is permanent. You can have a home lab where the things you’re testing are self hosted apps. But if the server in question is meant to be permanent, like if you’re backing up the data on it, or you’ve got it on a UPS you make sure it stays available, or you would be upset if somebody came by and accidentally unplugged it during the day, it’s not a home lab.
A home lab is an unimportant, transient environment meant for tinkering, prototyping, and breaking.
A box that’s a solution to something, that’s hosting anything you can’t get rid of at a moment’s notice, is just a home server.
I still use the label ‘homelab’ for everything in my house, including the production services. It’s just a convenient term and not something I’ve seen anyone split hairs about until now.
if nothing on it is permanent. You can have a home lab where the things you’re testing are self hosted apps. But if the server in question is meant to be permanent, like if you’re backing up the data on it, or you’ve got it on a UPS you make sure it stays available, or you would be upset if somebody came by and accidentally unplugged it during the day, it’s not a home lab.
A home lab is an unimportant, transient environment meant for tinkering, prototyping, and breaking.
Based on what I’ve seen, I’d also say a homelab is often needlessly complex compared to what I’d consider a sane approach to self-hosting. People throw in all sorts of complexity to imitate the things they’re asked to do professionally, things that are either actually bad but have hype/marketing behind them, or that bring value only at scales beyond a household’s hosting needs, when far simpler setups that are nearly zero-touch day to day would suffice.
Oh yeah, like, that’s part of it. If this article is supposed to be a call to action, somebody who starts looking into “homelabs” is going to get confused, they’ll get some sticker shock, and they won’t understand how any of it applies to what’s said in the article. They’ll see a mix of information from small home servers to hyperconverged infrastructure, banks of Cisco routers and switches, etc. My first home lab was a stack of old Cisco gear I used to study for my network engineering degree. If you stumbled upon an old post of mine talking about my setup and all you’re looking for is a Plex box, you’d be like “What the fuck is all this shit, I’m not trying to deal with all that”
“Self hosting” and “home server” are just more accurate keywords to look into, and they’ll actually surface things more closely related to what you want.
Yep, and I see evidence of that over-complication in some ‘getting started’ questions, where people ask about really convoluted design points and others reinforce that by doubling down, or sometimes by mentioning other weird exotic stuff, when the asker might be served by a checkbox in a ‘dumbed down’ self-hosting distribution on a single server, by installing a package and just having it run, or maybe by running a podman or docker command for some things. If they are struggling with complicated networking and scaling across a set of systems, then they are going way beyond what makes sense for a self-hosting scenario.
I’m tired of the argument that the solution to fight tracking/ads/subscriptions/GAFAM is self-hosting.
It’s a solution for some nice people who have the knowledge, time and money for it.
But it’s not a solution for everyone.
We need more small, friendly open-source associations and companies that provide services for people who don’t know the difference between a web search engine and a browser, or between a server and a client. I think that initiatives like “les chatons” in France are amazing for that!!! ( https://www.chatons.org/en ) And just to be clear, I think that self-hosted services are a part of the solution. :)
Agreed. Most people online think having a personal website on their own domain is too much of a hassle; they won’t have the knowledge or time to set up a homelab server.
We need more of the nice people you mention, with the tech know-how and surplus of time, to maintain community services as alternatives to corporate platforms. I see a few co-op services around where member-owners pay a fee for access to cloud storage and social platforms; that is one way to ensure the basic upkeep of such a community. I’m not sure how Chatons is financed, but they certainly have a wide range of libre and private offerings!
I’m hoping my makerspace will be able to do something like that in the future. We’d need funding for a much bigger internet connection, at least three full-time systems people paid market wages and benefits (three because they deserve to go on vacation while we maintain a reasonable level of reliability), and also space for a couple of server racks. Equipment itself is pretty cheap (tons of used servers on eBay are out there), but monthly costs are not.
It’s a lot, but I think we could pull it off a few years from now if we can find the right funding sources. Hopefully can be self-funding in the long run with reasonable monthly fees.
IIRC, it’s nearly impossible to self-host email anymore, unless you have a long established domain already. Gmail will tend to mark you as spam if you’re sending from a new domain. Since they dominate email, you’re stuck with their rules. The only way to get on the good boy list is to host on Google Workspace or another established service like Protonmail.
That’s on top of the fact that correctly configuring an email server has always been a PITA. More so if you want to avoid being a spam gateway.
We need something better than email.
On top of that, most ISPs block port 25 on residential IP addresses to combat spam, making it impossible to go full “DIY”.
I self-host mine using Mailcow, but I use an outbound SMTP relay for sending email so I don’t have to deal with IP reputation.
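For anyone wanting the same split on a plain Postfix box rather than Mailcow (Mailcow configures its relay through its own UI, so this is a generic sketch, not the Mailcow method; the relay hostname and credentials are placeholders):

postconf -e 'relayhost = [smtp.relay.example]:587'
postconf -e 'smtp_tls_security_level = encrypt'
postconf -e 'smtp_sasl_auth_enable = yes'
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf -e 'smtp_sasl_security_options = noanonymous'
# /etc/postfix/sasl_passwd contains: [smtp.relay.example]:587 user:password
postmap /etc/postfix/sasl_passwd
postfix reload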
We need something better than email.
Say everyone agrees and the entire world swaps to some alternative. Email 3.0 or whatever.
Wouldn’t we just have the same issue? Any form of communication protocol (that can be self-hosted) will get abused by spam, requiring a lot of extra work to manage.
Setting up a web of trust could cut out almost all spam. Of course, getting most people to manage their trust in a network is difficult, to say the least. The only other solution has been walled gardens like Facebook or Discord, and I don’t have to tell anyone around here about the problems with those.
Isn’t the current email system kind of a web of trust? Microsoft, Google etc. trust each other. But little me and my home server are not part of that web of trust, which gets my email server blocked.
Yeah, that’s kinda what my GP post was getting at. But it’s all managed by corporations, not individuals.
Realistically I don’t see how it would ever not be managed by a corporation. Your average person doesn’t know how, and doesn’t want, to manage their own messaging system. They are just going to offload that responsibility to a corporation to do it for them. We would end up with exactly the same system we have now, just called something else besides email.
I wish there was a better solution but I am not seeing a way that doesn’t just end up the same as email.
Well, there’s always, you know, mail.
Aah, the good ol’ wooden variety
Unfortunately he is not talking about security?
All of these types of articles always leave out the calculation of what your time is worth to you and the maintenance costs of spare hard drives and other equipment. The TCO is not just the initial investment in hardware/software alone. Unless you plan to host something unreliably and value your time at nothing, in which case I hope you don’t get friends or family hooked on your stuff, or everyone will have a bad time and be back to Google Drive/Docs and Netflix within 5 years.
The reason they leave it out, I feel, is because once you factor all of that stuff in, the $10/month you’re paying for Google Drive storage or the ~$25 you’re paying Netflix starts to make a lot more sense when paired with a decent local backup from a Synology NAS for the “I can’t lose this” stuff like baby pictures of your kids. Which blows their entire premise out of the water.
I self-host a lot, but mostly on cheap VPSes, in addition to the few services on local hardware.
However, these also don’t take into account the amount of time and money to maintain these networks and equipment. Residential electricity isn’t cheap; internet access isn’t cheap, especially if you have to get business-class internet to get upload speeds over 10 or 15 Mbps, or to avoid TOS breaches for running what they consider commercial services even if it’s just for you, mostly because of cable company monopolies; cooling the hardware, especially if you live in a hotter climate, isn’t cheap; and then there’s maintaining the hardware and OS, upgrades, offsite backups for disaster recovery, and all of the other costs. For me, VPSes work, but for others maintaining the OS and software is too much time to put in. And just figuring out what software to host and then how to set it up and properly secure it takes a ton of time.
Residential electricity isn’t cheap
This is a point many folks don’t take into account. My average per-kWh cost right now is $0.41 (yes, California, yay), so it costs me almost $400 per year just to keep some older hardware running 24x7.
This sounds excessive; that’s almost $1.10/day, amounting to more than 2 kWh per day, i.e. ~80 W continuous? You will need to invest in a TDP friendly build. I’m running an AMD APU (known for shitty idle consumption) with RAID 5 and still hover at less than 40 W.
This isn’t speculation on my part; I measured the consumption with a Kill-a-Watt. It’s an 11-year-old PC with 4 hard drives and multiple fans because it’s in a hot environment, and hard drive usage is significant because it’s running security camera software in a virtual machine. Host OS is Linux Mint. It averages right around 110 W. I’m fully aware that’s very high relative to something purpose-built.
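For what it’s worth, the arithmetic lines up with the ~$400/year figure above (wattage and rate taken from these comments):

awk 'BEGIN { watts = 110; rate = 0.41              # measured draw, $ per kWh
             kwh = watts / 1000 * 24 * 365          # ~964 kWh per year
             printf "%.0f kWh/yr -> $%.0f/yr\n", kwh, kwh * rate }'
# prints: 964 kWh/yr -> $395/yr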
You will need to invest in a TDP friendly build
Right, and spend even more money.
I think the main culprit is the CPU/motherboard, so that’s the only thing needing replacement. There are many cheap alternatives (less than $200) that can halve the consumption and would easily pay for themselves in a year of usage. There is a Google Doc floating around listing all the efficient CPUs and their TDPs. Just a suggestion; I’m pretty sure it would pay off its price after a year. There is absolutely no need for a constant 110 W unless you’re running LLMs on that, and even then it shouldn’t be that high.
I solved this by installing solar panels. They produce more electricity than I need (enough to cover charging an EV when I get one in the future), and I should break even (in terms of cost) within 5-6 years of installation. Had them installed last year under NEM 2.0.
I know PG&E want to introduce a fixed monthly fee at some point, which throws off my break-even calculations a bit.
Some VPS providers have good deals and you can often find systems with 16GB RAM and NVMe drives for around $70-100/year during LowEndTalk Black Friday sales, so it’s definitely worth considering if your use cases can be better handled by a VPS. I have both - a home server for things like photos, music, and security camera footage, and VPSes for things that need to be reliable and up 100% of the time (websites, email, etc)
Omg, I pay 30€ for 1 Gbit/s down / 0.7 Gbit/s up (ten more for symmetrical 10 Gbit/s; I don’t need it and can’t even use more than 1 Gbit/s, but my inner nerd wants it) and 0.15€/kWh.
BTW, the electricity cost is somewhat or totally negated when you heat your apartment/house, depending on your heating system. For me, in the winter I totally write it off.
An article telling people to self host read only by those who already self host. Okay.
I think it’s so people here can give themselves a pat on the back for self hosting lol.
Like how the Linux Lemmy community has so many “Windows is bad, Linux is good” posts. Practically everyone in there already knows that Linux is good.
Welcome to the internet, where people try their best to find people with the same opinions so they can feel good and get pissed when they can’t.
Someday I hope we have a server technology that’s platform-agnostic and you can just add things like “Minecraft Server” or “Email Server” to a list and it’ll install, configure, and host everything in the list with a sensible default config. I imagine you could make the technology fairly easily, although keeping up with new services, versions, security updates, etc. would be quite the hassle. But that’s what collaboration is for!
Unraid does this via docker. It’s amazing. You can do this live and on the fly.
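Under the hood it’s basically one container per service; e.g. the widely used community Minecraft image (named here purely as an illustration, not something the parent comment specifically endorses) brings up a configured server with a single command:

docker run -d --name minecraft \
  -e EULA=TRUE \
  -p 25565:25565 \
  -v /srv/minecraft:/data \
  --restart unless-stopped \
  itzg/minecraft-server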
Cloudron does that. Not for free, though, but it’s cheap.
Neat!
Sounds kinda like NixOS, although that’s not platform-agnostic.
Funnily enough I do use NixOS for my server! It’s not quite what I was describing but it does allow me to host easily.
Docker is in theory nice, if it works. Docker doesn’t run on my computer (I have no fucking clue why). Every time I try to do anything I get the error “Unknown Server: OS”, and there is literally nothing you can find online about how to fix this problem.
I use EndeavourOS, but had the same problem on Arch.
Hardware-wise I have a Ryzen 7 5800X, an RX 6700 XT and 32 GB of 3200 MHz RAM.
The weird thing is that some time ago I was actually able to use Docker, but now I’m not.
That doesn’t make any sense to me. It can be installed directly from pacman. It may be something silly like adding docker to your user group. Have you done something like below for docker?
- Update the package index:
sudo pacman -Syu
- Install Docker:
sudo pacman -S docker
- Enable and start the Docker service:
sudo systemctl enable docker.service
sudo systemctl start docker.service
- Add your user to the docker group to run Docker commands without sudo:
sudo usermod -aG docker $USER
- Log out and log back in for the group changes to take effect.
- Verify that Docker is installed correctly by running:
docker --version
If you get the above working, docker compose is just
sudo pacman -S docker-compose
I didn’t start Docker and didn’t add it to my user group. Maybe this will fix it.
sudo pacman -S docker-compose
I did all the steps you mentioned and now it works (at least if I use sudo to run the commands).
I thought it would. If it still requires sudo to run, it’s probably just Docker wanting your user account added to the docker group. If the “docker” group doesn’t exist, you can safely create it.
You will likely need to log out and log back in for the system to recognize the new group permissions.
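Concretely, the standard Docker post-install steps look like this (nothing Arch-specific; the group is literally named docker):

# Create the group only if it doesn't already exist, then add yourself to it
getent group docker || sudo groupadd docker
sudo usermod -aG docker $USER
# Either log out and back in, or start a shell with the new group right away:
newgrp docker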
As someone who has had a career in hosting: good luck.
Don’t forget backups, logging, monitoring, alerting on top of security updates, hardware failure, power outages, OS updates, app updates, and tech being deprecated and obsolete at a rapid pace.
I’m in favor of a decentralized net with more self-hosting, but that requires more education and skill. You can’t automate away all the unpleasant and technical bits.
But if we hide the complexity, surely we won’t ever have to deal with it! /s
You can’t automate away all the unpleasant and technical bits.
But it’s our job to try