
CLI agents make self-hosting on a home server easier and fun

775 points · 27 days ago · fulghum.io
simonw27 days ago

This post lists inexpensive home servers, Tailscale, and Claude Code as the big unlocks.

I actually think Tailscale may be an even bigger deal here than the sysadmin help from Claude Code et al.

The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.

Tailscale dramatically reduces this risk, because I can so easily configure it so my own devices can talk to my home server from anywhere in the world without the risk of exposing any ports on it directly to the internet.

Being able to hit my home server directly from my iPhone via a tailnet no matter where in the world my iPhone might be is really cool.

drnick127 days ago

I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that. I can't rule out a vulnerability somewhere but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.

buran7726 days ago

> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.

> I am not sure why people are so afraid of exposing ports

It's simple: you increase your attack surface, and with it the effort and expertise needed to mitigate that.

> It's the way the Internet is meant to work.

Along with no passwords or security. There's no prescribed way for how to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about the internet? Principle of least privilege, it's how security is meant to work.

lmm26 days ago

> It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.

Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.

observationist26 days ago

This is where using frontier models can help - you can have them assist with configuring and operating wireguard nearly as easily as you can have them walk you through Tailscale, eliminating the need for a middleman.

The mid-level and free tiers aren't necessarily going to help, but the Pro/Max/Heavy tier can absolutely make setting up and using wireguard and having a reasonably secure environment practical and easy.

You can also have the high tier models help with things like operating a FreePBX server and VOIP, manage a private domain, and all sorts of things that require domain expertise to do well, but are often out of reach for people who haven't gotten the requisite hands on experience and training.

I'd say go through the process of setting up your self hosting environment, then after the fact ask the language model: "This is my environment: blah, a, b, c, x, y, z, blah, blah. What simple things can I do to make it more secure?"

And then repeating that exercise - create a chatgpt project, or codex repo, or claude or grok project, wherein you have the model do a thorough interrogation of you to lay out and document your environment. With that done, you condense it to a prompt, and operate within the context where your network is documented. Then you can easily iterate and improve.

Something like this isn't going to take more than a few 15 minute weekend sessions each month after initially setting it up, and it's going to be a lot more secure than the average, completely unattended, default settings consumer network.

You could try to yolo it with Operator or an elevated MCP interface to your system, but the point is, those high tier models are good enough to make significant self hosting easily achievable.

prmoustache26 days ago

> Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.

Wireguard is distributed by distros in official packages. You don't need time, money, or expertise to set up unattended upgrades with auto reboot on a Debian or Red Hat based distro. At least it is not more complicated than setting up an AI agent.
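For anyone curious, the whole thing on Debian/Ubuntu is roughly this sketch (the directives are the stock unattended-upgrades ones; the reboot time is arbitrary):

    # install and enable the stock package
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # /etc/apt/apt.conf.d/50unattended-upgrades -- opt in to automatic reboots
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "04:00";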

heavyset_go26 days ago

> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

This is what I do. You can get Tailscale-like access using things like Pangolin[0].

You can also use a bastion host, or block all ports and set up Tor or i2p, and then anyone that even wants to talk to your server will need to know cryptographic keys to route traffic to it at all, on top of your SSH/WG/etc keys.

> I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.

This is what I don't do. Anything that needs real internet access like mail, raw web access, etc gets its own VPS where an attack will stay isolated, which is important as more self-hosted services are implemented using things like React and Next[1].

[0] https://github.com/fosrl/pangolin

[1] https://news.ycombinator.com/item?id=46136026

edoceo26 days ago

Is a container not enough isolation? I do SSH to the host (alt-port) and then services in containers (mail, http)

Imustaskforhelp26 days ago

I understand where you are coming from, but no, containers aren't enough isolation.

If you are running some public service, it might have bugs (we regularly see RCE issues, of course) or some misconfig, and containers by default don't provide enough security if a hacker tries to break in. Containers aren't secure in that sense.

Virtual machines are the intended tool for that, but they can be full of friction at times.

If you want something of a middle compromise, I can't recommend incus enough. https://linuxcontainers.org/incus/

It allows you to manage VMs like containers, even provides a web UI, and gives an amount of isolation that you can (usually) trust.

I'd say don't take chances with your home server, because that server can sit inside your firewall and, in a worst-case scenario, infect other devices. Virtualization with things like incus or proxmox (another well-respected tool) is the safest option and provides isolation you can trust. I highly recommend taking a look at it if you deploy public-facing services.

zamadatix26 days ago

It's the way the internet was meant to work, but that doesn't make it any easier. Even when everything is in containers/VMs/users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it, it's quite annoying when it gets pwned.

There was a popular post about exactly this less than a month ago: https://news.ycombinator.com/item?id=46305585

I agree maintaining wireguard is a good compromise. It may not be "the way the internet was intended to work" but it lets you keep something which feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work than Tailscale to maintain.

drnick126 days ago

> There was a popular post less than a month ago about this recently https://news.ycombinator.com/item?id=46305585

This incident precisely shows that containerization worked as intended and protected the host.

zamadatix26 days ago

It protected the host itself but it did not protect the server from being compromised and running malware, mining cryptocurrency.

Containerizing your publicly exposed service will also not protect your HTTP server from hosting malware or your SMTP server from sending SPAM, it only means you've protected your SMTP server from your compromised HTTP server (assuming you've even locked it down accurately, which is exactly the kind of thing people don't want to be worried about).

Tailscale puts the protection of the public portion of the story in the hands of a company dedicated to keeping that portion secure. Wireguard (or similar) limits the exposure to a single service with low churn and minimal attack surface. It's a very different discussion than preventing lateral movement alone. And that all goes without mentioning that not everyone wants to deal with containers in the first place (though many do in either scenario).

SoftTalker26 days ago

I just run an SSH server and forward local ports through that as needed. Simple (at least to me).
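For anyone unfamiliar, the pattern is a one-liner (hostnames and ports here are hypothetical):

    # expose the home server's port 80 on localhost:8080, via the SSH tunnel
    ssh -L 8080:localhost:80 user@home.example.com
    # then browse to http://localhost:8080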

Imustaskforhelp26 days ago

Also to Simon: I am not sure how the iPhone works, but on Android you could probably use mosh and termux to connect to the server and get the same end result without relying on a third party (in this case Tailscale).

I am sure there must be an iPhone app which allows something like this too. I highly recommend more people take a look into such a workflow; I might look into it more myself.

Tmate is a wonderful service if your home network is behind NAT.

I personally like using the hosted instance of tmate (tmate.io) itself, but it can be self hosted and is open source.

Once again it has the third party issue, but luckily it can be self hosted, so you can even get a mini VPS on hetzner/upcloud/ovh and route traffic through that by hosting tmate there, so ymmv.

nobody999926 days ago

>It's the way the internet was meant to work but it doesn't make it any easier. Even when everything is in containers/VMs/users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it it's quite annoying when it gets pwned.

As someone who spent decades implementing and securing networks and internet-facing services for corporations large and small as well as self-hosting my own services for much of that time, the primary lesson I've learned and tried to pass on to clients, colleagues and family is:

   If you expose it to the Internet, assume it will be pwned at some point.
No, that's not universally true. But it's a smart assumption to make for several reasons:

1. No software is completely bug free and those bugs can expose your service(s) to compromise;

2. Humans (and their creations) are imperfect and will make mistakes -- possibly exposing your service(s) to compromise;

3. Bad actors, ranging from marginally competent script kiddies to master crackers with big salaries and big budgets from governments and criminal organizations are out there 24x7 trying to break into whatever systems they can reach.

The above applies just as much to tailscale or wireguard as it does to ssh/http(s)/imap/smtp/etc.

I'll say it again as it's possibly the most important concept related to exposing anything:

   If you expose it to the Internet, assume that, at some point, it will be 
   compromised and plan accordingly.
If you're lucky (and good), it may not happen while you're responsible for it, but assuming it will and having a plan to mitigate/control an "inevitable" compromise will save your bacon much better than just relying on someone else's code to never break or have bugs which put you at risk.

Want to expose ports? Use Wireguard? Tailscale? HAProxy? Go for it.

And do so in ways that meet your requirements/use cases. But don't forget to at least think (better yet script/document) about what you will do if your services are compromised.

Because odds are that one day they will.

Etheryte26 days ago

Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there. No matter what I'm hosting, it's a lot more convenient to not have to worry about that even for a second.

drnick126 days ago

> Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there

Are you sure that it isn't just port scanners? I get perhaps hundreds of connections to my SMTP server every day, but they are just innocuous connections (hello, then disconnect). I wouldn't worry about that unless you see repeated login attempts, in which case you may want to deploy Fail2Ban.
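Fail2Ban is a few lines of config, as a sketch (stock Debian paths; the thresholds are arbitrary):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 10m
    bantime  = 1h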

NewJazz26 days ago

This is a good reason not to expose random services, but a wireguard endpoint simply won't respond at all if someone hits it with the wrong key. It is better even than key based ssh.

epistasis26 days ago

I've managed wireguard in the past, and would never do it again. Generating keys, distributing them, configuring it all... bleh!

Never again, it takes too much time and is too painful.

Certs from Tailscale are reason enough to switch, in my opinion!

The key to successful self hosting is to make it easy and fast, IMHO.

alpn26 days ago

> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

I’m working on a (free) service that lets you have it both ways. It’s a thin layer on top of vanilla WireGuard that handles NAT traversal and endpoint updates so you don’t need to expose any ports, while leaving you in full control of your own keys and network topology.

https://wireplug.org

copperx26 days ago

Apparently I'm ignorant about Tailscale, because your service description is exactly what I thought Tailscale was.

hamandcheese26 days ago

This is very cool!

But I also think it's worth a mention that for basic "I want to access my home LAN" use cases you don't need P2P; you just need a single public IP to your LAN and perhaps dynamic DNS.

byb26 days ago

My biggest source of paranoia is my open Home Assistant port. While it requires a strong password and is TLS-encrypted, I'm sure that one day someone will find an exploit letting them in, and then the attacker will rapidly turn my smart home devices on and off until they break or overheat the power components and start a fire that burns down my house.

seszett26 days ago

That seems like a very irrational fear. Attackers don't go around trying to break into Home Assistant to turn the lights on at some stranger's house.

There's also no particular reason to think Home Assistant's authentication has a weak point.

And your devices are also unlikely to start a fire just by being turned on and off, if that's your fear you should replace them at once because if they catch fire it doesn't matter if it's an attacker or yourself turning them on and off.

timc326 days ago

People are putting their whole infrastructure onto HA - cars, Apple/Google/other accounts, integrations with grid companies, managing ESP software, etc.

I think that has more potential for problems than turning lights on and off and warrants strong security.

wao0uuno26 days ago

Why expose HA to the internet? I’m genuinely curious.

Topgamer726 days ago

I don't have a static IP, so tailscale is convenient. And less likely to fail when I really need it, as opposed to trying to deal with dynamic DNS.

Frotag26 days ago

Speaking of Wireguard, my current topology has all peers talking to a single peer that forwards traffic between peers (for hole punching / peers with dynamic ips).

But some peers are sometimes on the same LAN (eg phone is sometimes on same LAN as pc). Is there a way to avoid forwarding traffic through the server peer in this case?

Frotag26 days ago

I guess I'm looking for wireguard's version of STUN. And now that I know what to google for, I've finally found some promising leads.

https://github.com/jwhited/wgsd

https://www.jordanwhited.com/posts/wireguard-endpoint-discov...

https://github.com/tjjh89017/stunmesh-go

megous26 days ago

Have your network-managing software set up a default route with a lower metric than the wireguard default route, based on wifi SSID. This can be done easily with systemd-networkd, because you can match .network file configurations on SSID. You're probably out of luck with this approach on network-setup-challenged devices like so-called smart phones.
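A sketch of what that looks like (interface name, SSID, and metric are placeholders):

    # /etc/systemd/network/25-home-wifi.network
    [Match]
    Name=wlan0
    SSID=MyHomeWifi

    [Network]
    DHCP=yes

    [DHCPv4]
    # lower metric than the wireguard default route, so the LAN wins at home
    RouteMetric=100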

darkwater26 days ago

I don't fully understand your topology use case. You have different peers that are "road warriors" and that sometimes happen to both be on the same LAN, which is not your home LAN, and need to talk to each other? And I guess you are connecting to the other peer via DNS, so your DNS record always points to the Wireguard-provided IP?

torcete26 days ago

The way I do it is to have two different first level domains. Let's say:

- w for the wireguard network.
- h for the home network.

Nothing fancy, just populate the /etc/hosts on every machine with these names.
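Something like this (addresses are illustrative):

    # /etc/hosts
    10.8.0.10     server1.w   # wireguard address
    192.168.1.10  server1.h   # home LAN address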

Now, it's up to me to connect to my server1.h or server1.w depending whether I am at home or somewhere else.

wooptoo26 days ago

Two separate WG profiles on the phone; one acting as a Proxy (which forwards everything), and one acting just as a regular VPN without forwarding.

digiown26 days ago

A mesh-type wireguard network is rather annoying to set up if you have more than a few devices, and a hub-type network (on a low powered router) tends to be so slow that it necessitates falling back to alternate interfaces when you're at home. Tailscale does away with all this and always uses direct connections. In principle it is more secure than hosting it on some router without disk encryption (as the keys can be extracted via a physical attack, and a pwned router can also eavesdrop on traffic).

gambiting26 days ago

"Back in the day"(just few years ago) I used to expose a port for RDP on my router, on a non-standard port. Typically it would be fine and quiet for a few weeks, then I assume some automatic scanner would find it and from that point onwards I could see windows event log reporting a log in attempt every second, with random login/password combinations, clearly just looking for something that would work. I would change the port and the whole dance would repeat all over again. Tens of thousands of login attempts every day, all year round. I used to just ignore it, since clearly they weren't going to log in with those random attempts, but eventually just switched to OpenVPN.

So yeah, the lesson there is that if you have a port open to the internet, someone will scan it and try to attack it. Maybe not if it's a random game server, but any popular service will get under attack.

drnick126 days ago

> someone will scan it and try to attack it. Maybe not if it's a random game server, but any popular service will get under attack.

That's fine, it's only people knocking on a closed door. You cannot host things such as email or HTTP without open ports, your service needs to be publicly accessible by definition.

pacija26 days ago

Of course. A port is a door. If the service listening on a port is secure and properly configured (e.g. ssh), the whole Internet can bang on it all day every day; they won't get through without the proper key. Same for imap, xmpp or any other service.
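"Properly configured" for ssh mostly means a few sshd_config lines, e.g. (a sketch):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin no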

But what can you expect from people who provide services but won't even try to understand how they work and how they are configured as it's 'not fun enough', expecting claude code to do it right for them.

Asking AI to do a thing you did 100 times before is OK, I guess. Asking AI to do a thing you never did and have no idea how it's properly done - not so much, I'd say. But this guy obviously does not signal his sysadmin skills but his AI skills. I hope it brings him the result he aimed for.

eqvinox26 days ago

> I am not sure why people are so afraid of exposing ports.

Similar here, I only build & run services that I trust myself enough to run in a secure manner by themselves. I still have a VPN for some things, but everything is built to be secure on its own.

It's quite a few services on my list at this point, and I really don't want a break in one thing to lead to a break in everything. It's always possible to leave a hole in one or two things by accident.

On the other side this also means I have a Postgres instance with TCP/5432 open to the internet - with no ill effects so far, and quite a bit of trust it'll remain that way, because I understand its security properties and config now.

sauercrowd27 days ago

People are not full time maintainers of their infra though, that's very different to companies.

In many cases they want something that works, not something that requires a complex setup that needs to be well researched and understood.

buildfocus26 days ago

Wireguard is _really_ simple in that sense though. If you're not doing anything complicated it's very easy to set up & maintain, and basically just works.

You can also buy quite a few routers now that have it built in, so you literally just tick a checkbox, then scan a QR code/copy a file to each client device, done.
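The file in question is tiny. A client config is roughly this sketch (keys, endpoint, and subnets are placeholders):

    # client wg0.conf
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/24

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
    PersistentKeepalive = 25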

vladvasiliu26 days ago

This may come with its own limitations, though.

My ISP-provided router (Free, in France) has WG built-in. But other than performance being abysmal, its main pain point is not supporting subnet routing.

So if all you want is to connect your phone / laptop while away to the local home network, it's fine. If you want to run a tunnel between two locations with multiple IPs on the remote side, you're SoL.

PeterStuer26 days ago

Defence in depth. You have a layer of security even before a packet reaches your port. I might have a zero day for your service, but now I also need to breach your reverse proxy to get to it.

CSSer27 days ago

The answer is people who don't truly understand the way it works being in charge of others who also don't in different ways. In the best case, there's an under resourced and over leveraged security team issuing overzealous edicts with the desperate hope of avoiding some disaster. When the sample size is one, it's easy to look at it and come to your conclusion.

In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.

dpacmittal26 days ago

Tailscale works behind NAT; wireguard does not, unless you also have a publicly reachable relay server, which introduces its own maintenance headaches and cost.

SchemaLoad26 days ago

If you expose ports, literally everything you are hosting and every plugin is an attack surface. Most of this stuff is built by single hobbyist devs on the weekend. You are also exposed to any security issues you make in your configuration. On my first attempt at self hosting, I had redis compromised because I didn't realise I had exposed it to the internet with no password.

Behind a VPN your only attack surface is the VPN which is generally very well secured.

sva_26 days ago

You exposed your redis publicly? Why?

Edit: This is the kind of service that you should only expose to your intranet, i.e. a network that is protected through wireguard. NEVER expose this publicly, even if you don't have admin:admin credentials.
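Concretely, that's a couple of lines of redis.conf (a sketch; the wireguard address is illustrative):

    # /etc/redis/redis.conf -- listen on loopback and the wireguard interface only
    bind 127.0.0.1 10.8.0.1
    protected-mode yes
    requirepass <long-random-password>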

vladvasiliu26 days ago

Isn't GP's point about inadvertently exposing stuff? Just mention Docker networking on HN and you'll get threadfuls of comments on how it helpfully messes with your networking without telling you. Maybe redis does the same?

I mitigate this by having a dedicated machine on the border that only does routing and firewalling, with no random services installed. So anything that helpfully opens ports on internal vms won't automatically be reachable from the outside.

Jach26 days ago

I have a VPS with OVH, I put Tailscale on it and it's pretty cool to be able to install and access local (to the server) services like Prometheus and Grafana without having to expose them through the public net firewall or mess with more apache/nginx reverse proxies. (Same for individual services' /metrics endpoints that are served with a different port.)

twelvedogs26 days ago

i tried wireguard and ended up giving up on it, too many isps just block it here or use some kind of tech that fucks with it and i have no idea why, i couldn't connect to my home network because it was blocked on whatever random wifi i was on

the new problem is now my isp uses cgnat and there's no easy way around it

tailscale avoids all that, if i wanted more control i'd probably use headscale rather than bother with raw wireguard

pferde26 days ago

And there's nothing wrong with it. That is what wireguard is meant to be - a rock-solid secure tunneling implementation that's easy to build higher-level solutions on.

BatteryMountain26 days ago

Which router OS are you using? I have openwrt + daily auto updates configured, with a couple of packages blacklisted that I manually update now & then.

Ir0nMan25 days ago

It's worth considering: Run the PiVPN script on a Ubuntu/Debian based VM. Set it to use a non-standard random port. That will be your only port exposed to the internet.

Add the generated Wireguard key to any device (laptops, phones, etc) and access your home LAN as if it was local from anywhere in the world for free.

Works well, super easy to set up, secure, and fast.
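The whole flow is roughly this (the install URL is the one PiVPN documents; pick WireGuard and a random port when prompted):

    curl -L https://install.pivpn.io | bash

    # per device: create a profile and show it as a QR code for phones
    pivpn add
    pivpn -qr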

rubatuga26 days ago

Yggdrasil network is probably the future. At Hoppy Network we're about to release private yggdrasil relays as a service so you don't get spammed with "WAN" traffic. With Yggdrasil, IP addresses aren't allocated by an authority - they are owned and proven by public key cryptography.

esseph26 days ago

With ports you have dozens or hundreds of applications and systems to attack.

With tailscale / zerotier / etc the connection is initiated from inside to facilitate NAT hole punching and work over CGNAT.

Wireguard removes a lot of that attack surface, but it wouldn't work behind CGNAT without a relay box.

nialv726 days ago

> introduce a third party like Tailscale.

Well just use headscale and you'll have control over everything.

vladvasiliu26 days ago

That just moves the problem, since headscale will require a server you manage with an open port.

Sure, tailscale is nice, but from an open-port-on-the-net perspective it's probably a bit below just opening wireguard.

cirelli9425 days ago

I did it and I was just hacked because of a CVE in my pangolin reverse proxy! Sadly, I didn't know of the CVE soon enough, and I only noticed when a crypto malware pushed my fan to 100% all day long...

abc123abc12326 days ago

This is the truth. I've been exposing 22 and 80 for decades, and nothing has happened. The ones I know who had something bad happen to them exposed proprietary services or security nightmares like wordpress.

arjie26 days ago

I used to do that, but Tailscale with your own headscale server is pretty snazzy. The other thing is with cloudflared running your server doesn't have to be Internet-routable. Everything is tunneled.

1vuio0pswjnm726 days ago

"I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale."

It's always perplexing to me how HN commenters replying to a comment with a statement like this, e.g., something like "I prefer [choice with some degree of DIY]", will try to "argue" against it

The "arguments" are rarely, "I think that is a poor choice because [list of valid reasons]"

Instead the responses are something like, "Most people...". In other words, a nonsensical reference to other computer users

It might make sense for a commercial third party to care about what other computer users do, but why should any individual computer user care what others do (besides genuine curiosity or commercial motive)

For example, telling family, friends, colleagues how you think they should use their computers usually isn't very effective. They usually do not care about your choices or preferences. They make their own

Would telling strangers how to use their computers be any more effective

Forum commenters often try to tell strangers what to do, or what not to do

But every computer user is free to make their own choices and pursue their own preferences

NB. I am not commenting on the open ports statement


inapis26 days ago

Skill issue. Not to mention the ongoing effort required to maintain and secure the service. But even before that, a lot of people are behind CGNAT. Tailscale makes punching a hole through that very easy. Otherwise you have to run your own relay server somewhere in the cloud.

catlifeonmars26 days ago

Honestly the managed PKI is the main value-add from Tailscale over plain wireguard.

I’ve been meaning to give this a try this winter: https://github.com/juanfont/headscale

zobzu26 days ago

put simply and fairly bluntly: because they do not know how things work.

but actually it's worse. this is HN - supposedly, most commenters are curious by nature and well versed in most basic computer stuff. in practice, it's slowly less and less the case.

worse: what is learned and expected is different from what you'd think.

for example, separating service users sure is better than nothing, but the OS attack surface as a local user is still huge, hence why we use sandboxes, which really are just OS level firewalls to reduce the attack surface.

the open port attack surface isn't terrible though: you get a bit more of the very well tested tcp/ip stack, and up to 65k ports all doing the exact same thing. not terrible at all.

Now, add to it "AI" which can automatically regurgitate and implement whatever reddit and stack overflow say.. it makes for a fun future problem - such forums will end up with mostly non-new AI content (a new problem being solved will be a needle in the haystack) - and users will have learned that AI is always right no matter what it decides (because they don't know any better and they're being trained to blindly trust it).

Heck, i predict there will be a chat where a bunch of humans will argue very strongly that an AI is right while it's blatantly wrong, and some will likely put their life on the line to defend it.

Fun times ahead. As for my take: humans _need_ learning to live, but are lazy. Nature fixes itself.

johnisgood26 days ago

Tailscale does not solve the "falling behind on updates" problem, it just moves the perimeter. Your services are still vulnerable if unpatched: the attacker now needs tailnet access first (compromised device, account, or Tailscale itself).

You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.

WireGuard gives you the same "no exposed ports except VPN" model without the third-party dependency.

The tradeoff is convenience, not security.

BTW, why are people acting like accessing a server from a phone is a 2025 innovation?

SSH clients on Android/iOS have existed for 15 years. Termux, Prompt, Blink, JuiceSSH, pick one. Port N, key auth, done. You can run Mosh if you want session persistence across network changes. The "unlock" here is NAT traversal with a nice UI, not a new capability.

Galanwe26 days ago

> BTW, why are people acting like accessing a server from a phone is a 2025 innovation?

> SSH clients on Android/iOS have existed for 15 years

That is not the point. Tailscale is not just about having a network connection, it's everything that goes with it. I used to have OpenVPN, and there's a world of difference.

- The tailscale client is much nicer and convenient to use on Android than anything I have seen.

- The auth plane is simpler, especially for non-tech users (parents, wife) whom I want to be able to access my photo album. They are basically independent with tailscale.

- The simplicity also allows me to recommend it to friends and we can link between our tailnet, e.g. to cross backup our NAS.

- Tailscale can terminate SSH publicly, so I can selectively expose services on the internet (e.g. VaultWarden) without exposing my server and hosting a reverse proxy.

- ACLs are simple and user friendly.

johnisgood26 days ago

You are listing conveniences, which is fair. I said the tradeoff is convenience, not security.

> "Tailscale can terminate SSH publicly"

You are now exposing services via Tailscale's infrastructure instead of your own reverse proxy. The attack surface moved, it did not shrink.

twelvedogs26 days ago

> Tailscale does not solve the "falling behind on updates" problem, it just moves the perimeter.

nothing 100% fixes zero days either, you are just adding layers that all have to fail at the same time

> You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.

you still have to have a vulnerable service after that. in your scenario you'd need an exploitable attack on wireguard or one of tailscale's modifications to it and an exploitable service on your network

that's extra difficulty not less

johnisgood26 days ago

The "layers" argument applies equally to WireGuard without Tailscale. Attacker still needs VPN exploit + vulnerable service.

The difference: Tailscale adds attack vectors that do not exist with self-hosted WireGuard: account compromise, coordination plane, client supply chain, other devices on your tailnet. Those are not layers to bypass, they are additional entry points.

Regardless, it is still for convenience, not security.

twelvedogs21 days ago

yeah i agree, it's less secure than just wireguard + self hosted, to be honest i didn't thoroughly read your original comment

philips27 days ago

I agree! Before Tailscale I was completely skeptical of self hosting.

Now I have tailscale on an old Kindle downloading epubs from a server running Copyparty. It's great!

ryandrake27 days ago

Maybe I'm dumb, but I still don't quite understand the value-add of Tailscale over what Wireguard or some other VPN already provides. HN has tried to explain it to me but it just seems like sugar on top of a plain old VPN. Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.

Jtsummers27 days ago

I think you answered the question. Sugar. It's easier than managing your own Wireguard connections. Adding a device just means logging into the Tailscale client, no need to distribute information to or from other devices. Get a new phone while traveling because yours was stolen? You can set up Tailscale and be back on your private network in a couple minutes.

Why did people use Dropbox instead of setting up their own FTP servers? Because it was easier.

simonw26 days ago

If you're confident that you know how to securely configure and use Wireguard across multiple devices then great, you probably don't need Tailscale for a home lab.

Tailscale gives me an app I can install on my iPhone and my Mac and a service I can install on pretty much any Linux device imaginable. I sign into each of those apps once and I'm done.

The first time I set it up that took less than five minutes from idea to now-my-devices-are-securely-networked.

Cyph0n27 days ago

It’s a bit more than sugar.

1. 1-command (or step) to have a new device join your network. Wireguard configs and interfaces managed on your behalf.

2. ACLs that allow you to have fine grained control over connectivity. For example, server A should never be able to talk to server B.

3. NAT is handled completely transparently.

4. SSO and other niceties.

For me, (1) and (2) in particular make it a huge value add over managing Wireguard setup, configs, and firewall rules manually.
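For a sense of what (1) looks like on a Linux box, per Tailscale's documented quickstart:

    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up   # prints a login URL; authenticate once and you're on the tailnet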

zeroxfe26 days ago

> Plex is just sugar on top of file sharing.

right, like browsers are just sugar on top of curl

InfinityByTen26 days ago

At least postman is :P

SchemaLoad26 days ago

Tailscale is Wireguard but it automatically sets everything up for you, handles DDNS, can punch through NAT and CGNAT, etc. It's also running a Wireguard server on every device so rather than having a hub server in the LAN, it directly connects to every device. Particularly helpful if it's not just one LAN you are trying to connect to, but you have lots of devices in different areas.

drnick127 days ago

> Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.

Speaking of that, I have always preferred a plain Unbound instance and a Samba server over fancier alternatives. I guess I like my setups extremely barebone.

ryandrake26 days ago

Yea, my philosophy for self-hosting is "use the smallest amount of software you can in order to do what you really need." So for me, sugar X on top of fundamental functionality Y is always rejected in favor of just configuring Y.

tech_ken26 days ago

Managing the wg.conf is a colossal PITA, especially if I'm trying to like provision a new client and don't have access to my main laptop. It's crying out for a CRUD app on top of it, and I think tailscale is basically that plus a little. The value add seems obvious.

Also plex is way more than sugar on top of file sharing; it's like filesharing, media management, and a CDN rolled into one product. Soulseek isn't going to handle transcoding for you.

epistasis26 days ago

I use Tailscale for exactly those reasons, plus the easy SSL certificates and clients for Android and iOS.

From this thread, I've learned about Pangolin:

https://github.com/fosrl/pangolin

Which seems very compelling to me too. If it has apps that allow various devices connect to the VPN it might be worth it to me to trial using it instead of Tailscale...

lelandbatey26 days ago

If Plex is "just file sharing" then I guarantee you'd find Tailscale "just WireGuard".

I enjoy that relative "normies" can depend on it/integrate it without me having to go through annoying bits. I like that it "just works" without requiring loads of annoying networking.

For example, my aging mother just got a replacement computer and I am able to make it easy to access and remotely administer by just putting Tailscale on it, and have that work seamlessly with my other devices and connections. If one day I want to fully self-host, then I can run Headscale.

Frotag27 days ago

I always assumed it was because a lot of ISPs use CGNAT and using tailscale servers for hole punching is (slightly) easier than renting and configuring a VPS.

mfcl27 days ago

It's plug and play.

Forgeties7926 days ago

And some people may not value that but a lot of people do. It’s part of why Plex has become so popular and fewer people know about Jellyfin. One is turnkey, the other isn’t.

I could send a one page bullet point list of instructions to people with very modest computer literacy and they would be up and running in under an hour on all of their devices with Plex in and outside of their network. From that point forward it’s basically like having your own Netflix.

atmosx27 days ago

You don’t have to run the control plane and you don’t have to manage DNS & SSL keys for the DNS entries. Additionally the RBAC is pretty easy.

All of these are manageable through other tools, but it's a more complicated stack to keep up.

navigate831026 days ago

Tailscale is able to punch holes through CGNAT, which vanilla wireguard cannot.

BatteryMountain26 days ago

Setting up wireguard manually can be a pain in the butt sometimes. Tailscale makes it super easy but then your info flows through their nodes.

Skunkleton27 days ago

Yes, that is really all it is.

hexfish26 days ago

Is Tailscale still recording metadata about all your connections? https://github.com/tailscale/tailscale/issues/16165

MattSayar26 days ago

Just be sure to run it with --accept-dns=false, otherwise you won't have any outbound Internet on your server if you ever get logged out. That was annoying to find out (but easy to debug with Claude!)
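That is:

    sudo tailscale up --accept-dns=false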

josecodea21 days ago

Great! I have looked into this and I have a few questions though...

Basically, I feel that tailscale does not make it very easy to set up services this way, and the only method I have figured out has a bit too many steps for my liking, basically:

- to expose some port to the tailnet, there needs to be a `tailscale serve` command to expose its ports

- in order for this command to run on startup and such, it needs to be made into a script that is run as a SystemD service

- if you want to do this with N services, then you need to repeat these steps N times, one unit per service (see the sketch below)
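Roughly one of these per service (a sketch; the service name and port are hypothetical, and it assumes a CLI recent enough to have `tailscale serve --bg`):

    # /etc/systemd/system/tailscale-serve-grafana.service
    [Unit]
    Description=Expose Grafana on the tailnet
    After=tailscaled.service
    Requires=tailscaled.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/tailscale serve --bg 3000

    [Install]
    WantedBy=multi-user.target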

Is this how you do it? is there a better way?

PaulKeeble26 days ago

It's especially important in the CGNAT world that has been created, and given the enormous slog that the IPv6 rollout has ultimately become.

SchemaLoad26 days ago

Yeah same story for me. I did not trust my sensitive data on random self hosting apps with no real security team. But now I can put the entire server on the local network only and split tunnel VPN from my devices and it just works.

LLMs are also a huge upgrade here since they are actually quite competent at helping you set up servers.

znpy26 days ago

> The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.

In my experience this is much less of an issue depending on your configuration and what you actually expose to the public internet.

OS-side, as long as you pick a good server OS (for me that's rocky linux) you can safely update once every six months.

Applications-wise, i try and expose as little as possible to the public internet and everything exposed is running in an unprivileged podman container. Random test stuff is only exposed within the vpn.

Also, tailscale is not even a hard requirement: I run openvpn and that works as well, on my iphone too.

The truly differentiating factor is methodological, not technological.

mtoner2326 days ago

People are way too worried about security imo. Statistically, no one is targeting you to be hacked. By the time you are important and valuable enough for your home equipment to be a target you would have hired someone else to manage this for you

fetzu26 days ago

I think this is a very dangerous perspective. A lot of attacks on infra are automated; just try exposing a Windows XP machine to the internet for a day and see how much malware you end up with. If you leave your security unchecked, you will end up attacked; not by someone targeting you specifically, but having all your data encrypted for ransom might still create a problem for you (even if the attacker doesn't care about YOUR data specifically).

subscribed26 days ago

Oh, sure, no one is targeting me specifically.

It's only swarms of bots and scripts going through the entire internet, including me.

iptables and fail2ban should be installed pretty early, and then - just watch the logs.

johnfn26 days ago

Once, when I was young and inexperienced, I left a server exposed to the Internet by accident (I accidentally exposed a user with username postgres, password postgres). In hours the machine had been hacked to run a botnet. Was I stupid? Yes. But I absolutely wasn't a high-profile enough person to "be a target" - clearly someone was just scanning IP addresses.

cirelli9425 days ago

Crying inside after a crypto miner took over my VM this past week.

ErikBjare22 days ago

And mine last year

miki12321126 days ago

Now I wish there was some kind of global, single-network version of Tailscale...

TS is cool if you have a well-defined security boundary. This is you / your company / your family, they should have access. That is the rest of the world, they should not.

My use case is different. I do occasionally want to share access to otherwise personal machines around. Tailscale machine sharing sort of does what I want, but it's really inconvenient to use. I wish there was something like a Google Docs flow, where any Tailscale user could attempt to dial into my machine, but they were only allowed to do so after my approval.

josecodea21 days ago

Tailscale Funnel, no?

For the permissions, just add basic auth in the reverse proxy and choose whom to share the passwd with.

Now if you want OAuth or something like that... well tough luck, you need to set up OIDC or whatever and that's going to be taking you some time, but it still works how you want.

fartfeatures26 days ago

Take a look at Zrok it might be what you want: https://zrok.io

PLG8826 days ago

You have more or less described OpenZiti. Just mint a new identity/JWT for the user, create a service, and voilà, only that user has access to your machine. Fully open source and self-hostable.

comrade123426 days ago

I just have a vpn server on my fiber modem/router (edgerouter-4) and use vpn clients on my devices. I actually have two vpn networks - one that can see the rest of my home network (and server) and the other that is completely isolated and can't see anything else and only does routing. No need to use a third-party and I have more flexibility

PeterStuer26 days ago

These are two very separate issues. Tailscale or other reverse proxies will give you access from the WAN.

Claude Code or other assistants will give you conversational management.

I already do the former (using Pangolin). I'm building towards the latter, but first I need to be 100% sure I can have perfect rollback and containment across the full stack CC could influence.

lee_ars26 days ago

I've started experimenting with Claude Code, and I've decided that it never touches anything that isn't under version control.

The way I've put this into practice is that instead of letting Claude loose on production files and services, I keep a local repo containing copies of all my service config files, with a CLAUDE.md file explaining what each is for, the actual host each file/service lives on, and other important details. If I want to experiment with something ("Let's finally get around to planning out and setting up kea-dhcp6!"), Claude makes its suggestions and changes in my local repo, and then I manually copy the config files to the right places, restart services, and watch to see if anything explodes.
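The repo layout is nothing fancy; something like this (all names here are hypothetical):

    configs/
      CLAUDE.md            # what each file is for, which host it lives on, how to restart it
      router/dnsmasq.conf
      nas/smb.conf
      web/nginx.conf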

Not sure I'd ever be at the point of trusting agentic AI to directly modify in-place config files on prod systems (even for homelab values of "prod").

BatteryMountain26 days ago

Tailscale is a good first step, but it's best to configure wireguard directly on your router. You can try headscale, but it seems to be more of a hobby project - so native wireguard is the only viable path. Most router OSes support wireguard these days too. You can ask claude to sanity check your configuration.

shadowgovt26 days ago

Besides the company that operates it, what is the big difference between Tailscale and Cloudflare tunnels? I've seen Tailscale mentioned frequently but I'm not quite sure what it gets for me. If it's more like a VPN, is it possible to use on an arbitrary device like a library kiosk?

ssl-326 days ago

I don't use Cloudflare tunnels for anything.

But Tailscale is just a VPN (and by VPN, I mean something more like "connect to the office network" than "NordVPN"). It provides a private network on top of the public network, so that member devices of that VPN can interact together privately.

Which is pretty great: It's a simple and free/cheap way for me to use my pocket supercomputer to access my stuff at home from anywhere, with reasonable security.

But because it happens at the network level, you (generally) need to own the machines that it is configured on. That tends to exclude using it in meaningful ways with things like library kiosks.

vachina26 days ago

You can self host a tailscale network entirely on your own, without making a single call to Tailscale Inc.

Your cloudflare tunnel availability depends on Cloudflare’s mood of the day.

throwup23826 days ago

There’s also Cloudflare tunnels for stuff that you want to be available to the internet but don’t want to open ports for and deal with that. You can add an auth policy that only works with your email and GitHub/whatever SSO.

JamesSwift26 days ago

Just use subpath routing and fail2ban, and I'm very comfortable with exposing my home setup to the world.

The only thing served on / is a hello-world nginx page. For everything else you need to know the randomly generated subpath route.
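In nginx terms, a sketch (the subpath stands in for a randomly generated one; the backend port and cert paths are placeholders):

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        # bare hello-world on /
        location = / {
            return 200 "hello world\n";
        }

        # the real service, reachable only if you know the random subpath
        location /x9f3kq7v2/ {
            proxy_pass http://127.0.0.1:8096/;
        }
    }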

Melatonic26 days ago

Why not cloudflare tunnels ?

driton26 days ago

Even behind a tunnel, if you happen to be running an older version of a service (like Immich) with a known exploit, you are still vulnerable to attacks. Tailscale sidesteps this by keeping the service completely "invisible" to the outside world, so the two don't quite compare in my view.

mobilio26 days ago

CF tunnels are game changers for me!

dangoodmanUT27 days ago

definitely, but to be fair, beyond that it's just linux. Most people would need claude code to get whatever they want to use linux for running reliably (systemd service, etc.)

dangoodmanUT27 days ago

i'm still waiting for ECC minipcs, then i'll go all in on local DBs too

aaronax26 days ago

Supermicro has some low power options such as https://www.supermicro.com/en/products/system/Mini-ITX/SYS-E...

thrownawaysz26 days ago

I went down the self-host route some years ago, but once critical problems hit I realized that beyond a simple NAS it can be a very demanding hobby.

I was in another country when there was a power outage at home. My internet went down, and the server restarted but couldn't reconnect anymore because the optical network router also had some problems after the power outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, it just didn't seem worth the hassle anymore. Add a UPS, okay. But why not also add a dual WAN failover router for extra security if the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)

Also (and that's a me problem maybe) I was using Tailscale but I'm more "paranoid" about it nowadays. Single point of failure service, US-only SSO login (MS, Github, Apple, Google), what if my Apple account gets locked if I redeem a gift card and I can't use Tailscale anymore? I still believe in self hosting but probably I want something even more "self" to the extremes.

Aurornis26 days ago

I thought I was smart because I invested in UPS backup from the start.

Then 5 years later there was a power outage and the UPS lasted for about 10 seconds before the batteries failed. That's how I learned about UPS battery maintenance schedules and the importance of testing.

I have a calendar alert to test the UPS. I groan whenever it comes up because I know there's a chance I'm going to discover the batteries won't hold up under load any more, which means I not only have to deal with the server losing power but I have to do the next round of guessing which replacement batteries are coming from a good brand this time. Using the same vendor doesn't even guarantee you're going to get the same quality when you only buy every several years.

Backup generators have their own maintenance schedule.

I think the future situation should be better with lithium chemistry UPS, but every time I look the available options are either exorbitantly expensive or they're cobbled together from parts in a way that kind of works but has a lot of limitations and up-front work.

kalaksi26 days ago

My APC UPS self-tested and monitored battery status automatically, then started to beep endlessly when it noticed the battery needed replacing (it could be muted, though). Eventually, I stopped using a UPS since I rarely needed it and it was just another thing to keep and maintain.

richwater26 days ago

Check out some non-lead-acid battery solutions like: https://www.ecoflow.com/us/blog/use-portable-power-station-a...

Another maker is Goldenmate (lest I be accused of being an ad)

zrail26 days ago

My spouse and I work at home and after the first couple multi-day power outages we invested in good UPSs and a whole house standby generator. Now when the power goes out it's down for at most 30 seconds.

This also makes self-hosting more viable, since our availability is constrained by internet provider rather than power.

rootusrootus26 days ago

Yeah we did a similar thing. Same situation, spouse and I both work from home, and we got hit by a multiple day power outage due to a rare severe ice storm. So now I have an EV and a transfer switch so I can go for a week without power, and I have a Starlink upstream connection in standby mode that can be activated in minutes.

Of course that means we’ll not have another ice storm in my lifetime. My neighbors should thank me.

VTimofeenko26 days ago

We had a 5 day outage last year, got a generator at the tail end of the windy season and made exact same jokes.

A year later another atmospheric river hit and we had a 4 hour outage. No more jokes.

Make sure to run that generator once every few months with some load to keep it happy.

mlrtime25 days ago

I just asked my wife the other day: "Does it feel like we're having less power outages now that we got the generator?" :)

kiddico26 days ago

Thanks for taking one for the team.

mlrtime25 days ago

Same! Texas?

gorgoiler26 days ago

2025 was the year of LiFePO4 power packs for me and my family. Absolute game changers: 1000Wh of power with a multi-socket inverter and UPS-like failover. You lose capacity compared to a gas genny, but the simplicity and lack of fumes add back a lot of value. If it's sunny you can also make your own fuel.

https://www.ankersolix.com/ca/products/f2600-400w-portable-s...

mlrtime25 days ago

Yeah, how does that work if you statistically have an outage > 24 hours a few times a year? How long does that last?

Also generators are still cheap compared to batteries?

gorgoiler25 days ago

You’re right, it’s not much, but it is convenient and clean. A few lamps, USB charging, and a router/modem will use a few tens of watts and the big power pack will keep that going for eight hours.

For longer outages there is an outhouse with triple-redundant generators:

- Honda c. 2005

- Honda c. 1985

- Briggs & Stratton c. 1940

The “redundancy” here is that the first is to provide power in the event of a long power outage, and the other two are redundant museum pieces (which turn over!)

uhfraid25 days ago

> My spouse and I work at home and after the first couple multi-day power outages we invested in good UPSs and a whole house standby generator.

What setup did you go with for whole house backup power?

zrail25 days ago

Generac 26kW Guardian, natural gas fueled, connected to a pair of automatic transfer switches. We have two electric meters due to having a ground source heat pump on its own meter.

uhfraid25 days ago

During winter outages, do you stick to the heat pump or switch to a backup heat (e.g. furnace)?

I regrettably removed our old furnace/tank when installing the air source heat pump we have now (northeast), but that’s been my biggest concern power wise

advael26 days ago

Yea, I think my own preference for self-hosting boils down to a distrust of continuous dependency on services controlled by companies, and a desire to minimize such dependencies. While there are FOSS and self-hostable alternatives to tailscale or indeed claude code, using those services themselves simply replaces old dependencies on externally-controlled cloud-based services with new ones

timwis26 days ago

You can self-host Pocket ID (or another OIDC auth service) on a tiny $1/mo box and use that as your identity provider for Tailscale. Here's a video explaining how: https://www.youtube.com/watch?v=sPUkAm7yDlU

ekianjo26 days ago

> I was in another country when there was a power outage at home.

If you are going to be away from home a lot, then yes, it's a bottomless pit, because you have to build a system that does not rely on the possibility of you being there at any given time.

CGamesPlay26 days ago

I really enjoy self-hosting on rented compute. It's theoretically easy to migrate to an on-prem setup, but I don't have to deal with the physical responsibilities while it's in the cloud.

Gigachad26 days ago

Depends what you are trying to host. For many people it’s either to keep their private data local, or stuff that has to be on the home network (pi hole / home assistant)

If you just want to put a service on the internet, a VPS is the way to go.

cyberax26 days ago

A long time ago, it was popular for ISPs to offer a small amount of space for personal websites. We might see a resurgence of this, but with cheap VPSes. Eventually.

SchemaLoad26 days ago

Free static site hosting and cheap VPSs already exist. Self hosting is less about putting sites on the internet now and more about replicating cloud services locally.

Imustaskforhelp26 days ago

VPSes are so dirt cheap partly because most people never use 100% of the resources they're allocated; providers oversubscribe thanks to economies of scale, so VPSes are effectively subsidized.

A cheap VPS with 1 GB of RAM and everything can cost around $10-11 per year, and something like Hetzner is cheap as well at roughly $30 a year (about $3 per month) while having some great reliability numbers.

If anything, people self-host because they own the hardware, so upgrading becomes easier (though there are niche VPS offerings worth a look, like storage VPSes, high-performance VPSes, high-memory VPSes, etc., which can sometimes be dirt cheap for your specific use case).

The other reason, I feel, is the ownership aspect. I own this server; I can upgrade it without breaking the bank, and my investment stacks up. And with complete ownership, you don't have to live under someone else's terms and conditions: want to provide VPSes to your friends, family, or people on the internet? Set up a Proxmox or Incus server and do it.

Most VPS providers either outright ban reselling or, if they allow it, might ban your whole account for something someone else did; they have to deal with abuse at scale in automated ways, and some cloud providers are more lenient than others about bans. (OVH is relaxed in this area, whereas Hetzner, for better or for worse, is strict in its enforcement.)

SchemaLoad26 days ago

Self hosting for me is important because I want to secure the data. I've got my files and photos on there, I want to have the drive encrypted with my key. Not just sitting on a drive I don't have any control over. Also because it plugs in to my smart home devices which requires being on the local network.

For something like a website I want on the public internet with perfect reliability, a VPS is a much better option.

newsclues26 days ago

I have a desktop I use, but if I had to start again, I'd build a low-power Raspberry Pi or N100-type system that can be powered by a mobile battery backup with solar (flow type with sub-10ms switching and good battery chemistry for long life) and that can do the basic homelab tasks. Plan for power outages from the get-go rather than assuming unlimited, cheap power

digiown26 days ago

Tailscale has passkey-only account support but requires you to sign up in a roundabout way (first use an SSO, then invite another user, throw away the original). The tailnet lock feature also protects you to some extent, arguably more so than solutions involving self-hosting a coordination server on a public cloud.

_the_inflator26 days ago

I came to the same realization.

Self-hosting sounds so simple, but if you consider all the critical factors involved, it becomes a full-time job. You own your server. In every regard.

And security is only one crucial aspect. How spam filters react to your IP is another story.

In the end I cherish the dream but rely on third-party server providers.

4k93n226 days ago

Syncthing might be worth looking into. I've been using it more and more over the last few years for anything I use daily: KeePass, plain-text notes, calendars/contacts, RSS feeds. Everything else I'm "self-hosting" is stuff I might only use a few times a week, so it's no big deal if I lose access.

It's so much simpler when you have the files stored locally; syncing between devices is just something that can happen whenever. Anything running on a server needs user permissions, wifi, a router, etc. It's a lot of complexity for very little gain.

Although keep in mind I'm the only one using all of this stuff. If I needed to share things with other people, Syncthing gets a bit trickier and a central server starts to make more sense

JamesSwift26 days ago

Well, it's not a bottomless pit really. Yes, you need a UPS. That's basically it though.

Aurornis26 days ago

UPS batteries don't last forever.

So now you need to test them regularly. And order new ones when they're not holding a charge any more. Then power down the server, unplug it, pull the UPS out, swap batteries, etc.

Then even when I think I've got the UPS automatic shutdown scripts and drivers finally working just right under linux, a routine version upgrade breaks it all for some reason and I'm spending another 30 minutes reading through obscure docs and running tests until it works again.

JamesSwift26 days ago

Not sure what to say then. I run NixOS on ~15 different VMs/minipcs, a total of I guess 6 physical machines. I've never had to deal with a UPS battery dying, and I haven't had to do anything to address NUT breaking. I broadcast NUT via a Synology NAS though, so the only direct client of the UPS status is the NAS. I've never once had an issue in the ~5 years I've had it set up like this.

joshvm26 days ago

My home server doesn't need to be high availability, and the BIOS is set to restore whatever power state it had before an outage. I don't have a UPS. However, we were recently hit with a telco outage while visiting family out of town. As far as I can tell there wasn't a power outage, but it took a hard reboot of the modem to get connectivity back. Frustrating, because it meant no checking home automation/security and of course no access to the servers. I'm not at a point where my homelab is important enough that I would invest in a redundant WAN though.

I've also worked in environments where the most pragmatic solution was to issue a reboot periodically and accept the minute or two of (external) downtime. Our problem is probably down to T-Mobile's lousy consumer hardware.

JamesSwift26 days ago

As another commenter said (but got downvoted to oblivion for some reason), it's not really about uptime for the homelab, it's about graceful shutdown/restart. And there are well-defined protocols for it (look up Network UPS Tools, aka NUT).
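The minimal NUT setup is only a few lines. A sketch assuming a USB UPS on a Debian-ish box (the name "homeups" is made up, and upsmon additionally needs a matching user in upsd.users):

    sudo apt install nut
    # declare the UPS in /etc/nut/ups.conf (MODE=standalone goes in /etc/nut/nut.conf)
    sudo tee -a /etc/nut/ups.conf >/dev/null <<'EOF'
    [homeups]
      driver = usbhid-ups
      port = auto
    EOF
    # sanity check: should print OL (online) or OB (on battery)
    upsc homeups@localhost ups.status

From there, upsmon watches that status and triggers the graceful shutdown.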

joshvm24 days ago

> its not really about uptime for the homelab, its about graceful shutdown/restart.

These are different requirements. The issue I described was not a power outage and having a well managed UPS wouldn't have made a difference. Nothing shut down, but we lost 5G in the area and T-Mobile's modem is janky. My point is that it's another edge case that you need to consider when self hosting, because all the remote management and PDUs in the world can't save you if you can't log into the system.

Of course, there's the "all you need is a smart plug and a script/Home Assistant routine that pings every now and again" approach. There are enterprise versions of this, but simple and cheap works for me.
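For example, a cron-able sketch of that ping-and-power-cycle idea (assuming a Tasmota-flashed plug at a made-up address; adjust for whatever plug API you actually have):

    #!/bin/sh
    # from cron every 5 minutes: power-cycle the modem if the WAN is unreachable
    if ! ping -c 3 -W 5 1.1.1.1 >/dev/null 2>&1; then
      curl -s "http://192.168.1.50/cm?cmnd=Power%20Off"
      sleep 30
      curl -s "http://192.168.1.50/cm?cmnd=Power%20On"
    fi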

bisby26 days ago

Power outages here tend to last an hour or more. A UPS doesn't last forever, and depending on how much home compute you have, might not last long enough for anything more than a brief outage. A UPS doesn't magically solve things. Maybe you need a home generator to handle extended outages...

How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.

I host a lot of stuff, but nextcloud to me is photo sync, not business. I can wait til I'm home to turn the server back on. It's not a bottomless pit for me, but I don't really care if it has downtime.

jmb9926 days ago

Fairly frequently, 6kVA UPSs come up for sale locally to me, for dirt cheap (<$400). Yes, they're used, and yes, they'll need ~$500 worth of batteries immediately, but they will run a "normal" homelab for multiple hours. Mine will keep my 2.5kW rack running for at least 15 minutes - if your load is more like 250W (much more "normal" imo) that'll translate to around 2 hours of runtime.
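(That runtime scaling is just energy arithmetic, roughly: 2,500 W for 0.25 h is about 625 Wh of usable battery, and 625 Wh / 250 W = 2.5 h; call it around 2 hours once lower inverter efficiency at light load eats the difference.)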

Is it perfect? No, but it's more than enough to cover most brief outages, and also more than enough to let you shut down everything you're running gracefully after you've used it for a couple of hours.

Major caveat, you'll need a 240V supply, and these guys are 6U, so not exactly tiny. If you're willing to spend a bit more money though, a smaller UPS with external battery packs is the easy plug-and-play option.

> How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.

At the end of the day, it's very hard to argue you need perfect uptime in an extended outage (and I say this as someone with a 10kW generator and said 6kVA UPS). I need power to run my sump pumps, but that's about it - if power's been out for 12-18 hours, you better believe I'm shutting down the rack, because it's costing me a crap ton of money to keep running on fossil fuels. And in the two instances of extended power outages I've dealt with, I haven't missed it - believe it or not, there's usually more important things to worry about than your Nextcloud uptime when your power's been out for 48 hours. Like "huh, that ice-covered tree limb is really starting to get close to my roof."


neoromantique26 days ago

For this reason I have a hybrid homelab, with most stuff hosted at home, but the critical things I need to keep running are on a VM in the cloud. Best of both worlds.

altmanaltman26 days ago

I mean you're right in terms of it being a demanding hobby. The question is, is it worth the switch from other services.

I have 7 computers on my self-hosted network, and not all of them are on-prem. With a bit of careful planning, you can essentially create a system that will stay up regardless of local fluctuations etc. But it is a demanding hobby, and if you don't enjoy the IT stuff, you'll probably have a pretty bad time doing it. For most normal consumers, self-hosting is not really an option and isn't worth the cost of switching over. I justify it because it helps me understand how things work and tangentially improves my professional skills as well.

gessha26 days ago

Tailscale recently added passkey login. Would that alleviate the SSO concern?

Tailscale also has a self-hosted version I believe.

baq26 days ago

I went with Home Assistant and Zigbee smart plugs to restart the router and the optical network terminal.

Imustaskforhelp26 days ago

Hey, if Tailscale is something you're worried about, there are open-source alternatives to it as well. But if your purpose is just to forward a single server port, wouldn't plain SSH be enough for you?

You can even self-host Tailscale via Headscale, though I don't know how good the experience is, and there is genuinely open-source software like Netbird, ZeroTier, etc. as well.

You could also just go the normal WireGuard route if interested. It really depends on your use case, but in your case, plain SSH seems fine.

You could even use this with Termux on Android plus SSH access via Dropbear, I think, if you want. Tailscale is mainly for convenience though, and for not having to deal with NATs and everything.

But I suspect your home server might be behind a NAT. In that case, what I recommend is either A) run it over Tor or https://gitlab.com/CGamesPlay/qtm (which uses iroh's infrastructure, though you can self-host that part too), or B (recommended): get a cheap unlimited-traffic VPS (I recommend UpCloud, OVH, Hetzner) for around $3-4 per month and install something like remotemoe https://github.com/fasmide/remotemoe or anything similar to it, effectively a proxy.
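If you'd rather skip remotemoe, the plain-SSH version of the same proxy idea is roughly this (hostname and ports invented; the VPS's sshd needs GatewayPorts enabled to expose the port publicly):

    # on the NAT'd home server: keep a reverse tunnel open to the VPS
    autossh -M 0 -N -R 0.0.0.0:8443:localhost:443 user@vps.example.com
    # now traffic to vps.example.com:8443 reaches the home box's port 443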

Sorry if I went a little overkill lol. I've played with these things too much, so I may be over-architecting, but if you genuinely want self-hosting taken to the extreme, Tor .onions or I2P might benefit you. Even just buying a VPS can be a good step up

> I was in another country when there was a power outage at home. My internet went down, the server restart but couldn't reconnect anymore because the optical network router also had some problems after the power outage. I could ask my folks to restart, and turn on off things but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped but the more I was thinking about it after just didn't really worth the hassle anymore. Add a UPS okay. But why not add a dual WAN failover router for extra security if the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)

Laptops have a built-in UPS and are cheap; laptops and refurbished servers are a good entry point IMO. Sure, it's a bottomless pit, but the benefits are well worth it; at some point you have to weigh the trade-offs, and for me, laptops and refurbished or resale servers hit that balance. In fact, I used to run a git server on an Android tablet for a while, but I've been too lazy to figure out whether I want it charging permanently.

CGamesPlay26 days ago

Thanks for the shout-out! If you have any experiential reports using QTM, I'd love to hear them!

Imustaskforhelp26 days ago

Oh yeah, this is a really funny story considering what thread we're on, but I remember asking ChatGPT or Claude or Gemini or whatever xD to make QTM work, and none of them could figure it out.

But I think what ended up working in the end was that my frustration took over, I just copy-pasted the commands from the readme, and if I remember correctly, they just worked.

It's really ironic considering what thread we're on, but in the end: good readmes make self-hosting on a home server easier and fun xD

(I don't remember the ChatGPT conversations exactly, perhaps they helped a bit, but I am 99% sure it was your readme that ended up helping; ChatGPT etc. in fact took an hour or more and genuinely frustrated me, from what I vaguely remember)

I hope QTM gets more traction. It's built on solid primitives.

One thing I'd genuinely like you to take a look at, if possible, is adding functionality to replace the careful dance currently needed to make it work (we have to exchange two large pieces of data between the two computers; I had to use a hacky solution like piping-server or wormhole itself for it).

So what I'm asking is whether the initial node pairing (the ticket? sorry, I forgot the name of the primitive) between A and B could use wormhole itself, so that instead of exchanging large chunks of data, the two sides could just exchange 6 words or similar.

Wormhole: https://github.com/magic-wormhole/magic-wormhole

I even remember building my own CLI for something like this using ChatGPT xD, but in the end I gave up because I wasn't familiar with the codebase or with how to make the two work together. I hope you can add it; I sincerely do.

Another minor suggestion: please add an asciinema demo. I'll contribute an asciinema recording between two computers if you want, but a working demo GIF going from zero to running really would have saved me a few hours.

QTM has lots of potential. iroh is so sane: it can run directly on top of IPv4 and talk directly when possible, it can even break through NATs, and you can self-host the middle part too. I had thought about building such a project when I first discovered QTM, and you can just imagine my joy when I found it through one of your comments a long time ago, for what it's worth.

Wishing your project the best of luck! The idea is fascinating. I would really appreciate a visual demo though, and I hope we can discuss more!

Edit: I remember the QTM docs feeling genuinely complex to me when all I wanted was one computer's port mapped to another computer's port. What helped in the end was the fourth comment, if I remember correctly. I might have used LLM assistance, and it might or might not have helped; I genuinely don't remember, but it definitely took me an hour or two to figure things out. That's okay, the software is definitely a net positive and this might have been a skill issue on my side, but I can't stress enough how much asciinema docs could genuinely help an average person figure out the product.

(Then move gradually towards the complex setups, with asciinema demos for each of them, if you wish)

Once again, good luck! I can't recommend QTM enough and I still strongly urge everyone to try it once, https://gitlab.com/CGamesPlay/qtm, since it's highly relevant to the discussion

tehlike26 days ago

Starlink backup sounds fun now!

thrownawaysz26 days ago

Way too expensive for that IMO (but then again, might as well just go all in). A 5G connection is probably more than enough

mlrtime25 days ago

$5/mo is too expensive for starlink backup?

Imustaskforhelp26 days ago

Honestly, I think there must be adapters that can use an unlimited 5G SIM's data plan as a fallback network, or perhaps even as the primary.

They would be cheaper than Starlink FWIW, and such connections are usually robust.

That being said, you can use Tailscale or Cloudflare Tunnels to expose the server even if it's behind NAT. You mention in your original comment that you might be against that for paranoia reasons, and that's completely fine, but there are ways to do it if you want, which I've discussed in depth in my other comment here.

valcron100026 days ago

> When something breaks, I SSH in, ask the agent what is wrong, and fix it.

> I am spending time using software, learning

What are you actually learning?

PSA: OP is a CEO of an AI company

enos_feedler26 days ago

You are learning what it takes to keep a machine up and running. You still witness the breakage. You can still watch the fix. You can review what happened. What you are implying with your question is that, compared to doing things without AI, you are learning less (or perhaps you believe nothing). You definitely are learning less about mucking around in Linux. But if the alternative was never running a Linux machine at all because you didn't want to deal with running it, you are learning infinitely more.

croes26 days ago

How can you review if you don't know the subject in the first place?

You can watch your doctor, your plumber, or your car mechanic and still wouldn't know if they did something wrong if you don't know the subject yourself.

doctoboggan26 days ago

You can learn a lot from watching your doctor, plumber or mechanic work, and you could learn even more if you could ask them questions for hours without making them mad.

defrost26 days ago

You learn less from watching a faux-doctor, faux-plumber, faux-mechanic and learn even less by engaging in their hallucinations without a level horizon for reference.

Bob the Builder doesn't convey much about drainage needs for foundations and few children think to ask. Who knows how AI-Bob might respond.

rkomorn26 days ago

> You can learn a lot from watching your doctor [...] work

Very true but I'll still opt for that general anesthesia...

xboxnolifes26 days ago

The primary way humans learn anything at all is by watching and mimicking. Sure, there will be mistakes, but that doesn't preclude learning.

ruszki26 days ago

Your hypothetical situation would cause all progress to halt. Nobody would be able to fix genuine problems.

jordanf26 days ago

who cares if I'm the CEO of an AI company? I didn't mention anything related to my company once in the post.

Wrote about learning and fun here: https://fulghum.io/fun2

q3k26 days ago

It's (at least) common courtesy to declare potential conflicts of interest.

jordanf26 days ago

I would have if there were potential conflicts.

fhennig26 days ago

I think it's great that people are getting into self-hosting, but I don't think it's _the_ solution to get us off of big tech.

Having others run a service for you is a good thing! I'd love to pay a subscription for a service, but one run as a cooperative, where I'm not actually just paying a subscription fee; instead I'm a member and I get to decide what gets done as well.

This model works so well for housing, where the renters are also the owners of the building. Incentives are aligned perfectly, rents are kept low, the building is kept intact, no unnecessary expensive stuff added. And most importantly, no worries of the building ever getting sold and things going south. That's what I would like for my cloud storage, e-mail etc.

sroerick26 days ago

Hey, I was thinking about this same idea lately. What exactly would you want hosted by somebody?

I was thinking about what if your "cloud" was more like a tilde.club, with self hosted web services plus a Linux login. What services would you want?

Email and cloud make sense. I think a VPN and Ad Blocker would too. Maybe Immich and music hosting? Calendar? I don't know what people use for self hosting

fhennig26 days ago

I don't actually need much; I think basically just encrypted cloud storage would be great. If there were something like Proton Mail, but run as a co-op, I'd also use that (it has great calendar support too).

I'd really focus on it being usable for non-techies, I don't think I'd want a linux login for anything. IMO, the focus should be on the basic infrastructure of digital life for the everyday person.

tilde.club sounds interesting though! Hadn't heard of it before.

tech_ken26 days ago

My (admittedly a bit tinfoil) take on the recent self-hosting boom is that it's highly compatible with individualist suburban capitalism; and that while there are elements of it that offer an alternative path to techno-feudalism, by itself it doesn't really challenge the underlying ideology. It's become highly consumerist, and seems more like a way of expressing taste/aesthetics than something that's genuinely revolutionary. Cooperative services (as you describe) seem like they offer a way more legitimate challenge, but I feel like that's a big reason why they don't see as much fete-ing in the mainstream tech media and industry channels.

I say all this as someone who's been self-hosting services in one form or another for almost a decade at this point. The market incorporation/consumerfication of the hobby has been so noticeable in the last five years. Even this AI thing seems like another step in that direction; now even non-experts can drop $350+ on consumer hardware and maybe $100 on some network gear so that they can control their $50/bulb Hue lights and manage their expansive personal media collection.

fhennig26 days ago

Interesting! I'm not sure how severe the consumerisation really is, but yeah I can totally see the whole home-automation thing playing into it too.

I don't think mainstream tech media is deliberately omitting co-ops from their reporting because they challenge the status quo. I think it's rather that there actually aren't many initiatives in the space.

And I think that is due to a lot of tech people thinking that if only the technology becomes good enough, then the problem will be solved, then, finally, everyone can have their own cloud at home.

I think that's wrong though. I think the solution in this case is that we organize the service differently, with power structured in a different way. We don't need more software to solve the problem. We know how to build cloud services, technically. We know how to do it well. It's just that if the service is run for profit, counter to the interests of the users, it will eventually become a problem for the users. That's the problem to fix, and it's not one to fix with technology, but simply by organizing things differently.

It works for housing, in some areas it also works for utilities like internet, there are also co-ops for food. Why shouldn't it also work for modern-day utilities like cloud storage and email?

As a techie, don't be content with just running your own self-hosted service. Run it for your family, run it for your friends, run it for your neighborhood! Band together!

tech_ken26 days ago

> It's just that if the service is run for-profit, counter to the interests of the users, it will eventually become a problem for the users. That's the problem to fix, and it's not one to fix with technology, but just with organizing it differently.

100% agree with you here, and yeah I'm definitely leaning a bit too conspiratorial about it. It's probably not actually intentional, and instead just a product of the larger dynamics.

A while ago I read some interesting economic analysis about why more co-ops hadn't popped up specifically in the gig worker space, since it seems so natural to cut out the platform rent that e.g. Uber extracts as profit. I'm failing to recall the specific conclusions, but IIRC the authors seemed to feel there were structural obstacles preventing co-ops from growing in that space. Something something capex and unit costs. It's certainly an area I'd be interested to see further analysis in.

Also, you sound like you might get a kick out of mayfirst.coop (if you're not familiar with them already). It's not exactly what you're describing, but the spirit is there. I use them for my web-hosting needs and have been extremely satisfied.

ibizaman26 days ago

What about self-hosting as a service? You get a server in your home, which you own, with your open-source software and data on it. And you pay a subscription to have a remote sysadmin take care of maintenance for you, who can also train you on the software. What happens if you stop paying? You keep everything. But like good insurance, you'd keep the subscription because of top-notch customer service.

fhennig26 days ago

I want something that can work for non-techies too, that I can recommend to my friends as well.

ibizaman26 days ago

I fully understand. That’s my goal too and what I want to provide here. No technical knowledge is required.

Humorist229027 days ago

Fun. I don't agree that Claude Code is the real unlock, but mostly because I'm comfortable with doing this myself. That said, the spirit of the article is spot on. The accessibility to run _good_ web services has never been better. If you have a modest budget and an interest, that's enough -- the skill gap is closing. That's good news I think.

But Tailscale is the real unlock in my opinion. Having a slot machine cosplaying as sysadmin is cool, but being able to access services securely from anywhere makes them legitimately usable for daily life. It means your services can be used by friends/family if they can get past an app install and login.

I also take minor issue with running Vaultwarden in this setup. Password managers are maximally sensitive and hosting that data is not as banal as hosting Plex. Personally, I would want Vaultwarden on something properly isolated and locked down.

heavyset_go27 days ago

I believe Vaultwarden keeps data encrypted at rest with your master key, so some of the problems inherent to hosting such data can be mitigated.

Humorist229026 days ago

I can believe this, and it's a good point. I believe Bitwarden does the same. I'm not against Vaultwarden in particular but against colocation of highly sensitive (especially orthogonally sensitive) data in general. It's part of a self-hoster's journey I think: backups, isolation, security, redundancy, energy optimization, etc. are all topics which can easily occupy your free time. When your partner asks whether your photos are more secure in Immich than Google, it can lead to an interesting discussion of nuances.

That said, I'm not sure if Bitwarden is the answer either. There is certainly some value in obscurity, but I think they have a better infosec budget than I do.

InfinityByTen26 days ago

I was just thinking I should write something about this, because the word needs spreading.

I can't say how happy I am configuring my own Immich server on a decade-old machine. I just feel empowered. Despite my 9 years of software development, I had never gotten into the nitty-gritty of networking and VPNs, and I always hit something non-standard while installing an open-source package; without all of this custom guidance, I would give up after a couple of hours of pulling my hair out.

I really want to go deeper, and it finally feels like this could be a hobby.

PS: The rush was so great that I was excitedly telling my wife how I could move our email away from Google, considering all the automatic opt-ins for AI processing and whatnot. Foolhardy me even thought of sabbatical breaks to work on the long-pending to-dos in my head.

lee_ars26 days ago

> PS: The rush was so great that I was excitedly telling my wife how I could move our email away from Google, considering all the automatic opt-ins for AI processing and whatnot. Foolhardy me even thought of sabbatical breaks to work on the long-pending to-dos in my head.

I've been email self-hosting for a decade, and unfortunately, self-hosting your email will not help with this point nearly as much as it seems on first glance.

The reason is that as soon as you exchange emails with anyone using one of the major email services like gmail or o365, you're once again participating in the data collection/AI training machine. They'll get you coming or they'll get you going, but you will be got.

InfinityByTen26 days ago

Words of wisdom. Hear hear!

Maledictus26 days ago

Email is endgame. I suggest you get more experience self-hosting in other areas first.

InfinityByTen26 days ago

I concur. I did mention there was a rush and foolhardiness. That's my mid 30s excitement. Let me revel a bit :P

I do want to be able to take control; Google not giving me a folder view to manage my photos was the last straw that pushed me deep into the self-hosted world. I just want to de-Google as much as is reasonable.

tbyehl26 days ago

My favorite genre of post in r/homelab and r/selfhosted this past year has been "I used AI to set all this stuff up and something broke so I asked AI to fix it and now all my data is gone."

There are so many NAS + Curated App Catalog distros out there that make self-hosting trivial without needing to Vibe SysAdmin.

jordanf26 days ago

I keep hearing this, and asking for examples, and there aren't really any.

iLoveOncall26 days ago

I've broken my internet many times by asking ChatGPT for help setting up Pi-hole as a DHCP server. I'll post conversation excerpts later if I remember.

It was just giving commands to run that were plain wrong and extremely destructive, and unless you already knew what they were doing you were screwed.

Here: https://chatgpt.com/share/696539b6-65f0-8010-9324-5e35da42ee...

I have 4-5 more conversations like this. It's honestly almost a piece of art: the LLM keeps spouting out shit like "Ah got it, your issue is clear now" while digging deeper in the wrong direction.

mlrtime25 days ago

I'm a sysadmin / infra engineer by trade. DNS is something I stopped hosting myself because it's always DNS, and when it goes down everything else does too.

Email/DNS I outsource; everything else I homelab.

iLoveOncall25 days ago

Well, my Pi-hole uses Cloudflare's DNS servers upstream, so I don't actually self-host DNS, but having the Pi-hole as the DHCP server was the only way for me to get all my devices going through it.

In the end I literally had to give up; it was just too problematic.

legoxx26 days ago

I am building a homelab with the help of various AI services. I started with ChatGPT, then moved to Claude, and I am now working with Cursor and Gemini.

In my experience, this approach works extremely well—I would not have been able to accomplish this much on my own. However, there is an important caveat: you must understand what you are doing. AI systems sometimes propose solutions that do not work, and in some cases they can be genuinely dangerous to your data integrity or your privacy.

AI is therefore a powerful accelerator, not a replacement for expertise. You still need to critically evaluate its suggestions and veto roughly 10% of them.

dwd26 days ago

Been self-hosting for the last 20 years, and I would say LLMs are good for generating suggestions when debugging an issue I haven't seen before, or for one I have seen before but want a quicker fix for. I've used them to generate bash scripts and firewall regexes.

On self-hosting: be aware that it is a warzone out there. Your IP address will be probed constantly for vulnerabilities, and even the probes themselves need to be dealt with, as most of them don't throttle and can impact your server. That's probably my biggest issue, along with email deliverability.
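One cheap mitigation for the noisy probes is banning repeat offenders. A minimal fail2ban sketch, assuming Debian-style paths (tune the numbers to taste):

    sudo apt install fail2ban
    sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
    [sshd]
    enabled  = true
    maxretry = 5
    bantime  = 1h
    EOF
    sudo systemctl restart fail2ban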

MrDarcy26 days ago

The best solution I've found for probes is to put all my eggs in one basket listening on 443.

Haproxy with SNI routing was simple and worked well for many years for me.

Istio installed on a single node Talos VM currently works very well for me.

Both have sophisticated circuit breaking and ddos protection.

For users, I put admin interfaces behind WireGuard and block TCP by source IP at the 443 listener.

I expose one or two things to the public behind an oauth2-proxy for authnz.

Edit: This has been set and forget since the start of the pandemic on a fiber IPv4 address.
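For anyone curious, the SNI-routing part is only a few lines. A rough haproxy.cfg fragment (hostnames and IPs made up; merge it with your own global/defaults sections):

    sudo tee -a /etc/haproxy/haproxy.cfg >/dev/null <<'EOF'
    frontend https_in
        bind *:443
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
        use_backend nextcloud if { req.ssl_sni -i cloud.example.com }

    backend nextcloud
        mode tcp
        server nc1 10.0.0.11:443 check
    EOF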

aaronax26 days ago

And use a wildcard cert, so that your individual service hostnames don't get probed via the certificate transparency logs.
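For example, with certbot's DNS-01 flow (the Cloudflare plugin is just one option; any DNS plugin works, and the domain here is a placeholder):

    certbot certonly --dns-cloudflare \
      --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
      -d '*.home.example.com'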

FaradayRotation26 days ago

~10 years ago I remember how shocked I was the first time I saw how many people were trying to probe my IP on my home router, from random places all over the globe.

Years later I still had the same router. Somewhere along the line, the right neurons fired and I asked myself, "When was the last time $MANUFACTURER published an update for this? It's been a while..."

In the context of just starting to learn about the fundamentals of security principles and owning your own data (ty hackernews friends!), that was a major catalyst for me. It kicked me into a self-hosting trajectory. LLMs have saved me a lot of extra bumps and bruises and barked shins in this area. They helped me go in the right direction fast enough.

Point is, parent comment is right. Be safe out there. Don't let your server be absorbed into the zombie army.

SchemaLoad26 days ago

These days I just wouldn't expose my home server to the internet at all. LAN only, with a VPN. It does mean you can't share links and such with other people, but your server is now very secure, and most of the stuff you do on it doesn't need public access anyway.

chaz626 days ago

I would really like some kind of agnostic backup protocol, so I can simply configure my backup endpoint using environment variables (e.g. `-e BACKUP_ENDPOINT=https://backup.example.com/backup -e BACKUP_IDENTIFIER=xxxxx`), and the application can then push a backup on a regular schedule. If I need to restore a backup, I log onto the backup app, select a backup file, and generate a one-time code which I enter into the application to retrieve the data. To set up a new application for backups, you would enter a friendly name into the backup application and it would generate a key for use in the application.
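To be clear, no such standard exists today; this is just the shape of what I'd want, sketched with the env vars above and a made-up data path:

    # inside the app's backup job: push a snapshot to the configured endpoint
    tar -cz /var/lib/myapp | curl -sf -X POST \
      -H "Authorization: Bearer $BACKUP_IDENTIFIER" \
      --data-binary @- "$BACKUP_ENDPOINT"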

ibizaman26 days ago

I'm working on introducing this kind of protocol in NixOS. I call it contracts. https://github.com/NixOS/rfcs/pull/189

The idea is that a contract defines which options exist and what they mean. For backups, you'd get the Unix user doing the backup, what folders to back up, and what patterns to exclude, but also what script to run to create a backup and to restore from one.

Then you'd have a contract consumer: the application to be backed up, which declares what folders to back up and with which users.

On the other side you have a contract provider, like Restic or Borgbackup, which understands this contract and, thanks to it, knows how to back up the application.

As the user, your role is just to plug a contract provider into a consumer: to choose which provider backs up which application.

This can be applied to LDAP, SSO, secrets and more!

PaulKeeble26 days ago

At the moment I `docker compose down` everything, run the backup of their files, and then `docker compose up -d` again afterwards. This sort of downtime in the middle of the night isn't an issue for home services, though it's also a bit unnecessary, since most services won't be mid-write at backup time anyway, it being the middle of the night! But if I don't do it, the one time I need those files I can guarantee they'll be corrupted, so at the moment I don't feel like there are many other options.
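The whole thing is a short cron script, roughly (paths made up):

    #!/bin/sh
    # nightly cron job: stop the stack, snapshot its data, bring it back up
    cd /opt/myservice || exit 1
    docker compose down
    tar -czf "/backups/myservice-$(date +%F).tar.gz" ./data
    docker compose up -d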

Waterluvian26 days ago

Maybe apps could offer backup to stdout and then you pipe it. That way each app doesn’t have to reason about how to interact with your target, doesn’t need to be trusted with credentials, and we don’t need a new standard.
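Something like this, where `myapp backup` is hypothetical but restic's stdin flags are real:

    myapp backup --stdout | restic -r sftp:nas:/backups backup \
      --stdin --stdin-filename myapp.dump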

dangus26 days ago

I use Pika Backup which runs on the BorgBackup protocol for backing up my system’s home directory. I’m not really sure if this is exactly what you’re talking about, though. It just sends backups to network shares.

cryostasis26 days ago

I'm actively in the process of setting this up for my devices. What have you done for off-site backups? I know there are Borg specific cloud providers (rsync.net, borgbase, etc.). Or have you done something like rclone to an S3 provider?

dangus26 days ago

No off-site backup for me, these items aren’t important enough, it’s more for “oops I broke my computer” or “set my new computer up faster” convenience.

Anything I really don’t want to lose is in a paid cloud service with a local backup sync over SMB to my TrueNAS box for some of the most important ones.

An exception is GitHub, I’m not paying for GitHub, but git kinda sorta backs itself up well enough for my purposes just by pulling/pushing code. If I get banned from GitHub or something I have all the local repos.

mlrtime25 days ago

Why not virtualize everything and then just backup the entire cluster?

Proxmox Backup Server?

catlifeonmars26 days ago

> I have flirted with self-hosting at home for years. I always bounced off it - too much time spent configuring instead of using. It just wasn't fun.

No judgement, but wanting to tinker/spend time on configuration is a major reason why many people do self-host.

jordanf26 days ago

yeah, for sure! i realize that and respect it. i wrote a little bit about it here actually: https://fulghum.io/fun2

tezza26 days ago

Wait… tailscale connection to your own network, and unsupervised sysadmin from an oracle that hallucinates and bases its decisions on blog post aggregates?

p0wnland. this will have script kiddies rubbing their hands

asciii26 days ago

Hope OP has nice neighbors, because sharing that password is basically handing over the keys to the kingdom

jordanf26 days ago

sharing what password?

chasd0026 days ago

What I do at home is Ubuntu on a cheap small computer I found on eBay. ufw blocks everything except 80, 443, and 22. Set up SSH to not use passwords, and ensure nginx+letsencrypt doesn't run as root. Then forward 80 and 443 from my home router to the server so it's reachable from the internet. That's about it; now I have an internet-accessible reverse proxy to surface anything running on that server. The computers on the same LAN (just my laptop, basically) have hosts-file entries for the server. My registrar handles DNS for the external side (the router's public IP). SSH'ing to the server requires a LAN IP, but that's no big deal; I'm at home whenever I'm working on it anyway.
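The firewall half is just a few commands. A sketch of the same setup, not my exact config:

    sudo ufw default deny incoming
    sudo ufw allow 22/tcp
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable
    # and in /etc/ssh/sshd_config:
    #   PasswordAuthentication no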

dizhn26 days ago

Put WireGuard on that thing and don't expose anything else on your public IP. Better yet, don't give the server a public IP at all; just forward the WireGuard port from your router. That's it. No firewall, no nothing. Not even accidental exposure.
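A minimal sketch of the server side (keys and addresses are placeholders):

    # server side: /etc/wireguard/wg0.conf
    sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    PublicKey = <laptop-public-key>
    AllowedIPs = 10.8.0.2/32
    EOF
    sudo wg-quick up wg0
    # then forward only UDP 51820 on the router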

drnick126 days ago

> Put wireguard on that thing and don't expose anything on your public IP. Better yet don't have a public IP.

This is nonsense. You can't self-host services meant to interact with the public (such as email, websites, Matrix servers, etc.) without a public IP, preferably one that is fixed.

tstrimple26 days ago

Sure you can. It’s what cloudflared and services like it are designed for.
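e.g. the zero-setup variant, which hands you a random trycloudflare.com URL with no account needed:

    cloudflared tunnel --url http://localhost:8080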

legojoey1726 days ago

I just got around to a fresh NixOS install and I couldn't be happier as I've been able to do practically everything via Codex while keeping things concise and documented (given it's nix, not a bunch of commands of the past).

I recently had a bunch of breakages and needed to port my setup: I had a complicated k3s-in-a-Proxmox-container arrangement but needed it in a VM to fix various disk mounts (I had hacked ZFS mounts on, and was swapping it all for Longhorn).

As expected, life happened and I stopped having time for anything, so the homelab was out of commission. I would probably still be sitting on my broken lab, given the lack of time.

ibizaman26 days ago

You might be interested in checking out my project SelfHostBlocks, which lets you declaratively set up quite a few services with declarative LDAP and SSO integration via LLDAP and Authelia. Even if you don't end up using it, it might inspire you. Also, all integrations are tested with NixOS VM tests using Playwright to ensure no breakage.

https://github.com/ibizaman/selfhostblocks

legojoey1720 days ago

Cool, I'll definitely take a look! I do have a preference for container-oriented setups and have an elaborate set of plumbing on Kubernetes at the moment.

That being said, I procrastinated on getting Postgres backups working and ended up causing self-inflicted corruption, so it's nice to see you've got that set up and have thought of pretty much everything!

wswin26 days ago

Home NAS servers already ship with user-friendly GUIs. Personally I haven't used them, but I would certainly prefer that, or recommend it to tech-illiterate people, instead of letting an LLM manage the server.

Finbarr26 days ago

I used Codex to set up a raspberry pi as a VPN with WireGuard. I had no similar experience before and it was super easy. I used Claude Code to audit and clean up a 10+ year old AWS account- patching security, shutting down redundant services, simplifying the structure. I want Claude Code to replace every bad UI out there. I know what outcome I want and don’t need to learn all the details to get there.

comrade123426 days ago

Prices are going to have an effect here. I have a 76TB backup array of 8 drives. A few months ago one of my 10TB drives failed and I replaced it with a 12TB WD Gold for 269CHF. I was thinking of building a new backup array (for fun), so I priced the same drive, and now it's 409CHF.

It's not tariffs (I'm in Switzerland). It's 100% the buildout of data centers for AI.

dpe8226 days ago

I've recently begun moving the systems I administer to Claude-written NixOS configs. Nix is great but can be a real pain to write yourself; Claude removes the pain.

hooo26 days ago

Me too... using that same logic.

dpe8226 days ago

Now if only there were a Nix-like system for FreeBSD! :)

duttish26 days ago

I've been building a home library system, mainly for personal use. I want to run it cheaply, so a $4 Black Friday sale OVH VPS is perfect.

But I wanted decent deployments. Hosting an image registry cost 3-4x as much as the server itself, and sending over the container image took over an hour due to large image-processing Python dependencies.

The solution? I had a think and a chat with Claude Code, and now I have blue-green deployments where I just upload the code, which takes 5 seconds; everything is then run by systemd. I looked at the various PaaSes, but they ran up to $40/month with compute + database etc.
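The deploy step itself is tiny; a rough sketch with invented names and paths:

    # upload the new version, flip the symlink, restart the unit
    rsync -az ./app/ vps:/srv/app-green/
    ssh vps 'ln -sfn /srv/app-green /srv/app-current && sudo systemctl restart app'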

I would probably never have built this myself. I'd have gotten bored 1/3 through. Now it's working like a charm.

Is it enterprise grade? Gods no. Is it good enough? Yes.

Draiken26 days ago

This summarizes what LLMs are best at: hobby projects where you care mostly about the outcome and won't have to actively maintain things forever.

When using them with production code they are a liability more than a resource.

recvonline26 days ago

I started the same project at the end of last year, and it's true: having an LLM guide you through the setup and write docs is a real game changer!

I just wish this post weren't written by an LLM! I miss the days when you could feel the nerdy joy through words across the internet.

amelius26 days ago

> The reason is simple: CLI agents like Claude Code make self-hosting on a cheapo home server dramatically easier and actually fun.

But I want to host an LLM.

minihoster27 days ago

Might as well ask here in case the author or anyone else with a similar setup is reading. Has anyone run into stability issues running a bunch of self-hosting stuff on a Mac mini M1 (8GB)? My setup is pretty basic: Docker running Jellyfin, Immich, *arr software, qBittorrent. Stuff is stored on a NAS over SMB. Usually within a few hours of rebooting, the OS, or at least userspace, totally freezes. SSH connections are instantly closed and screen sharing doesn't work. It responds to ping for a while, but that also goes down eventually. Pretty stumped...

tamimio26 days ago

Nope, never trust AI to do such things; it's bound to cause issues. Maybe as an assistant only, but never installed on the same server, and worse, given the privilege to access it and execute commands.

cmiles827 days ago

Anyone serious about tech should have a homelab. It's a small capital investment that lasts for years, and with Proxmox or similar, having your own personal "private cloud" on demand is simple.

shamiln26 days ago

Tailscale was never the unlock for me, but I guess I never was the typical use case here.

I have a 1U (or more), sitting in a rack in a local datacenter. I have an IP block to myself.

Those servers are publicly reachable, but only a few ports are exposed: mail, HTTP traffic, and SSH (for Git).

I guess my use case also changes in that I don’t use things just for me to consume, select others can consume services I host.

My definition of self-hosting isn't that I, and only I, can access my services; that'd just be me having a server at home with some non-critical things on it.

zrail26 days ago

Curious how long you've been sitting on the IP block. I've been nosing around getting an ASN to mess around with the lower level internet bones but a /24 is just way too expensive these days. Even justifying an ASN is hard, since the minimum cost is $275/year through ARIN.

bakies26 days ago

Is that the minimum for an ASN? A /24 is a lot of public IP space! I'd expect to just get a static IP from an ISP if I were to colo like this

zrail26 days ago

The minimum publicly routable IPv4 subnet is /24 and IPv6 is /48. IPv6 is effectively free, there are places that will lease a /48 for $8/year, whereas as far as I can tell it's multiple thousands of USD per year to acquire or lease a /24 of IPv4.

nojs26 days ago

This post is spot on, the combo of tailscale + Claude Code is a game changer. This is particularly true for companies as well.

CC lets you hack together internal tools quickly, and Tailscale means you can safely deploy them without worrying about hardening the app and server against the outside world. And Tailscale ACLs let you fully control who can access which services.

It also means you can literally host the tools on a server in your office, if you really want to.

Putting CC on the server makes this set up even better. It’s extremely good at system admin.

jackschultz27 days ago

I literally did this yesterday and had the same thought. An older computer (8 GB RAM) with a crappy Windows install I never used, and I thought: huh, I wonder how well these models can take me through installing Linux, with the goal of Docker deploys of relatively basic things like cron tasks, a personal Postgres, and MinIO that I can use for my own shared data.

It took a couple of hours, with a few snags along the way, but the model had me go through the setup for Debian: how to get through the setup GUI, and what to check to make it server-only. Then it took me through the commands to run so it wouldn't stop when I closed the laptop, helped with Tailscale, and got the SSH keys all set up. Heck, it even suggested doing daily dumps of the database, saving them to MinIO, and removing old ones afterwards. It also knows about the limitations of 8 GB of RAM and how to make sure the Docker settings for the different services I want to build don't cause issues.

Give me a month, truly strong intentions, and the ability to google, read posts, and find the answers on my own, and I still don't think I would have gotten to this point with the amount of trust I have in the setup.

I very much agree with this topic of self-hosting coming alive because these models can walk you through everything. Self-building and self-hosting can really come alive. And in the future, when open models are that much better and hardware costs come down (maybe; just guessing, of course), we'll be able to host our own agents on these machines we've already set up. All of it done ourselves.

efilife27 days ago

how many times will I get clickbaited by some cool title only to see AI praise in the article and nothing more? It's tiring and happens way too often

related "webdev is fun again": claude. https://ma.ttias.be/web-development-is-fun-again/

Also the "Why it matters" in the article. I thought it's a jab at AI-generated articles but it starts too look like the article was AI written as well

jacobthesnakob26 days ago

Maybe because I don’t do SWE for my job, but I have fun writing docker-compose files, troubleshooting them, and adding containers to my server. Then I understand how/why stuff works if it breaks, why would I want to hand that over to an AI?

Waiting for the follow-on article “Claude Code reformatted my NAS and I lost my entire media collection.”

chasing0entropy26 days ago

ROFL. There have been at least two posts about Claude deleting a repository without confirmation, and one where it wiped an entire partition

keybored27 days ago

Everything is now not-niche but on the cusp of hitting the mainstream. Like Formal Methods.[1] But they were nice enough to put it in the title. Then tptacek replied that he “called it a little bit” because of: Did Semgrep Just Get A Lot More Interesting?[2] (Why? What could the reason be?)

[1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...

[2] https://fly.io/blog/semgrep-but-for-real-now/

efilife26 days ago

PSA: the title has been changed since

wantlotsofcurry26 days ago

Was this article written mostly by Claude? It definitely reads like it was.

jordanf26 days ago

No

loufe26 days ago

Threads like this one make me feel at home. Last night I spent an hour figuring out how to adjust Tailscale to give me access to containers on a macvlan on my NAS when I connect from away from home. Claude is an excellent tool for helping me make informed decisions. I find the knowledge needs to be double-checked more than in some domains (I'm a big fan of asking Claude to search online for information before using its output as the basis for any decision), but I still feel like I'm learning the WHY and the HOW, because I can still ask.

I share a lot of the same hesitations as others in the thread about using a giant US-based tech company's tool for research, as well as another US giant's tool to manage access, but it's really a game changer, and I'd be unable to find the time to do everything I want if I didn't have access to these otherwise.

I'm not even a software guy by training; my network is already complicated enough that learning and correctly securing things otherwise simply wouldn't be feasible with the time and energy I'd like to dedicate to it.

JodieBenitez26 days ago

So it's self hosting but with a paid and closed saas dependency ? I'll pass.

HarHarVeryFunny26 days ago

Doesn't have to be that way though. As discussed here recently, a basic local agent like Claude Code is only a couple hundred lines of code, and could easily be written by something like Claude Code if you didn't want to do it yourself.

If you have your own agent, then it can talk to whatever you want - could be OpenRouter configured to some free model, or could be to a local model too. If the local model wasn't knowledgeable enough for sysadmin you could perhaps use installable skills (scripts/programs) for sysadmin tasks, with those having been written by a more powerful model/agent.

visageunknown26 days ago

I find LLMs remove all the fun for me. When I build my homelab, I want the satisfaction of knowing that I did it. And the learning gains that only come from doing it manually. I don't mind using an LLM to shortcut areas that are just pure pain with no reward, but I abstain from using it as much as possible. It gives you the illusion that you've accomplished something.

lurking_swe26 days ago

> It gives you the illusion that you've accomplished something.

What’s the goal? If the act of _building_ a homelab is the fun then i agree 100%. If _having_ a reliable homelab that the family can enjoy is the goal, then this doesn’t matter.

For me personally, my focus is on “shipping” something reliable with little fuss. Most of my homelab skills don’t translate to my day job anyway. My homelab has a few docker compose stacks, whereas at work we have an internal platform team that lets me easily deploy a service on K8s. The only overlap here is docker lol. Manually tinkering with ports and firewall rules, using sqlite, backups with rsync, etc…all irrelevant if you’re working with AWS from 9-5.

I guess I’m just pointing out that some people want to build it and move on.

visageunknown26 days ago

If your sole goal is to have a homelab that self-hosts services, I completely agree. I'm speaking for those who are interested in developing their skills and knowledge, and believe that building something with AI somehow does that.

I'll agree to disagree on it not being applicable. Fundamental knowledge of topics like networking, gained through homelabbing, has helped me develop my understanding from the ground up. It helps in ways that are not always obvious. But if your goal is purely to be better at your day job, it is not the most efficient path.

lee_ars26 days ago

>I don't mind using an LLM to shortcut areas that are just pure pain with no reward...

Enlightenment here comes when you realize others are doing the exact same thing with the exact same justification, and everyone's pain/reward threshold is different. The argument you are making justifies their usage as well as yours.

visageunknown26 days ago

That may be true. Ultimately, what I'd advise is for people to be cognizant of their goals and whether AI does or does not help to achieve them.

torginus26 days ago

The thing about anything that actually gets used is that what removes the fun quickest is when it breaks and the people who actually want to use it start complaining.

In that case, it's not about the 'joy of creation' but about getting everything up and running again, and for that, LLMs are indispensable.

visageunknown26 days ago

I don't disagree. All depends on what you're looking to get out of it.

cyberrock26 days ago

Getting it up and running is fun, but I find maintaining some services a pain. For example, Authelia has breaking configuration changes every minor release, and fixing that easily takes one to several hours every time. I gave up for 4.38 and just tossed the patch notes into NotebookLM.

visageunknown26 days ago

Definitely. That's a great use case. How do you use NotebookLM? First time I'm hearing about it

cyberrock25 days ago

I've been mostly using it as what I would call a "medium scope search engine". Instead of searching "$topic" or "$topic site:wikipedia.org", I can pick a few dozen links from different sources (wiki, documentation, tax code, papers, videos), toss it in NotebookLM, submit my search query in the form of a question, and look at the linked source. I see it as an evolution of doing research through library books, Internet search, and Wikipedia. I didn't know I wanted something like this until I used NotebookLM this way. It also seems to handle multiple languages reasonably well.

visageunknown25 days ago

very cool. thanks!

Gigachad26 days ago

I don’t give them direct access to my computer. I just use them as an alternative to scrolling reddit for answers. Then I take the actions myself.

jordanf26 days ago

yeah. I wrote a little about that here: https://fulghum.io/fun2

nickdothutton26 days ago

On the one hand, self-hosting, even at home, is more accessible than it has ever been: hardware, software, and agents to help with setup and maintenance. At the same time, ISPs, the big email providers, and even (in the UK) government legislation make it more difficult or risky than it has ever been. We have gained much but also lost much since the mid 1990s.

atmosx27 days ago

Just make sure you have a local and remote backup server.

From time to time, test the restore process.

__MatrixMan__26 days ago

I haven't tried it yet, but the evil twin to this practice is to nuke everything periodically to ensure that your agent isn't relying on any filesystem state that it hasn't specified builds for (i.e. https://grahamc.com/blog/erase-your-darlings/).

They tend to slip out of declarative mode and start making untracked changes to the system from time to time.

yencabulator26 days ago

Claude with root access will ensure there's "motivation" to run the restore process regularly.

compounding_it26 days ago

I don't really understand this post completely.

>I am spending time using software, learning, and having fun - instead of maintaining it and stressing out about it.

Using software, learning, and having fun with what? Everything is being done by Claude here. Part of the fun and learning is learning to use and maintain it in the first place. How will you learn anything if Claude is doing everything for you? You won't understand how things work or where everything goes.

This post could have been written, or at least modified, by an LLM, but more importantly I think this person is completely missing the point of self-hosting and learning.

dannersy26 days ago

They get to feel like hackerman without understanding any of it. Also, this feels like a security nightmare. I wouldn't self host anything without understanding what you're opening yourself up to.

Draiken26 days ago

LLMs give you that dopamine hit without the effort.

I did it! Except you didn't and you don't know anything about what it did or learned anything along the way. Success?

jordanf26 days ago

hi, OP here. people have different reasons/motivations for doing stuff, right? i wrote about it here: https://fulghum.io/fun2

river_otter26 days ago

Next level up is self hosting your LLM! I put LM Studio on a mac mini at home and have been extremely happy with it. Then you can use a tool like opencode to connect to that LLM and boom, Claude Code dependency is removed and you just got even more self-hosted. For what you're using Claude Code for, a smaller open-weight model would probably work fine
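
LM Studio serves an OpenAI-compatible API on port 1234 by default, so before wiring up opencode you can sanity-check it with plain curl. Something like this should work (the model name is just a placeholder for whatever you've loaded):

    curl http://localhost:1234/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-oss-20b",
        "messages": [{"role": "user", "content": "say pong"}]
      }'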

NicoJuicy26 days ago

Well, to a limit. I have an RTX 3090 24gb that enables a lot of use-cases.

But for what i'm using Agents right now, claude code is the tool to go.

river_otter26 days ago

makes sense. You could look at something like https://github.com/musistudio/claude-code-router if at some point you're interested in going down that path. I've been using gpt-oss-20b which would fit on your GPU and I've found useful for basic tasks like recipe creation and agentic tool usage (I use it with Notion MCP tools)

NicoJuicy25 days ago

It's a really good model for its size, but context length is a serious constraint if you want to avoid hallucinations.

tawman25 days ago

I do the same thing on my Hostinger VPS with Claude, even though I have been using Linux for 30 years. It just removes the friction and time. I version control the DevOps with git, and even had Claude set up automated backups to my Google Drive via cron.

    workdir/
    ├── README.md
    ├── CLAUDE.md            # Claude Code instructions
    ├── BACKUP.md            # Backup documentation
    ├── .gitignore
    ├── traefik/
    │   ├── docker-compose.yml
    │   └── config/
    │       └── traefik.yml
    ├── authentik/
    │   ├── docker-compose.yml
    │   └── .env.example
    ├── umami/
    │   ├── docker-compose.yml
    │   └── .env.example
    ├── n8n/
    │   ├── docker-compose.yml
    │   └── .env.example
    └── backup/
        ├── backup.sh        # Automated backup script
        ├── restore.sh       # Restore from backup
        ├── verify.sh        # Verify backup integrity
        ├── list-backups.sh  # List available backups
        └── .env.example
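
backup.sh is roughly this, minus error handling (a sketch, not the exact script; it assumes an rclone remote named "gdrive" and that workdir lives at /opt/workdir, both of which are illustrative), plus a crontab line like "0 3 * * * /opt/workdir/backup/backup.sh" for the nightly run:

    #!/bin/sh
    # Sketch: tar up the stack directories and push the archive
    # to Google Drive via a pre-configured rclone remote.
    set -eu
    cd /opt/workdir
    STAMP=$(date +%Y%m%d-%H%M%S)
    tar czf "/tmp/homelab-$STAMP.tar.gz" traefik authentik umami n8n
    rclone copy "/tmp/homelab-$STAMP.tar.gz" gdrive:homelab-backups/
    rm -f "/tmp/homelab-$STAMP.tar.gz"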

benzguo26 days ago

Great post! Totally agree – agents like Claude Code make self-hosting a lot more realistic and low maintenance for the average dev.

We've gone a step further, and made this even easier with https://zo.computer

You get a server, and a lot of useful built-in functionality (like the ability to text with your server)

danpalmer26 days ago

There's something ironic about using Claude Code – a closed-source service, that you can't self-host the hardware for, and that you can't get access to the data for – to self-host so that you can reduce your dependencies on things.

SchemaLoad26 days ago

Before you had to rely on blog posts and reddit for information, something you also couldn't self host. And if you are just asking it questions and taking actions yourself, you are learning how it works to do it yourself next time.

danpalmer26 days ago

Or you could read man pages, ask people for help, read books... all of which are more closely aligned with self-hosting than outsourcing the whole process.

I agree you could use LLMs to learn how it works, but given that they explain and do the actions, I suspect the vast majority aren't learning anything. I've helped students who are learning to code, and very often they just copy/paste back and forth and ignore the actual content.

SchemaLoad26 days ago

Sure, you could. But this isn't my job, it isn't my career. I just want Nextcloud running on a machine at home. I know linux and docker well enough to validate the ideas coming out of Gemini, and it helps me find stuff much faster than if I had to read man pages or read books.

And I find the stuff that the average self hoster needs is so surface level that LLMs flawlessly provide solutions.

johnisgood26 days ago

That would be ideal, but there are software engineers who use Tailscale, so I think our expectations are too high.

__MatrixMan__26 days ago

There is, but if I have to choose between tolerating the irony and waiting for the hardware/model performance situation to improve before getting started, I'll ironically mark self-hosting a Claude equivalent as a TODO and get started on the other stuff now.

raincole26 days ago

If I google how to host a Wordpress blog are you going to tell me what I am doing is "ironic" because Google is not hosted by me? Even more ironic, Google has a competing product, blogspot! How ironic!

itchingsphynx26 days ago

Ahh yes, the irony is not lost on me: using a paid closed-source service to create and help manage a self-hosted service running FOSS. I thought the point was that I didn't want to pay SaaS subscription costs, but now I just need Claude Pro...

I'm asking Claude technical questions about setup, e.g., about a manual that I have skimmed but don't necessarily fully understand yet. How do I monitor this service? Oh, connect Tailscale and manage it with ACLs. But what do I do when it doesn't work or goes down? Ask Claude.

To get more accurate setup and diagnostics, I need to share config files, firewall rules, IPv6 GUAs, Tailscale ACLs... and Claude just eats it up, and now Anthropic knows it forever too. Sure, CGNAT, WireGuard, and SSH logins stand between us, but... Claude is running in a terminal window on a LAN device next to another terminal window that does have access to my server. Do I trust VS Code? Anthropic? The FOSS? Is this really self-hosting? Ahh, but I am learning stuff, right?

hendry26 days ago

Timely! I just re-setup my Pi5 with the help of Claude. https://github.com/kaihendry/ai-pi

Tbh I made the mistake of throwing away Ansible, so testing my setup was a pain!

Since with AI the focus should be on testing, perhaps it's sensible to drop Ansible for something like https://github.com/goss-org/goss
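
The idea would be something like this (an untested sketch; port 8096 is just an example service) - declare what "working" means, then let the agent iterate until it passes:

    cat > goss.yaml <<'EOF'
    port:
      tcp:8096:
        listening: true
    service:
      docker:
        enabled: true
        running: true
    EOF
    goss validate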

Things are happening so fast, I was impressed to see a Linux distro embrace using a SKILL.md! https://github.com/basecamp/omarchy/blob/master/default/omar...

notesinthefield27 days ago

I find myself a bit overwhelmed with hardware options during recent explorations. Seemingly everything can handle what I want: a local copy of my Bandcamp archive to stream via Jellyfin. Good times we're in, but even having good sysadmin skills, I wish someone would just tell me exactly what to buy.

devonhk26 days ago

> I wish someone would just tell me exactly what to buy.

I’ll bite. You can save a lot of money by buying used hardware. I recommend looking for old Dell OptiPlex towers on Facebook Marketplace or from local used computer stores. Lenovo ThinkCentres (e.g., m700 tiny) are also a great option if you prefer something with a smaller form factor.

I’d recommend disregarding advice from non-technical folks recommending brand new, expensive hardware, because it’s usually overkill.

SchemaLoad26 days ago

I spent so long trying to make Raspberry Pis work, but they just kind of suck and everything is harder on them. I only just discovered that there is an infinite supply of these micro desktops second-hand from offices/government. I was able to pick up a 9th-gen Intel with 16GB RAM for less than the cost of a Pi 5, and it's massively more powerful.

devonhk26 days ago

Yeah, they’re amazing value. I paid $125 CAD for a 4th gen i7 with 16GB of RAM about 5 years ago. It’s been running almost 24/7 ever since with no issues.

SchemaLoad26 days ago

You also don't have to deal with the usual annoyance of second hand gear like facebook marketplace and no delivery. These companies / governments have contracts with reseller companies who will buy the entire stock and sell them online just like buying new.

jacobthesnakob26 days ago

Pi’s are incredible little basic home servers but they can’t handle transcoding. Great option for places with very expensive electricity too.

lucb1e26 days ago

What's the power consumption on those?

I'm not familiar with Dell product names specifically, but 'tower' sounds like it'll sit there burning 200W idle. Old laptops (with the battery slid out) are what I've been opting for; they use barely anything more than the router they sit next to. Especially if you just want to serve static files, as GP seems to be looking for, an old smartphone would be enough, but there you can't remove the battery (since it won't run off of just the charger).

notesinthefield25 days ago

Old OptiPlexes, SFF or not, idled between 15 W and 30 W. I'd aim for SFFs specifically. I have run an FTP server for lab ISOs on a very old Android phone - not fun.

notesinthefield26 days ago

I forgot all about these after I stopped doing desktop support, thanks!

rr80826 days ago

Get started with a corporate-surplus mini PC on eBay. They're super cheap - search for "micro pc" - and if you get a recent CPU from Dell or Lenovo it should be under $200. You can install Fedora or another Linux distribution. Ask Claude for everything else.

lucb1e26 days ago

That's twice what I'd spend on a first server when you're still figuring out what you need!

My first "server" was a 65€ second-hand laptop including shipping iirc, in ~2010 euros so say maybe 100€ now when taking inflation into account. I used that for a number of years and had a good idea of what I wanted from my next setup (which wasn't much heavier, but a little newer cpu wasn't amiss after 3 years). Don't think one needs to even go so far as 200$ for a "local Bandcamp archive" (static file storage) and serving that via some streaming webserver

Jellyfin docs do mention "Not having a GPU is NOT recommended for Jellyfin, as video transcoding on the CPU is very performance demanding" but that's for on-the-fly video transcoding. If you transcode your videos to the desired format(s) upon import, or don't have any videos at all yet as in GP's case, it doesn't matter if the hardware is 20x slower. Worst case, you just watch that movie in source material quality: on a LAN you won't have network speed bottlenecks anyway, and transcoding on GPU is much more expensive (purchase + ongoing power costs) than the gigabit ethernet that you can already find by default on every laptop and router

StrLght27 days ago

> Your home server's new sysadmin: Claude Code

(In)famous last words?

le_meer26 days ago

Just got a home server. Immich is awesome! How's Caddy working out, though? I need a way to expose Immich to the public internet (not just a VPN), something like photos.domain.com.

For now I'm just using Cloudflare tunnels, but ideally I also want to do that myself (without getting DDoS)

digiown26 days ago

Look up mutual TLS / client authentication. Caddy and Immich both support it. Then you can expose it to the internet reasonably securely.
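
A sketch of the Caddy side (the directive names have shifted a bit across Caddy releases - older versions used trusted_ca_cert_file - so check the docs for your version; 2283 is Immich's default port, and the paths are illustrative):

    photos.example.com {
        reverse_proxy localhost:2283

        tls {
            client_auth {
                mode require_and_verify
                trust_pool file /etc/caddy/client-ca.pem
            }
        }
    }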

kilobaud26 days ago

I am curious what you mean by doing it yourself, i.e., do you mean (as perhaps an oversimplification) having a DNS record pointing at your home IP address? What are you wanting to see as the alternative to a Cloudflare tunnel?

le_meer25 days ago

I mean: how do I expose my home server to the internet without relying on externally hosted platforms like Cloudflare or Tailscale, while still minimising the risk of DoS?

mr-karan26 days ago

I've landed on a similar philosophy but with a slightly different approach to orchestration. Instead of managing everything interactively, I built a lightweight bash-based deployment system that uses rsync + docker compose across multiple machines.

The structure is dead simple: `machines/<hostname>/stacks/<service>/` with a `config.sh` per machine defining SSH settings and optional pre/post deploy hooks. One command syncs files and runs `docker compose up -d`.

I could see Claude Code being useful for debugging compose files or generating new stack configs, but having the deployment itself be a single `./deploy.sh homeserver media` keeps the feedback loop tight and auditable.
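
Stripped down, the deploy script is basically this (a simplified sketch; the variable names are illustrative and come from each machine's config.sh):

    #!/usr/bin/env bash
    # usage: ./deploy.sh <machine> <stack>, e.g. ./deploy.sh homeserver media
    set -euo pipefail
    machine="$1"; stack="$2"

    # per-machine settings: defines SSH_HOST and REMOTE_DIR
    source "machines/$machine/config.sh"

    # push the stack's files, then (re)start it remotely
    rsync -az --delete "machines/$machine/stacks/$stack/" "$SSH_HOST:$REMOTE_DIR/$stack/"
    ssh "$SSH_HOST" "cd $REMOTE_DIR/$stack && docker compose up -d"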

Draiken26 days ago

I use Ansible.

It's simple enough and I had some prior experience with it, so I merely have some variables, roles that render a docker-compose.yml.j2 template and boom. It all works, I have easy access to secrets, shared variables among stacks and run it with a simple `ansible-playbook` call.

If I forget/don't know the Ansible modules, Claude or their docs are really easy to use.

Every time I went down a bash script route I felt like I was re-inventing something like Ansible.
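
The core of the pattern fits in one small playbook - a sketch under the assumption of a "homeserver" inventory host (module names from memory, so double-check against the docs):

    cat > deploy-stack.yml <<'EOF'
    - hosts: homeserver
      tasks:
        - name: Render the compose file from a template
          ansible.builtin.template:
            src: docker-compose.yml.j2
            dest: "/opt/stacks/{{ stack_name }}/docker-compose.yml"

        - name: Bring the stack up
          ansible.builtin.command: docker compose up -d
          args:
            chdir: "/opt/stacks/{{ stack_name }}"
    EOF
    ansible-playbook deploy-stack.yml -e stack_name=immich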

neoromantique26 days ago

I have a very similar setup, but I use komo.do with Netbird.

It basically accomplishes the same thing, but gives a bit more UI for debugging when needed.

piqufoh26 days ago

I'm working on something very similar, but I've found that if I'm not doing the work, I forget what has been set up and how it's running a lot faster.

For example, I have ZFS running with a 5-bay HDD enclosure, and I honestly can't remember any of the rules about importing/exporting to stop/start/add/remove pools etc.
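
(Case in point, the cheat sheet I keep having to rediscover boils down to this; "tank" stands in for whatever your pool is called:)

    zpool status         # what's imported and healthy right now
    zpool export tank    # cleanly detach a pool before unplugging the enclosure
    zpool import         # scan attached disks for importable pools
    zpool import tank    # bring the pool back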

I have to write many clear notes and store them in a place where future me will find them - otherwise the system gets very flaky through my inability to remember what's active and what isn't. Running the service and having total control is fun, but it's a responsibility too.

mvanbaak26 days ago

This is the reason one should always ask the LLM to create scripts to complete the task. Asking it to do things directly is fine, but as you said, you will forget. If you instead always ask "Create a well-documented shell script to <your question here>", you get documentation automatically. One could go a step further and ask it to create a documented Terraform/Ansible/whatever setup in the tooling you prefer.

Draiken26 days ago

Write scripts for everything.

If you need to run the command once, you can now run it again in the future.

It's very tempting to just paste some commands (or ask AI to do it) but writing simple scripts like this is an amazing solution to these kinds of problems.

Even if the scripts get outdated and no longer work (maybe it's a new version of X) it'll give you a snapshot of what was done before.

ibizaman26 days ago

This is the reason I adore NixOS. The documentation is the code. Seriously.

Maledictus26 days ago

Which enclosure do you use, and can you recommend it?

elemdos26 days ago

I’ve also found AI to be super helpful for self-hosting but in a different way. I set up a Pocketbase instance with a Lovable-like app on top (repo here: https://github.com/tinykit-studio/tinykit) so I can just pull out my phone, vibecode something, and then instantly host it on the one server with a bunch of other apps. I’ve built a bunch of stuff for myself (journal, CRM, guitar tuner) but my favorite thing has been a period tracker for a close friend who didn’t want that data tracked + sold.

chromehearts26 days ago

Personally, I have a similar mini PC with Kubuntu installed, Coolify to deploy my projects, and Cloudflare tunnels to expose them to the internet. The mini PC is still usable for daily use, so that's great too.

tietjens26 days ago

This is very cool, and I'm doing something similar but without the Claude interface as the contact point for manipulating the server. What happens if one day Claude is down, or becomes too expensive, or is purchased by another company, etc.?

In that case you will be completely unable to navigate the infrastructure of the home server that your life will have become dependent on.

But a homeserver is always about your levels of risk, single points of failure. I'm personally willing to accept Tailscale but I'm not willing to give the manipulation of all services directly over to Claude.

HarHarVeryFunny26 days ago

Interesting use case for Claude Code, or any similar local executor talking to a remote AI (Gemini suggests that "Hybrid-Local AI Agent" is a generic name for these, although I've never heard it called that before).

I wonder if a local model might be enough for sysadmin skills, especially if it were trained specifically for this?

I wonder if iOS has enough hooks available that one could make a very small/simple agentic Siri replacement like this that could manage the iPhone at least better than Siri does (start and stop apps, control them, install them, configure the iPhone, etc.)?

elitan26 days ago

Been using Claude Code to build a small deployment tool (Frost) for exactly this use case. The meta experience is interesting - using an AI agent to build tooling that makes self-hosting easier.

What I've found: Claude Code is great at the "figure out this docker/nginx/systemd incantation" part but the orchestration layer (health checks, rollbacks, zero-downtime deploys) still benefits from purpose-built tooling. The AI handles the tedious config generation while you focus on the actual workflow.

github.com/elitan/frost if curious

syndacks26 days ago

Can the same thing be said for using docker compose etc. on a VPS to host a web app? I.e., can you get the ergonomics/ease of using Fly or Render?

Historically, managed platforms like Fly.io, Render, and DigitalOcean App Platform existed to solve three pain points:

1. Fear of misconfiguring Linux

2. Fear of Docker / Compose complexity

3. Fear of “what if it breaks at 2am?”

CLI agents (Claude Code, etc.) dramatically reduce (1) and (2), and partially reduce (3).

So the tradeoff has changed from:

“Pay $50–150/month to avoid yak-shaving” → “Pay $5–12/month and let an agent do the yak-shaving”

bicepjai27 days ago

I feel the same way. I now have around 7 projects hosted on a home server with Coolify + Cloudflare. I always worry about security, and I have seen many posts related to self-hosting trending on HN recently.

SchemaLoad26 days ago

For security just don't expose the server to the internet. Either set up wireguard or tailscale. You can set it up in a split tunnel config so your phone only uses the VPN for LAN requests.
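
The split tunnel is just a matter of what you put in AllowedIPs. A sketch of the client config (keys, hostname, and subnets are placeholders for your own):

    # client wg0.conf - only the home subnets are routed over the tunnel
    [Interface]
    PrivateKey = <client private key>
    Address = 10.8.0.2/32

    [Peer]
    PublicKey = <server public key>
    Endpoint = home.example.com:51820
    AllowedIPs = 192.168.1.0/24, 10.8.0.0/24
    PersistentKeepalive = 25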

bicepjai26 days ago

I am expecting Cloudflare Tunnel to take care of security. In fact, that is the only reason I am okay hosting from home. Are you talking about something more on top of Cloudflare Tunnel or extra security features or a replacement?

SchemaLoad26 days ago

Cloudflare Tunnel is a very similar solution. Just a different product for the same task.

sambuccid26 days ago

And if you prefer to learn well how to do it without AI, you can always try to do it manually the old way but then use AI at the end to review your config and spot any security issues

kzahel26 days ago

As an added bonus you could add on a mobile-first claude code UI on top of claude. I've been working on this and use it on my pi5 at home. https://yepanywhere.com/

(and no, this product is not against TOS as it is using the official claude code SDK unlike opencode https://yepanywhere.com/tos-compliance.html)

csomar26 days ago

Vibe-setting up a home network server with Vaultwarden is beyond reckless. LLMs have a tendency to overlook security in order to get things working. You are thereby exposing your passwords (and potentially your 2FA, as Bitwarden supports that) to the whole world. This is beyond stupid. Even before LLMs, my main concern with setting up Bitwarden on my own server was twofold: security and availability. LLMs don't fix the second point, but they make the first point much worse.

teiferer26 days ago

Vibe-maintaining is even worse than vibe-setting up.

And ironically all in the name of "self hosting". Claude code defies both words in that.

1shooner26 days ago

Others here mention Coolify for a homeserver. If you're looking for turnkey docker-compose based apps rather than just framework/runtime environments, I will recommend the runtipi project. I have found it to be simple and flexible. It offers an 'app store' like interface, and supports hosting your own app store. It manages certs and reverse proxy via traefik as well.

https://runtipi.io/

indigodaddy26 days ago

Cosmos Cloud is great too. I use it on a free tier OCI Ampere 24G VM

https://cosmos-cloud.io/

easterncalculus27 days ago

Nice. This is a great start. The next steps are backups and regular security updates. The former is probably pretty easy with Claude and a provider like Backblaze, for updates I wonder if "check for security issues with my software and update anything in need" will work well (and most importantly, how consistently). Alternatively, getting the AI to threat model and perform any docker hardening measures.

Then someday we self-host the AI itself, and it all comes together.

zrail26 days ago

My security update system is straightforward but it took quite a lot of thought to get here.

My self hosted things all run as docker containers inside Alpine VMs running on top of Proxmox. Services are defined with Docker Compose. One of those things is a Forgejo git server along with a runner in a separate VM. I have a single command that will deploy everything along with a Forgejo action that invokes that command on a push to main.

I then have Renovate running periodically set to auto-merge patch-level updates and tag updates.

Thus, Renovate keeps me up to date and git keeps everyone honest.
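
The Renovate side is a few lines of config - roughly this (from memory, so treat it as a sketch):

    cat > renovate.json <<'EOF'
    {
      "$schema": "https://docs.renovatebot.com/renovate-schema.json",
      "extends": ["config:recommended"],
      "packageRules": [
        {
          "matchUpdateTypes": ["patch", "digest"],
          "automerge": true
        }
      ]
    }
    EOF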

hmontazeri26 days ago

Love this. I also run all my stuff myself, and I'm not an infra expert by any means - I just know enough to self-host my app and services. I also built a remote monitoring agent using Go and Rails. I call it https://bareagent.io - it monitors servers and docker containers and sends notifications when an error occurs in any of those containers, as it is attached to the container logs.

sprainedankles27 days ago

Impeccable timing, I finally got around to putting some old hardware to use and getting a home assistant instance (and jellyfin, and immich, and nextcloud, ...) set up over winter break. Claude (and tailscale) saved hours of my time and enabled me to build enough momentum to get things configured. It's now feasible for me to spend 15-20 minutes knocking down homeserver tasks that I otherwise would've ignored. Quite fun!

hinkley27 days ago

What I’d really like is to run the admin interface for an app on a self hosted system behind firewalls, and push read replicas out into the cloud. But I haven’t seen a database where the master pushes data to the replicas instead of the replicas contacting the master. Which creates some pretty substantial tunneling problems that I don’t really want on my home network.

Is there a replica implementation that works in the direction I want?

chasing0entropy26 days ago

Use NAT hole punching if you're advanced, or you could fall back to IP/port filtering

hinkley26 days ago

Why do I have to use a tunnel and empower a machine I don’t control to mess with a machine I do? Why has this been made so difficult? Why wouldn’t a master be aware of all of its replicas? Raft does.

bakies26 days ago

Tailscale will take care of the networking if you install it in both locations.

jeena26 days ago

I self host a lot of stuff myself: https://uptime.jeena.net/status/everything

And until now without AI; I'm kind of curious, but afraid that it will bring my servers down and then I can't roll back :D But perhaps if I moved over to NixOS, it would be easy to roll back.

sciences4427 days ago

Interesting subject, thank you! I have a cluster of 2 Orange Pis (16 GB RAM each) plus a Raspberry Pi. I think it's high time to get them back on my desk. I never got very far with the setup for lack of time; it took so long to write the Ansible scripts/playbooks, but with Claude Code it's worth a try now. So thanks for the article; it makes me want to dust it off!

pmihaylov26 days ago

I also built a "devops" agent on top of claude code like that - I deployed it on my server and let it debug all the gnarly infra issues for me.

I route it through a familiar interface like Slack though, as I don't like SSHing from my phone, using a tool I built - https://www.claudecontrol.com/

everlier26 days ago

I use coding agents for this kind of problem very frequently. They work wonders debugging obscure system issues related to components I don't have the faintest idea about. I'm also building a homelab very soon. I think you may find this project useful: https://github.com/av/harbor

Havoc26 days ago

I'd suggest rather asking it to write you bash scripts.

And ideally doing it via LXC or a VM.

It's an extra complication, but it gives you something repeatable that you can stick in git.

bambax26 days ago

I self-host many things on a NAS (Asustor) using Portainer (a Docker UI/facilitator). It all works perfectly and has a marginal cost of about zero, since I need the NAS in any case.

But I wouldn't give the keys of the house to Claude or any LLM for that matter. When needed, I ask them questions and type commands myself. It's not that hard.

Gualdrapo27 days ago

One day when I have some extra bucks I'd try to get a home server running, but the idea of having something eating grid electricity 24/7 doesn't seem to play along well with this 3rd world budget. Are there some foolproof and not so costly off-grid/solar setups to look at (like a Raspberry-based thingy or similar)?

imiric26 days ago

Your fridge and other home appliances likely use much more power than whatever a small server would. The mini PC in the article is very power efficient. You likely won't notice it in your power bill, regardless of your budget. You could go with a solar-powered setup if you prefer, but IMO for this type of use case it would be overengineering.

noname12026 days ago

Mac Mini (M1 and later) under Asahi Linux just uses 5 W for a normal workload. If you push it to 100% of CPU it reaches 20 W. That’s very little.

atahanacar26 days ago

I doubt anyone who is so tight on cash that they have to think about the electricity cost of a home server can afford a Mac.

SchemaLoad26 days ago

Only thing is you can't run Proxmox which makes self hosting much better, and you'll be limited to ARM builds, which on server is at least a lot easier than trying to run desktop apps. Modern micro desktops are also fairly power efficient, perhaps not quite as low as the mac, but much lower than a regular gaming desktop idling.

Avoid stacking in too many hard drives since each one uses almost as much power as the desktop does at idle.

donatj26 days ago

I have been self-hosting since the late 90s, but I've always just installed everything on bare metal. I hear more and more about these elaborate Docker setups. What does a setup like this actually look like?

Is it just a single docker-compose.yml with everything you want to run and 'docker compose up'?

abc123abc12326 days ago

And why would I bother with a home setup? Sure, for industrial IT go for it - VMs and/or containers - but for my own personal stuff, bare metal, packages, and the good old-fashioned way is more than enough.

jordanf26 days ago

yeah basically.
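
Something like this, give or take (service names and images here are just examples):

    # docker-compose.yml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"
        volumes:
          - ./jellyfin/config:/config
          - /data/media:/media
        restart: unless-stopped

      vaultwarden:
        image: vaultwarden/server
        volumes:
          - ./vaultwarden:/data
        restart: unless-stopped

Then "docker compose up -d" and that's basically the whole ceremony.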

didntknowyou26 days ago

idk, exposing your home network to the world and trusting AI to produce secure code is not a risk I want to take

reactordev26 days ago

I just recently wrote my own agent that can gdb, objdump, nasm, cc, make, and more.

Agents are powerful. Even more so with skills and command line tools they can call to do things. You can even write custom tools (like I did) for them to use that allows for things like live debugging.

The tailscale piece to this setup is key.

megous26 days ago

My idea of fun is deeply tied to understanding how things work—learning them, then applying that knowledge in my own way, as simply as possible. That process gives me a sense of ownership and control, which is not something I get from an approach where AI does things for me that I do not understand.

geooot26 days ago

I also like using AI agents to do sysadmin stuff, especially with NixOS. On top of Nix being great, the configuration of the system being files gives the agent good context on the current state the system is in. Then when it does make changes, it's great to be able to review its work via diffs.

austin-cheney26 days ago

I have found that storage is up in price more than 60% from last year.

I am writing a personal application to simplify home server administration if anybody is interested: https://github.com/prettydiff/aphorio

walterraj26 days ago

I have a hard time reading things like “The last one is the real unlock.” or “That alone justified the box.” without immediately thinking of an AI trying to explain something. Not to say this was written with one, but the frequency with which I see phrasing like this nowadays is skyrocketing...

Sirikon26 days ago

Self hosting post. Tailscale.

It's comedic at this point.

teiferer26 days ago

Can just "self host" documents, email and chat on google workspace.

kissgyorgy26 days ago

My non-technical friend - never learned coding, doesn't know Linux, zero sysadmin experience - does this, and he can do anything without even knowing what Claude is doing. He learned some concepts recently, like Docker and SSH, but that's basically it.

tech_ken26 days ago

I think this is a good idea so long as you ensure you've got a good backup going or don't put anything super critical on there. I think it's seriously outside odds that Claude `rm -rf /`s your server, but definitely not 0%.

drchaim26 days ago

My workflow is a bit different in the sense that I open my Claude session on my laptop, in the directory of my Ansible homelab code, and I also give Claude SSH access to my homelab. But in the end it's almost the same. Great tool.

bilekas26 days ago

I recently got a ZimaBoard 2 and have been blown away by how powerful it is: x86 and 16GB, I think it was around $250. I have it running Proxmox, with a dedicated GPU for transcoding, all working out of the box with ZimaOS. And no AI needed.

larodi25 days ago

The author fails to recognize that CLI agents make all kinds of hosting easier and fun. Like publishing to Cloudflare Pages, which costs close to nothing and now takes seconds, while previously it could take days.

windex26 days ago

I had problems with Tailscale being flaky about a year ago - it would stop responding, taking down networking with it. I've since ripped it out and went with VPS-based WireGuard for all PCs and mobiles. Stable since then.

cryptica26 days ago

I started self-hosting after noticing that my AWS bill increased from like $300 per month to $600 per month within a couple of years. When looking at my bill, 3/4 of the cost was 'AWS Other'; mostly bandwidth. I couldn't understand why I was paying so much for bandwidth given that all my database instances ran on the same host as the app servers and I didn't have any regular communication between instances.

I suspect it may have been related to the Network File System (NFS)? Like whenever I read a file on the host machine, it goes across the data-center network and charges me? Is this correct?

Anyway, I just decided to take control of those costs. Took me 2 weeks of part-time work to migrate all my stuff to a self-hosted machine. I put everything behind Cloudflare with a load balancer. Was a bit tricky to configure as I'm hosting multiple domains from the same machine. It's a small form factor PC tower with 20 CPU cores; easily runs all my stuff though. In 2 months, I already recouped the full cost of the machine through savings in my AWS bill. Now I pay like $10 a month to Cloudflare and even that's basically an optional cost. I strongly recommend.

Anyway it's impressive how AWS costs had been creeping slowly and imperceptibly over time. With my own machine, I now have way more compute than I need. I did a calculation and figured out that to get the same CPU capacity (no throttling, no bandwidth limitations) on AWS, I would have to pay like $1400 per month... But amortized over 4 years my machine's cost is like $20 per month plus $5 per month to get a static IP address. I didn't need to change my internet plan other than that. So AWS EC2 represented a 56x cost factor. It's mind-boggling.

I think it's one of these costs that I kind of brushed under the carpet as "It's an investment." But eventually, this cost became a topic of conversation with my wife and she started making jokes about our contribution to Jeff Bezos' wife's diamond ring. Then it came to our attention that his megayacht is so large that it comes with a second yacht beside it. Then I understood where he got it all from. Though to be fair to him, he is a truly great businessman; he didn't get it from institutional money or complex hidden political scheme; he got it fair and square through a very clever business plan.

Over 5 years or so that I've been using AWS, the costs had been flat. Meanwhile the costs of the underlying hardware had dropped to like 1/56th... and I didn't even notice. Is anything more profitable than apathy and neglect?

jdsully26 days ago

The most likely culprit was talking to other nodes via their public IPs instead of their local ones. That gets billed as internet traffic (the most expensive kind). The second culprit is your database or other nodes being in different AZs, which incurs a cross-zone bandwidth charge.

Bandwidth inside the same zone is free.

cyber_kinetist26 days ago

No, 2026 is definitely not the year of home servers, because hardware has become too expensive.

Maybe viable if you have a bunch of spare parts laying around. But probably not when RAM and storage prices are off the charts!

tkgally26 days ago

I used Claude Code just yesterday in a similar way: to solve a computer problem that I previously would have tried googling.

I had a 30-year-old file on my Mac that I wanted to read the content of. I had created it in some kind of word processing software, but I couldn’t remember which (Nexus? Word? MacWrite? ClarisWorks? EGWORD?) and the file didn’t have an extension. I couldn’t read its content in any of the applications I have on my Mac now.

So I pointed CC at it and asked what it could tell me about the file. It looked inside the file data, identified the file type and the multiple character encodings in it, and went through a couple of conversion steps before outputting as clean plain text what I had written in 1996.

Maybe I could have found a utility on the web to do the same thing, but CC felt much quicker and easier.

pixelbyindex26 days ago

I also started experimenting with self-hosting in the last few years. Started with a simple Plex server, then gradually evolved my little setup into a handful of open-source apps that now cover most of what I use day to day.

There are a few important things to consider, like unstable IPs, home internet limits, and the occasional power issue. Cloud providers felt overpriced for what I needed, especially once storage was factored in.

In the end, I put together a small business where people can run their own Mac mini with a static IP: https://www.minimahost.com/

I’m continuing to work on it while keeping my regular software job. So far, the demand is not very high, or perhaps I am not great at marketing XD

jawns26 days ago

Remember: In all likelihood, your residential ISP does not permit you to operate a server.

Granted, that's rarely enforced, but if you're a stickler for that sort of thing, check your ISP's Acceptable Use Policy.

cafebeen27 days ago

This is great and echoes my experience, although I would add the caveat that this mostly applies to solo work. Once you need to collaborate or operate on a team, many of the limits of self-hosting return.

FatherOfCurses26 days ago

Telling us you did all this without sharing how is just bragging.

jaime-ez26 days ago

Has anyone got experience using Cloudflare Tunnels for a small-scale (5000 users/day) self-hosted web service? I just got 2 Dynabook XJ-40s (32 GB RAM, 512 GB SSD) for 200 USD each, and I'm going to replace my DO droplets (USD 150+ per month) with them. I plan to use Cloudflare Tunnel to make the service available to the internet without exposing my home network. Any downsides? (Besides Cloudflare being a MITM for the service, but it is not a privacy-focused business.)

WiSaGaN26 days ago

I had a similar experience when I found out that Claude Code can use SSH to connect to a remote server and diagnose any sysadmin issue there. It just feels really empowering.

RicoElectrico26 days ago

I just use Proxmox on Optiplex 3060 micro. On it, a Wireguard tunnel for remote admin. The ease of creating and tearing down dedicated containers makes it easy to experiment.

esbeeb26 days ago

I too have that same Dell Optiplex 3060 micro. I love it for experimenting also. Also use wireguard for remote access. I use incus for my Linux containers, preferring it to proxmox.

timwis26 days ago

Great article! I think a paragraph on your backup strategy would make it even more complete and compelling, particularly given you put your passwords and photos in there.

jordanf26 days ago

thanks. I fleshed that out a bit more. appreciate the feedback.

micw26 days ago

For me the most important benefit is that the agent can keep the docs up to date. When I make a change, I have it document what changed, how, and why.

nick2k326 days ago

All fine and great with Tailscale until your company places an iOS restriction on external VPNs and your work phone is also your primary phone :(

ivanjermakov26 days ago

Usually you can ask for a separate phone for work. I can't stand when personal devices are poisoned with Intune and other company crap.

jacobthesnakob26 days ago

My work WiFi blocked traffic to port 51820, the default WireGuard port. I was wondering why my VPN started failing to handshake one day. I changed my ports to 51821 that night and back in business. I checked our technology policy and there’s no “thou shalt not use a VPN” clause so no clue why someone one day decided to drop WireGuard traffic on the network.

teiferer26 days ago

Restrict use of private devices?

Though just blocking particular ports for this purpose is very 90s and obviously ineffective, as you demonstrated. Anybody proficient in installing wireguard also knows how to change ports.

teiferer26 days ago

> your work phone is also your primary phone :(

That's the flaw right there. Don't mix company assets with private use. Phone, laptop, car. Your life is already very dependent on your employer (through income); don't get yourself locked in even more by depending on them for personal tech. Plus, it's a security risk to your company.

Unless you have a low-paying job, which hardly anybody on HN does, you can afford your own phone and laptop. And IT won't find your messages to your girlfriend, or pictures you don't want others to see, or your browsing history.

mzhaase26 days ago

Instead of the vibe-admin approach, why not have the LLM write an Ansible playbook? At least it's repeatable and auditable that way.


fergie26 days ago

I see why this is easy and fun, but is it really "self-hosting" if you are dependent on a $1,200-a-year AI service to build and maintain it?

reachableceo27 days ago

Cloudron makes this even easier. Well worth $1.00 a day! It handles the entire stack (backups, monitoring, DNS, SSL, updates).

mintflow26 days ago

This is the reason I am creating a Debian VM on my macOS: to let Claude Code in YOLO mode do some experiments :)

khalic26 days ago

To the tailscale promotion team: can you guys please dial it back? The half hidden ads are seriously annoying

stuaxo26 days ago

Is everyone just running claude code not even in a container, letting it go wild and change stuff?

raxxorraxor26 days ago

I use Cursor and quickly let it run pretty wild. Claude doesn't seem to mind extracting auth info from everywhere. Cursor usually blacklists some files from AI access depending on language and environment, but Claude just queries environment variables without even simulating a bad conscience. Probably info that gets extracted by the next programmer using it. Well, whoops...

sgt26 days ago

Try Claude and LVM, Linux software RAID and partitions though, it's hilariously bad at it.

larodi26 days ago

System Concierge, not sysadmin.

HeartofCPU26 days ago

Great until Claude decides to delete your storage and all your containers are gone

Fokamul26 days ago

>Your home server's new sysadmin: Claude Code

Lol, no thank you. Btw do your knees hurt?

krupan26 days ago

Oh my gosh, everything you want to host comes with a docker compose file that requires you to tweak maybe two settings. Caddy as your web proxy has the absolute simplest setup possible. You don't need AI to help you with this. You got this. You just want to make sure you understand the basics so you (or your LLM) don't do anything brain-dead stupid. It's not that hard, you can do it!
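
Seriously, a complete Caddyfile for a service is this - Caddy even fetches and renews the TLS certificate itself (hostname and port here are examples, and your DNS record has to point at the box):

    jellyfin.example.com {
        reverse_proxy localhost:8096
    }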

tomashubelbauer26 days ago

I have a love-hate relationship with Home Assistant. I love its mission and I love it in spirit, but whenever I need to add or change something in it, I don't love the process. Without disparaging the work already done on improving it in recent years, I still find the UI and UX to be lacking. Claude Code has been shifting my perception much closer to the love end of the axis, because it allows me to side-step the boring parts of managing my Home Assistant instance and it is able to carry out the changes I want very reliably.

I still struggle with letting go of writing code and becoming only a full-time reviewer when it comes to AI agents doing programming, but I don't struggle in the slightest with assuming the position of a reviewer of the changes CC does to my HA instance, delegating all the work to it. The progress I made on making my house smart and setting up my dashboards has skyrocketed compared to before I started using CC to manage HA via its REST and WS APIs.
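
For anyone curious, the REST side needs nothing special: a long-lived access token from your HA profile page and calls like this (homeassistant.local:8123 is the default address; jq is just for readability):

    # list all entity ids known to Home Assistant
    curl -s -H "Authorization: Bearer $HA_TOKEN" \
      http://homeassistant.local:8123/api/states | jq '.[].entity_id'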

oulipo226 days ago

I would also suggest the great Karakeep for read-it-later :)

zmmmmm26 days ago

it's kind of fascinating, LLMs suddenly are making the Linux Desktop waaay more accessible, of all things.

All those fancy GUIs in Mac and Windows designed to be user friendly (but which most users hate and are baffled by anyway) are very hostile for models to access. But text configuration files? it's like a knife through butter for the LLMs to read and modify them. All of a sudden, Linux is MORE user friendly because you can just ask an LLM to fix things. Or even script them - "make it so my theme changes to dark at night and then back to light each morning" becomes something utterly trivial compared to the coding LLMs are being built to handle. But hey, if your OS really doesn't support something? the LLM can probably code up a whole app for you and integrate it in.
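
That theme example really is about two crontab lines on GNOME (42+; one gotcha is that cron jobs typically need the session's DBUS address exported for gsettings to reach the desktop, so treat this as a sketch):

    # switch to dark at 21:00, back to light at 07:00
    0 21 * * * gsettings set org.gnome.desktop.interface color-scheme 'prefer-dark'
    0 7  * * * gsettings set org.gnome.desktop.interface color-scheme 'default'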

I think it's going to be fascinating to see if the power of text based interfaces and their natural compatibility with LLMs transfers over into an upswing in open source operating systems.

zebnyc26 days ago

Basic question: If I wanted a simple self hosting solution for a bot with a database, what is the simplest solution / provider I can go with. This bot is just for me doesn't need to be accessible to the general public.

Thanks

chasing0entropy26 days ago

Ask chatGPT bro

e2e427 days ago

My stack: Claude Code working via CLIs; Coolify on Hetzner.

pablonaj26 days ago

Can you comment a bit on your setup? Sounds interesting.

Dbtabachnik26 days ago

How is readcheck any different than using raindrop.io?

teiferer26 days ago

Opens with "self-hosting" and then brings claude code into the mix. You realize it's not actually running locally right? Privcy-wise that's a nightmare. A non-deterministic blackbox running in somebody's AI cloud is controlling your server. Congrats.

apexalpha26 days ago

I am in the process of doing the same. I have a Netbird mesh (Tailscale but open source) with 3 k3s nodes. They are geographically separated for HA.

Claude and Gemini have been instrumental in helping me understand the core concepts of kubernetes, how to tune all these enterprise applications for high latency, think about architecture etc...

My biggest "wow, wtf?" moment was ben I was discussing the cluster architecture with Claude. It asked: want me to start the files?

I thought it meant update the notes, so replied 'yes'.

It spit out 2 sh files and 5 YAMLs that completely bootstrapped my cluster with a full GitOps setup using ArgoCD.

Learning while having a 24/7 senior tutor next to me has been insane value.

CuriouslyC26 days ago

Tailscale is pretty sweet. Cloudflare WARP is also pretty sweet, a little clunkier but you get argo routing for free and I trust Cloudflare for security.

noncoml26 days ago

Any opinions on Readeck vs Karakeep?

fnwbr26 days ago

Why does a post from January 2026 recommend Ubuntu 22.04?

journal26 days ago

none of you have what it takes to self host your perfect self hosting fantasy because most of you won't cooperate with others. keep waiting for that unicorn you wouldn't see standing right in front of you.

yyaakkqq26 days ago

"piping everything to sudo bash makes a home server easier and fun"

fassssst26 days ago

Umm, what happened to zero trust? Network security is not sufficient.

drnick126 days ago

Reminder: If you are using Tailscale or a VPS you aren't really self-hosting.

teiferer26 days ago

Or a non-local LLM to keep it all maintained.

alexdns26 days ago

"another few hundred USD for 8TB in NVMe SSD" lol

holyknight27 days ago

not with these hardware prices...

SchemaLoad26 days ago

Second hand micro desktops are still cheap, at least for now.

drnick126 days ago

Hardware that is considered e-waste (like a Core 2 Duo) makes a wonderful home server.

SchemaLoad26 days ago

You can go much newer than that and get semi modern intel chips second hand. For something that runs 24/7, the power cost will exceed the savings from using long obsolete chips.
