Back

A decade of Docker containers

358 points · 2 days ago · cacm.acm.org
bmitch3020 · 2 days ago

I've seen countless attempts to replace "docker build" and the Dockerfile. They often want to give tighter control over the build, sometimes binding tightly to a package manager. But the Dockerfile has continued because of its flexibility. Starting from a known filesystem/distribution, copying some files in, and then running arbitrary commands within that filesystem mirrored so nicely what operations teams have been doing for a long time. And as ugly as that flexibility is, I think it will remain the dominant solution for quite a while longer.
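The pattern described here is worth spelling out; a minimal sketch (base image, paths, and commands are illustrative):

```dockerfile
# Start from a known filesystem/distribution...
FROM debian:bookworm-slim
# ...copy some files in...
COPY ./app /opt/app
# ...then run arbitrary commands within that filesystem.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    /opt/app/setup.sh
CMD ["/opt/app/run"]
```

Those three verbs map one-to-one onto what an ops person would do by hand on a fresh machine, which is arguably why the format stuck.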

kccqzy · 2 days ago

> But the Dockerfile has continued because of its flexibility.

The flip side is that the world still hasn’t settled on a language-neutral build tool that works for all languages. Therefore we resort to running arbitrary commands to invoke language-specific package managers. In an alternate timeline where everyone uses Nix or Bazel or some such, docker build would be laughed out of the window.

muvlon · 2 days ago

As a Nix evangelist, I have to say: Nix is really not capable of replacing language-specific package managers.

> running arbitrary commands to invoke language-specific package managers.

This is exactly what we do in Nix. You see this everywhere in nixpkgs.

What sets Nix apart from Docker is not that it works well at a finer granularity, i.e. source-file-level, but that it has real hermeticity and thus reliable caching. That is, we also run arbitrary commands, but they don't get to talk to the internet and thus don't get to e.g. `apt update`.

In a Dockerfile, you can `apt update` all you want, and this makes the build layer cache a very leaky abstraction. This is merely an annoyance when working on an individual container build but would be a complete dealbreaker at linux-distro-scale, which is what Nix operates at.
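A hedged sketch of the leak: the layer cache is keyed on the instruction text (plus the parent layer), not on what the network returned, so the same line can mean different bytes depending on when it last ran.

```dockerfile
FROM ubuntu:24.04
# Cached the first time it runs; on later rebuilds the cache hits and
# the package index inside this layer silently goes stale. Bust the
# cache and the identical line fetches a *different* index.
RUN apt-get update
# Which version gets installed depends on whichever index the cache
# happened to preserve above -- the Dockerfile alone doesn't say.
RUN apt-get install -y curl
```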

kccqzy · 2 days ago

Fundamentally speaking, the key point is really just hermeticity and reliable caching. Running arbitrary commands was never the problem. What makes gcc a blessed command but the compiler for my own language an "arbitrary" one?

And in languages with insufficient abstraction power like C and Go, you often need to invoke a code generation tool to generate the sources; that's an extremely arbitrary command. These are just non-problems if you have hermetic builds and reliable caching.

xhcuvuvyc · 1 day ago

I mean, I guess at a theoretical level. In practice, it's just not a large problem.

poly2it · 2 days ago

Well, arbitrary granularity is possible with Nix, but the build systems of today simply do not utilise it. I've for example written an experimental C build system for Nix which handles all compiler orchestration and it works great, you get minimal recompilations and free distributed builds. It would be awesome if something like this was actually available for major languages (Rust?). Let me know if you're working on or have seen anything like this!

brightball · 2 days ago

Reminds me of the “Electric cars in reverse” video where the guy envisions a world where all vehicles are electric and tries to make the argument for gas engines.

anal_reactor · 2 days ago

Link?

_the_inflator · 1 day ago

There is some truth to it; however, in production it is simple: there either is a working deployment or there isn't.

Therefore I would rephrase your remarks as an upside: let others keep scratching their heads while the rest deploy working code to PROD.

I am glad there is a solution like Docker - with all its flaws. Nothing is flawless; there is always just yet another sub-optimal solution outweighing the others by a large margin.

kccqzy · 14 hours ago

Popularity of a technology usually isn’t perfectly correlated with how good it is.

> let others continue scratch their head while others deploy working code to PROD.

You make it sound like when docker build arrived on the scene, a cross-language hermetic build tool was still a research project. That’s just untrue.

whurley23 · 20 hours ago

This calls for https://xkcd.com/927/.

__MatrixMan__ · 2 days ago

There are some hurdles preventing that flow from achieving reproducible builds. As the bad guys get more sophisticated, it's going to become more and more important that one party can say "we trust this build hash" and a separate party to say "us too".

That's not going to work if both parties get different hashes when they build the image, which won't happen as long as file modification timestamps (and other such hazards) are part of what gets hashed.

bmitch3020 · 2 days ago

Recent versions of BuildKit have added support for SOURCE_DATE_EPOCH. I'd been making the images reproducible before that with my own tooling, using regctl image mod [1] to backdate the timestamps.

It's not just the timestamps you need to worry about. Tar needs to be consistent with the uid vs username, gzip compression depends on implementations and settings, and the json encoding can vary by implementation.

And all this assumes the commands being run are reproducible themselves. One issue I encountered there was how alpine tracks their package install state from apk, which is a tar file that includes timestamps. There are also timestamps in logs. Not to mention installing packages needs to pin those package versions.

All of this is hard, and the Dockerfile didn't make it easy, but it is possible. With the right tools installed, reproducing my own images has a documented process [2].

[1]: https://regclient.org/cli/regctl/image/mod/

[2]: https://regclient.org/install/#reproducible-builds
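For the tar side of this, the normalizations are mechanical; a sketch with GNU tar (the paths and epoch value are made up for the demo):

```shell
# Why naive archives aren't reproducible: mtimes, owner names, and
# entry order all leak into the bytes. Forcing each of them makes two
# builds byte-identical even if file timestamps changed in between.
export SOURCE_DATE_EPOCH=1700000000
work=/tmp/repro-demo
rm -rf "$work" && mkdir -p "$work/rootfs/etc"
echo hello > "$work/rootfs/etc/motd"

build() {
  tar --sort=name \
      --mtime="@$SOURCE_DATE_EPOCH" \
      --owner=0 --group=0 --numeric-owner \
      -cf "$1" -C "$work/rootfs" .
}

build "$work/a.tar"
touch "$work/rootfs/etc/motd"   # fresh mtime: would change a naive archive
build "$work/b.tar"

cmp -s "$work/a.tar" "$work/b.tar" && echo "byte-identical"
```

Compression then adds its own hazards on top: `gzip -n` drops the embedded filename and timestamp, but different gzip implementations or levels still produce different bytes.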

codethief · 1 day ago

> I've been making the images reproducible before that with my own tooling

I've been doing the same, using https://github.com/reproducible-containers/repro-sources-lis... . It allows you to precisely pin the state of the distro package sources in your Docker image, using snapshot.ubuntu.com & friends, so that you can fearlessly do `apt-get update && apt-get install XYZ`.
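The underlying trick, roughly (snapshot timestamp illustrative): point apt at an immutable point-in-time mirror, so `apt-get update` returns the same index on every rebuild:

```dockerfile
FROM debian:bookworm
# Pin the index to an immutable snapshot mirror. check-valid-until=no
# is needed because old snapshot Release files have long since expired;
# the default deb822 sources shipped in the image must go too.
RUN rm -f /etc/apt/sources.list.d/*.sources && \
    echo 'deb [check-valid-until=no] http://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main' \
      > /etc/apt/sources.list && \
    apt-get update && apt-get install -y curl
```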

computerfriend · 24 hours ago

I'm not sure if I was just holding it wrong, but I couldn't create images reproducibly using Docker. (I could get this working with Podman/buildah however.)

hsbauauvhabzb · 1 day ago

Does any of that matter if you’re not auditing the packages you install?

I’m more concerned about sources being poisoned over the build processes. Xz is a great example of this.

__MatrixMan__ · 1 day ago

Both are needed, but you get more bang for your buck focusing on build security than on audited sources. If the build is solid then it forces attackers to work in the open where all auditors can work together towards spoiling the attack.

If you flip it around and instead have magically audited source but a shaky build, then perhaps a diligent user can protect themself, but they do so by doing their own builds, which means they're unaware that the attack even exists. This allows the attacker to just keep trying until they compromise somebody who is less diligent.

Getting caught requires a user who analyses downloaded binaries in something like ghidra... who does that when it's much easier to just build them from sources instead? (answer: vanishingly few people). And even once the attacker is found out, they can just hide the same payload a bit differently, and the scanners will stop finding it again.

Also, "maybe the code itself is malicious" can only ever be solved the hard way, whereas we have a reasonable hope of someday providing an easy solution to the "maybe the build is malicious" problem.

miladyincontrol · 2 days ago

The lack of docker registry-like solutions really does seem to be the chokepoint for many alternatives.

Personally I love using mkosi, and while it has all the composability and deployment options I'd care for, it's clear not everyone wants to build starting from a blank set of OS templates.

nunez · 1 day ago

There are lots of alternative container image registries: quay, harbor, docker's open sourced one, the cloud providers, github, etc.

Or do you mean a replacement for docker hub?

whateveracct · 2 days ago

Nix is exceptionally good at making docker containers.

stabbles · 2 days ago

Does Nix do one layer per dependency? Does it run into >=128 layers issues?

In Spack [1] we do one layer per package; it's appealing, but I never checked if besides the layer limit it's actually bad for performance when doing filesystem operations.

[1] https://spack.readthedocs.io/en/latest/containers.html

hamandcheese · 1 day ago

This post has a great overview: https://grahamc.com/blog/nix-and-layered-docker-images/

tl;dr it will put one package per layer as much as possible, and compress everything else into the final layer. It uses the dependency graph to implement a reasonable heuristic for what stays fine-grained and what gets combined.

ghthor · 1 day ago

That layering algorithm is also configurable, though I couldn’t really understand how to configure it and just wrote my own post processing to optimize layering for my internal use case. I believe I can open source this w/o much work.

The layer layout is just a json file so it can be post processed w/o issue before passing to the nix docker builders

Spivak · 2 days ago

Yes but then you're committed to using Nix which doesn't work so well the moment you need some software not packaged by Nix.

Want to throw a requirements.txt in there? No no, why would you even ask that? Meanwhile docker says yeah sure just run pip install, why should I care?

okso · 2 days ago

LLMs are getting very good at packaging software using Nix.

CuriouslyC · 2 days ago

This. I wouldn't have touched Nix when you needed someone who was really good at Nix to keep it working, but agents make it viable to use in a number of places.

gnull · 2 days ago

Packaging for nix is exceptionally easy once you learn it. And once something is packaged, it's solved for everyone; it's not going to randomly break.

If you care about getting it to work with minimal effort right now more than about it being sustainable later, then sure.

saghm · 1 day ago

> Packaging for nix is exceptionally easy once you learn it

Most of the complaints I've seen about Nix are around documentation, so "once you learn it" might be the larger issue.

hamandcheese · 1 day ago

I don't know if I'd say it's "easy". The Python ecosystem in particular is quite hard to get working in a hermetic way (Nix or otherwise). Multiple attempts at getting Python easy to package with Nix have come and gone over the years.

whateveracct · 2 days ago

I use software from pretty much every language with Nix. And I package it myself too when needed. Including Python often :)

nothrabannosir · 2 days ago

Nix doesn't make sense if all you're going to use it for is building Docker images. It only makes sense if you're all in in the first place. Then Docker images are free.

ghthor · 1 day ago

Packaging software with nix is easier than with any other system TBH, and it just seems to be getting easier.

mikepurvis · 2 days ago

Especially if you use nix2container to take control over the layer construction and caching.

osigurdson · 1 day ago

I'm not sure if this is what you mean but in some ways it would be nice to have tighter coupling with a registry. Docker build is kind of like a multiplexer - pull from here or there and build locally, then tag and push somewhere else. Most of the time all pulls are from public registries, push to a single private one and the local image is never used at all.

It seems overly orthogonal for the typical use case but perhaps just not enough of an annoyance for anyone to change it.

alex_dev42 · 1 day ago

[dead]

Ameo · 1 day ago

^ this account has posted nothing but AI generated spam since it was created 6 hours ago

phplovesong · 2 days ago

You can pretty much replace "docker build" with "go build".

But as long as people want to use scripting languages (like php, python etc) I guess docker is the necessary evil.

well_ackshually · 2 days ago

>You can pretty much replace "docker build" with "go build".

I'll tell that to my CI runner, how easy is it for Go to download the Android SDK and to run Gradle? Can I also `go sonarqube` and `go run-my-pullrequest-verifications` ? Or are you also going to tell me that I can replace that with a shitty set of github actions ?

I'll also tell Microsoft they should update the C# definition to mark it down as a scripting language. And to actually give up on the whole language, why would they do anything when they could tell every developer to write if err != nil instead

Just because you have an extremely narrow view of the field doesn't mean it's the only thing that matters.

phplovesong · 23 hours ago

My point was that 90% of "dockerized" stuff is just scripting langs

garganzol · 2 days ago

Go is just one language, while Dockerfile gives you access to the whole universe with myriads of tools and options from early 1970s and up to the future. I don't know how you can compare or even "replace" Docker with Go; they belong to different categories.

osigurdson · 2 days ago

In some situations, yes; in others, no. For instance, if you want to control memory or CPU, using a container makes sense (unless you want to use cgroups directly). Also, if running Kubernetes, a container is needed.

matrss · 2 days ago

You have to differentiate between container images and "runtime" containers. You can have the former without the latter, and vice versa. They are entirely orthogonal things.

E.g. systemd exposes a lot of resource control as well as sandboxing options, to the point that I would argue that systemd services can be very similar to "traditional" runtime containers, without any image involved.
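For example, a hypothetical unit showing both halves, cgroup resource control and sandboxing; the directives are real systemd ones, the service name and limits are made up:

```ini
# myapp.service (hypothetical) -- no container image involved
[Service]
ExecStart=/usr/local/bin/myapp
# cgroup resource control, the same knobs a container runtime turns:
MemoryMax=512M
CPUQuota=50%
# sandboxing, approximating a "runtime" container:
ProtectSystem=strict
PrivateTmp=true
NoNewPrivileges=true
```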

aobdev · 2 days ago

Wasn’t this the same argument for .jar files?

yunwal · 2 days ago

> You can pretty much replace "docker build" with "go build".

Interesting. How does go build my python app?

phplovesong · 22 hours ago

It obviously means you don't use a scripting language; instead you use a real language with a compiler.

yunwal · 10 hours ago

Ok yeah let me just port pytorch over that should be quick

LtWorf · 13 hours ago

Calling "go" a "real language" is stretching the definition quite a bit.

Real languages don't let errors go silently unnoticed.

speedgoose · 2 days ago

It doesn't sound like Golang is going to dominate and replace everything else, so Docker is there to stay.

saghm · 1 day ago

At the risk of stating the obvious, there's quite a lot of languages besides just scripting languages and Go that get run in containers.

zbentley · 2 days ago

> the Dockerfile has continued because of its flexibility

I wish we had standardized on something other than shell commands, though. Puppet or terraform or something more declarative would have been such a better alternative to “everyone cargo cults ‘RUN apt-get upgrade’ onto the top of their dockerfiles”.

Like, the layer/stage/caching behavior is fine. I just wish the actual execution parts had been standardized using something at a higher level of abstraction than shell.

bheadmaster · 2 days ago

> Puppet or terraform or something more declarative would have been such a better alternative

Until you need to do something that isn't covered with its DSL, and you extend it with an external command execution declaration... At which point people will just write bash scripts anyway and use your declarative language as a glorified exec.

sofixa · 2 days ago

If you have 90-95% of everyone's needs (installing packages, compiling, putting files) covered in your DSL, and it has strong consistency and declarativeness, it's not that big of a problem if you need an escape hatch from time to time. Terraform, Puppet, Ansible, SaltStack show this pretty well, and the vast majority of them that isn't bash scripts is better and more maintainable than their equivalents in pure bash would be.

bheadmaster · 1 day ago

The problem is, ironically, that each DSL has its own execution platform, and is not designed for testability. Bash scripts may be hard to maintain, but at least you can write tests for them.

In Azure YAML I had an odd bug because I used succeeded() instead of not(failed()) as a condition. I had no way of testing the pipeline without executing it. And each DSL has its own special set of sharp edges.

At least Bash's common edges are well known.

avsm · 2 days ago

Docker broke out the build layer into a separate component called BuildKit (see HN discussion recently https://news.ycombinator.com/item?id=47166264).

However, Dockerfiles are so popular because they run shell commands and permit 'socially' extending someone else's shell commands; tacking commands onto the end of someone else's shell script is a natural process. /bin/sh is unreasonably effective at doing anything you need to a filesystem, and if the shell exposes a feature, it has probably been used in a Dockerfile somewhere.

Every other solution, especially declarative ones, tend to come up short when _layering_ images quickly and easily. However, I agree they're good if you control the entire declarative spec.
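That social extension is usually nothing fancier than building FROM someone else's image and appending to it (names and paths illustrative):

```dockerfile
# Take someone else's build as-is...
FROM nginx:1.27
# ...and tack your own commands onto the end.
COPY ./site /usr/share/nginx/html
RUN echo 'gzip on;' > /etc/nginx/conf.d/gzip.conf
```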

mihaelm · 2 days ago

I'd say LLB is the "standard", Dockerfile is just one of human-friendly frontends, but you can always make one yourself or use an alternative. For example, Dagger uses BuildKit directly for building its containers instead of going through a Dockerfile.

cpuguy83 · 2 days ago

Give https://github.com/project-dalec/dalec a look. It is more declarative. Has explicit abstractions for packages, caching, language level integrations, hermetic builds, source packages, system packages, and minimal containers.

It's a BuildKit frontend, so you still use "docker build".

harrall · 2 days ago

Declarative methods existed before Docker for years and they never caught on.

They sounded nice on paper, but they were somehow more annoying than the work they replaced.

I moved over to Docker when it came out because it used shell.

mort96 · 2 days ago

Dockerfile has the flexibility to do what you want though, no? Use a base image with terraform or puppet or opentofu or whatever pre-installed, then your Dockerfile can just run the right command to apply some declarative config file from the build context.

And if you want something weird that's not supported by your particular tool of choice, you have the escape hatch of running arbitrary commands in the Dockerfile.

What more do you want?

zbentley · 15 hours ago

The loose integration between the declarative tools and the container build system drags down performance and creates a lot of footguns re: image size and inert declarative-build-system transitive deps left lying around, I’ve found.

mort96 · 3 hours ago

Why would terraform leave transitive deps around? To my knowledge, Docker doesn't record a log of the IO syscalls performed by a RUN directive; the layer just reflects the actual changes it makes. It uses overlayfs, doesn't it? If you create a temporary file and then delete it within the same layer, there's no trace that the temporary file ever existed in overlayfs, correct?

I'd get your worry if we were talking about splitting up a terraform config and running it across multiple RUN directives, but we're not, are we?

toast0 · 2 days ago

Oof, not terraform please. If you use foreach and friends, dependency calculations are broken, because dependency calculation happens before dynamic rules are processed.

I'd get much better results if I used something else to do the foreach and gave terraform only static rules.

esseph · 2 days ago

The more you try and abstract from the OS, the more problems you're going to run into.

zbentley · 2 days ago

Bash is pretty darn abstracted from the OS, though. Puppet vs Bash is more about abstraction relative to the goal.

If your dockerfile says "ensure package X is installed at version Y" that's a lot clearer (and also easier to make performant/cached and deterministic) than "apt-get update; apt-get install $transitive-at-specific-version; apt-get install $the-thing-you-need-at-specific-version". I'm not thrilled at how distro-locked the shell version makes you, and how easy it is for accidental transitive changes to occur too.

But neither of those approaches is at a particularly low abstraction level relative to the OS itself; files and system calls are more or less hidden away in both package-manager-via-bash and puppet/terraform/whatever.

pixelmonkey · 2 days ago

The math of “a decade” seemed wrong to me, since I remembered Docker debuting in 2013 at PyCon US Santa Clara.

Then I found an HN comment I wrote a few years ago that confirmed this:

“[...] I remember that day pretty clearly because in the same lightning talk session, Solomon Hykes introduced the Python community to docker, while still working on dotCloud. This is what I think might have been the earliest public and recorded tech talk on the subject:”

YouTube link: https://youtu.be/1vui-LupKJI?t=1579

Note: starts at t=1579, which is 26:19.

Just being pedantic though. That’s about 13 years ago. The lightning talk is fun as a bit of computing history.

(Edit: as I was digging through the paper, they do cite this YouTube presentation, or a copy of it anyway, in the footnotes. And they refer to a 2013 release. Perhaps there was a multi-year delay between the paper being submitted to ACM with this title and it being published. Again, just being pedantic!)

avsm · 2 days ago

From another comment below, it's just a nice short title to convey that we're going back in time and not one to set your watch by.

    We first submitted the article to the CACM a while ago.
    The review process takes some time and "Twelve years of
    Docker containers" didn't have quite the same vibe.
(The CACM reviewers helped improve our article quite a bit. The time spent there was worth it!)
pixelmonkey · 2 days ago

Makes sense! Thanks for working on it -- truly a wonderful paper!

FlyingSnake · 2 days ago

You’re right, it was 2013. I was there on HN when docker was announced by shykes. It was a godsend because I was getting bummed by the alternatives like LXC, juju charms or vagrant.

Here’s the announcement from 2013:

https://news.ycombinator.com/item?id=5408002

pixelmonkey · 1 day ago

Nice find. Check out shykes commenting here on that thread!

https://news.ycombinator.com/item?id=5409678

musicale · 1 day ago

I still prefer LXC to docker. Improving libvirt and making virtualization a first-class OS feature with a library interface - vs. relying on an external tool and company interested in monetization - was and is the right approach.

moondev · 1 day ago

That's why I love Incus. It offers all three so you don't have to choose. OCI app containers, LXC containers and KVM.

throwaway_2494 · 22 hours ago

Back in the day when it came out, I gotta admit, Docker sort of got on my nerves as yet another thing coming to 'disrupt' how things are done.

(Half-assed NoSQL 'databases' with poorly thought out storage models, everything having to be a microservice, turning every function call into a fallible RPC call, etc...)

But I've come to appreciate it more, and I use it regularly now. I appreciate its relative simplicity.

But as in life, hell is other people's containers. My own I can at least try to keep simple and minimal.

But I have seen many take the kitchen-sink approach, giving me the feeling that even the developers don't seem to know how they arrived at their deployment anymore.

But this all seems quaint today. With LLMs, now we can look forward to a flood of code the developers haven't even looked at, but which is widely believed to work...

jellyfishbeaver · 22 hours ago

Having been on way too many middle-of-the-night Zoom calls watching my company's DevOps and development teams aimlessly troubleshoot issues with containers and similar cloud first technologies, I am convinced that nobody really understands what's happening.

LtWorf · 22 hours ago

I have read the setns() manpage. I'm feeling very special and unique :D :D

talkvoix · 2 days ago

A full decade since we took the 'it works on my machine' excuse and turned it into the industry standard architecture ('then we'll just ship your machine to production').

avsm · 2 days ago

(coauthor of the article here)

Well, before Docker I used to work on Xen and that possible future of massive block devices assembled using Vagrant and Packer has thankfully been avoided...

One thing that's hard to capture in the article -- but that permeated the early Dockercons -- is the (positive) disruption Docker had in how IT shops were run. Before that going to production was a giant effort, and 'shipping your filesystem' quickly was such a change in how people approached their work. We had so many people come up to us grateful that they could suddenly build services more quickly and get them into the hands of users without having to seek permission slips signed in triplicate.

We're seeing another seismic cultural shift now with coding agents, but I think Docker had a similar impact back then, and it was a really fun community spirit. Less so today with the giant hyperscalers all dominating, sadly, but I'll keep my fond memories :-)

talkvoix · 2 days ago

Great point about coding agents! Back then, Docker gave us 'it works on my machine, let's ship the machine'. Now, AI agents are giving us 'I have no idea how this works, let's ship the prompt'. The early Docker community spirit really was legendary though—before every hyperscaler wrapped it in 7 layers of proprietary managed services. Thanks for the memories and the write-up!

avsm · 2 days ago

Thanks for the kind words! I've been prodding @justincormack to resurrect the single most fun OS unconference I've ever attended -- New Directions in Operating Systems (last held back in 2014). https://operatingsystems.io

Some of those talks strangely make more sense today (e.g. Rump Kernels or unikernels + coding agents seems like a really good combination, as the agent could search all the way through the kernel layers as well).

supriyo-biswas · 1 day ago

It seems that your entire profile is LLM generated comments, would appreciate it if you'd stop. Thanks.

talkvoix · 20 hours ago

I think you're too concerned with how others speak to believe that someone can actually write good, well-written comments. Obviously, in the age of LLMs, we might use one or two things to produce a better response. But that doesn't mean I don't think about what I wrote. Especially since English isn't my native language. Anyway, don't worry about me!

bongripper · 1 day ago

[dead]

throwawaypath · 2 days ago

>massive block devices assembled using Vagrant and Packer has thankfully been avoided...

Funny comment considering lightweight/micro-VMs built with tools like Packer are what some in the industry are moving towards.

avsm · 2 days ago

And those lightweight VM base images are possible because Docker applied a downward pressure on OS base image sizes! Alpine Linux doesn't get enough credit for this; in addition to being a great base image, it was also the first distro to prioritise fast and small image creation (Gentoo and Arch were small, but not fast to create).

imtringued · 21 hours ago

There's also the part where you have a much easier time building alpine packages inside a container rather than a VM. For VMs there is no docker run equivalent that lets you quickly run a shell script inside the deployment Linux distribution to build the package.

ewams · 19 hours ago

Professor Madhavapeddy, am I understanding your comment correctly? "without having to seek permission slips signed in triplicate", the motivation to create Docker was because of IT bureaucracy?

syncsynchalt · 2 days ago

I see this take a lot but I'd argue what Docker did was to entice everyone to capture their build into a repeatable process (via a Dockerfile).

"Ship your machine to production" isn't so bad when you have a ten-line script to recreate the machine at the push of a button.

lioeters · 2 days ago

Exactly my feeling. Docker is "works on this machine" with an executable recipe to build the machine and the application. Newer better solutions like OCI-compliant tools will gradually replace Docker, but the paradigm shift has provided a lot of lasting value.

Gigachad · 2 days ago

Yeah, docker codifies what the process to convert a base Linux distro into a working platform for the app actually is. Every company I've worked at that didn't use docker just has this tribal knowledge or an outdated wiki page on the steps you need to take to get something to work, vs a dockerfile that exactly documents the process.

chuckadams · 2 days ago

It's the ultimate in static linking. Perhaps a question that should be asked is why that approach is so compelling?

blackcatsec · 2 days ago

I question that as well, it's also why Go is extremely popular. Could it just be a pendulum swing back towards static linking?

Wonder when some enterprising OSS dev will rebrand dynamic linking in the future...
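The container-as-static-linking analogy is most literal in the classic multi-stage Go build, where the final image is nothing but one static binary (module path and versions hypothetical):

```dockerfile
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 forces a fully static binary with no libc dependency.
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# scratch is the empty image: no shell, no libc, nothing dynamic runs.
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```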

jfjasdfuw · 2 days ago

CGO_ENABLED=0 is sigma tier.

I don't care about glibc or compatibility with /etc/nsswitch.conf.

look at the hack rust does because it uses libc:

> pub unsafe fn set_var<K: AsRef<OsStr>, V: AsRef<OsStr>>(key: K, value: V)

redhanuman · 2 days ago

The real trick was making "ship your machine" sound like best practice, and ten years later we're doing the same thing with AI: "it works in my notebook" just became "containerize the notebook and call it a pipeline". The abstraction always wins because fixing the actual problem is just too hard.

zbentley · 2 days ago

> fixing the actual problem is just too hard.

I think it’s laziness, not difficulty. That’s not meant to be snide or glib: I think gaining expertise in how to package and deploy non-containerized applications isn’t difficult or unattainable for most engineers; rather, it’s tedious and specialized work to gain that expertise, and Docker allowed much of the field to skip doing it.

That’s not good or bad per se, but I do think it’s different from “pre-container deployment was hard”. Pre-container deployment was neglected and not widely recognized as a specialty that needed to be cultivated, so most shops sucked at it. That’s not the same as “hard”.

skydhash · 2 days ago

It's not even laziness or expertise. A lot of people are against learning conventions. They want their way, meaning what works on their computer. That's why they like the current scope of package managers, docker, flatpack,... They can do what they want in the sandbox provided however nonsensical and then ship the whole thing. And it will break if you look at it the wrong way.

Bratmon · 2 days ago

I mean, walking through a door is easier than tearing down a wall, walking through it, and rebuilding the wall. That doesn't mean the latter is a good idea.

goodpoint · 2 days ago

...while completely forgetting about security

siva7 · 19 hours ago

I think it was all about efficient large-scale server machine reproducibility and not about making local development workflows easier. If it were, I'm sure Docker would look much friendlier for that case. But it still advanced to an industry standard even for small software departments, because everyone used it.

curt15 · 2 days ago

>'then we'll just ship your machine to production'

Minus the kernel of course. What is one to do for workloads requiring special kernel features or modules?

avsm · 2 days ago

Those are global to the machine; generally not an issue and seccomp rules can filter out undesirable syscalls to other containers. But GPU kernel/userspace driver matching has been a huge headache; see https://cacm.acm.org/research/a-decade-of-docker-containers/... in the article for how the CDI is (sort of) helping standardise this.

Skywalker13 · 2 days ago

Oh, thank you... I'm not alone... I'm so tired of seeing crappy containers with pseudo service management handled by Dockerfiles, used instead of proper and serious packaging like that of many venerable Linux distributions.

hwhshs · 2 days ago

In 2002 I used to think why cant they package a website. These .doc installation instructions are insane! What a waste of someones time.

I sort of had the problem in mind. Docker is the answer. I was not clever enough to have invented it.

If I did I would probably have invented octopus deploy as I was a Microsoft/.NET guy.

forrestthewoods2 days ago

Linux user space is an abject disaster of a design. So so so bad. Docker should not need to exist. Running computer programs need not be so difficult.

esafak2 days ago

Who does it right?

jjmarr2 days ago

Nix and Guix.

Good luck convincing people to switch!

jfjasdfuw2 days ago

Plan9 or Inferno.

ufocia20 hours ago

Wow! That's digging deep in history. Has Plan9 been updated for modern hardware?

forrestthewoods2 days ago

Windows is an order of magnitude better in this regard.

whateverboat2 days ago

Windows.

mrbluecoat2 days ago

> Docker repurposed SLIRP, a 1990s dial-up tool originally for Palm Pilots, to avoid triggering corporate firewall restrictions by translating container network traffic through host system calls instead of network bridging.

Genuinely fascinating and clever solution!

mmh00002 days ago

Until recently, Podman used slirp4net[1] for its container networking. About two years ago, they switched over to Pasta[2][3] which works quite a bit differently.

[1] https://github.com/rootless-containers/slirp4netns

[2] https://blog.podman.io/2024/03/podman-5-0-breaking-changes-i...

[3] https://passt.top/passt/about/#pasta-pack-a-subtle-tap-abstr...

toast02 days ago

I don't think SLIRP was originally for Palm Pilots, given it was released two years before.

SLIRP was useful when you had a dial-up shell and they wouldn't give you SLIP or PPP, or it would cost extra. SLIRP is just a userspace program that uses the socket APIs, so as long as you could run your own programs and make connections to arbitrary destinations, you could make a dial script to connect your computer up like you had a real PPP account. No incoming connections though (AFAIK), so you weren't really a peer on the internet; a foreshadowing of ubiquitous NAT/CGNAT, perhaps.
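
To illustrate (a toy sketch, not slirp's actual code; real slirp parsed SLIP-framed packets off the serial line rather than relaying TCP streams), here is a single-connection userspace relay built on nothing but the ordinary sockets API:

```python
import socket
import threading

def listener() -> socket.socket:
    """Bind a listening socket on a free loopback port."""
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    s.listen(1)
    return s

def serve_echo(srv: socket.socket) -> None:
    """Stand-in for some upstream service: echo one message back."""
    conn, _ = srv.accept()
    conn.sendall(conn.recv(4096))
    conn.close()

def relay_once(srv: socket.socket, target) -> None:
    """Accept one connection and shuttle the bytes onward using plain
    connect()/send()/recv() calls. This is the slirp idea in miniature:
    traffic leaves as ordinary outbound socket activity, so no raw
    packet access or special privileges are needed."""
    client, _ = srv.accept()
    upstream = socket.create_connection(target)
    upstream.sendall(client.recv(4096))
    client.sendall(upstream.recv(4096))
    client.close()
    upstream.close()

echo_srv, relay_srv = listener(), listener()
threading.Thread(target=serve_echo, args=(echo_srv,), daemon=True).start()
threading.Thread(target=relay_once,
                 args=(relay_srv, echo_srv.getsockname()), daemon=True).start()

# Talk to the relay as if it were the real service.
with socket.create_connection(relay_srv.getsockname()) as c:
    c.sendall(b"ping")
    reply = c.recv(4096)
print(reply.decode())
```

Since everything here is ordinary client socket activity, the same program works on any shell account that allows outbound connections, which is why the trick worked over a dial-up shell then and past corporate firewalls later.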

avsm2 days ago

> I don't think SLIRP was originally for palm pilots, given it was released two years before.

That's a mistake indeed; "popularised by" might have been better. Before my beloved PalmPilot arrived one Christmas, I was only using SLIRP to ninja Netscape and MUD sessions onto a dialup connection, which wasn't a very mainstream use.

ufocia21 hours ago

SLIP not PPP. Those are two very different protocols. Otherwise your comment is fairly accurate. There were dial-in terminals, whether more expensive or not, that could be repurposed for generic Internet access.

I don't recall whether you could technically open listening ports, at least for a single connection, using slirp, but many, if not all systems, limited opening ports under 1024 to superusers, which (would have?) made running servers on standard ports more difficult.

In any case, I'm glad that you pointed out ACM's apparent revisionist history. They should know better.

redhanuman2 days ago

Repurposing a Palm Pilot dial-up tool to sneak container traffic past enterprise firewalls is unhinged, and yet it worked. The best infrastructure hacks are never clever in the moment; they are just desperate. The cleverness only shows up after someone else has to maintain it.

avsm2 days ago

VPNKit (the SLIRP component) has been remarkably bug free over the years, and hasn't been much of a burden overall.

There was another component that we didn't have room to cover in the article that has been very stable (for filesystem sharing between the container and the host) that has been endlessly criticised for being slow, but has never corrupted anyone's data! It's interesting that many users preferred potential-dataloss-but-speed using asynchronous IO, but only on desktop environments. I think Docker did the right thing by erring on the side of safety by default.

Normal_gaussian2 days ago

Exactly. "so I hung the radiator out the window" vibes.

arcanemachiner2 days ago

I am trying to decipher the meaning of your comment, to no avail.

justsomehnguy1 day ago

Except it's plain NAT, which was named 'bridge' because there were no sysadmins around to slap some sense into the authors. Slirp is for 'unprivileged network namespaces', which is for the 'rootless' variants of Docker, i.e. attaching a container network to the host without needing root-level privileges.

avsm2 days ago

An extremely random fact I noticed when writing the companion article [1] to this (an OCaml experience report):

    "Docker, Guix and NixOS (stable) all had their first releases
    during 2013, making that a bumper year for packaging aficionados."
Now we get coding agent updates every week, but has there been a similar year since 2013 where multiple great projects all came out at the same time?

[1]: https://anil.recoil.org/papers/2025-docker-icfp.pdf

ezst1 day ago

hg and git came weeks apart and fossil shortly after if that counts

esseph2 days ago

TBH I feel as if only docker belongs in that list. Guix and nix have users, sure, but not remotely like docker.

NewJazz2 days ago

Yeah they are way better than docker for packaging

atomicnumber31 day ago

Why is docker used by far the most, then?

tzs2 days ago

I've not done serious networking stuff for over two decades, and never in as complex an environment as that in the article, so the networking part of the article went pretty much over my head.

What I want to do when running a Docker container on Mac is to be able to have the container have an IP address separate from the Mac's IP address that applications on the Mac see. No port mapping: if the container has a web server on port 80 I want to access it at container_ip:80, not 127.0.0.1:2000 or something that gets mapped to container port 80.

On Linux I'd just use Docker bridged networking, and I believe that would work, but on Mac that just bridges to the Linux VM running under the hypervisor rather than to the Mac.

Is there some officially recommended and supported way to do this?

For a while I did it by running WireGuard on the Linux VM to tunnel between that and the Mac, with forwarding enabled on the Linux VM [1]. That worked great for quite a while, but then stopped and I could not figure out why. Then it worked again. Then it stopped.

I then switched to this [2] which also uses WireGuard but in a much more automated fashion. It worked for quite a while, but also then had some problems with Docker updates sometimes breaking it.

It would be great if Docker on Mac came with something like this built in.

[1] https://news.ycombinator.com/item?id=33665178

[2] https://github.com/chipmk/docker-mac-net-connect

djs552 days ago

(co-author of the article and Docker engineer here) I think WireGuard is a good foundation to build this kind of feature. Perhaps try the Tailscale extension for Docker Desktop which should take care of all the setup for you, see https://hub.docker.com/extensions/tailscale/docker-extension

BTW are you trying to avoid port mapping because ports are dynamic and not known in advance? If so you could try running the container with --net=host and in Docker Desktop Settings navigate to Resources / Network and Enable Host Networking. This will automatically set up tunnels when applications listen on a port in the container.

Thanks for the links, I'll dig into those!

tzs1 day ago

I'm basically using Docker on Mac as an alternative to VMware Fusion with a much faster startup time and more flexible directory sharing.

I want to avoid port mapping because I already have things on the Mac using the ports that my things in the container are using.

I have a test environment that can run in a VM, a container, or on an actual machine like an RPi. It has copies of most of our live systems, with customer data removed. It is designed so that, as much as possible, things inside it run with the exact same configuration they do live. The web sites in it are on ports 80 and 443, MySQL/MariaDB is on 3306, and so on. Similarly, when I'm working on something that needs to access those services from outside the test system, I want to use, as much as possible, the same configuration they will use when live, so they want to connect to those same port numbers.

Thus I need the test environment to have its own IP that the Mac can reach.

Or maybe not... I just remembered something from long ago. I wanted a simpler way to access things inside the firewall at work than using whatever crappy VPN we had, so I made a poor man's VPN with ssh. If I needed to access things on, say, ports 80 and 3306 on host foo at work, I'd ssh to somewhere I could ssh to inside the firewall at work, setting that up to forward say local 10080 and 13306 to foo:80 and foo:3306. I'd add an /etc/hosts entry for foo giving it some unused address like 10.10.10.1. Then I'd use ipfw to set it up so that any attempt to connect to 10.10.10.1:80 or 10.10.10.1:3306 would get forwarded to 127.0.0.1:10080 or 127.0.0.1:13306, respectively. That worked great until Apple replaced ipfw with something else. By then we had a decent VPN for work, so I no longer needed my poor man's VPN and didn't look into how to do this in whatever replaced ipfw.

Learning how to do that in whatever Apple now uses might be a nice approach. I'll have to look into that.
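
For the record, ipfw's successor on macOS is pf (pfctl / pf.conf), and the same trick should translate. An untested sketch, with hypothetical hosts "gateway" and "foo", and the rules loaded standalone for brevity (a real setup would anchor them into /etc/pf.conf rather than replace the ruleset):

```shell
# Forward local ports over ssh to the machine behind the firewall.
ssh -N -L 10080:foo:80 -L 13306:foo:3306 user@gateway &

# Give foo a fake local address and route it to loopback...
echo "10.10.10.1 foo" | sudo tee -a /etc/hosts
sudo route add 10.10.10.1 127.0.0.1

# ...then have pf redirect the standard ports to the tunnels.
sudo tee /tmp/foo-tunnel.conf <<'EOF'
rdr pass on lo0 inet proto tcp from any to 10.10.10.1 port 80 -> 127.0.0.1 port 10080
rdr pass on lo0 inet proto tcp from any to 10.10.10.1 port 3306 -> 127.0.0.1 port 13306
EOF
sudo pfctl -e -f /tmp/foo-tunnel.conf
```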

johanbcn2 days ago

I don't have a Mac environment, but I have researched a bit for devex purposes, and I would go with the Colima project as an open source solution for containers on Mac. Have you tried it?

tzs2 days ago

I'll look into that. Thanks.

the__alchemist2 days ago

I'm optimistic we will succeed in efforts to simplify Linux application/dependency compatibility instead of relying on abstractions that merely work around it.

mihaelm2 days ago

Maybe if you only look at it through the lens of building an app/service, but containers offer so much more than that. By standardizing their delivery through registries and management through runtimes, a lot of operational headaches just go away when using a container orchestrator. Not to mention better utilization of hardware since containers are more lightweight than VMs.

hrmtst938371 day ago

From what I've seen, standardizing delivery through registries and runtimes does reduce friction, but containers mostly move operational complexity around rather than eliminate it. You still get image sprawl, registry auth and storage quotas, supply chain issues like unsigned images, runtime quirks between runc and crun, and networking and storage headaches when an orchestrator like Kubernetes turns deployment into an availability and observability problem.

If you want the gains mentioned, you have to invest in governance: immutable tags, automated image scanning with Trivy, signing with cosign, and sensible image retention policies in your registry. Accept the tradeoff that you will be operating a distributed control plane and therefore need real observability like Prometheus plus request and limit discipline or you'll get the utilization benefits in graphs only while production quietly melts down.

the__alchemist2 days ago

Hah, indeed, that's my perspective. I'm used to being able to compile a program, distribute the executable, and have it "just work" across Windows, Linux, and macOS (with the appropriate compile targets set).

Hackbraten2 days ago

> Not to mention better utilization of hardware

When compared to a VM, yes. But shipping a separate userspace for each small app is still bloat. You can reuse software packages and runtime environments across apps. From an I/O, storage, and memory utilization point of view, it feels baffling to me that containers are so popular.

Gigachad2 days ago

"bloat" has always been the last resort criticism from someone who has nothing valid. Containers are incredibly light, start very rapidly, and have such low overhead in general that the entire industry has been using them.

Docker containers also do reuse shared components: layers that are shared between containers are not redownloaded. The stuff that's unique at the bottom is basically just going to be the app you want to run.
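
That sharing falls out of layers being content-addressed: a blob is stored and fetched by the digest of its bytes, so identical layers coincide. A toy sketch of the idea (not Docker's actual code):

```python
import hashlib

def push_layer(blob: bytes, registry: dict) -> str:
    """Store a layer blob under its content digest; identical blobs
    map to the same digest and are stored only once."""
    digest = "sha256:" + hashlib.sha256(blob).hexdigest()
    registry.setdefault(digest, blob)
    return digest

registry = {}
base = b"debian base filesystem layer"
# Two images built FROM the same base share its layer.
image_a = [push_layer(base, registry), push_layer(b"app A files", registry)]
image_b = [push_layer(base, registry), push_layer(b"app B files", registry)]

assert image_a[0] == image_b[0]   # same base layer, same digest
assert len(registry) == 3         # base stored once, plus two app layers
```

A pull client does the same comparison in reverse: any digest already present locally is skipped, which is why only the unique layers actually transfer.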

esseph2 days ago

> From an I/O, storage, and memory utilization point of view, it feels baffling to me that containers are so popular.

Why? It's not virtualization, it's containerization. It's using the host kernel.

Containers are fast.

__MatrixMan__2 days ago

Agreed.

I've recently switched from docker compose to process compose and it's super nice not to have to map ports or mount volumes. What I actually needed from docker had to do less with containers and more with images, and nix solves that problem better without getting in the way at runtime.

onei2 days ago

Assuming I've found the right process-compose [1], it struck me as having much overlap with the features of systemd. Or at least, I would tend to reach for systemd if I wanted something to run arbitrary processes. Is there something additional/better that process-compose does for you?

[1]: https://github.com/F1bonacc1/process-compose

__MatrixMan__2 days ago

That's the one, although I tend to reference it through https://github.com/juspay/services-flake because that way I end up using the community-maintained configs for whatever well-known services I've enabled (I'll use postgres as an example below, but there are many: https://community.flake.parts/services-flake/services)

What process-compose gives me is a single parent with all of that project's processes as children, and a nice TUI/CLI for scrolling through them to see who is happy/unhappy and interrogating their logs, and when I shut it down all of that project's dependencies shut down. Pretty much the same flow as docker-compose.

It's all self-contained so I can run it on MacOS and it'll behave just the same as on Linux (I don't think systemd does this, could be wrong), and without requiring me to solve the docker/podman/rancher/orbstack problem (these are dependencies that are hard to bundle in nix, so while everything else comes for free, they come at the cost of complicating my readme with a bunch of requests that the user set things up beforehand).

As a bonus, since it's a single parent process, if I decide to invoke it through libfaketime, the time is inherited by the subprocesses, so it's consistently faked in the database, in the services, and in observability tools...

My feeling for systemd is that it's more for system-level stuff and less for project-level dependencies. Like, if I have separate projects which need different versions of postgres, systemd commands aren't going to give me a natural way to keep track of which project's postgres I'm talking about. process-compose, however, will show me logs for the correct postgres (or whatever service) in these cases:

    ~/src/projA$ process-compose process logs postgres
    ~/src/projB$ process-compose process logs postgres
This is especially helpful because AI agents tend to be scoped to the working directory. So if I have one instance of Claude Code on each monitor, each in its own directory, whichever one tries to look at postgres logs will end up looking at the correct postgres's logs without even having to know that there are separate ones running.

Basically, I'm allergic to configuring my system at all. All dependencies besides nix, my text editor, and my shell are project-level dependencies. This makes it easy to hop between machines and not really care about how they're set up. Even on production systems, I'd rather just clone the repo and `nix run` in that dir (it then launches process-compose, which makes everything just like it was in my dev environment). I am, however, not in charge of any production systems, so perhaps I'm a bit out of touch there.

Bratmon2 days ago

I'm curious why. To me "We updated our library to change some things in a way that's an improvement on net but only mostly backwards compatible" seems like an extremely common instinct in software development. But in an environment where people are doing that all the time, the only way to reliably deploy software is to completely freeze all your direct and indirect dependencies at an exact version. And Docker is way better at handling that than traditional Linux package managers are.

Why do you think other tools will make a comeback?

the__alchemist1 day ago

You can write any software you want without worrying about depending on a specific set of system dependencies. I like software that "just works", and making something that will give you inscrutable linking or dependency errors if the OS isn't set up just so is a practice I think should go away.

Bratmon16 hours ago

But that's exactly what Docker provides.

Joker_vD2 days ago

I am also optimistic we will succeed in efforts to properly annotate the data on the Internet with useful and accurate meta-data and achieve the semantic web vision instead of relying on search engines and LLMs.

andrewmcwatters2 days ago

[dead]

zacwest2 days ago

The historic information in here was really interesting, and a great example of an article rapidly expanding in scope and detail. How they combatted corporate IT “security” software by pretending to be a VPN is quite unexpected.

netrem2 days ago

With ML and AI now being pushed into everything, images have ballooned in size. Just having torch as a dependency is some multiple gigabytes. I miss the times of aiming for 30MB images.

Have others found this to be the case? Perhaps we're doing something wrong.

a_t481 day ago

I’ve seen images that accidentally install tensorflow twice, too. It wouldn’t be so bad if large files were shared between layers but they aren’t. It’s bad enough that I’m building an alternative registry and snapshotter with file level dedupe to deal with it.

netrem1 day ago

Sounds like it would be useful. Many common dev workflows started falling apart when it's not just tiny code files they need to deal with. In the Python world, uv has helped massively; with pip we were seeing 30+ minute build times on fairly simple images with torch.

a_t481 day ago

uv is one of my inspirations. Take a familiar interface, do the same thing but better/faster.

Joe_Cool2 days ago

I have an immutable Alpine Linux running from an ISO that includes a few docker containers (mostly ruby and php). All in about 750MB.

brtkwr2 days ago

I realise Apple containers haven't quite taken off as expected, but their omission from the article stands out. Nice that it mentions alternative approaches like podman and Kata though.

avsm2 days ago

> but omission from the article stands out.

(article author here)

Apple containers are basically the same as how Docker for Mac works; I wrote about it here: https://anil.recoil.org/notes/apple-containerisation

Unfortunately Apple managed to omit the feature we all want that only they can implement: namespaces for native macOS!

Instead we got yet another embedded-Linux-VM which (imo) didn't really add much to the container ecosystem except a bunch of nice Swift libraries (such as the ext2 parsing library, which is very handy).

with1 day ago

docker is bloated. i'm almost certain half of every image is dead weight. unused apt packages, full distros for a single binary, shell configs nobody touches. but the incentive is to make things work, not make them small. so bloat wins.

still, i use it every day and i don't see what replaces it. every "docker killer" solves one problem while ignoring the 50 things docker does well enough.

stephbook1 day ago

Docker released "Docker Hardened Images" last year and made them free. They contain less bloat.

Buying more RAM for your server or only touching a select few images that are run most often is also a way to make things work. It might not be the most elegant software engineering approach, but it just works.

with1 day ago

nice share!

rando12342 days ago

Didn't Vagrant/Vagrantfiles precede Docker? Unclear why that would be the key to its success if so.

idoubtit21 hours ago

Vagrant was a layer over virtualization, with hypervisors like VirtualBox, KVM, or VMware. The article mentions virtualization and virtual machines several times, e.g. "unlike the virtual machine experience (which involved installing an entire operating system)".

For instance, deploying a complex Python application was hell, for lack of proper packaging. Using Vagrant was easy, but the image was huge (a full system) and the software slow (full virtualization), among other problems. Containers like LXC and Docker were a bit easier to set up, much smaller, almost as performant as native packaging, and with a larger spectrum of features for sharing things with the host (e.g. overlay mounts).

rando123418 hours ago

Sure - I was just referring to the use of a Vagrantfile to configure the VM. The start of the article seemed to be pushing the Dockerfile itself as a big innovation.

rr8082 days ago

Back then I didn't foresee the 22GB image our jupyter/ML stack would be in 2026. There must be a better way.

therealdrag02 days ago

Is that dockers fault? A basic Linux image is like 400MB right?

hei-lima1 day ago

Great. I started developing in the Docker era, and while I can see some flaws, it is one of the easiest, most reliable tools I constantly use. I can't imagine how people dealt with those problems before Docker

pmarreck1 day ago

A Docker image is an accidentally-correct cache of a nondeterministic build, and nothing will ever change that

Surac21 hours ago

I always wanted to use Docker. I installed it, and every time there was a feature I needed that wasn't included. Perhaps it's because I tried to use a technology that's not commonly used.

oceansky20 hours ago

What kind of features?

phplovesong2 days ago

We have shipped unikernels for the last decade. Zero sec issues so far. I highly recommend looking into the unikernel space for a docker alternative. MirageOS being a good start.

avsm2 days ago

cool! What services have you shipped as unikernels? Docker doesn't have to be an alternative; it can help with the build/run pipeline for them too: https://www.youtube.com/watch?v=CkfXHBb-M4A (Dockercon 2015!)

phplovesong23 hours ago

Mostly finance stuff, and all the sensitive stuff that comes with it.

But the main benefit is the attack surface is greatly reduced when running a unikernel. Also we use way less resources and get really good perf.

eudamoniac1 day ago

I've been a professional in the industry for over a decade and I've still not found any meaningful benefit for learning or using containerization in the real world. I just install my dependencies with good old fashioned version managers (asdf) and I develop the project. I ignore all docker documentation, and everything works fine. When I try to use containers to develop, it's seemingly two dozen gotchas that sum up to my IDE needing endless configuration to function properly. I don't get it.

JodieBenitez1 day ago

I don't get it either. Worse: it even makes developers lazy, as they don't put in the effort to ensure their development setup is portable.

tripledry1 day ago

What's a good alternative if I want self-hosting and convenience?

I have some hobby sites I host on a VM and currently I use docker-compose mainly because it's so "easy" to just ssh into the machine and run something like "git pull && docker-compose up" and I can have whatever services + reverse proxy running.

If I were to sum up the requirements: run one command, have it either succeed or fail in its entirety, with minimal to no risk of messing up the environment during deployment.

Nix seems interesting but I don't know how it compares (yet to take a good look at it).

JodieBenitez1 day ago

I have some hobby sites I host on a raspberry pi and currently I use make mainly because it's so "easy" to just ssh into the machine and run something like "pip install thing.whl && sudo systemctl restart thing" and I can have whatever services + reverse proxy running.

eudamoniac20 hours ago

I don't know, that set of requirements sounds like containers are a good fit. I don't have an alternative for you. I would just ssh into the server and run the commands needed to update/start the services; it wouldn't be one command and it is not impossible to mess up.

I will say that consuming other people's services that I don't intend to develop on is easier with containers. I use podman for my jellyfin and Minecraft servers based on someone else's configs. My only issue with them is the complexity during development.

scubbo1 day ago

Curious what you build/work on? If you're only shipping software library or tools, then yeah, your perspective makes sense - `asdf` or `mise` seem superior to `devcontainer`s. But I wouldn't want to deploy a web application without Dockerizing it.

eudamoniac1 day ago

Mostly web applications actually. Most web stuff is not that complicated. Usually the deployment itself is from docker (ops insists) but I just do development without it and I've never had a problem. I understand in theory I could get mismatched versions of things from production and thereby introduce a bug; in practice this has never happened a single time.

politelemon2 days ago

Somewhere along the line they started prioritising Docker Desktop over Docker. It's a bit jarring to see new features come to Desktop before they come to Linux, such as the new sandbox features.

Is there any insight into this? I would have thought the opposite, where developers on the platform that made Docker succeed are given the first preview of features.

krapht2 days ago

Paying customers use docker desktop.

mberning2 days ago

I remember being pretty skeptical of "dockerizing" applications when it first became popular. But I've come around to it, if for no other reason than that it provided an easily understandable concept which anyone could grasp and, more importantly, use. The onramp to using Docker is very gentle.

oceansky20 hours ago

Shoutout to Rancher Desktop

amelius1 day ago

I can remember the first time I used docker, and the layer limit was a big bummer that made me stop using it initially.

JodieBenitez1 day ago

A decade of seeing my colleagues machines crumble under the weight of containers.

> It is also something developers seem to enjoy using

Count me out.

einpoklum1 day ago

> While convenient, this single shared filesystem makes it very difficult to install multiple applications at the same time if they have conflicting dynamic library requirements.

No, it does not make it "very difficult".

And like other comments here grumble - this rationale is essentially a sanctification of the sentiment of "It builds and runs on my system and I can't be bothered to make fewer assumptions so that it runs on yours".

arikrahman2 days ago

I'm hoping the next decade introduces more declarative workflows with Nix and work with docker to that end.

INTPenis2 days ago

I thought it was 2014 when it launched? The article says the command line interface hasn't changed since 2013.

avsm2 days ago

We first submitted the article to the CACM a while ago. The review process takes some time and "Twelve years of Docker containers" didn't have quite the same vibe.

neop1x23 hours ago

No mention of OpenVZ or LXC

benatkin2 days ago

> If you are a developer, our goal is to make Docker an invisible companion

I want it not just to be invisible but to be missing. If you have Kubernetes, including locally with k3s or similar, it won't be used to run containers anyway. However, it still often is used to build OCI images. Podman can fill that gap. Its Containerfile format has the same syntax but is simpler than Docker's builds, which now provide build orchestration features similar to earthly.dev that I think are better kept separate.

heraldgeezer2 days ago

I still haven't learned it despite being in IT; it's so embarrassing. Yes, I know about the 2-3h YouTube tutorials, but just...

vegabook1 day ago

Nix and never looked back.

nudpiedo1 day ago

I often want to try, but never get the time. Aren't you missing packages and solutions that are off the shelf and ready to use?

vegabook1 day ago

Increasingly flake.nix is present in good repos. Zillions of packages are available. But yes, you will need to learn and sometimes File System Hierarchy assumptions need to be worked around. The rewards however, dominate the (few) inconveniences once you know your way around.

1970-01-012 days ago

I now wonder if we'll end up switching it all back to VMs so the LLMs have enough room to grow and adapt.

skybrian2 days ago

Maybe, but the install will often be done using a Dockerfile.

callamdelaney2 days ago

The fact that docker still, in 2026, will completely overwrite iptables rules silently to expose containers to external requests is, frankly, fucking stupid.

netrem2 days ago

Indeed. I've had even experienced sysadmins be surprised that their ufw setup will be ignored.
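
The escape hatch Docker documents for this is the DOCKER-USER chain: Docker creates it, never flushes it, and evaluates it before its own forwarding rules, so rules placed there survive. A sketch (the interface name and subnet here are just examples):

```shell
# Drop traffic reaching published container ports unless it comes from
# the LAN. The second insert lands above the first, so replies to
# already-established connections are accepted before the drop rule runs.
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

This doesn't make ufw aware of Docker, but it does give you one chain where your restrictions reliably apply.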

forrestthewoods2 days ago

I am so thoroughly convinced that Docker is a hacky-but-functional solution to an utterly failed userspace design.

Linux user space decided to try to share dependencies. Docker obliterates this design goal by shipping dependencies, but stuffing them into the filesystem as if they were shared.

If you’re going to do this then a far far far simpler solution is to just link statically or ship dependencies adjacent to the binary. (Aka what windows does). Replicating a faux “shared” filesystem is a gross hack.

This is a distinctly Linux problem. Windows software doesn’t typically have this issue. Because programs ship their dependencies and then work.

Docker is one way to ship dependencies. So it’s not the worst solution in the world. But I swear it’s a bad solution. My blood boils with righteous fury anytime anyone on my team mentions they have a 15 minute docker build step. And don’t you damn dare say the fix to Docker being slow is to add more layers of complexity with hierarchical Docker images ohmygodiswear. Running a computer program does not have to be hard I promise!!

ahnick2 days ago

Okay, so what's the best solution? What's even just a better solution than Docker? I mean really truly lay out all the details here or link to a blog post that describes in excruciating detail how they shipped a web application and maintained it for years and was less work than Docker containers. Just saying "a far far simpler solution is to just link statically or ship dependencies adjacent to the binary" is ignoring huge swaths of the SDLC. Anyone can cast stones, very few can actually implement a better solution. Bring the receipts.

BirAdam1 day ago

Given that distributions are the distributors of packages and not the upstream developers, I think static linking is fine as is dep-shipping. The now dead Clear Linux was great at handling package distribution.

Personally, I think docker is dumb, so is AppImage, so is FlatPak, so are VMs… honestly, it’s all dumb. We all like these various things because they solve problems, but they don’t actually solve anything. They work around issues instead. We end up with abstractions and orchestrations of docker, handling docker containers running inside of VMs, on top of hardware we cannot know, see, control, or inspect. The containers are now just a way to offer shared hosting at a price premium with complex and expensive software deployment methods. We are charged extortionate prices at every step, and we accept it because it’s convenient, because these methods make certain problems go away, and because if we want money, investors expect to see “industry standards.”

forrestthewoods2 days ago

The first half of my career was spent shipping video games. There is no such thing as shipping a game in Docker. Not even on Linux. You depend on a minimum version of glibc and then ship your damn dependencies.

The more recent half of my career has been focused on ML and now robotics. Python ML is an absolute clusterfuck. It is close to getting resolved with uv and Pixi. The trick there is to include your damn dependencies… via symlinks to a shared cache.

Any program or pipeline that relies on whatever arbitrary ass version of Python is installed on the system can die in a fire.

That’s mostly about deploying. We can also talk about build systems.

The one true build system path is a monorepo that contains your damn dependencies. Anything else is wrong and evil.

I’m also spicy and think that if your build system can’t crosscompile then it sucks. It’s trivial to crosscompile for Windows from Linux because Windows doesn’t suck (in this regard). It’s almost impossible to crosscompile to Linux from Windows because Linux userspace is a bad, broken, failed design. However Andrew Kelley is a patron saint and Zig makes it feasible.
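The glibc-minimum-version point and the Zig point combine neatly: `zig cc` lets you pin the target triple and glibc version from any host OS, since Zig bundles the headers and stub libraries. A sketch (the triple and version here are illustrative, not a recommendation):

```shell
# Cross-compile a C program for x86_64 Linux against glibc 2.17
# from any host machine, without a Linux sysroot.
zig cc -target x86_64-linux-gnu.2.17 -o app main.c
```

The resulting binary then runs on any distro whose glibc is at least the version you named.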

Use a monorepo, pretend the system environment doesn’t exist, link statically/ship adjacent so/dll.

Docker clearly addresses a real problem (that Linux userspace has failed). But Docker is a bad hack. The concept of trying to share libraries at the system level has objectively failed. The correct thing to do is to not do that, and don’t fake a system to do it.

Windows may suck for a lot of reasons. But boy howdy is it a whole lot more reliable than Linux at running computer programs.

curt151 day ago

> The trick there is to include your damn dependencies… via symlink to a shared cache.

Isn't composefs[1] aiming to do basically just that?

[1] https://github.com/composefs/composefs

jt219021 hours ago

What's your take on WASM/WASI?

forrestthewoods18 hours ago

Little to no interest.

Part of it is ignorance. I write a lot of C++. They support C++… but with all kinds of restrictions with respect to memory and threading?

Doesn’t seem like a particularly interesting angle to me.

user39393822 days ago

It solves a practical problem that’s obvious. And on one hand the practical where-we’re-at-now is all that matters; that’s a legitimate perspective.

There’s another one, at least IMHO, that this entire stack from the bottom up is designed wrong and every day we as a society continue marching down this path we’re just accumulating more technical debt. Pretty much every time you find the solution to be, “ok so we’ll wrap the whole thing and then…” something is deeply wrong and you’re borrowing from the future a debt that must come due. Energy is not free. We tend to treat compute like it is.

Maybe I’m in a big club but I have a vision for a radically different architecture that fixes all of this and I wish that got 1/2 the attention these bandaids did. Plan 9 is an example of the theme if not the particular set of solutions I’m referring to.

gogasca2 days ago

Something I have recently explored is the optimization of Docker layers and startup time for large containers. Using shared storage, tar layer preloading, and overlayBD https://github.com/codeexec/overlaybd-deploy is something I would like to see supported more natively. Great article.

a_t481 day ago

This is neat. I’m about to dive into snapshotters myself, any pitfalls to watch out for?

nudpiedo1 day ago

This was AI generated, right? And saying that in 2010 Linux was complex, that everything needed to be compiled across virtual machines for cloud solutions, and that Docker solved that… really, in what universe did people live?

Docker made it convenient to distribute functionality that was planned as a stack of services, rather than packaging it appropriately for each distribution and handing configuration off to an administrator. That’s it. And it has many inconveniences. And Linux, as well as BSD, had containers before, and chroots, and many other things.

brcmthrowaway2 days ago

I dont use Dockerfile. Am i slumming it?

vvpan2 days ago

Probably? How do you deploy?

rglover2 days ago

Just pull a tarball from a signed URL, install deps, and run from systemd. Rolls out in 30 seconds, remarkably stable. Initial bootstrap of deps/paths is maybe 5 minutes.
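For context, the moving parts of such a setup are small. A hypothetical unit file (every path and name here is invented):

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=myapp, deployed as a tarball
After=network-online.target

[Service]
WorkingDirectory=/opt/myapp/current
ExecStart=/opt/myapp/current/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

A rollout is then: untar into a new versioned directory, flip the `current` symlink, `systemctl restart myapp`. Rollback is flipping the symlink back.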

hard_times1 day ago

So is it 5 minutes or 30 seconds then? And yes, you're missing out.

Docker images come in layers, which may or may not change depending on your release, and may or may not be shared across services.
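A minimal Dockerfile sketch shows where that sharing comes from (image and file names illustrative): layers cache top-down, so only the instructions at or below the first change are rebuilt, and identical layers are stored once across images.

```dockerfile
# Base layer: shared on disk by every service built from the same base.
FROM python:3.12-slim

# Dependency layer: rebuilt only when requirements.txt changes.
COPY requirements.txt .
RUN pip install -r requirements.txt

# App layer: changes on every release, but everything above is reused.
COPY . .
CMD ["python", "app.py"]
```

That ordering (dependencies before application code) is what makes routine releases cheap: the expensive `pip install` layer is a cache hit.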

rglover21 hours ago

Read what I wrote. The answer to your question is there.