This post was submitted on 28 Apr 2025
265 points (95.5% liked)

What is Docker? (lemmy.world)
submitted 1 week ago* (last edited 1 week ago) by Jofus@lemmy.world to c/selfhosted@lemmy.world
 

Hi! I'm new to self-hosting. Currently I am running a Jellyfin server on an old laptop. I am very curious to host other things in the future, like Immich or other services. I see a lot of mentions of a program called Docker.

Searching for this on the internet, I am still not very clear on what it does.

Could someone explain this to me like I'm stupid? What does it do, and why would I need it?

Also, what other services might be interesting to self-host in the future?

Many thanks!

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

[–] Professorozone@lemmy.world 1 points 6 days ago

Thank you for the thorough response. After looking carefully at what you wrote, I didn't really see a difference between the terms "self-hosting" and "home network".

You said you have software that automatically downloads media. The way I see this, using movies for instance: if I own the movies and have them on my machine, then I can stream them over my network and have full control. Whereas if I "own" them on Amazon and stream them from there, they can track the viewing experience, push ads, or even remove the content completely. I understand that. But if I want a NEW movie, I'm back to Amazon to get it in the first place (or Netflix, or Walmart, etc. I get it). In fact, I've personally started actually buying discs of the movies/music I like most so that they can't really be taken away and I can enjoy them even without an Internet connection. Am I missing something? Unless of course the media you are downloading is pirated.

I know I'm asking what seems to be a huge question but I'm really only asking for a broad description, sort of an ELI5 thing.

[–] grue@lemmy.world 93 points 1 week ago (12 children)

A program isn't just a program: in order to work properly, the context in which it runs — system libraries, configuration files, other programs it might need to help it such as databases or web servers, etc. — needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.
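
As a rough sketch of what that "shipped machine" looks like in practice, here's a minimal Dockerfile (the app name and packages here are hypothetical, just to show the shape of it):

```dockerfile
# Start from a known-good base system, pinned so it never changes under you
FROM debian:12-slim

# Install the exact system libraries the developer tested against
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates libsqlite3-0 \
    && rm -rf /var/lib/apt/lists/*

# Drop the application into the image (hypothetical binary)
COPY myapp /usr/local/bin/myapp

# The one program this "machine" exists to run
CMD ["myapp"]
```

Whoever runs the resulting image gets the developer's environment, libraries and all, regardless of what their own distro looks like.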

[–] anachrohack@lemmy.world 15 points 1 week ago (2 children)

Docker is not a virtual machine, it's a fancy wrapper around chroot

[–] possiblylinux127@lemmy.zip 13 points 1 week ago (2 children)

No, chroot is kind of its own thing

It is just a kernel namespace

[–] fishpen0@lemmy.world 7 points 1 week ago

Yes, technically chroot and jails are wrappers around kernel namespaces / cgroups and so is docker.

But containers were born in a post-chroot era as an attempt at making the same functionality much more user-friendly, focused on bundling cgroups and namespaces into a single superset, whereas chroot on its own is only namespaces. This is super visible in early Docker, where you could not individually dial those settings. It's still a useful way to explain containers in general, in the sense that comparing two similar things helps you define both of them.

Also, cgroups have evolved alongside containers and work rather differently now compared to 18 years ago when they were invented, back when this differentiation mattered more than it does now. We're at the point where differentiating between VMs and containers is getting really hard, since both increasingly rely on the same kernel features that were developed in recent years on top of cgroups.
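
For what it's worth, modern Docker does expose those cgroup and namespace settings as individual flags. A couple of real examples (the values are just illustrative):

```sh
# Dial cgroup limits per container:
docker run --memory=512m --cpus=1.5 --pids-limit=100 alpine sleep 60

# Selectively share a namespace instead of isolating it,
# e.g. reuse the host's network namespace:
docker run --rm --network=host alpine ip addr
```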

[–] grue@lemmy.world 7 points 1 week ago

I'm aware of that, but OP requested "explain like I'm stupid" so I omitted that detail.

[–] Scrollone@feddit.it 9 points 1 week ago* (last edited 1 week ago) (8 children)

Isn't all of this a complete waste of computer resources?

I've never used Docker but I want to set up an Immich server, and Docker is the only official way to install it. And I'm a bit afraid.

Edit: thanks for downvoting an honest question. Wtf.

It can be, yes. One of the largest complaints with Docker is that you often end up running the same dependencies a dozen times, because each of your dozen containers uses them. But the trade-off is that you can run a dozen different versions of those dependencies, because each image shipped with the specific version they needed.

Of course, the big issue with running a dozen different versions of dependencies is that it makes security a nightmare. You’re not just tracking exploits for the most recent version of what you have installed. Many images end up shipping with out-of-date dependencies, which can absolutely be a security risk under certain circumstances. In most cases the risk is mitigated by the fact that the services are isolated and don’t really interact with the rest of the computer. But it’s at least something to keep in mind.
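
Concretely, that trade-off looks something like this (the versions and names are just examples):

```sh
# Each service gets exactly the database version it was built for,
# side by side on one host:
docker run -d --name app1-db -e POSTGRES_PASSWORD=secret postgres:13
docker run -d --name app2-db -e POSTGRES_PASSWORD=secret postgres:16

# The flip side: keeping each of them patched is on you, image by image:
docker pull postgres:13
```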

[–] EncryptKeeper@lemmy.world 30 points 1 week ago* (last edited 1 week ago)

If it were actual VMs, it would be a huge waste of resources. That’s really the purpose of containers. It’s functionally similar to running a separate VM specific to every application, except you’re not actually virtualizing an entire system like you are with a VM. Containers are actually very lightweight. So much so, that if you have 10 apps that all require database backends, it’s common practice to just run 10 separate database containers.

[–] dustyData@lemmy.world 16 points 1 week ago

On the contrary. It relies on the premise of segregating binaries, config and data. But since a container runs only one app, it's a bare-minimum version of a system. Most container systems also include mechanisms that deduplicate common required binaries, so containers are usually very small and efficient. A traditional system's libraries can balloon to dozens of gigabytes, with only pieces of them in use at any time by different software, whereas a container can very easily be made headless and barebones: cutting the fat and leaving only the most essential libraries, so it fits onto very tiny and underpowered hardware without losing functionality or performance.

Don't be afraid of it, it's like Lego but for software.

[–] possiblylinux127@lemmy.zip 14 points 1 week ago

Docker has very little overhead

The main "wasted" resources here are storage space and maybe a bit of RAM; the actual runtime overhead is very limited. It turns out storage and RAM are some of the cheapest resources on a machine, and you probably won't notice the extra usage.

VMs are heavy, Docker containers are very light. You get most of the benefits of a VM with containers, without paying as high of a resource cost.
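
You can measure both kinds of overhead yourself with built-in commands:

```sh
# Live CPU and RAM usage per running container:
docker stats --no-stream

# Disk space used by images, containers, and volumes,
# including how much of it is reclaimable:
docker system df
```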

[–] anachrohack@lemmy.world 8 points 1 week ago

No, because Docker is not actually a VM

[–] couch1potato@lemmy.dbzer0.com 7 points 1 week ago (6 children)

I've had Immich running in a VM as a snap distribution for almost a year now, and the experience has been leaps and bounds easier than maintaining my own Immich Docker container. There were so many breaking changes over the few years I used it that it was just a headache. The snap version has been 100% hands-off: it just works.

https://snapcraft.io/immich-distribution

[–] Black616Angel@discuss.tchncs.de 88 points 1 week ago (1 children)

Please don't call yourself stupid. The common internet slang for that is ELI5 or "explain [it] like I'm 5 [years old]".

I'll also try to explain it:

Docker is a way to run a program on your machine, but in a way that the developer of the program can control.
It's called containerization and the developer can make a package (or container) with an operating system and all the software they need and ship that directly to you.

You then need the software docker (or podman, etc.) to run this container.

Another advantage of containerization is that all changes stay inside the container except for directories you explicitly want to add to the container (called volumes).
This way the software can't destroy your system and you can't accidentally destroy the software inside the container.
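
A minimal sketch of that volume idea (the paths and image name here are illustrative):

```sh
# Everything the container writes stays inside it, except /config,
# which is mapped to a real directory on the host:
docker run -d -v /srv/myapp/config:/config some/image:latest
```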

[–] entwine413@lemm.ee 22 points 1 week ago (1 children)

It's basically like a tiny virtual machine running locally.

[–] folekaule@lemmy.world 35 points 1 week ago (1 children)

I know it's ELI5, but this is a common misconception and will lead you astray. They do not have the same level of isolation, and they have very different purposes.

For example, containers are disposable cattle. You don't backup containers. You backup volumes and configuration, but not containers.

Containers share the kernel with the host, so your container needs to be compatible with the host (though most dependencies are packaged with images).

For self hosting maybe the difference doesn't matter much, but there is a difference.
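
The "backup volumes, not containers" idea in practice (the volume name is hypothetical):

```sh
# Archive a named volume's contents using a throwaway container
# as the copying tool; the container is deleted afterwards (--rm):
docker run --rm \
  -v myapp_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/myapp_data.tar.gz -C /data .
```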

[–] fishpen0@lemmy.world 14 points 1 week ago (2 children)

A million times this. A major difference between the way most VMs are run and most containers are run is:

VMs write to their own internal disk; containers should be immutable and unable to write to their internal filesystem.

You can have 100 identical containers running, and if you are using your filesystem correctly, only one copy of that container image is on your hard drive. You can have two nearly identical containers running, and then only a small amount of the second container image (another layer) is taking extra disk space.

Similarly, containers and VMs use memory and CPU allocations differently, and they run with extremely different security and networking scopes, but that requires even more explanation and is less relevant to self-hosting unless you are trying to learn this to eventually get a job in it.
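
Both halves of that are easy to see for yourself:

```sh
# Show the stacked layers an image is made of; identical layers are
# stored once on disk, no matter how many containers use them:
docker history nginx:latest

# Enforce the immutability: with a read-only root filesystem, the
# container literally cannot write inside itself:
docker run --rm --read-only alpine touch /test   # fails: read-only file system
```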

[–] echutaaa@sh.itjust.works 27 points 1 week ago

It’s a container service. Containers are similar to virtual machines but less separate from the host system. Docker excels in creating reproducible self contained environments for your applications. It’s not the simplest solution out there but once you understand the basics it is a very powerful tool for system reliability.

[–] CodeBlooded@programming.dev 23 points 1 week ago* (last edited 1 week ago)

Docker enables you to create instances of an operating system running within a “container” which doesn’t access the host computer unless it is explicitly requested. This is done using a Dockerfile, which is a file that describes in detail all of the settings and parameters for said instance of the operating system. This might be packages to install ahead of time, or commands to create users, compile code, execute code, and more.

This instance of an operating system, usually a “server,” is great because you can throw the server away at any time and rebuild it with practically zero effort. It will be just like new. There are many reasons to want to do that; who doesn’t love a fresh install with the bare necessities?

On the surface (and the rabbit hole is deep!), Docker enables you to create an easily repeated formula for building a server so that you don’t get emotionally attached to a server.
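
That throw-it-away-and-rebuild workflow is, in its entirety (names here are hypothetical):

```sh
# Tear the old instance down completely...
docker rm -f myserver

# ...and rebuild it from the same recipe. Just like new.
docker build -t myserver-image .
docker run -d --name myserver myserver-image
```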

[–] Cenzorrll@lemmy.world 18 points 1 week ago (1 children)

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

This is pretty much what I've started doing. Containers have the wonderful benefit that if you don't like one, you just delete it. If you install on bare metal (at least on Linux), you can end up with a lot of extra packages getting installed and configured that could affect your system in the future. With containers, all those specific extras are bundled together and removed at the same time without having any effect on your base system, so you're always at a clean OS install.

I will also add an irritation with Docker containers: anything you create in a container that isn't kept on a shared volume gets destroyed when the container is recreated, because the container keeps only the maintainer's setup. For instance, I do occasional encoding of videos in a HandBrake container, and I can't save any profiles I make within that container: they get wiped the next time the container is recreated, since they're part of the container, not on any shared volume.
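
The usual fix for that irritation is to map the directory where those profiles live out to a host path. As a sketch, assuming the image keeps its settings under /config (check your image's docs; the host path is illustrative):

```sh
# Persist the container's config directory on the host, so profiles
# survive the container being recreated:
docker run -d -v /srv/handbrake/config:/config jlesage/handbrake
```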

[–] InvertedParallax@lemm.ee 1 points 1 week ago (1 children)

Worst part about docker: insane volume management.

[–] Cenzorrll@lemmy.world 2 points 6 days ago

Agreed. I just spent a week (very intermittently) trying to figure out where all my free space had gone; turns out it was a bunch of abandoned Docker volumes taking it up. I have 32 GB on my laptop, so space is at an absolute premium.

I guess I learned my lesson about trying out docker containers on my laptop just to check them out.
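
For anyone hitting the same thing, Docker can list and reclaim that space:

```sh
# List volumes and spot the abandoned ones:
docker volume ls

# Delete volumes no container references anymore
# (careful: their data is gone for good):
docker volume prune
```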

[–] Ozymandias88@feddit.uk 15 points 1 week ago

I don't think I really understood docker until I watched this video which takes you through building up a docker-like container system from scratch. It's very understandable and easy to follow if you have a basic understanding of Linux operating systems. I recommend it to anyone I know working with docker:

https://youtu.be/8fi7uSYlOdc

Alternative Invidious link: https://yewtu.be/watch?v=8fi7uSYlOdc

[–] xavier666@lemm.ee 13 points 1 week ago

Learn Docker even if you have a single app. I do the same with a Minecraft server.

  • No dependency issues
  • All configuration (storage/network/application management) can be done via a single file (compose file)
  • Easy roll-backs possible
  • Maintain multiple versions of the app while keeping them separate
  • Recreate the server on a different server/machine using only the single configuration file
  • Config is standardized so easy to read

You will save a huge amount of time managing your app. (A minimal compose file is sketched below.)

PS: I would like to give a shout out to podman as the rootless version of Docker
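
As a sketch of that single compose file, something roughly like this (itzg/minecraft-server is a popular community image; the values are illustrative, not a recommendation):

```yaml
services:
  minecraft:
    image: itzg/minecraft-server:latest  # pin a specific tag instead of latest to make roll-backs trivial
    environment:
      EULA: "TRUE"           # the image requires you to accept the Minecraft EULA
    ports:
      - "25565:25565"        # the default Minecraft port
    volumes:
      - mc_data:/data        # world data survives container recreation

volumes:
  mc_data:
```

`docker compose up -d` brings it up, `docker compose down` removes it, and the world lives on in the `mc_data` volume either way.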

[–] InvertedParallax@lemm.ee 12 points 1 week ago* (last edited 1 week ago) (1 children)

This thread:

Jails make docker look like windows 11 with copilot.

load more comments (1 replies)
[–] Vinny_93@lemmy.world 12 points 1 week ago

Containerized software. The main advantage of this is that every application, or stack of applications, runs in its own ecosystem. You can restart a container whenever you like without having to reboot your entire system. You can store all of a container's data in a volume, so if you hit a snag, you can recreate the container without actually losing any of your configs.

You can also create networks so that apps run in different subnets than other apps.

Very simply put, a docker container is like a mini system that runs on your main system.

Something else I like about docker is docker compose. You can create a container or stack of containers with a single simple YAML file without actually having to install anything yourself. I manage my containers in Portainer.
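
A sketch of that network separation (names and images are illustrative):

```sh
# Create an isolated network and attach only the containers that
# need to talk to each other:
docker network create backend
docker run -d --name db --network backend \
  -e POSTGRES_PASSWORD=example postgres:16
docker run -d --name app --network backend my/app:latest

# "app" can reach "db" by its container name; containers on other
# networks can't reach either of them directly.
```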

[–] LovableSidekick@lemmy.world 11 points 1 week ago (1 children)

Docker is a set of tools, that make it easier to work with some features of the Linux kernel. These kernel features allow several degrees of separating different processes from each other. For example, by default each Docker container you run will see its own file system, unable to interact (read: mess) with the original file system on the host or other Docker container. Each Docker container is in the end a single executable with all its dependencies bundled in an archive file, plus some Docker-related metadata.
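
You can see that separate filesystem for yourself:

```sh
# The container's / is its own bundled filesystem, not the host's:
docker run --rm alpine ls /

# And it reports its own OS, whatever the host is running:
docker run --rm alpine cat /etc/os-release
```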

[–] Professorozone@lemmy.world 9 points 1 week ago (6 children)

I've never posted on Lemmy before. I tried to ask this question of the greater community but I had to pick a community and didn't know which one. This shows up as lemmy.world but that wasn't an option.

Anyway, what I wanted to know is: why do people self-host? What are the advantages/costs? Sorry if I'm hijacking. Maybe someone could just post a link or something.

[–] sugar_in_your_tea@sh.itjust.works 20 points 1 week ago* (last edited 1 week ago)

It usually comes down to privacy and independence from big tech, but there are a ton of other reasons you might want to do it. Here are some more:

  • preservation - no longer have to care if Google kills another service
  • cost - over time, Jellyfin could be cheaper than a Netflix sub
  • speed - copying data on your network is faster than to the internet
  • hobby - DIY is fun for a lot of people

For me, it's a mix of several reasons.

[–] Kirk@startrek.website 12 points 1 week ago

People are talking about privacy, but the big reason is that it gives you, the owner, control over everything, quickly, without ads or other unneeded stuff. We are so used to apps being optimized for revenue and not being interoperable with other services that it's easy to forget the single biggest advantage of computers, which is that programs and apps can work together quickly and quietly and in the background. Companies provide products; self-hosting provides tools.

[–] irmadlad@lemmy.world 12 points 1 week ago (1 children)

Anyway, what I wanted to know is why do people self host?

Wow. That's a whole separate thread on its own. I self-host a lot of my services because I am a staunch privacy advocate, and I really have a problem with corporations using my data to further bolster their profit margins without giving me due compensation. I also self-host because I love to tinker and learn. The learning aspect is something I really get into. At my age it is good to keep the brain active, so I self-host, create bonsai, garden, etc. I've always been into technology, from the early days of thumbing through Pop Sci and Pop Mech magazines, which evolved into thumbing through Byte mags.

[–] edifier@feddit.it 9 points 1 week ago

..baby don't hurt me.. No more..

I'm not sure how familiar you are with computers in general, but I think the best way to explain Docker is to explain the problem it's looking to solve. I'll try and keep it simple.

Imagine you have a computer program. It could be any program; the details aren't important. What is important, though, is that the program runs perfectly fine on your computer, but constantly errors or crashes on your friend's computer.

Reproducibility is really important in computing, especially if you're the one actually programming the software. You have to be certain that your software is stable enough for other people to run without issues.

Docker helps massively simplify this dilemma by running the program inside a 'container', which is basically a way to run the same exact program, with the same exact operating system and 'system components' installed (if you're more tech savvy, this would be packages, libraries, dependencies, etc.), so that your program will be able to run on (best-case scenario) as many different computers as possible. You wouldn't have to worry about if your friend forgot to install some specific system component to get the program running, because Docker handles it for you. There is nuance here of course, like CPU architecture, but for the most part, Docker solves this 'reproducibility' problem.

Docker is also nice when it comes to simply compiling the software in addition to running it. You might have a program that requires 30 different steps to compile, and messing up even one step means that the program won't compile. And then you'd run into the same exact problem where it compiles on your machine, but not your friend's. Docker can also help solve this problem. Not only can it dumb down a 30-step process into 1 or 2 commands for your friend to run, but it makes compiling the code much less prone to failure. This is usually what the Dockerfile accomplishes, if you ever happen to see those out in the wild in all sorts of software.

Also, since Docker puts things in 'containers', it also limits what resources that program can access on your machine (but this can be very useful). You can set it so that all the files it creates are saved inside the container and don't affect your 'host' computer. Or maybe you only want to give permission to a few very specific files. Maybe you want to do something like share your computer's timezone with a Docker container, or prevent your Docker containers from being directly exposed to the internet.
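
A few of those knobs, concretely (the flags are real Docker options; the paths and image names are illustrative):

```sh
# Give the container access to one specific host directory, read-only:
docker run -v /home/me/photos:/photos:ro some/image

# Share the host's timezone (works for images that include tzdata):
docker run -e TZ=Europe/Amsterdam some/image

# Publish a port on localhost only, so the container isn't directly
# reachable from the wider network:
docker run -p 127.0.0.1:8080:80 some/image
```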

There's plenty of other things that make Docker useful, but I'd say those are the most important ones--reproducibility, ease of setup, containerization, and configurable permissions.

One last thing--Docker is comparable to something like a virtual machine, but the reason why you'd want to use Docker over a virtual machine is much less resource overhead. A VM might require you to allocate gigabytes of memory, multiple CPU cores, even a GPU, but Docker is designed to be much more lightweight in comparison.

A little box you can put your app in.

If the app misbehaves, it doesn't sink your ship. Just throw the box overboard and repackage the app.

I'm not sure most people need it, but it can be fun to try a new app inside a container. It also lets you update an app that needs a restart without shutting down your other services.

[–] kernelle@lemmy.world 8 points 1 week ago

If 'but it works on my computer' were a software service
