ramielrowe

joined 1 year ago
[–] ramielrowe@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

In general, on bare-metal, I mount below /mnt. For a long time, I just mounted in from pre-setup host mounts. But, I use Kubernetes, and you can directly specify an NFS mount. So, I eventually migrated everything to that as I made other updates. I don't think it's horrible to mount from the host, but since docker-compose supports directly defining an NFS volume, that's one less thing to set up if you ever need to re-provision your docker host.
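For example, a minimal sketch of what that looks like in a compose file (the server address, export path, and names below are placeholders; the mount options depend on your NFS server):

```yaml
# Minimal sketch (hypothetical names/addresses): a named volume backed
# directly by NFS and mounted into a service. Docker performs the NFS
# mount itself, so there's nothing extra to pre-configure on the host.
services:
  app:
    image: nginx:alpine
    volumes:
      - media:/usr/share/nginx/html:ro

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,rw"
      device: ":/export/media"
```

The mount is made when a container first uses the volume, so a rebuilt docker host only needs the compose file and network access to the NFS server.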

(quick edit) I don't think docker compose continuously re-reads compose files. They're read when you invoke docker compose, but that's it. So...

If you're simply invoking docker compose to interact with things, then I'd say store the compose files wherever makes the most sense for your process. Maybe think about setting up a specific directory on your NFS share and mounting that on your docker host(s). I would also consider version controlling your compose files. If you're concerned about secrets, store them in encrypted env files. Something like SOPS can help with this.
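If you go the SOPS route, the repo config is pretty small. A sketch (the file pattern and age key below are made-up placeholders):

```yaml
# .sops.yaml (hypothetical example): encrypt any *.env file with an age
# key so the encrypted env files can be committed next to the compose
# files. Encrypt in place with: sops -e -i secrets.env
creation_rules:
  - path_regex: .*\.env$
    age: age1examplepublickey00000000000000000000000000000000000000
```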

As long as the user invoking docker compose can read the compose files, you're good. When it comes to mounting data into containers from NFS... yes, permissions will matter, and it might be a pain depending on how flexible the container you're using is about user and filesystem permissions.

[–] ramielrowe@lemmy.world 1 points 4 days ago

See Docker's documentation for the supported backing filesystems for container filesystems.

In general, you should consider your container root filesystems completely ephemeral. But, you will generally want them to be local and low-latency. If you move most of your data to NFS, you can hopefully keep just a minimal local disk for images/containers.

As for your data volumes, it's likely going to be very application specific. I've got Postgres databases running off remote NFS that are totally happy. I don't fully understand why Plex struggles to run its database/config dir from NFS. Disappointingly, I generally have to host it on a filesystem and disk local to my docker host.
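As a rough illustration of the Postgres case (names and addresses below are made up), Kubernetes lets you point the data directory straight at an NFS export:

```yaml
# Rough sketch (hypothetical names/addresses): Postgres with its data
# directory on an NFS export, mounted directly by Kubernetes.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: postgres
      image: postgres:16
      env:
        - name: POSTGRES_PASSWORD
          value: changeme   # example only; use a Secret in practice
      volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: pgdata
      nfs:
        server: 192.168.1.10
        path: /export/postgres
```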

[–] ramielrowe@lemmy.world 4 points 5 days ago (3 children)

In general, container root filesystems and the images backing them will not function on NFS. When deploying containers, you should be mounting data volumes into the containers rather than storing things on the container root filesystems. Hopefully you are already doing that; otherwise, you're going to need to manually copy data out of the containers. Personally, if all you're talking about is 32 gigs max, I would just stop all of the containers, copy everything to the new NFS locations, and then re-create the containers pointing at those new locations.

All this said though, some applications really don't like their data stored on NFS. I know Plex really doesn't function well when its database is on NFS. But, the Plex media directories are fine to host from NFS.

[–] ramielrowe@lemmy.world 11 points 2 weeks ago (1 children)

I mean, if you get hit by something, that tends to happen suddenly.

[–] ramielrowe@lemmy.world 6 points 1 month ago (1 children)

Realistically, probably not. If your workload is highly memory-bound and sensitive to latency, you would be leaving a little performance on the table. But, I wouldn't stress over it. It's certainly not going to bottleneck your CPU.

[–] ramielrowe@lemmy.world 1 points 1 month ago (1 children)

In a centralized management scenario, the central controlling service needs the ability to control everything registered with it. So, if the central controlling service is compromised, it is very likely that everything it controlled is also compromised. There are ways to mitigate this at the application level, like role-based and group-based access controls. But, if the service itself is compromised rather than an individual's credentials, then the application protections can likely all be bypassed. You can mitigate this a bit by giving each tenant their own deployment of the controlling service, with network isolation between tenants. But, even that is still not fool-proof.

Fundamentally, security is not solved by one golden thing. You need layers of protection. If one layer is compromised, others are hopefully still safe.

[–] ramielrowe@lemmy.world 3 points 1 month ago* (last edited 1 month ago) (4 children)

If we boil this article down to its most basic point, it actually has nothing to do with virtualization. The true issue here is centralized infra/application management. The article references two ESXi CVEs that deal with compromised management interfaces. Imagine a scenario where we avoid virtualization by running Kubernetes on bare-metal nodes, and each Pod gets exclusive assignment to a Node. If a threat actor can exploit a vulnerability to access the Kubernetes management interface, they can immediately compromise everything within that Kubernetes cluster. We don't even need a container management platform. Imagine a collection of bare-metal nodes managed by Ansible via Ansible Automation Platform (AAP). If a threat actor has access to AAP and exploits it, they can then compromise everything managed by that AAP instance. This author fundamentally misattributes the issue to virtualization. The issue is centralized management, and even so, there are significant benefits to using higher-order centralized management solutions.

[–] ramielrowe@lemmy.world 28 points 2 months ago (4 children)

Perhaps as the more experienced smoker, you can be a good friend and offer a lower dose that is more suited for their tolerance. Maybe don't pack a big-ol bong rip for someone who hasn't smoked in months. Chop up that chocolate bar into something a little more manageable. If they wanna buy something, suggest something a little more controllable like a vape. And most of all, if you're pressuring people who are on the fence into smoking, maybe just stop doing that.

[–] ramielrowe@lemmy.world 5 points 2 months ago* (last edited 2 months ago) (1 children)

Yea, I don't think this is necessarily a horrible idea. It's just that this doesn't really provide any extra security, even though the very first line of this blog is talking about security. This will absolutely provide privacy via pretty good traffic obfuscation, but you still need good security configuration of the exposed service.

[–] ramielrowe@lemmy.world 33 points 2 months ago* (last edited 2 months ago) (4 children)

If I understand this correctly, you're still forwarding a port from one network to another. It's just that in this case, instead of a port on the internet, it's a port on the Tor network. Which is still just as open, but also a massive calling card for anyone trolling around the Tor network for things to hack.

[–] ramielrowe@lemmy.world 42 points 4 months ago (4 children)

After briefly reading about systemd's tmpfiles.d, I have to ask why it was used to create home directories in the first place. The documentation I read said it was for volatile files. Is a user's home directory considered volatile? Was this something the user set up, or the distro they were using? If the distro, this seems like a lot of ire directed at someone who really doesn't deserve it.
