Performance parity? Heck no, not until this bug with the GSP firmware is solved: https://github.com/NVIDIA/open-gpu-kernel-modules/issues/538
Yeah, in a Reddit comment, Hector Martin himself said that the memory bandwidth on the Apple Silicon GPU is so high that any potential performance problems due to TBDR vs IMR are basically insignificant.
…which is funny, because I had another Reddit user swear up and down that TBDR was a big problem and that's why Apple decided not to support Vulkan and instead is forcing everyone to go Metal.
I’ve heard something about Apple Silicon GPUs being tile-based and not immediate mode, which means the Vulkan API is different compared to regular PCs. How has this been addressed in the Vulkan driver?
I am so happy power-profiles-daemon now sets the CPU driver instead of only setting the platform driver when one is present. It was a big pain point of mine.
Definitely not necessary. If that were the case, it wouldn't live up to its claims of being a transparent Docker replacement at all. I think you do need to use systemd if you want to go full rootless, but I haven't tried it enough to make a solid call on that.
But yeah, with the above steps, I've moved seamlessly over to Podman for my self-hosting stack and I've never looked back. It's also great because I can take literally any Docker Compose file I find on the Internet and it will most likely just work.
You can avoid a lot of trouble by running the containers as root and using network=host
Root yes, but you can avoid network=host most of the time pretty easily. I am still struggling with going rootless myself tbh.
Have you tried it with podman-docker? I’ve basically switched my entire self-hosting stack onto podman without much issue using that compatibility layer.
It does. You probably did not enable `docker.service` to start on boot.
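For anyone hitting the same thing, enabling the service to start on boot is a one-liner (a sketch assuming a systemd-based distro):

```shell
# Start docker.service now and on every subsequent boot
sudo systemctl enable --now docker.service
```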
God the sound design in Alyx was insanely good. I felt like I was legitimately in City 17 and it was terrifying. It was a really good showcase of what Steam Audio can do.
Seriously. I really hope this allows Ken to get some additional developers onboard. Dude sounds like he’s shouldering a ton of responsibility at the moment.
Your issues stem from going rootless. Podman Compose creates rootless containers and that may or may not be what you want. A lot more configuration needs to be done to get rootless containers working well for persistent services that use low ports, like enabling linger for specific users or enabling low ports for non-root users.
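For reference, the two tweaks mentioned above look roughly like this (a sketch assuming a systemd-based distro; `myuser` and port 80 are placeholders for your own user and lowest needed port):

```shell
# Keep the user's rootless containers running without an active login session
loginctl enable-linger myuser

# Allow unprivileged users to bind ports >= 80
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-low-ports.conf
sudo sysctl --system
```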
If you want the traditional Docker experience (which is rootful) and figure out the migration towards rootless later, I'd recommend the following:

1. Install `podman-docker`. This provides a seamless Docker compatibility layer for Podman, allowing you to even use regular `docker` commands that get translated behind the scenes into Podman.
2. Install `docker-compose`. This will work via `podman-docker` and gives you the native docker compose experience.
3. Enable `podman.socket` and `podman-restart.service`. The first one socket-activates the Podman API service (the part `docker-compose` talks to), the second one restarts any Podman containers with a `restart-policy` of `always` on boot.
4. Run everything with `sudo`, so `sudo docker-compose up -d` etc. You can run this with `sudo podman compose` as well if you're allergic to hyphenation. Podman allows both rootful and rootless containers, and the way you choose is by running the commands with `sudo` or not.

This gets you to a very Docker-like experience and is what I am currently using to host my services. I do plan on getting familiar with rootless and systemd services and Kubernetes files, but I honestly haven't had the time to figure all that out yet.
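The steps above boil down to something like this (sketched for a Fedora-style system; the package names and manager will differ on other distros):

```shell
# 1. Docker CLI compatibility layer plus compose
sudo dnf install podman-docker docker-compose

# 2. Socket-activate the Podman API (what docker-compose talks to) and
#    auto-restart containers with restart-policy=always on boot
sudo systemctl enable --now podman.socket podman-restart.service

# 3. Rootful, Docker-style usage from a directory with a docker-compose.yml
sudo docker-compose up -d
```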
Thanks! Yeah, I am already using an nginx reverse proxy in a Docker container to expose my other Docker containers, so I was thinking two reverse proxies in a row might be too inefficient. Will definitely look into nftables. nftables rules are temporary though, right? What's the correct way to automate running these rules on boot?
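(For anyone else wondering: yes, rules loaded with `nft` live only until reboot. A common fix, assuming your distro ships an `nftables.service`, is to dump the ruleset into `/etc/nftables.conf` and enable the service, roughly like this:)

```shell
# Save the currently loaded ruleset so nftables.service reloads it on boot
sudo nft list ruleset | sudo tee /etc/nftables.conf
sudo systemctl enable nftables.service
```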
I was thinking the same thing regarding VPS and Wireguard. I use Wireguard personally to VPN into my home network for remote management, but I still haven’t looked up how to make a VPS as a proxy using it. I know they can join the same network and talk with each other but what’s the best way to route port 80 and 443 on the VPS to my server at home? Iptables?
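(For the record, the iptables approach I'm asking about would look roughly like this sketch; the interface names `eth0`/`wg0` and the home peer address `10.0.0.2` are placeholder assumptions:)

```shell
# Let the VPS forward traffic between interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# Rewrite inbound 80/443 on the public interface to the home peer over WireGuard
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 \
  -j DNAT --to-destination 10.0.0.2
sudo iptables -A FORWARD -i eth0 -o wg0 -p tcp -m multiport --dports 80,443 -j ACCEPT

# Masquerade so the home server's replies route back through the tunnel
sudo iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```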
Not OP, but I’ve been looking into Cloudflare tunnels on my end as well and ended up not going with them because you’re forced to use their own certs so they can decrypt and see the data. I mean most likely they aren’t doing anything untoward, but it’s still a consideration with regards to data privacy.
Yep! I was surprised at how power efficient the build was myself. It really pays to go with an APU both because it doesn’t go ham with the core count and clocks and also because you don’t need an external GPU. As long as you’re just doing light to medium loads and not transcoding at maximum speed 24/7, your power usage will be fine.
The reader itself leaves a lot to be desired though. There’s literally no UI besides the arrow keys and no way to configure font rendering etc. It’s cool that the functionality is there, but it needs work.
It idles anywhere from 28-33W, but when it's doing heavy processing it spikes up to the full power consumption of the CPU (the max I've seen is around 120W according to my UPS). I run it in the Balanced performance profile, so there's essentially no limiter on the power consumption. I figured I spent all this money on a CPU, so I might as well take advantage of its processing power when I need it.
Lately I’ve been running a 24/7 Palworld server, and that is constantly running at 65%-85% CPU (out of a possible 1200%). My UPS reports 45W.
If Palworld isn’t running and someone watches movies off of my Jellyfin, usage is around 40W-50W when doing transcoding, and 35W when doing direct play.
I went with Arch Linux, mostly because I am the most familiar with it, can make it as barebones as I want, and find rolling updates generally easier to deal with than large break-the-world distro upgrades. All my services are running in Podman containers, so they're completely isolated from any library versioning issues.
DIY NAS all the way. I had a QNAP that had a known manufacturing defect in the Intel CPU and QNAP refused to provide any support or repair options despite knowing about the issue for a long time. I will never again bow down to silly corporate shenanigans when it comes to my data.
My DIY NAS is a bit…unconventional and definitely doesn’t fit in your budget requirement, but I’ll leave the parts list as an interesting thought experiment: https://pcpartpicker.com/list/Lm92Kp
…okay, look, I know it's a bit crazy. No, it's A LOT crazy. But I genuinely feel like it isn't worth dealing with HDDs anymore when it comes to building a NAS. Back when I was using the QNAP, I had to replace each HDD at least twice, at $90-$100 per drive. An NVMe SSD can easily outlast two or three HDDs, and you can get the MSI Spatiums on sale for $180, so in the long run the costs even out. And the speed at which an NVMe array performs during scrubs and rebuilds blows a regular HDD array out of the water. Yes, it's a higher up-front cost, but an immensely better experience. Plus, a PCIe bifurcation expansion card is a hell of a lot smaller than four HDDs, so it opens up your case selection for more compact builds.
I got the NZXT H1 because it was easy to build in, came with a cooler and PSU, and just made things simple. It also goes on sale for around $180. You can definitely go with something else entirely. My thought process was that if I ever wanted a compact PC, I could possibly repurpose this case. This is just for me, it is not a hard recommendation.
I picked the Ryzen 5600G because it was relatively cheap, decently powerful, and has hardware H.265 and H.264 decode plus H.264 encode, which is basically what you need for Jellyfin or Plex. Just be aware that it only supports up to x4x4x8 PCIe bifurcation, so if you do go with an NVMe expansion card, you can only put 3 drives on it and will have to use a mobo slot for the 4th. That's how mine is currently set up and it works great.
Yeah, it's crazy, and I am sure some people here will scoff at the build, but after using it for 3 years, I just can't go back to regular HDD performance. An NVMe array just makes all of the services you host fly.
The failure to wait for network-online was the last thing preventing me from going rootless. I am going to have to try this again.