I can’t help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)

Most modern web applications are designed to run more or less standalone on a server. Integrating them into an existing environment is a real challenge, if not impossible. They often come with their own set of requirements and dependencies that don’t easily align with an established infrastructure.

“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”

Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work to find workarounds.

Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration whenever they are not the only application being served.

My frustration about this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.

I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.

  • poVoq@slrpnk.net

    While php is still cool… join the dark side and start using containers 😏

    • 𝘋𝘪𝘳𝘬@lemmy.mlOP

      Yes, containers could be the way – if every application came in a container, or if it were super easy to containerize them without the applications knowing it.

      Can I run half a dozen applications in containers that all need port 443, and how annoying is that to set up?

      • Skull giver@popplesburger.hilciferous.nl

        With docker compose you can usually bind any exposed ports to a port of your choice on localhost. Port statements like 127.0.0.1:8888:80 will bind port 80 within the container to port 8888 on 127.0.0.1, for example.
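
        As a minimal sketch, that kind of port statement sits in a compose file roughly like this (the service and image names here are just placeholders):

        ```yaml
        services:
          webapp:
            image: example/webapp:latest   # placeholder image
            ports:
              # bind container port 80 to port 8888 on the host's loopback only
              - "127.0.0.1:8888:80"
        ```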

        If you need separate addresses, you can do that too, assuming you have access to a modern network. IPv6 gives you billions of IP addresses per network. Generate a random one, or a funny hex vanity address, and bind to that.

        For IPv4 you can use tricks like an SNI proxy to sniff the SNI header and transparently redirect applications to the right IPv6 host. HTTP requests can just be proxied with any reverse proxy. For non-HTTP, non-TLS traffic, you’ll need more complicated solutions, though.

        I’d again go with reverse proxies (wildcard cert on the proxy, and let the applications handle their own certificates if they insist on it), but that’ll get you duplicate certificates that you don’t really need.

        I haven’t run into issues just proxying everything through nginx, but there are options if something absolutely insists on binding to a publicly reachable port.
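
        Put together, the “everything behind one nginx” layout could look roughly like this – assuming a hand-written nginx.conf mounted into the proxy, with one server block per subdomain that forwards (proxy_pass) to the service names; all images and names below are placeholders:

        ```yaml
        services:
          proxy:
            image: nginx:stable
            ports:
              - "443:443"                               # only the proxy is published
            volumes:
              - ./nginx.conf:/etc/nginx/nginx.conf:ro   # one server block per subdomain
              - ./certs:/etc/nginx/certs:ro             # wildcard cert lives here
            networks: [backend]
          app1:
            image: example/app1:latest                  # placeholder image
            networks: [backend]                         # reachable only via the proxy
          app2:
            image: example/app2:latest                  # placeholder image
            networks: [backend]
        networks:
          backend: {}
        ```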

        • 𝘋𝘪𝘳𝘬@lemmy.mlOP

          For IPv4 you can use tricks like an SNI proxy to sniff the SNI header and transparently redirect applications to the right IPv6 host. HTTP requests can just be proxied with any reverse proxy. For non-HTTP, non-TLS traffic, you’ll need more complicated solutions, though

          This is what I mean. I don’t think it is easy. I also don’t like it. I find it utterly annoying. And yes, I need all of my services to listen on port 443 or other publicly reachable ports. Right now my router just forwards the ports to the machine the applications run on.

          My dynamic DNS provider does not offer wildcards or unlimited subdomains. This was never an issue during the last 10+ years.

          But it seems to be impossible to self-host anything nowadays without reimplementing the whole tech stack for each individual application – ending up with half a dozen operating systems and different configurations I need to maintain.

          • microair2@lemmy.ml

            In my opinion, self-hosting has actually been getting easier in recent times – there’s no more dependency hell to deal with.

            Most apps now offer Docker support, which makes them easier to deploy. You don’t waste time setting up the perfect environment for that new app you want to deploy, and you don’t risk messing up your current setup in the process.

            Which apps are you using that make it seem so bad for you? Would you like to share? I think you should try it at least once – managing ports is really easy with containerized apps and offers exactly the solution you need: map port 443 of all your favourite apps to different host ports, set up a reverse proxy, and access them through it.

            I’d recommend trying Pi-hole with docker compose – you’ll be surprised.
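
            For reference, a compose file along these lines is roughly what’s meant here – a sketch only, so check the official Pi-hole docs for the exact image tags and currently supported environment variables:

            ```yaml
            services:
              pihole:
                image: pihole/pihole:latest
                ports:
                  - "53:53/tcp"                # DNS
                  - "53:53/udp"
                  - "127.0.0.1:8080:80"        # web UI, remapped off port 80 on the host
                environment:
                  TZ: "Europe/Berlin"          # assumption: set your own timezone
                volumes:
                  - ./etc-pihole:/etc/pihole   # persistent settings
                restart: unless-stopped
            ```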

      • poVoq@slrpnk.net

        Yes, you can just map the internal 443 port to another port outside of the container and then reverse-proxy them all.
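
        A rough sketch of that idea with placeholder images: each app keeps its internal 443, only the host-side port differs, and a reverse proxy in front forwards by subdomain to those ports.

        ```yaml
        services:
          app1:
            image: example/app1:latest    # placeholder image
            ports:
              - "127.0.0.1:8443:443"      # app1's internal 443 on host port 8443
          app2:
            image: example/app2:latest    # placeholder image
            ports:
              - "127.0.0.1:9443:443"      # app2's internal 443 on host port 9443
        ```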

  • wpuckering@lm.williampuckering.com

    Containers really shine in the selfhosting world in modern times. Complete userspace isolation, basically no worries about dependencies or conflicts since it’s all internally shipped and pre-configured, easy port mapping, immutable “system” files and volume mounting for persistent data… And much more. If built properly, container images solve almost all problems you’re grappling with.

    I can’t imagine ever building another application without containerization again. I can’t remember the last time I installed any kind of server-side software directly on a host, with the exception of packages the host itself needs to support containers or to improve its security posture.

    In my (admittedly strong) opinion, it’s absolute madness – and dare I say, reckless and incomprehensible – why anybody would create a brand new product that doesn’t ship via container images in this day and age, if you have the required knowledge to make it happen (properly and following best practices, of course), or the capacity to learn how in time to meet a deadline.

    I’m sure some would disagree or have special use-cases they could cite where containers wouldn’t be a good fit for a product or solution, but I’m pretty confident that those would be really niche cases that would apply to barely anyone.

  • PrivateButts@geddit.social

    The thing that boils my blood is secret SQLite databases. I just want to store my volumes on a NAS using NFS and run the stacks on a server built for it. Having a container randomly blow up because an undocumented SQLite database failed to get a lock sucks ass.

    • 𝘋𝘪𝘳𝘬@lemmy.mlOP

      secret sqlite databases

      The thing is: “secret”. SQLite databases in general are awesome. Basically no configuration needed. They just work, don’t even need their own server, and in 99% of all cases they’re absolutely enough for what they’re used for. I’d always choose a SQLite database over anything else – but it should be made clear that such a database is used.

  • MangoPenguin@lemmy.blahaj.zone

    Docker containers do pretty much solve that: drop a docker-compose.yml file in place, maybe tweak a few lines, and that’s all.

  • D-RAJ@lemmy.ml

    Perhaps a solution like CloudPanel or Cloudron would make self-hosting multiple sites / apps easier for you. I use CloudPanel to host multiple Wordpress websites and it works very well. I use Cloudron to quickly deploy various open-source apps on one VPS.

    https://www.cloudpanel.io

    https://www.cloudron.io

  • themoonisacheese@sh.itjust.works

    Sadly, a PHP dev environment and a web server are not enough for modern devs.

    I just ended up installing Proxmox, and everything I install gets its own VM. It binds to whatever port it wants, and port 443 on my public IP is forwarded to a VM with nginx. If you hit a subdomain, nginx proxies the request to the actual server and port. Servers can ship whatever certificates they want; my nginx is the one clients negotiate SSL with, so it has its own certificate. The only other thing running on that server is certbot.

    It’s honestly much simpler this way. Need to restart a machine after an install? Everything else stays up. One piece of software needs glibc version fuck-my-ass? Don’t care, that machine will have that version of glibc and I will not touch it. Software has a memory leak? QEMU doesn’t, and the VM is limited in RAM, so only that one crashes.

    Just make sure your VM template is good (and has your SSH key installed) and you’re golden. Before this week’s internet outage, I had 99.999% uptime with a single hypervisor, and the only monitoring I have is uptime of all services as seen from AWS. I don’t even have alerts.

    I sometimes long for the days (which I missed – I’m only 24) of monolithic Linux servers where you have a web server, a database server, and that’s it. Sadly, VMs are cheap and dependencies are hell. It’s still quite fun to tinker in the virtualized world. It’s just not the same as it used to be.

  • dan@lemm.ee

    I recently(ish) installed Unraid on a new NAS, as I’d heard good things but knew nothing about it. I didn’t really intend to install much on it, but got playing around with the Docker stuff built into it and… fuck me. The amount of time I used to spend installing dependencies, configuring stuff, trying to work out why the hell it wasn’t working. With really not much work I’ve got a fully fledged Arr setup with Jellyfin, a full dev environment, Grafana and Influx for monitoring, automated TLS certs, and a bunch of other things all working pretty damn flawlessly.

    Containers are awesome.

  • amnesiac@lemmy.villa-straylight.social

    What “modern web application” doesn’t work with a reverse proxy by subdomain? (Especially one that can’t be remedied by rewriting the Host header at the proxy.)

    Furthermore, which of these apps require binding to 443 and issue their own certs? This sounds strange if a listening port can’t be specified.

  • 𝘋𝘪𝘳𝘬@lemmy.mlOP

    Sometimes venting a little helps. I finally sat down and learned the basics of Docker and found an easy-to-follow video series on how to set up Docker with Portainer and Nginx Proxy Manager. Works like a charm. I also set up my GoToSocial instance again but failed at setting up a Lemmy instance … but I guess that’s for another discussion :)

  • grin@grinnit.grin.hu

    Not sure what the problem is, though. Pull up a reverse proxy, give all the crappy shit a private IP and whatever port they want, access it through the proxy, and everyone can be on 443. 127.42.1.123:443, whatever. Maybe use real containers, or that crappy Docker shit – both offer you independent namespaces with all the ports and whatnot.
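
    A quick sketch of that loopback-IP variant with docker compose (placeholder images; on Linux the whole 127.0.0.0/8 range is loopback, so these addresses work without extra setup):

    ```yaml
    services:
      app1:
        image: example/app1:latest    # placeholder image
        ports:
          - "127.42.1.123:443:443"    # each app gets "its" 443, on its own loopback IP
      app2:
        image: example/app2:latest    # placeholder image
        ports:
          - "127.42.1.124:443:443"
    ```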