• 0 Posts
  • 49 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • It also means that ALL incoming traffic on a specific port of that VPS can only go to exactly ONE private WireGuard peer. You could avoid both of these issues by having the reverse proxy on the VPS (which is why Cloudflare works the way it does), but I prefer my HTTPS endpoint to be on my own trusted hardware.

    For TLS-based protocols like HTTPS you can run a reverse proxy on the VPS that only looks at the SNI (server name indication) which does not require the private key to be present on the VPS. That way you can run all your HTTPS endpoints on the same port without issue even if the backend server depends on the host name.

    This StackOverflow thread shows how to set that up for a few different reverse proxies.
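
    As one illustration (not taken from that thread), nginx's stream module can do this kind of SNI-based passthrough. The host names and backend addresses below are placeholders, and nginx needs to be built with the ssl_preread module:

      # Routes TLS connections by SNI without terminating TLS on the VPS
      # (requires ngx_stream_ssl_preread_module).
      stream {
          map $ssl_preread_server_name $backend {
              cloud.example.com    10.0.0.2:443;
              media.example.com    10.0.0.3:443;
              default              10.0.0.2:443;
          }

          server {
              listen 443;
              ssl_preread on;
              proxy_pass $backend;
          }
      }

    The VPS only ever forwards the encrypted stream to whichever internal peer matches, so the private keys stay on your own hardware.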


  • Perhyte@lemmy.world to Reddit@lemmy.ml · Wait. Why is Reddit losing so much money? (4 months ago)

    And, interestingly, they lost $91 million last year. If the CEO had instead earned $100 million last year (his actual compensation was reportedly around $193 million), the company would have made a multi-million dollar profit (if only just). If it had been $10 million (still way overpaid for any single person, I’d argue), they’d be nearing the hundred-million-per-year profit scale.

    I’ll never understand companies paying their CEOs hundreds of millions while they’re losing money hand over fist…


  • If this is something you run into often, it’s likely still only for a limited number of servers? ssh and scp both respect ~/.ssh/config, and I suspect (but haven’t tested) that sftp does too. If you add something like this to that file:

    Host host1 host2
      Port 8080
    

    then SSH connections to hosts named in that first line will use port 8080 by default and you can leave off the -p/-P when contacting those hosts. You can add multiple such sections if you have other hosts that require different ports, of course.
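
    To check that the file is actually being picked up for a given host (host1 here is just the placeholder name from above), OpenSSH can print the settings it would use:

      # Prints the resolved client configuration; should include "port 8080"
      ssh -G host1 | grep -i '^port'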






  • There are FOSS licenses (notably the GPL) that say that if you do resell (or otherwise redistribute) the software, you have to do so only under the same terms. (That is, you can’t sell a proprietary fork. But you could sell a fork under FOSS terms.) But none that say “no selling.”

    For many companies (especially large ones), the GPL and similar copyleft licenses may as well mean “no selling”, because they won’t go near such code for anything that’s incorporated into their own software products. Which is why some projects use such a license but offer an “or pay us for a commercial license” alternative.



  • I have a similar setup.

    Getting the DNS to return the right addresses is easy enough: you just set your records for the subdomain * instead of for a specific subdomain, and then any subdomain that’s not explicitly configured will default to using the records for *.
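
    As a sketch in zone-file terms (the domain and address are placeholders; most DNS providers’ web UIs just let you create a record with * as the name):

      *.example.com.    300    IN    A    203.0.113.10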

    Assuming you want to use Let’s Encrypt (or another ACME CA) you’ll probably want to make sure you use an ACME client that supports your DNS provider’s API (or switch DNS provider to one that has an API your client supports). That way you can get wildcard TLS certificates (so individual subdomains won’t still leak via Certificate Transparency logs). Configure your ACME client to use the Let’s Encrypt staging server until you see a wildcard certificate on your domains.

    Some other stuff you’ll probably want:

    • A reverse proxy to handle requests for those subdomains. I use Caddy, but basically any reverse proxy will do. The reason I like Caddy is that it has a built-in ACME client as well as a bunch of plugins for DNS providers including my preferred one. It’s a bit tricky to set this up with wildcard certificates (by default it likes to request individual subdomain certificates), but I got it working and it’s been running very smoothly since.
    • To put a login screen in front of each service, I’ve configured Caddy to only let visitors through to the real pages (or to the error page, for unconfigured subdomains) if Authelia agrees. A rough sketch of both pieces follows this list.
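
    Roughly, the result is a Caddyfile along these lines. This is a simplified sketch rather than my actual config: the domain, the upstream addresses, the Cloudflare DNS plugin, and the Authelia /api/verify endpoint are stand-ins you’d swap for your own provider and setup:

      *.example.com {
          # Wildcard certificate via the ACME DNS-01 challenge;
          # "cloudflare" stands in for whichever DNS-provider plugin you use.
          tls {
              dns cloudflare {env.CLOUDFLARE_API_TOKEN}
          }

          @service host service.example.com
          handle @service {
              # Only pass the request through if Authelia says the visitor is logged in.
              forward_auth authelia:9091 {
                  uri /api/verify?rd=https://auth.example.com/
                  copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
              }
              reverse_proxy 10.0.0.5:8080
          }

          # Any subdomain without an explicit handler gets an error page.
          handle {
              respond "Not found" 404
          }
      }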





  • Technically DNS will let you look up a host name from an IP address, but the catch is that it might not work: it’s not automatically configured. And even if it is configured you might not get all of the host names pointing at that address.

    Very many webserver operators don’t bother adding the server’s host name to reverse DNS. For example, lemmy.world’s IP address does not map to any host name in reverse DNS, and google.com’s IP address maps to some completely different name for me, with no mention of Google in the returned name.
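
    You can check this yourself with a couple of lookups (the address in the second command is just a placeholder; substitute whatever the first one returns):

      # Forward lookup: host name -> address
      dig +short lemmy.world A
      # Reverse (PTR) lookup: address -> host name; this may return nothing,
      # or a name with no obvious connection to the site you looked up
      dig +short -x 203.0.113.10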

    Also, many websites can be served from the same IP address, especially if they are hosted in the cloud. You are correct that someone snooping on the connection would still see the IP address, but if that points them at something like a webhosting company or a CDN (or some other server hosting many different sites) it still doesn’t really tell them which specific site is being accessed.

    But yes, if the site you’re accessing is the only one hosted on that server then the snoop could potentially guess the host name. But even then: how would they know that’s the only site hosted there? If some site they’ve never even heard of uses the same IP address they would never know.


  • “Without a VPN every host you connect to can approximate your location down to a few miles.”

    I just tried a few geo-IP lookups of my current IP address, and they all point to a location that (as the bird flies) is almost exactly 100 miles from my actual location. This is despite the ISP I’m using being headquartered in my current city, but maybe they have some infrastructure there?

    On mobile data I instead get a location 90 miles away, and if I look up the IP address of another machine I know the exact location of, the result is 60 miles off.

    60-100 miles is a pretty generous definition of “a few”.
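
    For anyone who wants to check their own address: ipinfo.io is one of several public services that will tell you where the geo-IP databases place an IP (the address below is just a placeholder):

      # Ask a public geo-IP service where it places an address
      curl -s https://ipinfo.io/203.0.113.10/json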


  • Perhyte@lemmy.world to Lemmy@lemmy.ml · Lemmy 0.19 Breaking Changes (9 months ago)

    There’s a bit more to it than captured in the summary, which is why it’s just a summary of the spec and not the actual spec.

    From a bit further down on that page:

    4. Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

    Lemmy is still in major version zero, so it can make breaking changes without incrementing the major version and still be in compliance with the spec. This way, projects won’t have their first “real” version be something like v123.0.0.

    Lemmy still being v0.x also serves as kind of a warning to app developers that changes like this may be made at any time.



  • Perhyte@lemmy.world to Piracy@lemmy.ml · Piracy > resellers (11 months ago)

    Many piracy sites run ads though, don’t they? Unless everyone visiting runs ad blockers (unlikely), the people running those sites are making at least some money. Presumably it at least covers the cost of running the sites.

    It’s probably just as the comment you replied to said: “stuff bought with stolen credit cards (and resold on those sites) actually costs us money, as opposed to piracy which merely ‘costs’ us money”.


  • You produce a hundred 24-core CPUs, then you test them rigorously. You discover that 30 work perfectly and sell them as the 24-core model. 30 have between one and eight defective cores, so you block access to those cores and sell them as the 16-core model. Rinse and repeat until you reach the minimum number of cores for a saleable CPU.

    Except the ratios of consumer demand do not always match up neatly with the production ratios. IIRC there have been cases where they’ve overproduced the top model but expected not to be able to sell them all at the price they were asking for that model, and chose to artificially “cripple” some of those and sell them as a more limited model. An alternative sales strategy would have been to lower the price of the top model to increase demand for it, of course, but that may not always be the most profitable thing to do.