• 0 Posts
  • 118 Comments
Joined 1 year ago
Cake day: June 22nd, 2023


  • Was about to post this; it works well for me.

    In my case I’m storing the DB on my Google Drive for now, but Keepass2Android supports many different systems, including “generic” things like WebDAV, so really anything should work.

    While Keepass2Android is integrated with the syncing and will always check for conflicts (i.e., check for the latest version before saving), the same isn’t necessarily true for the desktop client. But since I rarely edit from both devices at the same time, anything that syncs to the desktop in a somewhat real-time fashion should work just fine.

    And for the few (long-ago) cases where updates were overwritten, the “previous version” feature of Google Drive was a godsend! (And KeepassX can simply merge the old overwritten version into the current one, so you end up with the correct merge.)
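
    For reference, such a merge can also be done from the command line; a minimal sketch, assuming KeePassXC’s keepassxc-cli and hypothetical file names:

    # Hypothetical files: current.kdbx is the live database,
    # restored-old.kdbx is the "previous version" downloaded from Google Drive.
    # Merges the second database into the first one (prompts for both passwords).
    $ keepassxc-cli merge current.kdbx restored-old.kdbx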


  • I think the difference is at what level:

    • don’t implement your own storage redundancy system at the kernel level with a small team in a closed-source fashion, because that’s the kind of thing that needs many eyes, lots of experience and many millions of hours of real-world usage to fully debug and make sure it works.
    • do build your own system by combining pre-existing technologies that are built by experienced teams and tested/vetted by wide/popular usage.

    I feel OP’s critique has some truth to it. I personally would rather stay with ZFS’s raidz, exactly because of its open nature (yes, it too has bugs, nothing is perfect).
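
    As a minimal illustration of the “combine vetted building blocks” approach, a raidz sketch with placeholder device names:

    # Create a raidz1 pool named "tank" from three disks (device names are placeholders);
    # ZFS handles parity, checksumming and self-healing on scrub.
    $ zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
    # Verify the layout and health of the pool.
    $ zpool status tank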


  • Do you have any devices on your local network where the firmware hasn’t been updated in the last 12 months? The answer is surprisingly frequently yes, because “smart device” companies are laughably bad about device security. My intercom runs some ancient Linux kernel, my frigging washing machine could be connected to WiFi, and the box that controls my roller shutters hasn’t gotten an update since 2018.

    Not everyone has devices like that, and one could isolate them in VLANs and take other measures, but in this day and age “my local home network is 100% secure” is far from a safe assumption.

    Heck, even your router might be vulnerable…

    Adding HTTPS is just another layer in your defense in depth. How many layers you are willing to put up with is up to you, but it’s definitely not overkill.
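
    As one hedged example of what that extra layer can look like: a self-signed certificate for a purely internal service (the hostname and file names here are made up):

    # Generate a self-signed certificate for a hypothetical internal hostname;
    # valid for one year, key and cert written to the current directory.
    $ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
          -subj "/CN=nas.lan" -keyout nas.lan.key -out nas.lan.crt

    Whatever serves the web UI (or a reverse proxy in front of it) would then be pointed at that key/cert pair.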


  • They are in fact the same image, as you can verify by comparing their digests:

    $ docker pull ghcr.io/linuxserver/plex
    Using default tag: latest
    latest: Pulling from linuxserver/plex
    Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
    Status: Image is up to date for ghcr.io/linuxserver/plex:latest
    ghcr.io/linuxserver/plex:latest
    $ docker pull lscr.io/linuxserver/plex
    Using default tag: latest
    latest: Pulling from linuxserver/plex
    Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
    Status: Image is up to date for lscr.io/linuxserver/plex:latest
    lscr.io/linuxserver/plex:latest
    $
    
    

    See how both images have the same digest, sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144. Since the digest uniquely identifies the exact content of an image, that guarantees that those images are in fact byte-for-byte identical.
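
    If both tags are already pulled locally, you can also compare the digests without hitting the registries again; a small sketch using docker image inspect:

    # Print the repo digests recorded for each locally pulled tag;
    # the registry prefixes differ, but the sha256 part should be identical.
    $ docker image inspect --format '{{.RepoDigests}}' ghcr.io/linuxserver/plex:latest
    $ docker image inspect --format '{{.RepoDigests}}' lscr.io/linuxserver/plex:latest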




  • The EULA is just standard terms like don’t try to circumvent the license requirement, if you buy a license don’t share it with other people, some warranty and liability stuff, etc.

    Yes, I know. I actually read it (which is rare) and it’s mostly sensible stuff. The “no reverse engineering” clause just felt weird in something that claims to be “mostly open source”.

    In the end I find it slightly misleading to call this open-core when the app with just the non-commercial features can’t be built fully from the published source.

    They are not necessary for basic core functionality, but it doesn’t work without them, as the license requirement could then be disabled easily, as I mentioned before.

    I don’t quite understand this argument. If I can build a development version, I can run any and all code in the repo (while providing an existing xpipe installation), and with enough criminal intent I could somehow ship that, so how exactly does this requirement prevent it?

    In other words: if the only way to access the commercial features without a license is by doing something illegal then … that’s not really adding much burden, is it?

    In the end I’m probably just one of the open-source proponents who don’t like that, and that’s fine. Not everyone needs to agree with everyone; there’s a lot of space here where reasonable minds can disagree. I just think that claiming “the main application is open source” when it can’t be built purely from the source is a bit misleading.


  • This looks really interesting.

    I don’t mind the commercialization at all and think it’s actually a good sign for an open-source project to have a monetization strategy so it can stick around.

    But why do I have to agree to a EULA for an Apache-licensed piece of software? I understand that for the commercial features that might be necessary, but in that case could we get a separate installer for “this is all Apache-licensed, no need for a EULA”?

    Additionally, the contribution file mentions that “some components are only included in the release version and not in this repository.” What are these components? Are they necessary for the basic core functionality?


  • The issue is that, according to the spec, the two DNS servers provided by DHCP are equivalent. While most clients favor the first one as the default, that’s not universally the case, and when and how a client switches to the secondary varies by client (and can effectively appear random). So you won’t be able to know for sure which clients are using your DNS server, especially after it was unreachable for a while for whatever reason. Personally I’ve “just” gotten a second Pi to run redundant copies of PiHole, but only having a single DNS server is usually fine as well.
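
    To make the redundant setup concrete, a minimal sketch of handing both instances to DHCP clients, assuming dnsmasq is the DHCP server and using made-up addresses:

    $ cat /etc/dnsmasq.d/dns.conf
    # Hand both PiHole instances to DHCP clients (addresses are placeholders);
    # clients may use either one, in whatever order they prefer.
    dhcp-option=option:dns-server,192.168.1.10,192.168.1.11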




  • Note that just because everything is digital doesn’t mean something like that isn’t necessary: if you depend on your service provider to keep all of your records, then you will be out of luck once they … stop liking you, go out of business, have a technical malfunction, decide they no longer want to keep any records older than X years, …

    So even in an all-digital world I’d still keep all the PDF artifacts in something like that.

    And I also second the suggestion of paperless-ngx (I haven’t been using it for very long yet, but it’s working great so far).


  • Ask yourself what your “job” in the homelab should be: do you want to manage what apps are available, or do you want to be a DB admin? Because if you are sharing DB containers between multiple applications, then you’ve basically signed up to closely read the release notes of every release of each involved app, watching for changes like this.

    Treating “immich+postgres+redis+…” as a single unit that you deploy and upgrade together makes everything simpler, at the (probably small) cost of requiring some more resources. But even on a 4 GB RAM RPi that’s unlikely to become the primary issue any time soon.
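
    In practice that can be as simple as one compose file per app stack (the directory layout here is hypothetical), so an upgrade always touches the stack as a whole:

    # Hypothetical layout: ~/stacks/immich/docker-compose.yml defines immich,
    # its own postgres and its own redis as one stack.
    $ cd ~/stacks/immich
    $ docker compose pull      # fetch new images for every service in the stack
    $ docker compose up -d     # recreate the whole stack as a single unit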


  • There are many different ways, with different tradeoffs. For example, for my homelab server I’ve set it up so that I have to enter it on every boot, which isn’t often. But I’ve also set it up to run an SSH server so I can enter it remotely.

    On my work laptop I simply have to enter it on each boot, but it mostly just goes into suspend.

    One could also keep the key on a USB stick (or better, use a YubiKey) and unplug it whenever that’s reasonable.
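
    For the remote-unlock variant, a rough sketch of what that can look like on a Debian/Ubuntu system with LUKS, using the dropbear-initramfs package (exact paths may differ between versions):

    # Add a small SSH server to the initramfs so the LUKS passphrase
    # can be entered remotely at boot.
    $ sudo apt install dropbear-initramfs
    # Authorize your SSH key for the initramfs environment
    # (the authorized_keys location varies by distro/version).
    $ echo "ssh-ed25519 AAAA... user@laptop" | sudo tee -a /etc/dropbear/initramfs/authorized_keys
    $ sudo update-initramfs -u
    # At boot, SSH into the machine and run cryptroot-unlock to enter the passphrase.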


  • Just FYI: the often-cited NIST SP 800-88 standard no longer recommends/requires more than a single pass of a fixed pattern to clear magnetic media. See https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-88r1.pdf (“Guidelines for Media Sanitization”) for the full text. In Appendix A it states:

    Overwrite media by using organizationally approved software and perform verification on the overwritten data. The Clear pattern should be at least a single write pass with a fixed data value, such as all zeros. Multiple write passes or more complex values may optionally be used.

    This is the standard that pretty much birthed the “multiple passes” idea, but modern HDD technology has made that essentially unnecessary (unless you are combating nation-state-sponsored attackers, in which case you should be physically destroying anything anyway, preferably using some high-heat method).
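
    Such a single fixed-value pass can be done with standard tools; a sketch with a placeholder device name (obviously destructive, so double-check the target):

    # One pass of zeros over the whole disk (placeholder device name).
    $ sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
    # Spot-check that the first MiB reads back as zeros.
    $ sudo cmp -n 1048576 /dev/zero /dev/sdX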



  • That saying also means something else (and imo more importantly): RAID doesn’t protect against accidental or malicious deletion/modification. It only protects against data loss due to hardware faults.

    If you delete stuff or overwrite it then RAID will dutifully duplicate/mirror/parity-check that action, but doesn’t let you go back in time.

    That’s the same reason why just automatically syncing the data to another target also isn’t the same as a full backup.
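
    That’s also where snapshot-based backup tools come in, since they let you go back in time; a small sketch assuming restic and made-up paths:

    # One-time initialisation of a backup repository (path is a placeholder).
    $ restic -r /mnt/backupdrive/restic-repo init
    # Each run creates a new snapshot instead of overwriting the previous state,
    # so accidental deletions can be recovered from an older snapshot.
    $ restic -r /mnt/backupdrive/restic-repo backup /srv/data
    $ restic -r /mnt/backupdrive/restic-repo snapshots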