Before anything else, I would like to say that I admit systemd has brought great change to GNU/Linux. sysvinit wasn’t the best, and custom scripts for every distro are a pain I’d rather not have.
With that said: Poettering now works for Microsoft, systemd has basically taken over all of the common/popular distributions (as for the argument that systemd “makes things easier for developers”, disclaimer: I don’t know, I’m not a developer), and this has led to a rampant monopolisation of the init system.
Memes aside, this has very real consequences. If you don’t want another CentOS-style “oof, sorry, off to testing” debacle happening with your init system, you might want to look at the more “advanced” distributions that let you choose the init system.
I am well aware that systemd works well for the most part, and that gamers and most other people likely don’t care - which is fine, at least for now. I do expect to see a massive shift in sentiment if something ever happens to systemd (not that I’d like that to happen, but I no longer trust Red Hat), but I suppose we’ll deal with that when we get there.
My sentiments are well articulated in this recent post on the Devuan forum: https://dev1galaxy.org/viewtopic.php?id=5826
Cheers!
Remember when Google’s DNS server address was hard-coded in systemd-resolved? Good times, what a laugh we all had.
Systemd-networkd (not systemd the init system) defaulted to the Google DNS servers only when no DNS servers were available from anywhere else: none configured by the distribution, none set by the administrator, and none announced by the network (e.g. via DHCP).
That is indeed a serious issue worth bringing up decades later.
The main thing that turned it into a serious issue rather than just a stupid thing to joke about was that Poettering refused (as of five years ago) to admit that it was a mistake.
Why would he? It never was an issue.
It’s just one more annoying little thing to add to the big list of items to be corrected when setting up a systemd-equipped system. More importantly, believing that it’s acceptable to just leave it there demonstrates extremely poor judgement, to a degree that makes many of us doubt the trustworthiness of the entire project. Perhaps in 2013, or whenever the decision was initially made, substantial numbers of people were sufficiently clueless to think that the possibility of inadvertently having your system quietly direct all its DNS queries to Google was better than the more obvious alternative of not doing so. But after everything that’s gone down since then, it’s quite hard to imagine why anyone would stick up for such a bizarre point of view today.
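For what it’s worth, correcting it is a two-line job once you know it’s there. A minimal sketch, assuming the compiled-in fallback lives in systemd-resolved and is controlled by the documented FallbackDNS= option in resolved.conf (the exact addresses baked into any given build vary by distribution):

  # /etc/systemd/resolved.conf
  [Resolve]
  # An explicitly empty FallbackDNS= overrides the compiled-in list,
  # disabling the fallback entirely; alternatively, list servers you trust.
  FallbackDNS=

Then restart the service so the change takes effect:

  sudo systemctl restart systemd-resolved.service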
Where are those “many of us”?
It is what the CI uses for testing. If several layers of people decide not to do their job, and you have no hardware on your network that announces the DNS servers to use (as basically everybody has), then those CI settings might leak through to the occasional user. Even then, at least the network keeps working: somebody who can’t be arsed to configure their network or to pick some semi-private distribution will probably prefer that.
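And anyone who actually cares can check in seconds where their DNS servers come from. A quick sketch, assuming systemd-resolved is in use (resolvectl is its standard client):

  # Shows global and per-link DNS configuration, including any
  # fallback servers that would be used as a last resort.
  resolvectl status

If a fallback server shows up there and you dislike it, override FallbackDNS= in resolved.conf, as noted above.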
Absolutely no issue here, nothing to see.
Were they really? Or were they told “change it if you don’t like it”? Genuine question, and it would make some difference.
But in either case I’m sure not all of them did, and failing that it all comes down to the one person (or worse, one team of people) administering the system. Badly configured networks resulting in DNS problems are not exactly rare, but that is beside the point. It’s clearly wrong no matter how uncommon the situation that makes it materially detrimental may be.