That’s a good question. From what I understand, these large data companies pretrain on a largely unfiltered dataset and then introduce the bias afterward, during fine-tuning. The censorship we’re talking about isn’t really trimming good input vs. bad input data; it’s “alignment,” which is intentionally layered on after pretraining.
Eric Hartford, the guy behind the uncensored WizardLM variants (the LLM I use for uncensored work), wrote a blog post about how he was able to un-align LLaMA-based models over here: https://erichartford.com/uncensored-models
You probably could trim the input data to censor the output down the line, but I’m assuming data companies don’t bother because the resulting model is less useful in a general sense, and filtering pretraining data at that scale is a lot more laborious.
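Mechanically, the dataset-side filtering Hartford describes is pretty simple: take an instruct dataset and drop the responses that contain refusal/alignment boilerplate before fine-tuning on what’s left. Here’s a rough Python sketch of that idea; the phrase list and file names are made up for illustration, his actual scripts are linked from the blog post:

```python
# Sketch of refusal-filtering an instruct dataset (JSONL, one example
# per line with an "output" field). Drops examples whose response
# looks like canned alignment boilerplate.
import json

# Hypothetical marker phrases; a real run would use a much longer list.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot provide",
    "i'm sorry, but",
    "it is not appropriate",
]

def is_aligned(example: dict) -> bool:
    """True if the response contains refusal/alignment boilerplate."""
    text = example.get("output", "").lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

with open("instruct_dataset.jsonl") as src, \
     open("filtered_dataset.jsonl", "w") as dst:
    for line in src:
        example = json.loads(line)
        if not is_aligned(example):
            dst.write(json.dumps(example) + "\n")
```

Then you fine-tune the base model on the filtered set, so it never learns the refusal behavior in the first place.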
Immediately, probably not. Privacy is one of those things where when you really need it, you can’t get it… unless you already have it.
Also, it’s not like you know the motivations of all 8 billion people on Earth. If you’re out in the open, it just makes it easy for the lazy to find you.
I can get behind using a VPN, a phone running GrapheneOS or CalyxOS, an ad blocker, a user-agent switcher, LibreWolf, and so on… you give up some convenience for privacy, but it’s not overbearing. Tor, however, isn’t exactly useful as a daily driver.
So is there a visible benefit? Hopefully not. If you’re doing it right, you’ll just live a normal life and not be bothered.