9/11 killed more in one day than mass shootings have in the last 20+ years. https://www.statista.com/statistics/811504/mass-shooting-victims-in-the-united-states-by-fatalities-and-injuries/
The drive is visible to the OS, so if they have any kind of management software in place that looks for hardware changes, it will be noticed.
I’m not sure why it would be any different from how this is treated with search engines. Both scrape massive amounts of openly available data and make it available in some form. Any training data or information that a model could potentially spit out is already available through a search engine’s index.
Just introducing them to it is probably enough. Show them different desktop environments and applications to get them used to the idea of diverse interfaces and workflows. Just knowing that alternatives exist could help them break out of the Windows monoculture later. Enable all of the cool window effects.
Dropping support after only 25 years? I can’t believe Linux is contributing to planned obsolescence.
Smaller communities aren’t necessarily a bad thing. Compared to Reddit, I rarely feel like I’m commenting into the void.
In the case of machine learning the term has sort of morphed to mean “open weights” as well as open inference/training code. To me the OSI is just being elitist and gatekeeping the term.
That isn’t necessarily true, though for now there’s no way to tell since they’ve yet to release their code. If the timeline is anything like their last paper’s, it should be out around a month after publication, which would be Nov 20th.
There have been similar papers on confusing image-classification models; I’m not sure how successful they’ve been IRL.
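To show the core idea those papers build on, here’s a toy sketch of an FGSM-style adversarial perturbation. Real attacks target deep vision models; this uses a tiny hand-made linear classifier, and all the numbers are made up for illustration, not taken from any specific paper.

```python
def predict(w, b, x):
    """Toy linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(w, x, eps):
    """Nudge each feature by eps in the direction that lowers the class-1 score."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.8, -0.5, 0.3]   # made-up weights
b = 0.1
x = [1.0, 0.2, 0.5]    # original input, classified as 1

adv = fgsm_perturb(w, x, eps=2.0)
print(predict(w, b, x), predict(w, b, adv))  # the perturbed input flips the prediction
```

The trick in the image-domain papers is the same, just with the gradient of a neural net instead of fixed weights, and with eps kept small enough that the change is invisible to humans.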
We could just drop a million people from a third nationality in there and see what happens
The scientific method.
They could have added their own repos, which is the concern here.
From what I’ve heard they’re competitive for English, but I’ve never used DeepSpeech myself. Whisper has much more community support, so it’s probably easier to use overall.
I’ve used the TP-Link ones they’re using and they’ve been pretty solid. I can’t say how they’d fare in a 24/7 setup, though, since they’re not really intended for that.
Whisper is your best bet for FOSS transcription. This is the most efficient implementation AFAIK: https://github.com/guillaumekln/faster-whisper.
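If it helps, basic usage looks roughly like this (a sketch based on the faster-whisper README; the model size, device settings, and file name are placeholders you’d swap for your own):

```python
from faster_whisper import WhisperModel

# "base" is a placeholder; "small"/"medium"/"large-v2" trade speed for accuracy.
# int8 on CPU keeps memory use low; use device="cuda" if you have a GPU.
model = WhisperModel("base", device="cpu", compute_type="int8")

# "audio.mp3" is a placeholder path to your recording
segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

Note that `transcribe` returns a generator, so transcription actually happens as you iterate over the segments.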
Middle mouse? What’s that?
Not even remotely. It requires custom firmware which often requires physical disassembly to install. From there you can install any distro, but you will continue to have many small issues and inconveniences often due to the nonstandard keyboard.
There was a Chromebook-targeted Linux distro called eupnea that could be installed without custom firmware via depthboot, but it’s dead now and the original repo got deleted after the dev got hacked, so the build scripts don’t work anymore.
There are two ways to do it: either via depthboot (software only, no custom firmware, lots of manual OS prep, zero risk) or custom firmware (maybe physical, model-dependent, no OS prep, small risk). For custom firmware you usually have to bridge an internal jumper, unplug the battery, or build a custom cable, depending on your model.
While it is allowed, it’s not supported by Google.
I would never recommend buying a Chromebook with the intention of replacing the OS unless you’re looking for a project or you’re getting it for cheap.
As someone who has owned a Chromebook for several years, I can tell you that you shouldn’t. Hardware-wise it’s hard to beat Chromebooks at their price points, but the complete lack of control over the system is a deal breaker. I don’t have time to list all of the issues I’ve had. In many cases what would have been trivial fixes on a normal Linux system required full reinstalls on ChromeOS. Like the time I accidentally filled up the fairly modest system storage: the system refused to let me delete anything, requiring a reset just to get local file management back.
I ultimately ended up installing full Linux on it, which ended up being a whole other ordeal due to all of Google’s “security” features.
That’s basically only OpenAI, and maybe some obscure startups. Mozilla is far too old and niche to get away with that anyway.
I use okular as my primary image viewer as well. I love the middle mouse drag to zoom.
Koboldcpp should allow you to run much larger models with a bit of RAM offloading. There’s a fork that supports ROCm for AMD cards: https://github.com/YellowRoseCx/koboldcpp-rocm
Make sure to use quantized models for the best performance, Q4_K_M being the standard.
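For anyone wondering what Q4_K_M-style quantization actually buys you, here’s a toy sketch of block-wise 4-bit quantization. The real GGUF formats are more elaborate (super-blocks, separate mins, etc.); this just shows the basic idea of trading a little precision for roughly 4x less memory than fp16. The example weights are made up.

```python
def quantize_block(xs):
    """Quantize a block of floats to signed 4-bit ints in [-7, 7] plus one float scale."""
    scale = max(abs(x) for x in xs) / 7.0 or 1.0  # guard against an all-zero block
    q = [max(-7, min(7, round(x / scale))) for x in xs]
    return scale, q

def dequantize_block(scale, q):
    return [scale * qi for qi in q]

weights = [0.12, -0.53, 0.97, 0.04, -0.88, 0.31, 0.66, -0.21]
scale, q = quantize_block(weights)
restored = dequantize_block(scale, q)

# Each weight now takes 4 bits instead of 16, plus one shared scale per block.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(f"max reconstruction error: {max_err:.3f}")
```

The reconstruction error per weight is bounded by half the scale, which is why quantizing well-behaved weight blocks loses so little quality in practice.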