• 1 Post
  • 40 Comments
Joined 1 year ago
Cake day: June 17th, 2023


  • This was my core point. I don’t consider a business raising prices or gating features as “enshittification” when those features are a direct result of increased costs. Making stickers, custom emojis, and other things that cost Discord nothing to provide paid? That’s enshittification. But if the feature itself costs the business actual money to provide, does everyone just expect them to eat that cost forever, in many cases for absolutely no revenue from those users?

    Calling out businesses for not giving away stuff that costs them money just doesn’t make sense to me. Why is it expected of Discord to pay to store all your large files? A lot of “freemium” services like Gmail recoup some of that money by mining your email for data they can sell to advertisers, or by eating the cost to lock you into an ecosystem where you’ll spend money. Storing files on Discord is neither of those things.

    Don’t get me wrong, a lot of services are enshittifying, making their products worse so you spend more money with them; but adjusting your quotas and pricing to reflect your real-world cost of business is not that. To frame it as though you are entitled to free compute and resources from companies that don’t owe you anything comes off as just that: entitled. The cloud isn’t free. If you want to use a service, you should pay for it if you can.
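
    To put a rough number on it, here is the kind of back-of-envelope math I mean. Every figure below is my own assumption for illustration, not Discord’s actual pricing or user count:

    ```typescript
    // Back-of-envelope estimate of what free file hosting costs a service.
    // All figures are illustrative assumptions, not Discord's real numbers.
    const storagePricePerGbMonth = 0.023; // assumed S3-standard-like rate, USD
    const freeUsers = 100_000_000;        // assumed number of free-tier users
    const avgGbStoredPerUser = 0.5;       // assumed average stored per user, GB

    const monthlyCost = freeUsers * avgGbStoredPerUser * storagePricePerGbMonth;
    console.log(`~$${(monthlyCost / 1_000_000).toFixed(2)}M/month just in storage`);
    // => ~$1.15M/month, before bandwidth, redundancy, or moderation costs
    ```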




  • You don’t need to exercise your stock options to access their value. It’s common practice to take out loans against them, which lets you access the money effectively tax-free, paying interest on the loan instead of tax on a sale. This is (again) a fairly commonplace practice used to make collecting tax difficult, and it lets executives argue to regulators that they aren’t actually being paid that much, since it’s “just options” they would never sell off. That’s why the C-suite has such a “burn everything to the ground, as long as our stock price goes up” mentality: if the price doesn’t go up, they have to start worrying about the interest on their loans, because they have fairly low liquidity (percentage-wise).
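
    A minimal sketch of the borrow-versus-sell math, with made-up round numbers (the rates are illustrative, not any real tax code or loan terms):

    ```typescript
    // Compare selling stock (taxed once) vs. borrowing against it (interest only).
    // All rates here are illustrative assumptions, not real tax or lending terms.
    const position = 10_000_000;   // assumed value of vested equity, USD
    const capitalGainsRate = 0.20; // assumed long-term capital gains rate
    const loanRate = 0.04;         // assumed annual interest on an equity-backed loan

    const costOfSelling = position * capitalGainsRate; // $2,000,000, paid once
    const annualInterest = position * loanRate;        // $400,000 per year

    console.log(`sell:   $${costOfSelling.toLocaleString()} in tax, once`);
    console.log(`borrow: $${annualInterest.toLocaleString()}/year in interest`);
    // While the stock appreciates faster than interest accrues, borrowing stays
    // cheaper, and no taxable sale ever happens. If the price falls, the
    // collateral shrinks and the interest starts to hurt, hence the mentality.
    ```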







  • pup_atlas@pawb.social to Linux@lemmy.ml · What if I paid for all my free software?

    If that were solely true, there would be a lot more competition in the field right now. Amazon (and, to a much lesser extent, the other two big names, GCP and Azure) is so massive not because of raw scaling power; plenty of other companies like DigitalOcean or OVH can scale too. It’s because the integrations between their products are so seamless. Most of that functionality has a foundation in FOSS software that they’ve built on top of.


  • They claim it in the article, and in a few other publications, but I haven’t seen anything from Sunbird that explicitly confirms this is the case, including on their website. They also make claims on their website that conflict with that architecture; I don’t believe it would be possible to E2E encrypt messages the way they claim if a Mac Mini somewhere were relaying them. I kinda wonder if the Mac Mini claim is an assumption that everyone just ran with, without confirming it’s true. I could be wrong though, and I’ll gladly eat my words if anyone has a primary source to cite, but that architecture and business model just don’t appear to be compatible with their claims.


  • In the article it mentions that the service is run by Sunbird. Just from reading their FAQ, it doesn’t actually sound like they are MITM’ing messages via some Mac server somewhere. It sounds more plausible to me that they are doing all the magic on-device. What tips me off is that they specifically mention this won’t work on multiple phones at the same time; a server-side relay wouldn’t have that limitation.

    What I suspect is happening is that the phone itself is spoofing an actual iPhone and connecting to Apple’s servers as if it were one. Normally you wouldn’t be able to do this: Apple sells the phones, so they know every serial number that should be able to access iMessage, and they can block anything that doesn’t report as a real iPhone. What I think may be happening is that Sunbird is buying up pallets of dead, old, or otherwise unusable iPhones for pennies on the dollar, and using those serial numbers to let another device (like the Nothing Phone) pose as a real iPhone directly.

    This would make sense with their business model; according to their FAQ, they have “no reason to charge money” for their product yet. Buying access to iMessage for a few bucks upfront, with no ongoing cost, matches what they are claiming, and it would be extremely hard for Apple to detect on their end, since the devices would appear to be all sorts of models, bought at different times, in different places, and signed in by real people.

    I want to reiterate that this is pure speculation on my part; it’s just a theory. It would mean that, in theory, chats could (and would) be E2E encrypted from sender to receiver, but ultimately it’s still Nothing/Sunbird’s app, so they could be doing anything with the messages on-device.
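
    To make the theory concrete, here is a rough sketch of the flow I’m imagining. Every type and function in it is hypothetical; none of this is a real Apple or Sunbird API, it’s just the shape of the scheme:

    ```typescript
    // Hypothetical sketch of the serial-number-reuse theory above. None of
    // these names correspond to a real Apple or Sunbird API; pure illustration.
    interface HarvestedIdentity {
      serialNumber: string; // from a dead or recycled iPhone bought in bulk
      model: string;        // varies across the pallet ("iPhone 7", "iPhone 8", ...)
    }

    // Stand-in for identities pulled off scrapped devices (made-up values).
    const identityPool: HarvestedIdentity[] = [
      { serialNumber: "PLACEHOLDER-0001", model: "iPhone 7" },
      { serialNumber: "PLACEHOLDER-0002", model: "iPhone 8" },
    ];

    // Stand-in for whatever registration handshake the real service performs.
    function registerWithMessagingService(id: HarvestedIdentity, user: string): string {
      // To the service this looks like an ordinary iPhone signing in: varied
      // models, varied purchase dates, a real person's account behind it.
      return `session-for-${user}-as-${id.serialNumber}`;
    }

    function onboardAndroidUser(user: string): string {
      const identity = identityPool.pop(); // one-time hardware cost, no ongoing fee
      if (!identity) throw new Error("out of harvested identities");
      return registerWithMessagingService(identity, user);
    }

    console.log(onboardAndroidUser("nothing-phone-user-1"));
    ```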




  • Spending less than you earn isn’t a realistic prospect for A LOT of people. In many areas the cost of living has increased so dramatically that even those pinching pennies and eating simple, cheap meals like chicken and rice every night are feeling it. The inflation numbers the government publishes are, at this point, a straight-up fabrication with no relation to how the economy is actually functioning. In terms of the actual prices real people pay, groceries have doubled or tripled across the board since 2020, not to mention how outrageous rent has become. Unless you are also doubling or tripling your income every few years, it’s easy to see where all that money is going.
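
    Just to spell out the arithmetic behind “doubling or tripling since 2020” (the 2x and 3x multipliers are the claim above, not official CPI figures):

    ```typescript
    // What compounding annual rate doubles or triples prices over ~4 years?
    // The 2x/3x multipliers come from the claim above, not from official CPI.
    const years = 4; // roughly 2020 to 2024

    for (const multiplier of [2, 3]) {
      const annualRate = Math.pow(multiplier, 1 / years) - 1;
      console.log(`${multiplier}x over ${years} years = ${(annualRate * 100).toFixed(1)}% per year`);
    }
    // => 2x requires ~18.9%/year; 3x requires ~31.6%/year, compounding.
    // Any income growing slower than that loses purchasing power every year.
    ```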



  • I would love to switch, and I tried the other day, but I discovered that Firefox still doesn’t support integrated WebAuthn authenticators (i.e. using Touch ID in lieu of a YubiKey). That is unfortunately a non-starter for me, as I use that technology everywhere, and I’m not intentionally weakening my security posture to switch. I’m honestly really surprised by the disparity, as the feature has been generally available elsewhere for years. I’m a developer, so maybe I’ll take a crack at implementing it myself sometime, but it’s a big enough deal that I genuinely can’t switch yet :(
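
    For reference, this is the feature I mean: the standard WebAuthn registration call that asks the browser for a built-in (platform) authenticator such as Touch ID. The challenge and rp/user values below are placeholders; in a real flow the server generates the challenge:

    ```typescript
    // Standard WebAuthn registration requesting a *platform* authenticator
    // (Touch ID, Windows Hello) rather than a roaming key like a YubiKey.
    // The challenge and rp/user values are placeholders for illustration;
    // a real relying party must generate the challenge server-side.
    const credential = await navigator.credentials.create({
      publicKey: {
        challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder
        rp: { name: "Example", id: "example.com" },
        user: {
          id: new TextEncoder().encode("user-123"),
          name: "user@example.com",
          displayName: "Example User",
        },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
        authenticatorSelection: {
          authenticatorAttachment: "platform", // the built-in authenticator
          userVerification: "required",        // i.e. the Touch ID prompt
        },
      },
    });
    ```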


  • I’m out and about today, so apologies if my responses don’t contain the level of detail I’d like. As for the law being collective morality: all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs. Whether or not you agree with that strategy, that is how I believe the current system is meant to work in an ideal sense (even if it works differently in practice); that’s what it is designed to protect, from my perspective.

    As for anti-AI artists, let me pose a situation to illustrate my perspective. As a prerequisite: a large part of any lawsuit, and of the ability to advocate for a law, is standing, the idea that you personally, or a group you represent, has been directly and tangibly harmed by the thing you are trying to restrict. Here is the situation:

    I am a furry, and a LARGE part of the fandom is built on art and artists. A core furry experience is commissioning art of your character from other artists. It’s commonplace for these artists to have a very specific, identifiable signature style, so much so that it is trivial for me and other furs to identify an artist from their work alone at a glance. Many of these artists have shifted to making their living full-time off of creating art. With the advent of some new generative models, it is now possible to train a model exclusively on one single artist’s style, and generate art indistinguishable from the real thing without ever contacting them. This puts their livelihood directly at risk, and it also muddies the waters around subject matter and what they support. Without laws regulating training, this could take away their livelihood, or even give a very convincing, hard-to-disprove impression that they support things they don’t, like art involving political parties or illegal activities, which I have already seen happen. This almost approaches defamation, in my opinion.

    One argument you could make is that this is similar to the invention of photography, which directly threatened the work of painters. And while there are some comparisons you can draw from that situation, photography didn’t replace painters’ work verbatim; it merely provided an alternative that filled a similar role. This situation is distinct because in many cases it’s not possible to tell, or at least not immediately apparent, which pieces are authentic and which are not. That is a VERY large problem the law needs to solve as soon as possible.

    Further, I believe the same or similar problems exist with LLMs as in the generative image situation above. Sure, with enough training those issues are lessened in impact, but where is the line between what is okay and what isn’t? Ultimately the models themselves don’t contain any copyrighted content, but they (by design) combine related ideas and patterns found in the training data in a way that will always approximate it, depending on the depth of that data. While overfitting is considered a defect in the industry, it’s still a real possibility, and until there is some sort of regulation establishing the fitness of commercially available LLMs, I can envision management cutting training short once a model is “good enough”, leaving overfitting issues in place.
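
    As a toy illustration of the overfitting failure mode I mean (entirely synthetic data): a model that memorizes looks perfect on what it was trained on and falls apart on anything held out, which is exactly what a “good enough” training cutoff can hide.

    ```typescript
    // Toy illustration of overfitting: a "model" that memorizes its training
    // pairs scores perfectly on them and fails on held-out data. All data is
    // synthetic; this is a concept sketch, not a real evaluation harness.
    const train: Array<[string, string]> = [
      ["prompt A", "output A"],
      ["prompt B", "output B"],
    ];
    const heldOut: Array<[string, string]> = [["prompt C", "output C"]];

    const memorized = new Map(train); // "training" here is pure memorization

    function accuracy(data: Array<[string, string]>): number {
      const hits = data.filter(([x, y]) => memorized.get(x) === y).length;
      return hits / data.length;
    }

    console.log(accuracy(train));   // 1.0: looks "good enough" to ship
    console.log(accuracy(heldOut)); // 0.0: the gap is the overfitting
    // A real fitness check has to compare training vs. held-out performance;
    // judging by training metrics alone hides exactly this failure mode.
    ```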

    Lastly, with respect, I’d like to push back on both the notion that I want to ban AI or LLMs, and the notion that I’m not educated enough on the subject to adequately debate regulating it. Both are untrue. I’m very much in favor of developing the technology and exploring all its applications. It’s revolutionary, and worthy of the research attention it’s getting. I work on a variety of models across the AI and LLM space professionally, and I’ve seen how versatile the technology is. That said, I have also seen how over-publicized it is. We’re clearly (from my perspective) in a bubble that will eventually pop. Products across nearly every industry claim to use AI for this and that, and while LLMs in particular are amazing and useful in a ton of applications, they certainly don’t fit all of them. I’m particularly cautious of putting new models in charge of dangerous or risky processes before we develop adequate metrics, regulation, and guardrails. To summarize my position: I’m very excited to keep developing these models, but I want to publicly push the idea that they are not a silver bullet, and that we need to develop legal frameworks for protecting people now, rather than later.


  • The law is, in an ideal world, the reflection of our collective morality. It is supposed to dictate what is “right” and “wrong”. That said, I see too many folks believing it works the other way too: that what is illegal must be wrong, and what is legal must be okay. That is decisively not the case.

    In AI terms, I do believe some of the things that LLMs and the companies behind them are doing now may turn out to be illegal under certain interpretations of the law. But beyond that, I think a lot of what these companies are doing to train their models is seen as immoral by many people (me included), and the law should be changed to reflect that.

    Sure, that may mean that “what these companies are doing now is legal”, but that doesn’t mean we don’t have the right to be upset about it. Tons of things large corporations have done were fully legal until public outcry forced the government to legislate against them. The first step toward many laws being passed is the public demonstrating a vested interest. I believe the same is happening here.