• 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: June 17th, 2023



  • And here’s a concern about the decentralized-but-still-centralized nature of attesters:

    From my understanding, attesting is conceptually similar to how the SSL/TLS infrastructure currently works:

    • Each ultimately-trusted attester has their own key pair (e.g. root certificate) for signing.

    • Some non-profit group or corporation collects all the public keys of these attesters and bundles them together.

    • The requesting party (web browser for TLS, web server for WEI) checks the signature sent by the other party against the public keys in the requesting party’s bundle. If it matches one of them, the other party is trusted. If it doesn’t, they are not trusted.

    This works for TLS because we have a ton of root certificates, intermediate certificates, and signing authorities. If CA Foo is prejudiced against you or your domain name, you can always go to another of the hundreds of CAs.
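    In code, the bundle check is conceptually something like this (a rough sketch with hypothetical names, using Node’s crypto module; real TLS and WEI verification would involve certificate chains and more metadata, not a flat key list):

    import { createVerify } from "node:crypto";
    
    // Hypothetical bundle of ultimately-trusted attester public keys (PEM).
    const trustedAttesterKeys: string[] = [
        /* ...attester public keys... */
    ];
    
    // The requesting party accepts the payload if its signature verifies
    // against ANY key in the bundle -- analogous to a browser checking a
    // certificate against its root CA store.
    function isAttested(payload: Buffer, signature: Buffer): boolean {
        return trustedAttesterKeys.some((publicKeyPem) => {
            const verifier = createVerify("sha256");
            verifier.update(payload);
            return verifier.verify(publicKeyPem, signature);
        });
    }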

    For WEI, there isn’t such an infrastructure in place. It’s likely that we’ll have these attesters to start with:

    • Microsoft
    • Apple
    • Google

    But hey, maybe we’ll have some intermediate attesters as well:

    • Canonical
    • RedHat
    • Mozilla
    • Brave

    Even with that list, though, it doesn’t bode well for FOSS. Who’s going to attest to the various browser forks, or to browsers running on operating systems that aren’t backed by corporations?

    Furthermore, if this is meant to verify the integrity of browser environments, what is that going to mean for devices that don’t support Secure Boot? Will they be considered unverified because the OS can’t ensure it wasn’t tampered with by the bootloader?


  • Adding another issue to the pile:

    Even if it isn’t the intent of the spec, it’s dangerous to allow websites to differentiate between unverified browsers, browsers attested to by party A, and browsers attested to by party B. Providing a mechanism for cryptographic verification opens the door for websites to enforce the use of specific browsers.

    For a corporate example:

    Suppose we have ExampleTechFirm, a huge investor in a private AI company, ShutAI. ExampleTechFirm happens to also make a web browser, Sledge. ExampleTechFirm could exert influence on ShutAI so that ShutAI adds rate limiting to all browsers that aren’t verified with ExampleTechFirm as the attester. Now, anyone who isn’t using Sledge is being given a degraded experience. Because attesting uses cryptographic signatures, you can’t bypass this user-hostile quality-of-service mechanism; you have to install Sledge.

    For a political example:

    Consider that I’m General Aladeen, the leader of the country Wadiya. I want to spy on my citizens and know what all of them are doing on their computers. I don’t want to start a revolt by making it illegal to own a computer without my spyware EyeOfAladeen, nor do I have the resources to do that.

    Instead, I enact a law that makes it illegal for companies to operate in Wadiya unless their web services refuse access to Wadiyan citizens that aren’t using a browser attested to by the “free, non-profit” Wadiyan Web Agency. Next, I have my scientists create and release renamed versions of Chromium and Firefox with EyeOfAladeen bundled in them. Those are the only two browsers that are attested to by the Wadiyan Web Agency.

    Now, all my citizens are being encouraged to unknowingly install spyware. Goal achieved!



  • Fair and respectable points, but I don’t think we’re going to see eye to eye on this. It seems like we have different priorities when it comes to reporting on issues.

    Honestly, I don’t disagree with you in thinking that the ulterior motive of the proposal is to undermine user freedom, user privacy, and/or ad blockers. Given Google’s history with Manifest V3 and using Chrome’s dominance to force vendors to adopt out-of-spec changes to web standards (passive scroll listeners come to mind), it would be burying my head in the sand to expect otherwise. My issue here is with portraying speculation and personal opinions as objective truths. Even if I agree that a locked down web is the most likely outcome, it’s just not a fact until someone working on that proposal outright says it was their intent, or it actually happens.

    That doesn’t mean I think we should ignore the Doomsday device factory until it starts creating Doomsday devices, though. Google will never outright state that its goal is to cripple ad blockers or control the web, and if that comes to pass, they’ll just rely on corporate weasel words to claim they never promised they wouldn’t. And since we can’t trust corporations to be transparent and truthful, we shouldn’t take their promises or claims at face value. You’re absolutely right about that.

    Going back to reporting about this kind of stuff, though: It’s not wrong for the original post to look past the surface-level claims, or for people to point out the corporate speak and lack of commitment. If there’s a factory labeled “Not Doomsday Devices” that pinkie promises they aren’t building Doomsday devices, I definitely would want someone to bring attention to it. I just don’t think the right way to do it is with a pitchfork-wielding mob of angry citizens who were told the factory is unquestionably building anthrax bioweapons.

    We don’t gain much from readers being told things that will worry them and piss them off. I mean—sure—there’s now more awareness about the issue. But it’s not actually all that constructive if they aren’t critically engaging with the proposal. Google and web standards committees aren’t going to listen to a bunch of angry Lemmy users reiterating the same talking points over and over. They’re just going to treat it as a brigade and block further feedback until people forget about it (which they did).

    If the topic were broached in a balanced and accurate way that refrained from drawing conclusions before providing readers with the facts, there would be fewer knee-jerk reactions. Maybe this is just me being naive, but I think it’s more likely that Google would be receptive to well-thought-out, respectful criticism than to a significant quantity of hostile accusations.

    With that being said, I will concede that I overcorrected in response to the original post. I should have written a response covering the issue in a way that I found more ideal, rather than trying to balance out the bias from the original post. My goal was to point out the ragebait title and add missing information so readers could come to their own informed conclusions, not to defend Google.



  • Did you read until the end, or was it more important to accuse me of either being stupid or a corporate shill? I have nothing against you, and I don’t see how it’s constructive to be hostile towards me.

    I said that the proposal itself does not aim to be DRM or adblock repellent, and cited the text directly from the document. It’s possible that something got lost in communication, but that wasn’t me trying to suggest that we should just blindly trust that this proposal has the users’ best interests at heart, or that motivations behind creating it could never, ever be disingenuous.

    Hell, I even made sure to edit my post to clarify how the proposal—if implemented—could be used to prevent ad blockers. The paragraphs right after the one you quoted say:

    To elaborate on the consequences of the proposal…

    Could it be used to prevent ad blocking? Yes. There are two hypothetical ways this could hurt adblock extensions:

    1. As part of the browser “environment” data, the browser could opt to send details about whether built-in ad-block is enabled, any ad-block extensions are enabled, or even if there are any extensions installed at all.

    Knowing this data and trusting it’s not fake, a website could choose to refuse to serve contents to browsers that have extensions or ad blocking software.

    2. This could lead to a walled-garden web. Browsers that don’t support the standard, or minority-usage browsers, could be prevented from accessing content.

    Websites could then require that users visit from a browser that doesn’t support adblock extensions.
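    For illustration only, here’s a sketch of what that kind of server-side gatekeeping might look like. Everything below is hypothetical (the proposal defines no such header or payload fields), but it shows how a cryptographically verified environment could be used to refuse service:

    import express from "express";
    
    // Hypothetical shape of attested environment data; not from the spec.
    interface AttestedEnvironment {
        browser: string;
        extensionsInstalled: boolean;
    }
    
    // Stub standing in for real signature verification against attester keys.
    function verifyAttestation(token: string | undefined): AttestedEnvironment | null {
        if (!token) return null; // unverified browser
        /* ...check the token’s signature, then decode its payload... */
        return JSON.parse(Buffer.from(token, "base64").toString());
    }
    
    const app = express();
    app.get("/content", (req, res) => {
        const env = verifyAttestation(req.header("Sec-WEI-Attestation"));
        if (!env || env.extensionsInstalled) {
            // Refuse or degrade service for anything unverified.
            res.status(403).send("Please use an approved browser.");
            return;
        }
        res.send("Full content.");
    });
    app.listen(8080);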


  • Given Google’s history, the assertion made by the title isn’t wrong. That doesn’t mean that it’s objective and informative, however.

    The title suggests that the intent is to create DRM for web pages and “make ad blockers near-impossible”. From an informational standpoint, it correctly captures the likely consequences should the proposal be implemented. What neither it nor the post body does is provide the explanation, information, or context needed to show why the proposal demonstrates the claim being made.

    The reader is not informed about Google’s history of trying to subvert ad blockers, nor are they shown how the proposal will lead to DRMed web pages and adblock prevention. The post is a reaction-inducing title followed by a link to a proposal and angry comments on GitHub. That’s not informative; that’s ragebait.

    Suppose I give the post the benefit of the doubt, and consider the bar for being “informative” to be simply letting people know about something. It’s still not objective. I’m not saying the OP should support Google or downplay the severity of the proposal, but they could have got the same point across without including their own prejudices:

    “Google engineers propose new web standard that would enable websites to prevent access from browsers running adblockers or website-altering extensions.”

    For the record: I agree with what this post is trying to say. I just disagree with how it’s said. Lemmy isn’t hemorrhaging ad money, and it isn’t overwhelmingly noisy. We don’t need to bring over toxic engagement tactics to generate views.


  • Oh, for sure. When bullet point number one involves advertising, they don’t make it hard to see that the underlying motivation is to assist advertising platforms somehow.

    I think this is an extremely slippery and dangerous slope to go down, and I’ve commented as such and explained how this sort of thing could end up harming users directly as well as providing ways to shut out users with adblocking software.

    But, that doesn’t change my opinion that the original post is framed in a sensationalized manner and comes across as ragebaiting and misinforming. The proposal doesn’t directly endorse or enable DRMing of web pages and their contents, and the post text does not explain how the conclusion of adblockers being killed follows from the premise of the proposal being implemented. To understand how OP came to that conclusion, I had to read the full document, read the feedback on the GitHub issues, and put myself in the shoes of someone trying to abuse it. Unfortunately, not everyone will take the time to do that.

    As an open community, we need to do better than incite anger and lead others into jumping to conclusions. Teach and explain. Help readers understand what this is all about, and then show them how these changes would negatively impact them.


  • Having thought about it for a bit, it’s possible for this proposal to be abused by authoritarian governments.

    Suppose a government—say, Wadiya—mandated that all websites allowed on the Wadiyan Internet must ensure that visitors are using a browser from a list of verified browsers. This list is provided by the Wadiyan government, and includes: Wadiya On-Line, Wadiya Explorer, and WadiyaScape Navigator. All three of those browsers are developed in cooperation with the Wadiyan government.

    Each of those browsers also happens to send a list of visited URLs to a Wadiyan government agency, and routinely scans the hard drive for material deemed “anti-social.”

    Because the attestations are cryptographically verified, citizens would not be able to fake the browser environment. They couldn’t just download Firefox and install an extension to pretend to be Wadiya Explorer; they would actually have to install the spyware browser to be able to browse websites available on the Wadiyan Internet.


  • In my other comments, I did say that I don’t trust this proposal either. I even edited the comment you’re replying to to explain how the proposal could be used in a way to hurt adblockers.

    My issue is strictly with how the original post is framed. It uses a sensationalized title, doesn’t attempt to describe the proposal, and doesn’t explain how the conclusion of “Google […] [wants] to introduce DRM for web pages” follows from the premise (the linked proposal).

    I wouldn’t be here commenting if the post had used a better title such as “Google proposing web standard for web browser verification: a slippery slope that may hurt adblockers and the open web,” summarized the proposal, and explained the potential consequences of it being implemented.


  • Frankly, I don’t trust that the end result won’t hurt users. This kind of thing, allowing browser environments to be sent to websites, is ripe for abuse and is a slippery slope to a walled garden of “approved” browsers and devices.

    That being said, the post title is misleading, and that was my whole reason to comment. It frames the proposal as a direct and intentional attack on users’ ability to locally modify the web pages served to them. I wouldn’t have said anything if the post body had made a reasonable attempt to objectively describe the proposal and explain why it would likely hurt users who install adblockers.




  • eth0p@iusearchlinux.fyi to Privacy@lemmy.ml: [Rant] I hate the modern internet

    I expect to get downvoted into oblivion for this, but there’s nothing wrong with the concept of C2PA.

    It’s basically just Git commit signing, but for images. An organization (user) signs image data (a commit) with their private key, and other users can check that the image provenance (chain of signed commits) exists and that the signing key is known to be owned by the organization (the signer’s public key is trusted). It does signing of images created using multiple assets (merge commits), too.

    All of this is opt-in, and you need a private key. No private key, no signing. You can also strip the provenance by just copying the raw pixels and saving it as a new image (copying the worktree and deleting .git).
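    As a loose sketch of that signing model (Ed25519 via Node’s crypto module; not the actual C2PA manifest format):

    import { generateKeyPairSync, sign, verify } from "node:crypto";
    
    // The creator signs the image bytes with their PRIVATE key...
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");
    const imageBytes = Buffer.from("...raw image data...");
    const signature = sign(null, imageBytes, privateKey);
    
    // ...and anyone with the matching PUBLIC key can check provenance.
    // Editing or re-encoding the bytes invalidates the signature, which is
    // why saving the raw pixels as a new image strips it.
    console.log(verify(null, imageBytes, publicKey, signature)); // true
    console.log(verify(null, Buffer.from("tampered"), publicKey, signature)); // false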

    A scummy manufacturer could automatically generate keys on a per-user basis and sign the images to “track” the creator, but C2PA doesn’t make it any easier than just throwing a field in the EXIF or automatically uploading photos to some government-owned server.


  • Circular dependencies can be removed in almost every case by splitting out a large module into smaller ones and adding an interface or two.

    In your bot example, you have a circular dependency where (for example) the bot needs to read messages, then run a command from a module, which then needs to send messages back.

        v-----------\
      bot    command_foo
        \-----------^
    

    This can be solved by making a command conform to an interface, and shifting the responsibility of registering commands to the code that creates the bot instance.

        main <---
        ^        \
        |          \
        bot ---> command_foo
    

    The bot module would expose the Bot and User classes and the Command interface. The command_foo module would import those and export a class implementing Command.

    The main function would import Bot and CommandFoo, and create an instance of the bot with CommandFoo registered:

    // bot module
    export interface Command {
        onRegister(bot: Bot, command: string): void;
        onCommand(user: User, message: string): void;
    }
    
    // command_foo module
    import {Bot, Command, User} from "bot";
    
    export class CommandFoo implements Command {
        private bot!: Bot; // assigned in onRegister
    
        onRegister(bot: Bot, command: string): void {
            this.bot = bot;
        }
    
        onCommand(user: User, message: string): void {
            this.bot.replyTo(user, "Bar.");
        }
    }
    
    // main
    import {Bot} from "bot";
    import {CommandFoo} from "command_foo";
    
    let bot = new Bot();
    bot.registerCommand("/foo", new CommandFoo());
    bot.start();
    

    It’s a few more lines of code, but it has no circular dependencies, reduced coupling, and more flexibility. It’s easier to write unit tests for, and users are free to extend it with whatever commands they want, without needing to modify the bot module to add them.


  • A couple years back, I had some fun proof-of-concepting the terrible UX of preventing password managers or pasting passwords.

    It can get so much worse than just an alert() when right-clicking.

    The codepen.

    A small note: It doesn’t work with mobile virtual keyboards, since they don’t send keystrokes. Maybe that’s a bug, or maybe it’s a security feature ;)

    But yeah, best tried with a laptop or desktop computer.

    How it detects password managers:

    • Unexpected CSS or DOM changes to the input element, such as an icon overlay for LastPass.

    • Paste event listening.

    • Right clicking.

    • Detecting if more than one character is inserted or deleted at a time.

    In hindsight, it could be even worse by using Object.defineProperty to check if the value property is manipulated or if setAttribute is called with the value attribute.
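    For the curious, here’s a stripped-down sketch of the multiple-characters-at-once trick (hypothetical code, not what the codepen actually does):

    const input = document.querySelector("#password") as HTMLInputElement;
    let lastLength = 0;
    
    // Pasting is the easy case: just block the event outright.
    input.addEventListener("paste", (e) => {
        e.preventDefault();
        alert("No pasting passwords!");
    });
    
    // Autofill inserts many characters in a single input event, while a
    // human typing changes the length by one character at a time.
    input.addEventListener("input", () => {
        if (Math.abs(input.value.length - lastLength) > 1) {
            input.value = "";
            alert("Password managers are not welcome here.");
        }
        lastLength = input.value.length;
    });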


  • This may be an unpopular opinion, but I like some of the ideas behind functional programming.

    An excellent example would be where you have a stream of data that you need to process. With streams, filters, maps, and (to a lesser extent) reduction functions, you’re encouraged to write maintainable code. As long as everything isn’t horribly coupled and lambdas are replaced with named functions, you end up with a nicely readable pipeline that describes what happens at each stage. Having a bunch of smaller functions is great for unit testing, too!

    But in Java… yeah, no. Java, the JVM, and Java bytecode are not optimized for that style of programming.

    As far as the language itself goes, the lack of suffix functions hurts readability. If we have code to do some specific, common operation over streams, we’re stuck with nesting. For instance,

    var result = sortAndSumEveryNthValue(2, 
                     data.stream()
                         .map(parseData)
                         .filter(ParsedData::isValid)
                         .map(ParsedData::getValue)
                    )
                    .map(value -> value / 2)
                    ...
    

    That would be much easier to read at a glance if we had a pipeline operator or something like Kotlin extension functions.

    var result = data.stream()
                    .map(parseData)
                    .filter(ParsedData::isValid)
                    .map(ParsedData::getValue)
                    .sortAndSumEveryNthValue(2) // suffix form
                    .map(value -> value / 2)
                    ...
    

    Even JavaScript has a proposed pipeline operator to solve this kind of nesting problem.

    And then we have the issues caused by the implementation of the language. Everything except primitives is an object, and only objects can be used as generic type arguments.

    Lambda functions? Short-lived instances of anonymous classes that implement some interface.

    Generics over a primitive type (e.g. HashMap<Integer, String>)? Short-lived boxed primitives that are automatically converted to and from the underlying primitive type.

    If I wanted my functional code to be as fast as writing everything in an imperative style, I would have to trust that the JIT performs appropriate optimizations. Unfortunately, I don’t. There’s a lot that needs to be optimized:

    • Inlining lambdas and small functions.
    • Recognizing boxed primitives and replacing them with raw primitives.
    • Escape analysis and avoiding heap memory allocations for temporary objects.
    • Avoiding unnecessary copying by constructing object fields in-place.
    • Converting the stream to a loop.

    I’m sure some of those are implemented, but as far as benchmarks have shown, Streams are still slower in Java 17. That’s not to say that Java’s functional programming APIs should be avoided at all costs—that’s premature optimization. But in hot loops or places where performance is critical, they are not the optimal choice.

    Outside of Java but still within the JVM ecosystem, Kotlin actually has the capability to inline functions passed to higher-order functions at compile time.

    /rant


  • Yep! I ended up doing my entire co-op with them, and it meshed really well with my interest in creating developer-focused tooling and automation.

    Unfortunately I didn’t have the time to make the necessary changes and get approval from legal to open-source it, but I spent a good few months creating a tool for validating constraints for deployments on a Kubernetes cluster. It basically lets the operations team specify rules to check deployments for footguns that affect the cluster health, and then can be run by the dev-ops teams locally or as a Kubernetes operator (a daemon service running on the cluster) that will spam a Slack channel if a team deploys something super dangerous.

    The neat part was that the constraint-checking logic was extremely powerful, completely customizable, versioned, and used a declarative policy language instead of a scripting language. None of the rules were hard-coded into the binary, and teams could even write their own rules to help them avoid past deployment issues. It handled iterating over arbitrary-sized lists, and could even access values across different files in the deployment to check complex constraints, like ensuring a value in one manifest didn’t exceed a value declared in another manifest.

    I’m not sure if a new tool has come along to fill the niche that mine did, but at the time, the others all had their own issues that failed to meet the needs I was trying to satisfy (e.g. hard-coded, used JavaScript, couldn’t handle loops, couldn’t check across file boundaries, etc.).

    It’s probably one of the tools I’m most proud of, honestly. I just wish I had written the code better. I didn’t have much experience with Go at the time, and I really could have done a better job structuring the packages to have fewer layers of nested dependencies.