This is my personal opinion; it is not the collective opinion of all lemmy.ml admins, nor of the broader Lemmy network as a whole. But I feel like no one is talking about this side of things, hence this post.

It seems that a major point of friction lately has been the registration screening questions that most large instances have, and the fact that instances without them are being blocked. People are complaining about not knowing when, or whether, they will be allowed to use their new accounts, as well as about their instance being blocked by larger ones. While I empathize with the frustration this causes, and I agree that the registration screening system is far from perfect, I feel the need to defend my fellow instance admins on all the major instances, and their decision to require registration screening.

We are all unpaid volunteers, running and/or moderating Lemmy instances because we are interested in doing so; most of us juggle this with regular jobs and responsibilities. At the same time, we want to provide high-quality spaces where users can interact and engage in meaningful discussion, and that requires that threads be mostly free of trolls, abusive users, and spam. We have no automoderator and no automated content screening or spam/abuse detection in general, nor do we usually have enough people to cover all 24 hours of the day (especially since an instance's admins tend to live in the same or similar time zones). Registration screening goes a long, long way toward easing our workload and allowing our instances to function without being overrun by undesirable content.

Lemmy, especially its larger instances, has been the target of many raids and brigades from places like 4chan. The raiders posted everything from Nazi/fascist propaganda to scat and gore porn (I kid you not; you can find plenty of discussion about this if you go back far enough), and rendered instances completely unusable for a time. Based on my experience with lemmy.ml getting brigaded, enabling registration screening brought the number of abuse posts from tens, or well over a hundred, per hour down to almost none. Just having to put in that bit of work to make a troll account is enough to discourage most people who have no interest in participating meaningfully, and it also makes it much harder to create multiple accounts for ban evasion, or to automate account creation with a bot.

Also based on my own experience, any instance with open registration is very quickly overrun by spam and abuse posts, to the point where the federated influx can make other, larger instances unusable. It also massively increases the workload of the admins on those other instances, who now have to pick up the slack and sort out which content from the open instance is real content allowed on their own instances. And this happens at a time when those admins are already swamped with new registrations and with moderating the huge influx of content generated on their own instance. Until a Lemmy instance is large enough to hire full-time admins who can catch and remove abusive content ASAP, and/or implements reasonably accurate spam and abuse screening that resists evasion tactics, I don't see instances reasonably being able to go without registration screening, because trolls will seize on that opportunity every time.

Admins of larger instances see it all the time:

  1. A new instance pops up, yay! Most instances automatically federate with new instances.

  2. It doesn’t have registration screening. Trolls and ad bots quickly discover this, and the instance fills up with rule-breaking content.

  3. Large instances start blocking it, because federating with an instance being used this way degrades the quality of your own instance and adds a ton of workload for your (unpaid) mods and admins. Trolls specifically target small, brand-new instances because their content gets forwarded to the large instances anyway, where posting abuse directly is harder. (We don’t block instances because “how dare they not have registration screening?!” We only block an instance when we start getting flooded with reports from our own users flagging the incoming abuse posts.)

  4. The instance eventually enables registration screening, and other instances start unblocking it.

It’s happened with plenty of instances before and will probably keep happening as long as spam and trolling exist.
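To make the blocking step concrete: the decision described above is reactive, driven by a flood of incoming reports rather than by preemptively checking an instance's registration settings. A minimal sketch of that rule might look like this (the threshold and function names are hypothetical, purely for illustration; this is not Lemmy's actual code):

```python
from collections import defaultdict

# Made-up cutoff for illustration: a sustained report rate this high from
# one remote instance would prompt a human admin to review (and likely
# block) it. Real admins use judgment, not a fixed number.
REPORTS_PER_HOUR_THRESHOLD = 20

def instances_to_review(report_sources, window_hours):
    """Given the source instance of each user report received in the
    window, return the instances whose report rate crosses the threshold."""
    counts = defaultdict(int)
    for source in report_sources:
        counts[source] += 1
    return sorted(
        instance
        for instance, n in counts.items()
        if n / window_hours >= REPORTS_PER_HOUR_THRESHOLD
    )

# Example: 50 reports in 2 hours from one open instance trips the rule;
# a handful from a well-run instance does not.
flagged = instances_to_review(
    ["open.example"] * 50 + ["ok.example"] * 3, window_hours=2
)
```

The point of the sketch is that blocking keys off observed abuse volume, not off whether the remote instance happens to have screening enabled.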

Most instances’ registration questions are fairly simple; all we really want is for you to spend a minute of your time writing a few sentences, maybe a paragraph at most. Doing that reduces the workload for us and contributes to a nicer environment for you and your fellow users.
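For a sense of why even cursory answers help, here is a hypothetical first-pass triage an admin tool could run before human review. Everything here (the word-count threshold, the link heuristic, the function name) is an assumption for illustration; actual admins read the answers themselves:

```python
import re

# Hypothetical triage of a registration-screening answer. Thresholds and
# patterns are invented for this sketch; a human still makes the final call.
MIN_WORDS = 10
LINK_RE = re.compile(r"https?://", re.IGNORECASE)

def triage_application(answer: str) -> str:
    """Return 'reject', 'flag', or 'review' for one screening answer."""
    text = answer.strip()
    if len(text.split()) < MIN_WORDS:
        return "reject"  # blank or throwaway answers: the common troll case
    if LINK_RE.search(text):
        return "flag"    # link-stuffed answers often come from spam bots
    return "review"      # looks like a genuine attempt; a human decides
```

Even this crude filter illustrates the post's point: merely requiring a few sentences screens out the accounts whose creators won't write any.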

  • hemmes@lemmy.one · 1 year ago

    Wait, people have an issue with screening? Mine took less than 24 hours, but I’d have waited a week. I don’t know how in the world mods and admins find the time to do this stuff, but god bless ‘em.

    Also, some instances, like my own, have disabled downvoting. I didn’t know that was a thing when I signed up, but I really enjoy .one so far. That said, I think disabling downvotes kills self-moderation to the point that it alters the expected flow of this type of curation platform and, most importantly, puts excess load on the mods.

    Screening, let the users do their thing, mods step in when shit hits the fan. In that order.

    Thanks to all you admins and mods for what you do, keeping the world moving for us internet junkies.

  • Aurix@lemmy.world · 1 year ago

    I wouldn’t even be mad if registration were closed down completely at some point, should it become necessary, until there are effective tools to deal with bots and trolls. At this stage Lemmy is in alpha, and people are fine with Bluesky’s limits.

  • nivenkos@lemmy.world · 1 year ago

    They aren’t really screening anything. The trolls can just enter what the admins want to see - especially in this era of ChatGPT.

    It’s just pointless theatre.

    • AgreeableLandscape@lemmy.ml (OP) · 1 year ago

      The number of trolls and spammers who will actually spend the time and effort to write a script that pulls the registration questions, feeds them into ChatGPT, and feeds its output back is probably less than 1%. (Doing that at any significant scale costs money: you only get something like 10 free responses a day, and in my admittedly limited experience trying it, 7 of them will be either “something went wrong, please try again” or not coherent enough to fool the admins.) At that point it’s probably easier to just make stuff up on the screening questions yourself, the old-fashioned way. If trolls aren’t even willing to do that most of the time, I doubt ChatGPT will change anything; trolls tend not to put tons of effort into their trolling. This is less like a castle wall and more like a picket fence: adding just enough resistance, and requiring just enough extra effort to evade, keeps out the majority of the riffraff, and we can deal with the rest much more easily.

      In fact, having screening makes it much easier for us to more effectively deal with the more determined trolls, by not giving them a place to hide among the casual trolls.

      With all due respect, if it were pointless theatre we wouldn’t be doing it. Reviewing registration applications is also a lot of work, and we only do it because it’s less work overall than dealing with floods of abuse posts (it’s also more pleasant to review applications than to look at the various unpleasant things trolls cook up in order to remove them). I don’t need to theorize here; the evidence is there. We see a significant reduction in abuse accounts with registration screening enabled, down to almost none in fact, even if the admins don’t pay super close attention to the actual responses: just needing to write something in the first place discourages most casual trolls. And again, open instances are being blocked not because we’re offended that an instance is open with no screening (we don’t even preemptively check whether an instance has screening), but because we’ve actually noticed large amounts of abuse posts coming from it. I think the fact that the open instances currently have the most issues with abuse speaks for itself, and anyone can independently verify that by looking at the new posts and comments on various instances.