• 520@kbin.social · 11 months ago

    > They are not talking about the training process

    They literally say they do this “to combat the racial bias in its training data”

    To combat racial bias in the training data, they insert words into the prompt, for example “racially ambiguous”.

    And like I said, this makes no fucking sense.

    If your training process, specifically your training data, has biases, inserting keywords does not fix that issue. It literally does nothing to actually combat it. It might hide the problem if the model has enough training to do the job with the inserted keywords, but that is not a fix, nor does it combat the issue. It is a cheap hack that does not address the underlying training issues.

    • jacksilver@lemmy.world · 9 months ago

      So the issue is not that they don’t have diverse training data; the issue is that not all things get equal representation. Their trained model will therefore be biased toward producing a white person when you ask generically for a “person”. To prevent it from always spitting out a white person when someone prompts the model for a generic person, they inject additional words into the prompt, like “racially ambiguous”, which occasionally encourages/forces more diversity in the results. The issue is that these models are too complex for these kinds of approaches to work seamlessly.
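
      In other words, something loosely like the sketch below, where a wrapper appends a diversity keyword to generic prompts before they reach the image model. The function name, keyword list, and trigger condition here are made up for illustration; nobody’s actual implementation is public.

      ```python
      import random

      # Hypothetical keyword list and trigger logic, purely illustrative.
      DIVERSITY_KEYWORDS = ["racially ambiguous", "diverse", "of varied ethnicities"]

      def inject_diversity(prompt: str, probability: float = 0.5) -> str:
          """Occasionally append a diversity keyword to a generic 'person' prompt
          before it is sent to the image model."""
          if "person" in prompt.lower() and random.random() < probability:
              return f"{prompt}, {random.choice(DIVERSITY_KEYWORDS)}"
          return prompt

      # inject_diversity("a photo of a person reading")
      # -> "a photo of a person reading, racially ambiguous" (some of the time)
      ```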

    • Primarily0617@kbin.social · 11 months ago

      > but that is not a fix

      congratulations you stumbled upon the reason this is a bad idea all by yourself

      all it took was a bit of actually-reading-the-original-post