Totally not an AI asking this question.

  • pjhenry1216@kbin.social
    1 year ago

    There are other forms of machine learning that could be used here. Some work by being given a goal state to reach; the system keeps trying new things, and whenever a change gets it closer, it builds on that change.
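    The trial-and-improve loop described above resembles a simple hill-climbing search. A minimal sketch (the function and parameter names are hypothetical, chosen just for illustration):

    ```python
    import random

    def hill_climb(target, start, steps=5000, seed=0):
        """Randomly perturb a candidate; keep each change that
        moves it closer to the target."""
        rng = random.Random(seed)
        best = start
        best_dist = abs(target - best)
        for _ in range(steps):
            candidate = best + rng.uniform(-1.0, 1.0)  # try something new
            dist = abs(target - candidate)
            if dist < best_dist:                       # closer? build on it
                best, best_dist = candidate, dist
        return best

    print(hill_climb(target=42.0, start=0.0))
    ```

    Real systems (reinforcement learning, evolutionary search) are far more sophisticated, but the core idea is the same: a goal, random variation, and keeping what works.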

    • Greyscale@lemmy.sdf.org
      1 year ago

      That would require the humans controlling the experiment to both be willing to input altruistic goals AND accept the consequences that get us there.

      We can’t even surrender a drop of individualism and accept that trains are the way we should travel non-trivial distances.

      • pjhenry1216@kbin.social
        1 year ago

        In a dictatorship with an AI in control, I don’t think there’s a question of accepting consequences, at the very least.

        There is no objectively best-case scenario, so it’s always going to be a question of what goals the AI has, whether it’s given them or arrives at them on its own.