tracyspcy@lemmy.ml to ChatGPT@lemmy.ml · English · 1 year ago
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds (fortune.com)
taladar@sh.itjust.works · 1 year ago
A system that has no idea whether what it is saying is true or false, or what true or false even mean, is not very consistent in answering things truthfully?

tracyspcy@lemmy.ml (OP) · 1 year ago
Wait for the next version, which will be trained on data that includes GPT-generated word salad.

intensely_human@lemm.ee · 1 year ago
No, that is not the thesis of this story. If I'm reading the headline correctly, the rate of its being correct has changed from one stable distribution to another.
Detroit: Become Human moment.