Funny enough, I have the opposite opinion: human brains are the type of thinking we have the most experience with - so we've devised our input methods around what we notice most, and so will most easily be able to train the AI.
I also believe that we'll be able to reduce the noise to a level lower than actual person-to-person variation fairly easily, because an AI has the benefit of being able to scale to population size - no human even has that much experience with humans.
I used to work in research on the microscopic mechanisms of the brain, and I now work in AI.
Human thoughts derive from extremely complex microscopic mechanisms that do not "average out" when moving to the macroscopic world, but instead create the very complex non-linear stochastic processes that are thoughts (sketched below).
Unless some scientific miracle happens, human thoughts will stay human.
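To make "complex non-linear stochastic process" concrete, here is a toy simulation sketch: a noisy leaky integrate-and-fire neuron stepped with Euler-Maruyama. It's an illustration of the kind of dynamics meant, not a model of thought, and every constant in it is made up for the example.

```python
import numpy as np

# Toy non-linear stochastic process: a leaky integrate-and-fire neuron
# driven by noisy input, integrated with Euler-Maruyama steps.
rng = np.random.default_rng(0)

dt = 1e-3          # time step (s)
tau = 20e-3        # membrane time constant (s)
v_rest, v_thresh, v_reset = -70.0, -50.0, -65.0   # mV, illustrative
drive, noise = 25.0, 8.0                          # made-up input stats

v = v_rest
spike_times = []
for step in range(5000):
    # Linear leak toward rest plus a constant drive...
    dv = (-(v - v_rest) + drive) * dt / tau
    # ...plus a stochastic term...
    dv += noise * np.sqrt(dt / tau) * rng.standard_normal()
    v += dv
    # ...and a hard non-linearity: threshold, spike, reset.
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 5 s of simulated time")
```

Run it twice with different seeds and the spike trains differ trajectory by trajectory; that path-dependence, rather than a clean ensemble mean, is the point being made above.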
But an AI does anything but average out; otherwise we'd be no more advanced than the earliest mathematicians.
Its skill comes from being able to have millions to billions of parameters if required, and then encode data across all of them (rough numbers below).
It doesn't seem entirely unreasonable that it could use those (riding off our surprisingly good math skills) to create a model that represents a human with low enough noise that we wouldn't even notice.
(But also, I'm in a similar, more chemically focused field, nanotechnology, so I have experience with nanoscopic-to-microscopic structures and with what we can artificially build from them without killing the biological side of things.)
As you are in nanotechnologies: when I say "average out" I am talking in a statistical mechanics sense, i.e. the macroscopic phenomenon arising from averaging over the many accessible microscopic configurations. Thoughts do not arise like this; they are the result of multiple complex non-linear stochastic signals. They depend on a huge number of single microscopic events that are not replicable in a computer as-is, and likely not reproducible in a parametrized function. Nothing wrong with that: we might be able to approximate human thoughts, but most likely not reproduce them.
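For readers outside statistical mechanics, a sketch of the contrast being drawn (standard textbook forms, nothing specific to the brain):

```latex
% Macroscopic observables in statistical mechanics arise as
% Boltzmann-weighted averages over microscopic configurations s:
\[
  \langle A \rangle = \frac{1}{Z} \sum_{s} A(s)\, e^{-\beta E(s)},
  \qquad Z = \sum_{s} e^{-\beta E(s)} .
\]
% The claim above: neural signals instead behave like a non-linear
% stochastic process, e.g. an SDE of the form
\[
  dX_t = f(X_t)\, dt + g(X_t)\, dW_t ,
\]
% whose individual trajectories carry the information, rather than
% washing out into a single ensemble mean.
```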
What area of nanotechnology are you in? The main problem with nanotechnologies is that they cannot reproduce the complexity of their biological counterparts. Take carbon nanotubes: we cannot reproduce the features of even the simpler ion channels with them, let alone the more complex human ones.
We could build nice models with interesting functionality, as we are doing with current AI - machines that can do logic, make decisions, and so on. Even a machine that can predict human thoughts. But they'll do it in their own way, while real human thoughts will most likely stay human, as the processes from which they arise are very human.
Nanoengineering - and of course we're talking some years in the future, but if anything, nano has convinced me we're all just math when you break it down; it just depends on how much math we can do.
Even a simple conversation can now be broken down into tokenizable words - and bam, ChatGPT. Reasonably, the rest of our 'humanity' could be modeled following a similar trend until the Turing test is useless.
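As a minimal sketch of the "conversation broken down into tokenizable words" step: the toy whitespace tokenizer below stands in for the subword (BPE-style) tokenizers real models use; the vocabulary here is hypothetical and built on the fly.

```python
# Toy tokenizer: maps words to integer ids, assigning new ids on sight.
# Real systems use subword schemes (e.g. BPE), but the idea is the same:
# text in, sequence of ids out, and a model predicts the next id.
def tokenize(text: str, vocab: dict[str, int]) -> list[int]:
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # next free id
        ids.append(vocab[word])
    return ids

vocab: dict[str, int] = {}
print(tokenize("even a simple conversation can be broken down", vocab))
# -> [0, 1, 2, 3, 4, 5, 6, 7]
```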
What I mean is different: a dog thinks like a dog, a human thinks like a human, and an AI will think like an AI. It will likely be able to pretend to think like a human, but it won't think like one.
It won't have a Proust's madeleine (sensory experiences that trigger epiphanies), feel the need to travel to some "sacred" location looking for spirituality, or miss the hometown where it grew up; its thinking won't be driven by fear of spiders, the need for social recognition, or the pleasure of seeing naked women. Its thoughts won't depend on the daily diet, on the intake of sugar, fat, vitamins, or stimulants.
These are simple examples, but in general it will think in a different way. Humans will tune it to pretend to be "as human as possible", but humans will remain unique.
What makes you say that so definitely?