As somebody fairly well-versed in the tech, and having done more than just play around with ChatGPT
Lol. See above. And below. Someone “fairly well versed” in ChatGPT has just about zero expertise in this field. LLMs are a tiny sliver in the ocean of AI. No one uses LLMs to drive cars. They’re LANGUAGE models. This doesn’t translate. Like, at all.
Experts in the AI field know much more than some random person who has experimented with a “fad” of an online tool that gained massive popularity in the past year. This field is way bigger than that, and you can’t extrapolate from LLMs to driving cars.
I can tell you that self-driving AI is not going to be here for at least another 40-50 years. The challenges are too great, and the act of driving a car takes a lot of effort even for a human to achieve.
This is a fucking ludicrous statement. Some of these systems are already outperforming human drivers. You have your head in the sand. Tesla and Cruise are notoriously poor performers, but they’re the ones in the public eye.
When we have these cars that are sabotaged by a simple traffic cone on the windshield, or mistake a tractor-trailer for the sky,
If you don’t understand how minor these problems are in the scheme of the system, you have no idea how any of this works. If you do some weird shit to a car, like plant an object on it that normally wouldn’t be there, then I fucking hope to God the thing stops. It has no idea what that means, so it fucking better stop. What do you want from it? To keep driving around doing its thing when it doesn’t understand what’s happening? What if the cone then falls off as it drives down the highway? Is that a better solution? What if that thing on its windshield it doesn’t recognize is a fucking human? Stopping is literally exactly what the fucking car should do. What would you do if I put a traffic cone on your windshield? I hope you wouldn’t keep driving.
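For what it’s worth, here’s the fail-safe logic I’m talking about as a toy sketch (not anyone’s actual AV stack; the class, function, and confidence threshold are all made up for illustration):

```python
# Toy sketch of "stop when you don't understand what you're seeing".
# Names and the 0.5 threshold are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g. "pedestrian", "traffic_cone", "unknown"
    confidence: float     # classifier confidence, 0.0 to 1.0
    blocks_sensors: bool  # object is obstructing the windshield/sensors

def plan_action(detections: list[Detection]) -> str:
    """Default to a controlled stop whenever perception is unsure."""
    for d in detections:
        if d.blocks_sensors or d.label == "unknown" or d.confidence < 0.5:
            return "controlled_stop"
    return "continue_driving"

# A cone dumped on the windshield: unclassified, low confidence, blocking the view.
print(plan_action([Detection("unknown", 0.2, True)]))  # -> controlled_stop
```

The point is just that refusing to proceed is the designed behavior, not a failure mode.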
When we have these cars that are sabotaged by a simple traffic cone on the windshield, or mistake a tractor-trailer for the sky, then we know that this tech is far worse than human drivers
This is just a fucking insane leap. The fact that they are statistically outperforming humans while still having these problems says a lot about just how much better they are.
Level 5 autonomous driving is simply not a thing. It won’t be a thing for a long time.
Level 5 is just a harder problem. We’ve already reached Level 4. If you think 5 is going to take more than another ten to fifteen years, you’re fucking insane.
Billions of dollars poured into the tech has gotten us a bunch of aggressive upstarts who think they can just ignore the fatalities as the money comes pouring in, and lie to our faces about the capabilities of the technology. These companies need to be driven off a cliff and buried in criminal cases. They should not be protected by the shield of corporate personhood; they should be put on trial. But here we fucking are now…
This paragraph actually makes sense. It’s the one redeeming chunk of your entire post; everything else is just bullshit. But yes, this is a serious problem. Unfortunately people can’t see the nuance in stuff like this, and when they see it they jump straight to “AI BAD! AUTONOMOUS VEHICLES ARE A HUGE PROBLEM! THIS IS NEVER HAPPENING!”
Yes, there are fucking crazy companies doing absolutely crazy shit. That’s the same in every industry. The only reason many of these companies exist and are allowed on the road is that companies like Google/Waymo slowly pushed this stuff forward for many years and proved that cars could drive autonomously on public roads without causing massive safety problems. They won the trust of legislators and got AI on the road.
And then came the fucking billions in tech investment into companies that have no idea what they’re doing, putting shit on the road under the same legislation without the same levels of internal responsibility and safety. They have essentially abused the good faith won by their predecessors, and the governing bodies need to fix this shit yesterday to get this dangerous shit off the streets. Thankfully that’s getting attention NOW and not after things get worse.
But slandering the whole fucking industry and claiming all AI or autonomous vehicles are bad is just going off the deep end.
I really appreciate you saying the things I wanted to say, but more clearly and from far more domain experience and expertise than I have.
I hope that you will be willing to work on avoiding language that stigmatizes mental health, though. When talking about horribly unwise and unethical behavior, ableism is basically built into our language. It’s easy to pull from words like “crazy” when talking about problems.
But in my experience, most times people use “crazy” they’re actually talking about failures that can be much more concretely attributed to systems of oppression and how those systems lead individuals to:
De-value the lives of their fellow human beings.
Ignore input from other people they see as “inferior”.
Overvalue their own superiority and “genius”. And generally avoid accountability and dissenting opinions.
I feel like this discussion in particular really highlights those causes, and not anything related to mental health or intellectual disability.