Software developer here.
It’s not actually AI. A large language model is essentially autocomplete on steroids: it predicts the next token based on the text it’s given. Very useful in some contexts, but it doesn’t “learn” while you’re using it. When you feed corrections into, say, ChatGPT, you’re not adjusting the model at all; your correction just sits in that conversation’s context window and nudges the next reply. The weights only change during training or fine-tuning, so you’re not actually teaching it anything, because at inference time it can’t learn.
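To make that concrete, here’s a rough sketch of what a chat session looks like from the client side (the call_model stub is hypothetical, not any real API): the only “memory” is the message list you re-send every turn, and nothing in the loop touches the model’s weights.

    def call_model(history):
        # Stand-in for a chat-completions request. The model behaves like a
        # pure function of the history you send it -- it keeps no state of
        # its own, and nothing here updates its weights.
        return f"(reply conditioned on {len(history)} messages)"

    messages = [{"role": "user", "content": "What's 2 + 2?"}]
    messages.append({"role": "assistant", "content": call_model(messages)})

    # A "correction" is just one more message in the context window...
    messages.append({"role": "user", "content": "No, answer in Roman numerals."})
    messages.append({"role": "assistant", "content": call_model(messages)})

    # ...so a brand-new conversation starts with none of it.
    fresh_session = [{"role": "user", "content": "What's 2 + 2?"}]
    print(call_model(fresh_session))  # no memory of the earlier correction

Anything more permanent has to come from a separate training or fine-tuning run, not from the chat itself.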
I’m not trying to diss LLMs, by the way. Like I said, they can be very useful in some contexts. I use Copilot to assist with coding, for example. Don’t want to write a bunch of boilerplate code? Copilot is excellent for speeding that process up.
I mean, I guess the way people use the term “AI” these days, sure, but we’re really beating all specificity out of the term.