Thank you, Noam Chomsky, for pointing out the key difference between generative AI and human “intelligence.”

> The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
>
> — Noam Chomsky, “The False Promise of ChatGPT”

Rather than a brute-force, pattern-matching autocorrect on steroids that cannot distinguish right from wrong, the human mind can infer and draw connections from incomplete data, and it generally has a moral compass to guide it toward ethical decisions that benefit the society in which we live.

As a child of two cultures, I visited Japan several times while I was still learning English. During summer visits with my mother, I attended Japanese kindergarten (my aunt was a kindergarten teacher).

I would struggle to explain how my mind would *click* into Japanese. As native English speakers, we all intuitively “know” when something sounds right. There is a rhythm to the language, and we all learn what a grammatically correct sentence sounds like even if we cannot consciously articulate the rules that make it so. If you went to kindergarten in the US and hear the first bars of “Mary Had a Little Lamb,” you know how it finishes. I believe the grammar of a language works the same way: there is a rhythm that children pick up easily, taking it in and absorbing it, while older adults are limited because their learning is filtered through what is possible in their native language.

All this is to say: no, humans are more than pattern-matchers, and there is a long way to go before we see Artificial General Intelligence.