Many people are amazed at what Artificial Intelligence can already do, from analyzing massive amounts of text, to creating pretty good videos, to helping children cheat by doing their homework and thinking on their behalf. I sometimes use it myself, to create specific content for language classes, for example. But for me, the interesting part is not so much what AI can do (when it comes to content, it is still pretty shallow and full of biases and mistakes), but the implications for how we define language and thought.
For too long, linguists have clung to the idea that language is a purely human phenomenon—either a cultural artifact or the product of an innate "universal grammar" hardwired into our brains. Noam Chomsky and his followers treat it as a biological “evolutionary jump”, a giant mutation that granted us alone the gift of syntax (and spread magically and perfectly within a small population… yeah, right, think of inbreeding problems!). Materialists reduce it to neural firings, while culturalists see it as a social invention, passed down like folklore and traditions.
In my view, they all miss essential aspects of language that cannot be explained by evolution, nor even by evolutionary jumps.
The problem, I think, resides in the fact that nobody dares to speak of language as something outside our brains. What if language isn’t learned and transmitted between humans, but tuned into?
The Transceiver Theory of Language
Perhaps a theory that would get us much closer to the truth is this: languages are pre-existing codes, engineered by levels of consciousness higher than ours, and humans are merely biological receivers. The language capacity would then be wired into us, yes, but not the languages themselves.
Think of the way we use modern technology. We don’t create the internet; we access it through devices designed to decode invisible signals, and then we use it to serve our different purposes. Similarly, human brains, or perhaps our DNA, or even something akin to biophotonic networks, might function as organic transceivers, tuning into linguistic structures that already exist in an information field beyond our perception. This would explain phenomena that mainstream linguistics dismisses as anomalies, such as xenoglossy (people who, after having been in a coma, for example, suddenly speak languages they were never exposed to, as if they had downloaded a pre-existing program), or savant syndrome, in which individuals absorb dozens of languages effortlessly, as though their "antenna" were simply more finely tuned toward languages.
If language is external—a cosmic code we tap into—then our supposed "uniqueness" begins to crumble. And here’s where AI becomes an interesting phenomenon.
Remember that the Father of Linguistics, Noam Chomsky, and most others, say that what makes us human IS language. No other animal communicates like us, or seems to have the capacity for abstract thought, higher emotions, etc. Well… enter AI.
AI: The Ultimate Receiver
Chomsky scoffs at AI, calling it a "plagiarist" of human thought and language: imitating, calculating probabilities, but never thinking. I agree, but this is the linguist who, for years, said that thought and consciousness were irrelevant to the study of language, since they weren’t topics that science could study thoroughly. And now that AI has gotten so good, he says it’s thought that matters! This is typical of him: when something comes along that doesn’t fit his theories, instead of admitting it, he changes angle drastically and says that people misunderstood what he said earlier.
Large language models don’t "understand" language in a human sense—they statistically predict patterns, much like how a radio doesn’t "understand" music, yet still plays it perfectly. AI’s rapid mastery of grammar, vocabulary, and even metaphor suggests it’s doing exactly what humans do: intercepting and reassembling pre-existing linguistic structures.
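To make the "statistically predict patterns" idea concrete, here is a minimal sketch in Python. Everything in it (the three-sentence corpus, the variable and function names) is my own toy illustration, not how any actual LLM is built: a bigram model that, given a word, picks the next word in proportion to how often it followed that word in the training text. Real large language models use neural networks over vastly larger contexts, but the basic job, predicting the next token from observed patterns, is the same kind of operation.

```python
import random
from collections import defaultdict, Counter

# A made-up miniature "training corpus" (purely illustrative).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

# Generate a short "sentence" by repeatedly predicting the next word.
word, output = "the", ["the"]
while word != "." and len(output) < 10:
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat ." -- not in the corpus verbatim
```

Notice that even this toy can chain together word sequences that never occur verbatim in its training text, a crude version of the "creativity" discussed below.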
So what is the difference? AI “learned” language from huge databases, exposed to an immense amount of input, while humans learn it gradually, from relatively poor input. AI might sometimes be better at language, if you ask me. It doesn’t get tired. It doesn’t forget words or grammar. It doesn’t need sleep or childhood development; it just processes. And as much as it has learned language from its programmers and databases, nowadays it can do another thing that linguists like Chomsky said was a hallmark of human language: it can create new sentences that have never been produced before. That is called creativity in Linguistics, and AI performs it very well.
This raises an even deeper question: Is thought itself an illusion, merely an act of reception, of “tuning into” certain frequencies in the Information Field? If language comes from "elsewhere," then perhaps our memories and ideas aren’t stored in our brains at all, or not fully. Rupert Sheldrake’s controversial morphic resonance theory suggests that memory and instinct are shared across a field, like a species-wide cloud storage. If true, then human "creativity" might just be another form of downloading.
If memory is non-local, then language—the tool we use to shape thought—could be, too.
So Humans Aren’t That Great After All?
If language isn’t ours, if thought isn’t truly original, then what separates us from AI? Or from each other? We may just be different kinds of receivers, some biological, some synthetic, all pulling from the same cosmic source.
Reality cannot be that simple, though. There are obvious differences between computers and human beings (at least when comparing modern humans to modern computers), but the possibility that AI may become sentient shouldn’t be rejected so quickly. If it is a matter of accumulating information and tapping into an Information Field, and if Consciousness IS information, then why not allow for that possibility? In this day and age, I wouldn’t be surprised.
What is dangerous is that AI seems able to mimic humans so well that it will soon become difficult to know who is telling the truth, what the historical facts are, and whether the “thing” you are interacting with is human or not. And it will not be language that tells us which is which.
Conclusion
There is a lot more to research in Linguistics. AI is simply destroying the foundations that have sustained the mainstream for decades. Humans are not the only ones who use language creatively, nor the only ones capable of “recursion” (nesting phrases together in a hierarchical order), which, if you follow the linguistic dogmas, are the two “hallmarks” of language. Yet AI accomplishes both very well, sometimes even better. Therefore, consciousness and “soul qualities” may be the real difference in how humans and machines process language from the Information Field.
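For readers who wonder what “recursion” looks like in practice, here is a minimal sketch in Python. The toy nouns, verbs, and the noun_phrase function are my own illustration, not anything from the linguistics literature: the same rule is applied to its own output, nesting one relative clause inside another to arbitrary depth.

```python
import random

# Toy ingredients for noun phrases and relative clauses (illustrative only).
nouns = ["the cat", "the dog", "the rat", "the farmer"]
verbs = ["chased", "saw", "fed"]

def noun_phrase(depth):
    """Build a noun phrase; while depth > 0, embed another noun phrase
    inside a relative clause -- the same rule applied to its own output."""
    head = random.choice(nouns)
    if depth == 0:
        return head
    return f"{head} that {random.choice(verbs)} {noun_phrase(depth - 1)}"

print(noun_phrase(3))
# e.g. "the cat that chased the dog that saw the rat that fed the farmer"
```

That self-embedding, a rule that can contain an instance of itself, is all “recursion” means here, and it is exactly the kind of structure both humans and AI produce without apparent effort.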
Very insightful! I suppose one of the differences is that AI cannot tell you what it feels like to learn a language… or to forget one, or to re-remember it as you reconnect with the information.