Information used to focus solely on facts; humans considered this content. AI views information differently: AI thinks of information as fuel. These are now divergent tracks of information. This multiplicity arrived unannounced, yet few characterizations better define the present moment.
We have been trained to think of information as meaning something. But for a moment, think of information as everything said or written about a given topic—information as volume; information as how acknowledged experts have weighed in on a given topic—information as weight; how views of that topic have changed over time—information as movement; or, how much information on that topic now exists—information as quantity.
When you think like that, you’re starting to think like AI. You may argue, well, so what? What about the meaning of that information? Isn’t that why we go to information in the first place? Now you’re thinking about content. Once you realize there are multiple tracks to information, you are seeing more clearly. Yet uncomfortable questions arise: What about the originality of the content? If AI takes my thoughts and mashes them with others’ thinking, who is the owner of the content? Who owns the intellectual property?
Now we begin to see glaring differences in these tracks; we notice how unlike one another they are. The content track focuses on meaning; the AI track focuses on meta levels—levels above content (e.g., a camera that records data about the image in addition to the image itself). This, we realize, is another kind of meaning.
AI and data logic
AI works to a different logic, a logic we are going to have to understand more fully than we do now. AI needs data as humans need oxygen and water. Without sufficient examples or instances—tax form 1116, comprehensive metabolic panels (CMPs), the Malbolge programming language—AI could not function. Human output—our data—is the fuel of AI. AI ‘learns’ from human actions, behaviors, expressions; from art, literature, music, molecular formulae, or MRI scans. This is a recursive endeavor. With AI, humans have created a technology that mimics human activity such as filling out a form, reading an x-ray, critiquing a story, or implementing a supply chain. So, data logic via AI is first mimetic. Without a series of examples to train on, AI could not come into being. (That is not the end of the story but the beginning; that AI may come to think as humans do, or outthink them, is a distinct possibility.) Notice that content, originality, independent thinking, and reflection are initially not relevant to AI. However, evolved AI (Anthropic’s Claude 3.5, for example) has trained on originality and independent human thinking and can weigh in on the degree to which a new piece of thinking reflects more or less originality and depth. This is another layer of fuel for AI.
Why is it important to understand AI logic?
AI logic and content logic are remarkably different. AI presents the logic of statistical significance—AI requires enough statistical patterns in the training data to match, mimic, conclude, improve. Perhaps one day, maybe soon, AI may employ data to think or adjunct-think as humans have done for millennia. Content logic, by contrast, focuses on utility (I need this information to prepare Croquetas de Jamón) or knowledge (what is the Riemann hypothesis, anyway?). Content logic is tactical, practical, proximate; AI logic works to algorithmic tactics which are hidden from the users whose thinking and actions the AI captures and then mimics. This logic waits for humans or AI to put it to use. Perhaps AI may find use for these tactics before humans do.
The volume, weight, quantity, and movement of information are its meta levels. We are not used to considering the meta level of our content. Humans have not been trained to think about content form or structure; we are proud content believers. We pay attention to what someone says; we pay almost no attention to the structure of the saying. In this measure, we are gleeful naifs. As Timothy R. Levine writes in Duped, social cohesion makes us believers rather than skeptics. Nonetheless, it is imperative to understand AI logic because AI will play a growing role in our lives. AI trains on us, on our content and expressions. Presently, convenience and speed induce us to turn over more control to AI and the (now massively wealthy) companies that create and control it. But beware: we are going to need artificial intelligence literacy and savvy to deal with the implications of this unusual presence in our lives.
How does AI change the meaning of information?
AI emphasizes quantity. It needs volume and repetition. Since AI has to use large data sets to prove its utility, quantification is a significant driver of AI logic. Human history is a chronicle of opinion, assertion, belief, and codified traditions that become institutions. Evaluating daily activities in numbers—i.e., quantification—did not play a role in human history until the rise of science and technology. In the information age, information has mostly meant the content of what happened. But quantification has lent information another dimension. The quantified self movement shows how assessing one’s life factually can yield insights otherwise unseen.
AI is the quantified self on steroids. For AI, the meaning of information is: what does the data show? What are the facts? Who or what or how can the data change? AI is data driven. This is a significant difference from what someone, no matter how enlightened, says is true. AI is also essentially iterative. The more times AI encounters a chess problem, a military planning scenario, a medical operation quandary, the better the AI can perform. Thus, AI proves to be exceptional at tasks that require repetitive focus, precision, or a facticity determination. This is AI tool logic. That is important because humans entrain with the logic of the tools we use—which means with AI we will entrain with quantification, facticity, data, iteration, shared mind and expression. Our entraining with AI will change—is changing—the meaning of information.
Human originality and creativity
“I get ideas from a new piece of technology.” – David Hockney, A History of Pictures
Then there is the question of original thinking: if there had been no Einstein, could AI have discovered the theory of relativity? If Picasso had never been born, could AI have painted like him? Can AI ‘think’ original, revolutionary, previously unthought ideas?
Our ideas of the world are based on the tools we use to see the world. The printing press enabled individual expression. Invented around 1440, it not only gave people the means to record thoughts and propositions; it gave them the means to think different thoughts and disseminate them. Before Gutenberg, thought was governed by the dogma of the church. Even artists, the most original and creative of all souls, used religious themes and symbolism almost exclusively. In retrospect it is obvious: once the printing press arrived—enabling the Enlightenment of new ideas and approaches, but especially the formation of individual consciousness—painters began painting different subjects. Buildings, kitchen scenes of maidens and servants, a pregnant woman examining herself in a mirror. Seen from the vantage point of the 21st century, the printing press was fuel. It fueled the notion that not only kings and priests had voices that could and should be heard; it fueled individual conscience and expression; it fueled the privacy and primacy of personal thought; it fueled the value of individual man (woman would come later—much later) and his thoughts; it fueled the notion of individual rights because individual expression—written, printed, and disseminated by presses—could now be seen and heard. (Out of sight, out of mind applies here: without the means of expression, individual lives and souls had neither merit nor standing—meaning—in the modern sense.)
In effect, the printing press and the books and publications which followed fueled the value of individual expression, a previously dormant force in human experience. For about 600 years that value has been exalted, celebrated, defined, and redefined. Today there are many more device tools by which individuals can express themselves than there are humans on the planet. Here Comes Everybody, Clay Shirky wrote in 2008. With the arrival of AI, however, that individual expression—my thoughts, your perceptions—runs into a snag. AI wants to use our thoughts and expression. The aim is to help us—but AI will use them nonetheless, and this using fundamentally changes what information means.
AI tool logic and the threat to individuality
AI challenges 600 years of the myth of human genius. When the printing press arrived, the world changed from the tribal notion of hive mind and the medieval notion of mystical mind to the fertile ground of personal expression. Human perception became more individualistic; we embraced the notion of individual genius as hero. AI upsets this notion. AI is fueled by all humans who create, who express, who question and wonder, invest or trade, launch drones and missiles, or write laws and constitutions. As I have said elsewhere, AI is Mind2. It is a collective mind, a thief of ideas and tone and style, a marvel of human ingenuity and technology. But, most evident to human producers, AI represents a threat to the individuality of expression. Not to individuals and not to expression itself, but to the adamant conviction that my thoughts are mine and no one or nothing dare intrude on my genius. Well, as Kurt Vonnegut might have said, AI fricassees that canard.
Ironically, AI may move us close to the original meaning of genius. In late 14th century usage, genius referred to “a tutelary or moral spirit” who guides and governs an individual through life (compare our notion of AI today). By contrast, the idea of a person of “exalted natural mental ability, skill in the synthesis of knowledge derived from perception” is attested by the 1640s, well after the arrival of the printing press.
What does it mean if AI uses content as fuel?
1. Human expression, behavior, movement, thinking, practices, forms, and institutions all become raw data for AI
2. All content becomes malleable: thoughts, ideas, writing, art, laws, constitutions, practices become a mashup, a cultural merry-go-round to be refreshed infinitely
3. AI challenges and interrupts the human expression/reward cycle. Formerly, humans created art or artifacts to gain some kind of approbation: psychological or financial reward
4. Humans will think with AI; AI will think with, and probably out think, humans
5. AI devalues individual expression in favor of collective advancement
6. The nature of content fundamentally changes: every topic becomes the hub of connected, evolving, revolving, gyroscopic data which becomes information
7. All human creators will take into account what AI will do with their creations
8. AI becomes the great mill of the mind, grinding down everything into data instances
9. As content becomes data, expression takes on the characteristics of miscellany: “jumbled digitally and sorted out only when and how a user wants to look for them”
Caution: AI and The Exterminating Angel
I’ll close with a cautionary tale about our understanding of content versus fuel. In his 1962 Mexican surrealist black comedy, The Exterminating Angel, Luis Buñuel created a timeless fable.
After a night at the opera, Edmundo and Lucía Nóbile have 18 wealthy acquaintances over for a dinner party at their lavish mansion. The servants inexplicably begin to leave as the guests are about to arrive and, by the time the meal is over, only Julio, the majordomo, is left. Strange things occur: the guests somehow enter the mansion and go upstairs twice; the guests and hosts settle in and spend the night on the couches, chairs, and floor of the salon, while Julio sleeps at the table in the dining room. People wonder why no one attempted to leave the night before. A few guests try to exit the salon, but they all turn back or become distressed and stop before crossing the threshold. When Julio brings some leftovers for breakfast, he is trapped as well. They cannot leave. After much melodrama and hand-wringing, Edmundo offers to take his own life. He gets a small pistol he had hidden, but one of the guests, Leticia, tells him to wait. She notices that all of the people and furniture are in the same spots they occupied at the party that first night. Leticia then has Blanca play the end of the piano sonata and has everyone repeat the conversation that followed. This time, when Blanca says she is tired, the group finds they can leave the salon, and then the mansion—hallelujah!—they are free. The guests go directly to a nearby church to give thanks—and they cannot leave the church.
Why is Buñuel’s fable a cautionary tale for us? Because with AI, we are not aware of the forces we have unleashed; we’re enjoying ourselves at the raucous party of inattention. We’ll soon become stuck: unable to leave behind the alphabetic-order focus on content that brought us to this juncture, and yet unable to fathom the outcomes of building a world around AI’s fuel-driven logic. Because we don’t see what got us stuck in the first place, we are ill-prepared to evade the traps of AI logic. Being unlikely to consider the logic of our entraining ways, we get stuck again.
Another insightful piece from the engaging intelligence and rich imagination of Barry Chudakov, thinker at large.