Known as the "Green Heart of Italy," Umbria boasts untouched landscapes in its verdant hills, mountains, and valleys. Etruscans, Romans, and feuding medieval families have left an incredible artistic and cultural heritage, while priests and monks have given a fascinating religious imprint to its towns. During my visit to Umbria in late summer, I met a couple from New York at their marvelous farmhouse. I had a short yet fascinating conversation with the husband, a distinguished anthropologist and university professor, while my wife and our friends were given a tour by his wife, a famous journalist. The couple were in their mid-80s.
The husband asked about my profession, and I said, "I'm in AI education." He immediately asked, "Can AI understand irony?" That question still puzzles me today. I set the answer aside and focused instead on the question itself, and on a more fundamental question I have been thinking about lately: "consciousness," the complicated problem, and even more complex question, in the field of AI.

Venturing a little into philosophy, the hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is "something it is like" for a subject in conscious experience, why conscious mental states "light up" and directly appear to the subject. The usual methods of science explain functional, dynamical, and structural properties: what a thing does, how it changes over time, and how it is put together. But even after we have explained the conscious mind's functional, dynamical, and structural properties, we can still meaningfully ask why it is conscious at all. This suggests that an explanation of consciousness will have to go beyond the usual methods of science. Consciousness presents a hard problem for science, or perhaps it marks the limits of what science can explain. Explaining why consciousness occurs at all can be contrasted with the so-called "easy problems" of consciousness: the problems of explaining its function, dynamics, and structure. Those can be described with the usual methods of science. But that still leaves the question of why there is something it is like for the subject when those functions, dynamics, and structures are present. That is the hard problem.

But let us, for a moment, assume that a conscious being is one capable of having a thought and not disclosing it. Consciousness would then be a prerequisite for irony, for saying one thing while meaning the opposite, which happens often in my Persian culture. We know we are being ironic when we realize our words do not correspond with our thoughts. The fact that most of us have this unique capacity, and that most of us regularly convey our unspoken meanings in this way, is something that, I think, should surprise us more often than it does. It seems almost distinctly human. Animals can be funny, but not deliberately so. So what about computers and machines? Can they deceive? Can they keep secrets? Can they be ironic?

The truth is that anything related to AI is already being studied by an army of obscenely well-resourced computer scientists and AI researchers. That is also the case with the question of AI and irony, which has recently attracted significant research attention in academia and at private companies. Of course, since irony involves saying one thing while meaning the opposite, building an intelligent machine that can detect and generate it is no simple task. But if the AI community could build such a machine, it would have many practical applications, some more sinister than others. In the age of Google online reviews, among others, retailers have become very keen on so-called "opinion mining" and "sentiment analysis," which use AI to map the content and the mood of reviewers' comments. Knowing whether your product is being praised or is becoming the butt of a joke is valuable information. And this is what Amazon is doing right now.
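To make "sentiment analysis" a little more concrete, here is a minimal sketch using NLTK's lexicon-based VADER analyzer. This is my own illustrative choice, not a claim about what Amazon or any retailer actually runs, and the review texts are invented for the example.

```python
# A minimal sketch of lexicon-based sentiment analysis on product reviews,
# using NLTK's VADER analyzer. The reviews below are made up for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

reviews = [
    "Absolutely love this kettle - boils fast and looks great.",
    "Broke after two days. Complete waste of money.",
    "Wow, it stopped charging after a week. Fantastic quality, just what I needed.",  # sarcastic
]

for text in reviews:
    scores = analyzer.polarity_scores(text)  # dict with neg/neu/pos/compound scores
    verdict = "praise" if scores["compound"] > 0 else "complaint"
    print(f"{verdict:>9}  (compound={scores['compound']:+.2f})  {text}")
```

A purely lexical scorer of this kind is likely to read the third review as praise, since all it sees are positive words; that blind spot is exactly what makes irony and sarcasm interesting to the researchers discussed below.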
Or consider content moderation on social media platforms. If, say, Twitter or Facebook wants to limit online abuse while protecting freedom of speech, would it not be helpful to know when someone is serious and when they are just joking? Or what if someone tweets that they have just done something crazy and illegal? (Don't ever tweet crazy or illegal stuff, by the way.) Imagine if we could determine instantly whether they are serious or whether they are just "being ironic." Given irony's proximity to lying, it is not hard to imagine how the entire shadowy machinery of government and corporate surveillance that has grown up around new communications technologies would find the prospect of an irony detector extremely interesting. And that goes a long way toward explaining the growing literature on the topic in the AI field.

To understand the state of current research into AI and irony, it helps to know a little about the history of AI in general. That history is usually broken down into two periods. In the 1990s, AI researchers sought to program computers with handcrafted formal rules for how to behave in predefined environments. If you used Microsoft Word in the 90s, you might remember Clippy, the annoying office assistant who endlessly popped up to offer unwanted advice. Since the early 2000s, that model has been replaced by data-driven machine learning and sophisticated neural networks. Enormous caches of examples of a given phenomenon are translated into numerical values, on which computers can perform complex mathematical operations to find patterns no human could ever discover. Moreover, the computer does not merely apply a rule; it learns from experience and develops new operations independent of human intervention. The difference between the two approaches is the difference between Clippy and facial recognition technology.

To create a neural network that can detect irony, AI scientists focus initially on what some would consider its simplest form: sarcasm. They begin with data scraped from social media. For example, they might collect all tweets labeled "sarcasm" (with or without the #, of course), or Reddit posts labeled "/s," a shorthand that Reddit users employ to indicate they are not being serious. The point is not to teach the computer to recognize the two separate meanings of any given sarcastic post; indeed, meaning is of no relevance whatsoever. Instead, the computer is instructed to search for recurring patterns, or what researchers call "syntactical fingerprints": words, punctuation, errors, emojis, phrases, context, and so forth. The dataset is then bolstered with further streams of examples, other posts in the same threads, for instance, or from the same account. Each new sample is run through a battery of calculations until we arrive at a single determination: sarcastic or not sarcastic. Finally, a bot can be programmed to reply to each original poster and ask whether they were being sarcastic, and any reply is added to the machine's growing mountain of experience. (A rough sketch of such a pipeline appears below.)

So, assuming AI continues to grow and advance at the rate that took us from Clippy to facial recognition technology in less than two decades, can ironic androids be far off? It could be argued that there is a qualitative difference between sorting through the "syntactical fingerprints" of irony and actually understanding it. Some might suggest not.
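Here is that rough sketch: a minimal, distantly-supervised sarcasm classifier of the kind described above, assuming posts have already been collected and labeled by whether they originally carried a "#sarcasm" or "/s" tag (with the tag itself stripped out so the model cannot cheat). The handful of posts, the labels, and the feature choices are invented purely for illustration; actual research systems use far larger corpora and neural models.

```python
# A minimal sketch of a distantly-supervised sarcasm classifier.
# Labels: 1 = the post originally carried "#sarcasm" or "/s", 0 = it did not.
# The tiny dataset below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline, make_union

posts = [
    "wow, stuck in traffic for three hours. best day ever",
    "great, my flight got cancelled again. love it",
    "had a lovely walk in the park this morning",
    "the new update fixed the crash, really happy with it",
]
labels = [1, 1, 0, 0]

# Word and character n-grams stand in for the "syntactical fingerprints"
# described above: recurring words, punctuation, spellings, and so on.
features = make_union(
    TfidfVectorizer(ngram_range=(1, 2)),                      # word patterns
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # punctuation/spelling patterns
)

model = make_pipeline(features, LogisticRegression(max_iter=1000))
model.fit(posts, labels)

# Classify an unseen post: the output is simply "sarcastic" (1) or "not" (0).
print(model.predict(["fantastic, the printer is jammed again"]))
```

The character n-grams are a stand-in for the "syntactical fingerprints" researchers describe: the model keys on punctuation, exaggerated spellings, and recurring word patterns, not on anything resembling the two opposed meanings of a sarcastic remark.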
If a computer can be taught to behave exactly like a human, then it is immaterial whether a rich internal world of meaning lurks beneath its behavior. But I would argue that irony is a unique case: it relies precisely on the distinction between external behavior and internal belief. While AI scientists have only recently become interested in irony, philosophers and literary critics have been thinking about it for a very, very long time. And perhaps exploring that tradition would shed old light, as it were, on a new problem. Of the many names one could invoke in this context, two are indispensable: the German Romantic philosopher Friedrich Schlegel and the post-structuralist literary theorist Paul de Man.

For Schlegel, irony does not simply entail a false external meaning and a true internal one. Rather, in irony two opposite meanings are presented as equally valid. And the resulting indeterminacy has devastating implications for logic, most notably for the law of non-contradiction, which holds that a statement cannot be simultaneously true and false.

De Man follows Schlegel on this score and, in a sense, universalizes his insight. He notes that every effort to define a concept of irony is bound to be infected by the phenomenon it purports to explain. Indeed, de Man believes all language is infected by irony and involves "permanent parabasis." Because humans have the power to conceal their thoughts from one another, it will always be possible - permanently possible - that they do not mean what they are saying. Irony, in other words, is not one kind of language among many; it structures or, better, haunts every use of language and every interaction. And in this sense, it exceeds the order of proof and computation. The question is whether the same is true of human beings in general.