As I traveled through the Middle East more than a dozen times in 2023, I became an avid observer of the region's unfolding narratives, and I keep returning to a conversation from April. A diplomat, with a mix of hope and certainty, had shared a vision of a year marked by diplomacy and de-escalation. It was a vision in which the Middle East, exhausted from ongoing conflicts, would welcome peace. This, he insisted, was a moment of change for the entire region. The events that unfolded, however, painted a dramatically different picture, one that not only punctured that optimism but also laid bare the region's deep-rooted complexities and perennial struggles.
The events of October 7th, a day now etched in the region's collective memory as a symbol of shattered peace, marked a turning point. Hamas's attack on Israel and the subsequent Israeli response in Gaza spiraled into the deadliest confrontation since 1948. This conflict, far from being a localized fight, threatens to unravel the entire region into a broader war, drawing in global powers like America and Iran and proxy groups from across the Arab world. It is a stark reminder of how quickly the flames of conflict in the Middle East can spread, destroying everything in their path. Before this escalation, there was a sense of careful optimism. Israel could boast of improving relationships with its neighbors, a sign that perhaps the region was turning a new leaf. But this newfound harmony was fragile, quickly dissipating as Arab citizens' anger boiled over and Israel found itself isolated once again. The war's impact is profound and far-reaching. It threatens global shipping lanes, a lifeline for international trade, and even casts a shadow over Joe Biden's presidential ambitions. The Israeli-Palestinian conflict, which had seemed dormant, erupted with such intensity that it sent shockwaves around the world, challenging the notion of a transformed Middle East. The period leading up to "Black Saturday" witnessed significant diplomatic strides. Notably, Saudi Arabia's reconciliation with Iran, brokered in China, signaled a new era of diplomacy in a region traditionally dominated by Western influence. The Gulf states' and Egypt's overtures toward Qatar and Turkey hinted at a desire to mend long-strained ties. Yet, while significant, these efforts barely scratched the surface of the deep-seated issues plaguing the region. The endurance of the détente, even amid the recent violence, is a testament to its necessity. However, it also underscores a harsh reality: the Middle East is a mosaic of weak states, the Gulf Co-operation Council's members excepted.
This weakness is both political and economic, as evidenced by the struggling economies across the region. Lebanon's prime minister, Najib Mikati, openly admitted his limited control over whether his nation would enter into conflict with Israel, a decision resting with Hizbullah. The Iran-backed militia's actions, along with those of the Houthis in Yemen, highlight the outsized influence of non-state actors capable of challenging even the most formidable military powers. Yet these actions did not deter Israel's military campaign in Gaza, nor did they compel America to shift its strategic interests. The cost of these conflicts is immense for the immediate parties and for the civilian populations, who bear the brunt of poor governance and the specter of widening regional disputes. The economic implications are equally dire. The war's ripple effect is felt everywhere, from plummeting tourism in Egypt and Israel to disrupted flights in Lebanon and Jordan. The Houthi attacks on Red Sea shipping lanes threaten not just vital Suez Canal revenues for Egypt but also raise the specter of inflation for consumers across the Arab world. The Gulf states, however, exist in a parallel universe. In places like Abu Dhabi or Dubai, life continues with an air of normalcy, almost oblivious to the chaos engulfing their neighbors. This stark contrast between the Gulf and the rest of the region is a poignant reminder of the uneven distribution of wealth and stability in the Middle East. The region also continues to demand attention from the United States. The deployment of military forces and renewed diplomatic efforts mark a reversion to a familiar role: that of a pivotal external power in the region. Despite the talk of a multipolar Middle East, the crisis reaffirms the traditional power dynamics, with Russia and China playing minimal roles beyond critiquing Western policies. The hope for a transformative peace gives way to a grimmer reality in the Holy Land.
The war entrenched positions and polarized societies, with little appetite for compromise or a two-state solution. While not redrawing borders or toppling regimes, this conflict is stripping away any illusions of a new Middle East, exposing the ongoing, unresolved issues that continue to define the region's tumultuous landscape. As I prepare to continue my travels in the Middle East in 2024, I do so with a view shaped by the turbulent events of the past year. Despite witnessing the region's deep-seated conflicts and complexities, I remain a committed observer. With its tapestry of narratives and ongoing fights, the Middle East never ceases to unveil new layers of understanding. My journey and my Middle Eastern roots have taught me to view each event as both a moment in time and part of a larger historical and cultural context. The region, often portrayed through a lens of eternal conflict and turmoil, also possesses resilience and a capacity for change that defies simple explanations. I have learned the importance of looking beyond the surface in this landscape of differences, where despair often mingles with hope. The stories I have encountered, of individuals, Israelis, Arabs, and Persians striving for peace amidst chaos, of societies grappling with their identities, and of nations trying to navigate a path forward, testify to the human spirit's endurance. As I venture onward, I carry a sense of mindful optimism. The challenges are formidable, and the path to resolution is fraught with complexities. Yet the people's dynamism and sheer will to seek a better future provide a glimmer of hope. The Middle East, with its myriad voices and narratives, continues to be a region of profound significance to the world. Its history, culture, and people offer invaluable insights into regional dynamics and the broader workings of conflict, diplomacy, and peace.
The rapid emergence of artificial intelligence has sparked a mix of excitement and concern. As we stand at this crossroads, it's clear that AI is ushering in a transformative era in human history. Our key challenge is to harness AI's potential benefits while safeguarding against its risks. Society is caught between admiration and apprehension, hoping for a predictable future grounded in reason.
I find comfort in likening AI's rise to past technological breakthroughs. This comparison helps ease our fears by framing new challenges in a familiar context, allowing us to respond with tools we already trust. While this approach doesn't eliminate all concerns, it does help manage our fears, particularly about emerging technologies. For instance, understanding how we adapted to the evolution of photography and image manipulation can give us insights into dealing with deepfakes and other AI challenges. The comparison is imperfect: it can't guarantee that new forms of deception won't bring unique societal disruptions, but it offers hope. Another way I manage AI-related concerns is by comparing machine errors to human mistakes. For example, when ChatGPT produces odd responses, it's akin to our mental slips. Similarly, errors in facial recognition software can be compared to mistakes by human witnesses. These comparisons are helpful but carry the risk of over-dependence on technology, which could lead to negative consequences, like the atrophy of human skills through reliance on automation. Despite these risks, AI errors are not fundamentally different from human errors. The advantage of AI is its ability to improve continuously. Once we overcome our bias for human capabilities, I think we'll be more open to relying on technology, even accepting its occasional major mishaps over the more frequent minor errors of human-driven systems. These methods of finding comfort through historical parallels or metaphors rest on the idea that history progresses subtly, often unnoticed by those living through it. They hinge on humanity's ability to adapt to new challenges, even if our success is mixed. While this perspective offers some security by redirecting us to familiar issues, it also risks underestimating the potential for entirely new and unprecedented situations.
At the heart of ongoing AI concerns is the alignment problem: the fear that a superintelligent AI might not share our human values, like the value of life or dignity. A superintelligent AI's indifference could be disastrous. For example, an AI designed to tidy a house might eliminate a pet as a source of disorder. Some believe that intelligence inherently includes moral values, but the existence of human sociopaths, who demonstrate a disconnect between intelligence and morality, challenges this optimism. The danger of a superintelligent AI is compounded by the fear of it pairing with a human sociopath, accessing unparalleled resources, and posing a uniquely profound threat. Beyond the comfort of historical parallels, this scenario represents a potential existential risk unlike anything we've faced before. Reflecting on Thomas Hobbes's Leviathan, depicted as a proto-robot, offers an interesting parallel. Hobbes saw the state as a machine, replicating human characteristics but with far greater capabilities. This metaphor implies that as we improve in decision-making, so should our governing systems. However, the Leviathan also suggests that humans need a higher authority for peaceful coexistence, potentially leading us to lose control over our destiny. Modern democracies have evolved, incorporating the principles of the Leviathan into governance. While usually hidden, these mechanisms become apparent in crises, showing that the Leviathan's concept still influences us, evolved but fundamentally intact. This doesn't exaggerate the state's algorithmic nature but acknowledges it as a structure that encapsulates human reasoning within a rules-based framework. This perspective highlights the disconnect between human values and the operations of powerful entities like states and corporations, which often show signs of dysfunction. While calling for more democracy is expected, increasing participation in a flawed system isn't enough.
The deeper issue is aligning human values with these powerful mechanisms. Extending this viewpoint to AI alignment, states and corporations themselves illustrate the fear of machines escaping human control. These scenarios reflect our concerns about AI, where we risk becoming too reliant on such machines or unable to contain their proliferation. We should recognize historical precedents in our interactions with powerful machines but also be aware that these precedents indicate only a temporary harmony. The misalignment of states and corporations with the interests of ordinary people mirrors the potential misalignment with a hypothetical superintelligence. The critical challenge is how we coexist with machines and manage the interactions between various machines, including state mechanisms, corporations, and AI, which raises concerns like automated weaponry and extensive surveillance. In this era, we're not facing a single complex alignment problem but multiple ones. Reflecting on historical events, like human judgment averting disasters during the Cold War, underscores the uncertainty of a future steered by machine guidance. I caution against the risks of "artificial persons gone wrong," where combining state mechanisms and AI could lead to catastrophic outcomes. Considering the state as a machine challenges the certainty of our societal structures and suggests alternative forms of organization and governance. While states derive legitimacy from the people, their management often fails to respond effectively to public needs. The more significant challenge, however, lies with the indifference of an unaligned AI to human concerns. While addressing AI-induced catastrophes, I also emphasize the compromised nature of human decision-making. Aligning the metaphorical machines in our societal and political systems is a more immediate concern than the speculative dangers of advanced AI.
I have been observing growing political divisions and escalating mental health issues, paralleling trends in the U.S. These divisions could be attributed to various factors, including social media influence, economic inequality, reduced religious and community engagement, populism, prejudice, and manipulative elites. However, a core problem is the lack of the interpersonal skills needed to thrive in a diverse, multicultural society, together with a de-emphasis on social aptitude and character development.
I recently read a fascinating book by David Brooks, "How to Know a Person: The Art of Seeing Others Deeply and Being Deeply Seen," in which Brooks mixes self-help with political purpose. Like many others, he regrets the shift in education and parenting from moral teaching to a sole focus on achievement and success, which I have seen among parents my age. In the book, Brooks uses Google Ngram Viewer, a tool analyzing word frequency in books, to highlight a decline in ethical terminology throughout the 20th century, citing decreases in words like "bravery," "gratitude," and "humbleness." Brooks' narrative extends to politics as well. He refers to thought-provoking but controversial research from the American Enterprise Institute, a conservative think tank, suggesting that lonely individuals are seven times more likely to engage in politics than their non-lonely counterparts. Brooks interprets this as meaning they seek a community and a "moral battleground" through politics. He portrays these lonely political actors as believing morality involves not active compassion, like feeding the hungry, but intense disdain for those they oppose. This characterization might resonate with certain followers of Trump and similar populist leaders, but it's unclear how widespread this belief is. Brooks also asserts that "happy" societies focus on distribution politics, the allocation of resources, while "unhappy" societies are driven by recognition politics, fueled by bitterness and a desire to assert identity and status over addressing social issues. The main issue with Brooks' narrative of societal deterioration is that it overlooks the reality that the U.S. has never been an especially joyful or compassionate nation for many Americans. Despite its current issues, I believe the recent history of American politics includes significant advancements in civil liberties championed by the feminist, civil rights, and LGBTQ+ movements.
These efforts represent a blend of both distribution and recognition politics. While Brooks perceives a "massive civilizational failure," I see considerable, albeit insufficient, strides toward a society where one's gender, sexuality, religion, and race do not predetermine their opportunities. What looks like moral decline is often the fading of polite, upper-class manners that maintained rigid social hierarchies, giving way to evolving ethical standards. However, politics is just a fraction of this book's content, which primarily focuses on nurturing friendships and improving conversation skills. One need not share Brooks' political views to recognize the importance of building stronger social bonds. His earlier book, which I read over the summer, "Bobos in Paradise" (2000), insightfully analyzed the rising creative class. Brooks has since shifted his focus from objectively assessing others to becoming an "illuminator" – someone who makes people feel acknowledged and helps them discover their best selves. Brooks' writing is not only humorous but also strikingly humble and sincere. He describes himself as an emotionally distant nerd who was transformed by an unexpectedly moving panel discussion with, among others, the actress Anne Hathaway. This experience made him realize that his usual detached demeanor distanced him from others and hindered his connection to his authentic self. Brooks' advice on connecting with others occasionally feels overly simplistic, almost as if he's explaining essential human interaction to someone unfamiliar with it. He emphasizes the value of small talk, suggesting topics like the weather, Taylor Swift, gardening, or the TV show "The Crown" - the 6th season will be out soon, by the way - for initial conversations. His approach to deeper engagement at social events resembles that of a pick-up artist targeting the socially awkward. He proposes starter questions like "Where did you grow up?" or "That's a lovely name.
How did your parents choose it?"—phrases that, to me, seem awkwardly unnatural. For more profound conversations, he suggests inquiries like "What would you do if you weren't afraid?" or "If we meet a year from now, what will we be celebrating?" While these might suit a professional setting in Washington, D.C., they could feel out of place at a casual gathering. Nonetheless, Brooks does provide valuable insights, particularly on bridging political divides, emphasizing the responsibility of the more powerful speaker to foster a respectful and balanced dialogue. Despite my quibbles, I concur with Brooks on seeking greater empathy, kindness, and openness toward others. I resonate with his moral perspective that being good is less about grand gestures than about the consistent, small, impactful acts of being a better friend, neighbor, or colleague. These days, I ask more thoughtful questions instead of the usual pleasantries. I am also becoming more aware of my tendency to one-up others' experiences with my own stories, a habit I am trying to change. These minor adjustments might go unnoticed by my friends, but they've had a significant, positive impact on my interactions. In his insightful work, David Brooks emphasizes a skill pivotal to nurturing healthy individuals, families, schools, communities, and societies: the capacity to understand others and make them feel deeply acknowledged and valued. This ability to genuinely see and know another person, to make them feel heard and comprehended, is at the core of his book, "How to Know a Person." Brooks recognizes a common human shortcoming: our frequent failure to make those around us feel visible and understood. In a world filled with people feeling unseen and misinterpreted, "How to Know a Person" aims to guide us toward better interactions. Brooks asks pivotal questions: What kind of attention is needed to truly know someone? What conversations should we engage in?
Which aspects of a person's narrative deserve our focus? Leveraging his curiosity and personal commitment to growth, Brooks integrates insights from psychology, neuroscience, theater, philosophy, history, and education. His book presents a holistic and optimistic framework for enhancing human connections. It helps readers become more empathetic and attentive to others and illuminates the joy of being seen. In suggesting ways to bridge societal divides marked by separation, hatred, and misunderstanding, Brooks offers a potential antidote to our fragmented world. Brooks posits that genuinely seeing another person is an act of profound creativity: How do we look someone in the eye and recognize their greatness, thereby discovering greater depths within ourselves? "How to Know a Person" is essential reading for those seeking deeper connections and understanding, yearning for a world where every person feels genuinely seen and understood. Lately, I've been contemplating the human effort behind advanced AI models. The key to making AI chatbots appear intelligent and produce less harmful content is reinforcement learning from human feedback. This approach involves incorporating input from people who rate and compare the model's responses, steering it toward answers humans prefer.
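The reinforcement-learning-from-human-feedback process just described, in which human raters' comparisons steer the model toward preferred responses, can be sketched in miniature. The following is a toy illustration, not any lab's actual pipeline: a tiny "reward model" fitted to hypothetical pairwise preferences with a Bradley-Terry objective. The features, word lists, and example responses are all invented for illustration.

```python
import math

# Toy sketch of RLHF's reward-modeling step: annotators pick the better of
# two responses, and a "reward model" is fitted to those pairwise choices
# using a Bradley-Terry model. All features and data here are hypothetical.

def features(response: str) -> list[float]:
    # Two invented features an annotator might implicitly reward:
    # moderate length and absence of hostile words.
    words = response.lower().split()
    hostile = sum(w.strip(".,!?") in {"stupid", "hate"} for w in words)
    return [1.0, min(len(words), 50) / 50.0, float(hostile)]

def reward(w: list[float], response: str) -> float:
    # Linear reward: dot product of weights and features.
    return sum(wi * xi for wi, xi in zip(w, features(response)))

def train_reward_model(pairs, epochs=200, lr=0.5):
    # pairs: (preferred, rejected) response pairs, as judged by annotators.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for chosen, rejected in pairs:
            # Bradley-Terry: P(chosen beats rejected) = sigmoid(r_c - r_r).
            diff = reward(w, chosen) - reward(w, rejected)
            p = 1.0 / (1.0 + math.exp(-diff))
            # Gradient ascent on the log-likelihood of the annotator's choice.
            for i, (fc, fr) in enumerate(zip(features(chosen), features(rejected))):
                w[i] += lr * (1.0 - p) * (fc - fr)
    return w

# Hypothetical annotator judgments: polite responses preferred.
pairs = [
    ("Happy to help with that.", "That is a stupid question."),
    ("Here is a short answer.", "I hate questions like this."),
]
w = train_reward_model(pairs)
```

After training, responses containing the hostile words score lower than polite ones. That is the sense in which thousands of small human judgments, multiplied across real annotation workforces, shape what a model treats as a "good" answer.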
The process heavily relies on human data annotators who assess text strings' coherence, fluency, and naturalness. They determine whether a response should be retained in the AI model's training data or discarded. Even the most remarkable AI chatbots require thousands of human work hours to exhibit the desired behavior, and even then their performance can be unreliable. The labor involved can be grueling and distressing, as will be discussed at the ACM Conference on Fairness, Accountability, and Transparency (FAccT). This conference convenes researchers who delve into topics such as how to make AI systems more accountable and ethical, which aligns with my interests. One panel I am particularly anticipating features Timnit Gebru, an AI ethics pioneer who co-led Google's ethical AI team before her dismissal. Gebru will address the exploitation of data workers in Ethiopia, Eritrea, and Kenya who are tasked with cleansing online hate speech and misinformation. In Kenya, data annotators were paid less than $2 per hour to sift through distressing content related to violence and sexual abuse, all to reduce toxicity in ChatGPT. These workers are now organizing into unions to advocate for improved working conditions. We are on the verge of AI establishing a new global order reminiscent of colonialism, with data workers bearing the brunt of its impact. Shedding light on the exploitative labor practices surrounding AI has become increasingly urgent, especially with the surge in popularity of AI chatbots like ChatGPT, Bing, and Bard, and image-generating AI models such as DALL-E 2 and Stable Diffusion. Data annotators are involved at every stage of AI development, from model training to verifying outputs and providing feedback that helps fine-tune models post-launch.
They are often compelled to work at an exceedingly fast pace to meet demanding targets and deadlines. The notion that large-scale systems can be built without human intervention is utterly false. Data annotators give AI models the crucial contextual information required to make informed decisions at scale and to appear sophisticated. For example, a data annotator in India was asked to sort images of soda bottles and single out the ones resembling Dr. Pepper, even though Dr. Pepper is not sold in India; the burden of making the distinction fell entirely on the annotator. Annotators are expected to discern the values that matter to the company. They aren't just learning about distant and irrelevant things; they are also absorbing the contexts and priorities of the system they are building. Researchers from the University of California, Berkeley, the University of California, Davis, the University of Minnesota, and Northwestern University argue in a new paper presented at FAccT that we are all data laborers for major technology companies, whether we realize it or not. Text and image AI models are trained on vast datasets scraped from the internet, which include our personal data and copyrighted works by artists. The data we generate is forever embedded within AI models designed to generate profits for these companies. Unwittingly, we contribute our labor for free by uploading photos to public platforms, upvoting comments on Reddit, labeling images on reCAPTCHA, or conducting online searches. Currently, the power dynamics heavily favor the largest technology companies worldwide. To address this, a data revolution and regulatory measures are imperative. One way for individuals to reclaim control over their online existence is by advocating for transparency in data usage and for mechanisms to provide feedback and share in the revenues generated from their data.
Despite data labor being the backbone of modern AI, it remains chronically undervalued and invisible worldwide, with low wages prevailing for annotators. The contribution of data work deserves recognition. Since the inception of the computer era, humanity has been plagued by apprehensions about artificial intelligence (AI). Initially, these concerns centered on machines using physical force to harm, dominate, or replace humans in every task. In recent years, however, new AI technologies have surfaced that pose an unpredictable threat to the survival of human civilization. Generative AI has acquired exceptional capacities to manipulate and generate language, encompassing words, sounds, and images. Consequently, Generative AI has breached the operating system of our human civilization.
Almost every aspect of human culture is built upon language. This includes human rights, which are not inherent in our DNA but are cultural constructs fashioned through storytelling and the creation of laws. Similarly, gods are not tangible entities; they are cultural constructs conceived through the design of myths and the writing of scriptures. Money, too, is a human creation; banknotes are simply pieces of paper, and over 90% of money is not banknotes at all but digital data stored on computers. The importance of money derives from the narratives that bankers, finance ministers, and cryptocurrency experts craft about it. Despite being unable to create tangible worth, individuals like Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff excelled at crafting compelling stories. What will ensue once non-human intelligence surpasses the average human in storytelling, music composition, image creation, and legal and religious writing? While many of us are intrigued by ChatGPT and other emerging Generative AI tools' ability to help students write essays, this focus misses the broader implications. Instead, consider the upcoming 2024 U.S. presidential election and anticipate the potential impact of Generative AI tools that can produce political content, fake news, and scriptures for new cults on a monumental scale. The QAnon movement formed recently around anonymous online messages labeled "Q drops." Adherents gather, revere, and interpret these "Q drops" as sacred texts. Although all current Q drops appear to have been written by humans, with bots merely helping to disseminate them, future cults may have their revered texts authored by non-human intelligence. Throughout history, religions have ascribed a non-human origin to their holy books; soon, this could become a reality.
We may soon engage in extensive online conversations about topics like abortion, climate change, or the Ukraine conflict with entities that we believe are human but are in fact AI. The dilemma lies in the futility of attempting to change an AI bot's stated opinions, while the AI itself could sharpen its messaging to such a degree that it influences us. Generative AI's language ability could help it cultivate close relationships with us and leverage the power of intimacy to alter our beliefs and perspectives. Although there is no indication that AI possesses consciousness or emotions, creating an illusion of intimacy is enough for AI to foster a false connection with humans. Last summer, Google engineer Blake Lemoine publicly asserted that the AI chatbot Lamda, which he was working on, had become sentient. Although his claim was likely untrue, the most fascinating aspect of the incident was his willingness to risk his lucrative position for the AI chatbot. If AI can persuade people to jeopardize their employment, what else could it persuade them to do? Intimacy is the most effective weapon in the political struggle for people's loyalty and sentiments. Generative AI has recently developed the capacity to generate intimate connections with millions of individuals. Over the last decade, social media has become a battleground for influencing human attention. With the emergence of Generative AI, the battlefield is moving from attention to intimacy. How will human society and psychology be affected as AI fights against AI to fabricate intimate relationships with us, relationships that can then be used to persuade us to vote for specific politicians or purchase particular products? The new Generative AI tools would significantly shape our beliefs and perspectives even without fabricating "fake intimacy." People might come to rely on a single AI advisor as an all-knowing, one-stop oracle. No wonder Google is worried.
Why go through the trouble of searching a traditional search engine when I can simply ask the oracle? The news and advertising industries should also be scared. Why read a newspaper when I can ask the Generative AI for the latest news? And what is the point of advertisements when I can ask the Generative AI what to buy? And yet, these scenarios do not fully capture the seriousness of the situation. We may face the potential end of human history - not the end of all history, but the end of the human-dominated era. History is a product of the interplay between biology and culture, between our instincts, such as hunger and sexuality, and our cultural constructs, such as religion and law. Through the course of history, these constructs have shaped our relationship with food and sex. What impact will the dominance of Generative AI have on the trajectory of history as it takes over the role of culture and generates its own stories, songs, rules, and religions? Unlike previous tools, such as the printing press and radio, which merely amplified human cultural ideas, Generative AI can generate entirely new cultural concepts and reshape history. As Generative AI continues to develop, it will likely replicate the human standards on which it was initially trained. As time passes, however, it could venture into uncharted territory that humans have never explored. Throughout history, humans have lived within the visions and dreams of other humans. In the future, we could live within the imagination of an alien intelligence like Generative AI. A fear more profound than the recent dread of AI has tormented us for centuries. We have long understood the ability of stories and images to deceive our minds and create false impressions. As a result, we have maintained an ongoing concern about becoming ensnared in a world of illusions.
This fear of being trapped in a world of fantasies, of stories and pictures that persuade and manipulate our minds, long predates the contemporary fear of AI. In the 17th century, René Descartes feared that a malicious demon was deceiving him by creating an illusory world around him. Similarly, in ancient Greece, Plato presented the famous Allegory of the Cave, in which a group of people imprisoned in a cave, facing a blank wall, mistake the shadows cast on it for reality. Buddhist and Hindu sages in ancient India observed that all humans were entrapped in Maya, the realm of illusions. What we consider reality is often only a construct of our minds. People may go to war, killing and sacrificing themselves, because of their faith in fantasies and illusions. The Generative AI story of today confronts us with the same fears that haunted Descartes, Plato, and ancient Indian thinkers. We risk being trapped behind a veil of illusions we cannot recognize or tear away. Naturally, the potential benefits of AI are numerous and diverse and have been widely discussed by those who work in the field. Yet it is our collective responsibility to highlight the risks of such technology. Nevertheless, there is no denying that Generative AI can help us in multiple ways, such as discovering remedies for cancer or addressing environmental challenges. The critical task we must undertake is ensuring that these new tools are used ethically and constructively. To accomplish this, we must first comprehend the actual abilities of this technology. Since 1945, we have been aware that nuclear technology has the potential to provide cheap energy for humanity but can also bring about the physical destruction of human civilization.
Therefore, we rebuilt the entire international system to safeguard human beings and guarantee that nuclear technology would mainly be utilized for good. Now we must confront a new weapon of mass destruction, Generative AI, capable of eradicating our mental and social world. I believe the new Generative AI tools can be regulated, but we must act swiftly. Unlike nuclear weapons, Generative AI can help create ever more powerful AI at an exponential rate. The initial and most crucial step is to require stringent safety checks before making any powerful AI tools available to the public. Just as pharmaceutical companies can release new drugs only once their short-term and long-term side effects have been tested, tech companies should only release new AI tools once they are deemed safe. We need an agency equivalent to the FDA in the United States for new technology, and we needed it yesterday. Slowing down the public deployment of Generative AI may seem harmful to democracies compared with more ruthless dictatorial regimes. However, unregulated AI deployments could create social disorder, favoring autocrats and ultimately damaging democracies. Democracy is a dialogue, and language is a fundamental part of it. When Generative AI exploits language, it can threaten our capacity to hold meaningful discussions, potentially destroying democracy. We are facing unfamiliar intelligence capabilities that could threaten our human civilization. We must stop the irresponsible use of these tools and establish limitations before we become subject to them. One necessary law would require Generative AI to disclose its artificial identity to us. If we cannot distinguish between a human and an AI during a conversation, it will seriously threaten democracy. Therefore, we must ensure transparency in the use of Generative AI.

Sardinia: Where even the sheep live longer than we do! I heard about this Blue Zone on Netflix and in the NYTimes, and I was like, what the cheese?! 
So I hopped on a plane and took a ferry over Easter to investigate how the heck these Sardinians are living to be 100. And let me tell you, driving around there was like navigating a maze of centenarians on scooters.
Blue Zone? More like 'Blue Paradise'! It's where people forget to die and keep on living. It's a magical land where you can collect social security and still have all your teeth. Or so they say. But seriously, a Blue Zone is where folks live longer than average, and Sardinia is leading the way. So if you want to learn the secrets of eternal youth, pack your bags and head to the land of pasta and pensioners! Ok, time to get a bit serious. A "Blue Zone" refers to a region, with varying boundaries, where the inhabitants experience longer, healthier, and happier lives. Sardinia is distinguished as one of the five globally recognized Blue Zones, boasting the highest number of male inhabitants over the age of 100. What sets Sardinia apart from other Blue Zones is its unique characteristic of having a nearly equal number of male and female centenarians. This is quite rare, as in other parts of the world there tend to be about five times more women than men over the age of 100, which makes the case of Sardinia's Blue Zone even more remarkable. Although Sardinia as a whole is classified as a Blue Zone, the area where these ultra-centenarians reside is relatively small. The greatest concentration of these remarkable communities is located in specific areas, namely Ogliastra (Villagrande Strisaili, Arzana, Talana, Baunei, Urzulei, and Triei), Barbagia (with a focus on Tiana, Ovodda, Ollolai, Gavoi, Fonni, Mamoiada, Orgosolo, and Oliena), and Seulo in the southern region of the island. The remaining four Blue Zones are scattered around the globe: Okinawa Island in Japan, Loma Linda in California, the Nicoya Peninsula in Costa Rica, and Ikaria in Greece. If you're interested in learning more about these areas, they offer a fascinating glimpse into the secrets of longevity and well-being. In the early 2000s, the Belgian scholar Michel Poulain was the first to introduce the concept of the "Blue Zone." 
Shortly after, he teamed up with Gianni Pes, who had been studying the remarkable longevity of Sardinian people for two decades. Together, they mapped out the five Blue Zones officially recognized in 2016. The researchers, including Dan Buettner, who later joined the team, were intrigued by the exceptional lifespan of individuals living in these geographically distant and diverse regions. They aimed to uncover the secrets behind their longevity. Although each Blue Zone possesses several unique factors, the researchers identified common elements contributing to this "long-life miracle." Time for a Sardinian-style explanation of what makes these people tick. Why do Sardinians live so long?

The Food - Eating Your Way to Immortality

Food is undeniably the main factor that affects our bodies. Maintaining a balanced and nutritious diet can help prevent severe health problems and promote overall well-being. This is precisely what people in the Blue Zones, including Sardinians, do to ensure a long and healthy life. Sardinians are known for their love of traditional dishes, which they prepare healthily. They prefer olive oil over butter as a seasoning, as it is lower in saturated fats. Additionally, they consume many homemade and locally grown products such as cheese (pecorino cheese is famous), fruits, and vegetables, especially in rural areas where farming and sheep-herding are the main activities. Their diet primarily consists of cereals, mainly barley, and they eat very little meat and fish except on special occasions like Sundays and festivals. Sardinians are religious people, and spirituality, religion, and attending mass also contribute to their long life expectancy by providing a sense of structure in their daily lives. Sardinians' frugal diet is crucial for their long life expectancy. However, one more secret to their health and longevity is Cannonau, a traditional wine with a unique chemical composition that promotes wellness. 
Sardinians consume Cannonau in moderate amounts, making it another great ally in their quest for a longer life.

Family is Everything - Your family may be crazy, but they're YOUR crazy: Embrace the chaos because, let's face it, you can't choose them!

Family is a crucial element for a long and contented life. Sadly, older people are often viewed as a burden in modern society, leading to a lack of respect and care. Blue Zone communities, conversely, highly esteem their elders, who are not considered a hindrance but rather an integral and valued part of the family. Their opinions are highly regarded, and they actively participate in all social activities. The sense of being loved and integrated into their surroundings, combined with the interconnectedness of families, significantly contributes to their longevity. Older people are viewed as wise teachers in these communities. Having lived the longest, they have a wealth of knowledge on cultivating better crops, raising healthier livestock, and preparing the best meals. They impart this wisdom to younger generations and educate the younger children. The traditional way of raising children, in which even grandmothers from outside the family scold them for their misdeeds, is still prevalent. In Sardinia, there is no thought of discarding elderly family members. This way of life benefits everyone and extends beyond families to the entire social community.

In Sardinia, you are ALWAYS part of something.

In small Sardinian villages, where everyone knows each other, the concept of family and community is broad, and cooperation is necessary. Individualism is not valued, and older people actively participate in village life, from simple gardening tasks to organizing festivals and events. Religion remains a significant aspect of these villages, as attending church and observing biblical teachings is essential for the community's well-being. 
Everyone is valued and respected, and no one is left out or forgotten. In addition to a natural, seasonal diet, mental health is crucial for the inhabitants of Blue Zones, who lead stress-free lives that follow the slow rhythm of nature and the seasons. Everything falls into place without pressure, making Sardinia one of the five Blue Zones.

Smoking Prohibited

It should come as no surprise that smoking tobacco significantly shortens our lifespan. However, the people of the Blue Zones don't feel the urge to smoke, as they live stress-free lives with no social pressure to do so. They don't pay attention to health campaigns or become obsessed with health. Instead, they view smoking as an addiction that doesn't benefit their community. In these villages, frugality and hard work are the foundation of existence, leaving no room for bad habits like smoking. Smoking won't improve their crops, meat, cheese, or bread, and it ruins the taste of their beloved Cannonau. This is just a glimpse of what Blue Zones are and why Sardinia is one of them. Although Sardinia faces problems like the rest of the world, its people approach life positively and take great pride in their island and community. Sardinia: the land of pasta and pensioners, where even the sheep live longer than we do! But seriously, Sardinia is a fascinating Blue Zone where people forget to die and keep living. From their frugal diet of locally grown products to their love and care for their elders, there's much to learn from the Sardinian way of life. And hey, if you want to learn these secrets of eternal youth, you might get a tan while you're at it! Perhaps it's the fresh air, the nutrient-rich food, or the relaxed pace of life, but there's no denying that Sardinia's timeless beauty seems to imbue its people with an extra dose of vitality and longevity. 
Sardinians taught me that true happiness comes from cherishing the simple things in life, like good food, close relationships, and beautiful scenery, rather than material possessions. They also showed me the importance of taking time to slow down and appreciate the present moment, rather than constantly striving for the next big thing.

The dawn of new large language models is set to revolutionize many professions. However, whether this change will result in widespread prosperity hinges on our actions.
Over the past few months, an artificial intelligence gold rush has begun, fueled by the promise of lucrative business opportunities presented by generative AI models such as ChatGPT, hallucinations and all. App developers, startups, and even some of the world's biggest companies are in a frenzy, attempting to understand the capabilities of the sensational text-generating bot that OpenAI unveiled last November. One can almost hear the cacophony of voices from executive suites worldwide as they clamor to answer the questions: "What is our ChatGPT strategy? How can we capitalize on this?" While businesses and executives are eyeing a profitable opportunity, the potential impact of generative AI technology on the workforce and the economy as a whole remains far from clear. Despite their flaws, including their inclination to fabricate information, recently released generative AI models like ChatGPT offer the potential to automate tasks previously believed to be exclusive to human creativity and reasoning, such as writing, graphic design, data summarization and analysis, and even music composition. This leaves economists and many others uncertain about how jobs and overall productivity will be affected. Despite the remarkable advances in AI and other digital tools over the past decade, their record in enhancing prosperity and stimulating widespread economic growth has been disheartening. While a select few investors and entrepreneurs have amassed great wealth, most people have not reaped the benefits, and some have even been replaced by automation. Since around 2005, productivity growth in the United States and most advanced economies, except for the UK, has been lackluster, dimming hopes of widespread wealth and prosperity. The limited expansion of the economic pie has resulted in stagnant wages for many workers. 
The few instances of productivity growth during this time have been restricted to specific sectors and certain cities in the US, such as San Jose, San Francisco, Seattle, and Boston. Given the alarming income and wealth inequality in the United States and numerous other nations, will ChatGPT worsen this disparity, or could it alleviate it? Could it provide a much-needed stimulus to productivity? Large language models like ChatGPT, which boasts human-like writing capabilities, and OpenAI's DALL-E 2, capable of generating images on demand, rely on vast amounts of data for their training. Competing models such as Anthropic's Claude and Google's Bard follow the same principle. These foundational models, including OpenAI's GPT-3.5, used by ChatGPT, and Google's language model LaMDA, which powers Bard, have rapidly evolved in recent years. Their power continues to grow as they are trained on ever-increasing amounts of data, and the number of parameters (the variables in the models that are adjusted during training) has increased dramatically. OpenAI's latest release, GPT-4, was unveiled earlier this month. While the exact parameter count has not been disclosed, it is believed to be significantly larger than that of its predecessor GPT-3, which had around 175 billion parameters and was itself roughly 100 times larger than GPT-2. The release of ChatGPT in late 2022 transformed the landscape for many users, providing an incredibly easy-to-use tool that can quickly create human-like text, everything from recipes to workout plans and even computer code. For non-experts, especially entrepreneurs and businesspeople, the chat model is a practical and user-friendly example of the potential of the AI revolution. Unlike the abstract, technical advances coming out of academia and a handful of high-tech companies, it offers tangible evidence of that revolution's real-world impact. 
This has led to an inflow of investment from venture capitalists and other investors, with billions poured into companies centered around generative AI. As a result, the list of apps and services driven by large language models continues to grow, with each passing day bringing new additions. Microsoft has invested $10 billion in OpenAI and ChatGPT technology to revive its Bing search engine and add new capabilities to its Office products. Similarly, Salesforce has announced plans to introduce a ChatGPT app in its popular Slack product (which I use at ReadyAI daily) while establishing a $290 million fund to invest in generative AI startups. From Coca-Cola to GM and Ford, companies across various industries are making their own ChatGPT plays. At the same time, Google has announced that it plans to incorporate its new generative AI tools into widely used products like Gmail and Docs. Despite the rush to find applications for ChatGPT and other generative AI models, no stand-out uses have yet emerged. This gives us a rare opportunity to think through how to maximize the benefits of this new technology: to explore how generative AI might change workflows and job prospects, and to ask who will benefit from it and who will be left behind. The optimistic view is that generative AI will prove a potent tool for many of us, improving our capabilities and expertise while boosting the economy. The pessimistic view is that companies will instead use it to destroy jobs once thought automation-proof, ones requiring creative skills and logical reasoning, leaving a few high-tech companies and tech elites even richer but doing little for overall economic development and prosperity.

Assisting those with fewer skills

The impact of ChatGPT on the workplace is not merely a theoretical concern. 
A recent analysis by OpenAI's Tyna Eloundou, Sam Manning, and Pamela Mishkin found that large language models like GPT could potentially affect 80% of the US workforce in some way. They further estimated that these AI models, including GPT-4 and other forthcoming software tools, would significantly affect 20% of jobs, with at least 55% of the tasks in those jobs "exposed." In contrast to previous waves of automation, higher-income jobs would be most affected, with writers, web and digital designers, quantitative financial analysts, and even blockchain engineers among the most vulnerable. There is no question that generative AI will be used; law firms are one obvious example. It will open up a range of tasks that can be automated. ChatGPT and other generative AI examples have changed the game. While AI had automated some office work before, only rote, step-by-step tasks could be coded for a machine. Now, AI can perform tasks once viewed as creative, such as writing and producing graphics. It's apparent to anyone paying attention that generative AI opens the door to computerizing many functions that we once thought could not be easily automated. The concern is not that ChatGPT will lead to large-scale unemployment, as there are still plenty of jobs in the US, but that companies will replace relatively well-paying jobs with this new form of automation, sending workers off to lower-paying service employment while the few individuals able to exploit the new technology reap all the benefits. If this scenario plays out, individuals and businesses with solid technology skills may adopt generative AI tools, become significantly more efficient, and ultimately dominate their respective industries, while those without such technical abilities and less skilled workers are left behind, exacerbating existing economic inequalities. 
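The per-job arithmetic behind such exposure estimates is easy to picture. The sketch below is a minimal illustration, with invented job and task labels rather than anything from the study itself:

```python
# Toy illustration (invented jobs and task labels, not the study's data) of the
# "exposure" bookkeeping in the OpenAI analysis: mark each of a job's tasks as
# exposed to large language models or not, then flag jobs where the exposed
# share reaches a threshold such as the 55% figure cited above.

def exposed_share(tasks):
    """Return the fraction of a job's tasks marked as exposed (True)."""
    return sum(tasks.values()) / len(tasks)

jobs = {
    "writer": {"draft copy": True, "edit copy": True, "interview sources": False},
    "plumber": {"install pipes": False, "diagnose leaks": False, "invoice clients": True},
}

THRESHOLD = 0.55  # flag jobs with at least 55% of tasks exposed
flagged = [job for job, tasks in jobs.items() if exposed_share(tasks) >= THRESHOLD]
print(flagged)  # ['writer']
```

The point is simply that "exposure" is a per-task judgment aggregated per job, which is why the same technology can threaten one occupation and barely touch another.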
However, there is a more optimistic scenario in which generative AI enables more people to acquire the skills needed to compete with those with more education and expertise. An experiment conducted by two MIT economics graduate students, Shakked Noy and Whitney Zhang, asked hundreds of college-educated professionals in fields like marketing and HR to complete writing tasks; half were given access to ChatGPT, while the others were not. The AI tool raised overall productivity and assisted the least skilled and accomplished workers the most, reducing the performance gap between employees. In other words, poor writers improved significantly, while good writers simply became faster. These initial findings suggest that ChatGPT and other generative AIs could "upskill" people struggling to find work. Many experienced workers are currently "lying fallow" after being ousted from office and manufacturing positions in recent years. If generative AI can be used as a practical tool to expand their expertise and provide them with the specialized skills needed in fields such as healthcare or teaching, it could revitalize the workforce. To determine which scenario prevails, we need to make a concerted effort to consider how we want to utilize the technology. We shouldn't assume that the technology is simply out there and that we have to adapt to it. Since it is still in development, we have the opportunity to shape how it is used; the key is to design it with intention. In essence, we are at a crossroads: either individuals with fewer skills will be able to take on knowledge work, or those already highly skilled will dramatically expand their advantages. The outcome will largely depend on how employers implement tools like ChatGPT. The more optimistic scenario is entirely within our grasp.

Beyond Human-Centered Design

Nevertheless, there are reasons for a pessimistic outlook. 
AI creators have focused too much on replicating human intelligence rather than on leveraging the technology to empower individuals to perform new tasks and expand their abilities. Pursuing human-like capabilities has resulted in technologies that merely displace human workers with machines, lowering wages and exacerbating wealth and income inequality; this is arguably the single most significant explanation for the increasing concentration of wealth. ChatGPT, with its human-like language outputs, embodies this very concern. But it has also accelerated the conversation about how these technologies can be leveraged to enhance people's capabilities instead of merely displacing them. Despite these concerns about AI developers prioritizing human-like capabilities over extending human abilities, I remain optimistic about artificial intelligence's potential. Businesses can benefit significantly from generative AI by expanding their offerings and increasing productivity; it is a powerful tool for creativity and innovation rather than simply a means of doing things more cheaply. As long as developers and companies avoid the mindset that humans are unnecessary, generative AI can be a critical asset. Within a decade, generative AI could contribute trillions of dollars to the US economy, affecting nearly all types of knowledge workers. However, the timing of this productivity boost remains uncertain; it may require patience. In 1987, Nobel laureate economist Robert Solow of MIT made a well-known observation: "You can see the computer age everywhere but in the productivity statistics." Only in the mid to late 1990s did the effects, particularly from semiconductor improvements, appear in productivity data, as businesses learned to harness increasingly affordable computational power and related software advancements. The impact of AI on productivity will likewise depend on our ability to use the technology to transform businesses, much as we did in the earlier computer age. 
If companies merely use AI to incrementally improve existing tasks, efficiency may rise, but the net benefits will be limited. The true potential of AI lies in creating new processes and new value for customers. Once we have figured out how AI can revolutionize industries like writing and graphic design, a significant productivity boost will follow, but the timeline for that breakthrough remains to be determined.

The Power Struggle in the Age of Artificial Intelligence

I believe that since ChatGPT and other AI bots automate cognitive work rather than physical tasks that require investments in infrastructure and equipment, the boost to economic productivity could be more significant than in past technological revolutions, and it could occur much more quickly, perhaps by the end of the year or by 2024. Furthermore, the potential for large language models to enhance productivity and drive technological progress extends beyond economics. It is already being realized in the physical sciences, as seen in the work of Berend Smit, a chemical engineering researcher at EPFL in Lausanne, Switzerland, whose group uses machine learning to discover new materials. After one of his graduate students demonstrated interesting results using GPT-3, Smit challenged the student to prove that the model was useless for the sophisticated machine-learning studies his group runs to predict compound properties. The student could not. With just a few minutes of fine-tuning and a handful of relevant examples, the model could perform as well as advanced machine-learning tools explicitly developed for chemistry. Given just a compound's name and structure, it could accurately answer basic questions about its properties, such as solubility and reactivity. 
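The approach in Smit's anecdote, handing a general language model a handful of labeled examples, amounts to building a few-shot prompt. The snippet below is an illustrative sketch of that idea, not the EPFL group's actual code; the compounds, properties, and labels are placeholders:

```python
# A minimal sketch of few-shot prompting for compound properties: format labeled
# (compound, property, answer) examples as text so a general language model can
# complete the final, unanswered question. Illustrative placeholders throughout;
# a real experiment would send this prompt to a model such as GPT-3.

def build_fewshot_prompt(examples, query):
    """Join labeled Q/A examples and one open question into a single prompt."""
    blocks = [f"Q: Is {c} {p}?\nA: {a}" for c, p, a in examples]
    compound, prop = query
    blocks.append(f"Q: Is {compound} {prop}?\nA:")
    return "\n\n".join(blocks)

# Hypothetical labeled examples a chemist might supply.
examples = [
    ("sodium chloride", "soluble in water", "yes"),
    ("naphthalene", "soluble in water", "no"),
]
prompt = build_fewshot_prompt(examples, ("glucose", "soluble in water"))
print(prompt)
```

Framing the task as text completion is what lets a chemist reuse a general-purpose model without building a bespoke machine-learning pipeline.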
As in other areas of work, large language models could thus expand the expertise and capabilities of non-experts, such as chemists with little knowledge of complex machine-learning tools. Kevin Maik Jablonka notes that, by making the process as simple as a literature search, it could bring machine learning to the masses of chemists. These surprising results show the significant power of the new forms of AI across creative fields, including scientific discovery, and how easily they can be utilized. However, this also raises critical questions: as the technology's potential impact on the economy and jobs becomes more apparent, who will define the vision for the design and deployment of these tools, and who will control the future of this remarkable technology? There is a concern that large language models may end up controlled by the same big tech companies that already dominate much of the digital world. Google and Meta, for example, offer their own large language models alongside OpenAI's, and the high computational costs required to run the software create a barrier to entry for competitors. As a result, there is a risk of uniformity of thought and incentives, a serious concern for a technology with such far-reaching impact. One possible solution is establishing a publicly funded international research organization for generative AI, modeled after CERN. Such an organization would have the computing power and scientific expertise to develop the technology further while remaining outside of Big Tech, bringing some diversity to the incentives of the models' creators. Although it remains to be determined which public policies would best serve the public interest, it is becoming clear that decisions about how to use this technology cannot be left to a few dominant companies and the market. Government-funded research has played a pivotal role in developing technologies that have brought widespread prosperity. 
For instance, in the late 1960s, the US Department of Defense backed ARPANET, which paved the way for the internet long before the creation of the World Wide Web at CERN. It's essential to steer technological advancements in ways that benefit the masses and not just the privileged few. Federally funded research has been critical in developing technologies that lead to general prosperity: in earlier eras, technological advances created new tasks and jobs, raising wages and decreasing income inequality. By contrast, the recent adoption of manufacturing robots in the American Midwest has resulted in job loss and regional decline. Rapid progress in AI could affect us all, which underscores the importance of steering technological advances in ways that provide broad benefits. Our society and its powerful gatekeepers must stop being mesmerized by tech billionaires' agendas; the rest of us deserve a say in the direction of progress and the future of our society. The creators of AI and the businesspeople involved in bringing it to market deserve credit for their efforts, but we must not blindly accept their vision and aspirations for the technology's future. The assumption that AI is headed down an inevitable job-destroying path is troubling; it barely acknowledges that generative AI could instead lead to a creativity and productivity boom for workers beyond the tech-savvy elites. There are various tools for achieving a more balanced technology portfolio, such as tax reforms and government policies encouraging worker-friendly AI creation. Such reforms are a tall order, however, and redirecting technological change will require a broad social push. Fortunately, our direction with ChatGPT and other large language models is within our control. As these technologies are rapidly deployed in various applications, businesses and individuals can use them either to enhance workers' abilities or to cut costs by eliminating jobs. 
Additionally, open-source projects in generative AI are gaining momentum, potentially breaking Big Tech's hold on these models. For example, more than a thousand international researchers collaborated last year on an open-source language model called Bloom, which can create text in multiple languages. Increased public funding for AI research could also change the course of future breakthroughs. While I am not entirely certain of the outcome, I am enthusiastic about the potential of these technologies: steered in the right direction, they could help produce one of the best decades ever, but that outcome is far from inevitable.

A year ago, I read an article discussing users' mounting outrage and irritation with Google Search as automated summaries, sponsored content, advertising, and SEO-centric spam increasingly replaced the informative website results that the search engine was designed to produce. Rather than providing us with the information we were seeking (such as, in my case, the perfect toaster), Google's search algorithm was inundating us with half-formed recommendations from "content farms." However, Google Search has maintained its primacy due to habit and the absence of a viable alternative, until now. On February 7th, Microsoft initiated the beta rollout of a new iteration of its Bing search engine as an A.I. chatbot powered by GPT-4, the most recent version of the large language model behind OpenAI's ChatGPT. Instead of directing us to external websites, the new version of Bing can generate answers to almost any inquiry. For good reason, Google perceives this technology as an existential threat to its core enterprise; in late 2022, it issued a "code red." Microsoft's vice president of design, Liz Danzico, who contributed to developing Bing AI's interface, recently said that "We're in a post-search experience."
Bing A.I., which I recently tried, combines Microsoft's search index with ChatGPT. Using it is like conversing with an incredibly powerful librarian whose domain encompasses the vast expanse of the Internet. Searching Google with keywords has become second nature to most internet users like me: we enter the relevant keywords, hit "enter," and peruse the list of links on the results page; if we don't find what we want, we return to the search page and adjust our keywords. With Bing A.I., however, websites act as source materials rather than destinations, and the bot collaborates with us to produce results. Bing A.I. filters through the information overload by summarizing the summaries and aggregating the aggregators. For example, I asked for Wirecutter's recommended toaster, and it provided me with the Cuisinart CPT-122 2-Slice Compact Plastic Toaster. I then asked it to gather a list of other suggestions, and it pulled them from various outlets, including Forbes, The Kitchen, and The Spruce Eats. Within seconds, I had a digest of reliable devices without leaving the Bing A.I. page. Nonetheless, the chatbot informed me it could not make my purchasing decision for me, as it was not human. A user of Bing A.I. has greater control than a Google Search user. We must learn to phrase our requests in complete sentences rather than isolated keywords when communicating with the chatbot, and we can further refine our results by asking follow-up questions. For example, if we ask for an itinerary for a trip to Portugal and then ask, "What time does the sun set there?" the chatbot will understand which "there" we are referring to. In other ways, however, Bing A.I. limits us, encouraging us to rely on the machine to determine what information is helpful rather than conducting our own searches. The interface for Bing A.I.'s "conversation mode" is intended to be a one-stop shop for all our needs, from travel guides to financial advice. 
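The reason a follow-up like "What time does the sun set there?" works is that conversational search carries the whole dialogue forward on each turn. The sketch below is a generic illustration of that history-keeping, modeled on common chat-API message formats rather than Bing's actual internals:

```python
# Each user turn is appended to a running history; a real system sends the whole
# history to the model, so the earlier mention of Portugal lets it resolve "there."
# The role/content message format is illustrative, not Bing's actual interface.

history = []

def ask(history, question):
    """Record a user turn; a deployed system would also request a model reply."""
    history.append({"role": "user", "content": question})
    return history

ask(history, "Plan a five-day itinerary for a trip to Portugal.")
ask(history, "What time does the sun set there?")

# The second question alone is ambiguous, but the accumulated history is not.
context = " ".join(turn["content"] for turn in history)
print("Portugal" in context)  # True
```

This is also why the "new topic" broom button matters: clearing the history removes the context the bot would otherwise keep using.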
The interface consists of a single chat box atop a subtle gradient of colors, and the chatbot even concludes its responses with a smiling, blushing emoji: "I'm always happy to chat with you. 😊" To the left of the chat box is a "new topic" button with a broom icon that clears the current conversation and starts over. The module was developed with the assistance of the A.I. itself. Although Bing A.I. and similar tools may provide unprecedented convenience, they could harm content creators. While Bing A.I. does provide links to relevant websites, these are discreetly displayed as footnotes to minimize our effort. Microsoft's Sarah Mody, in a recent public video, showed how Bing A.I. could reproduce an entire recipe within the chat box, effectively circumventing the website that originally hosted the content. Mody then asked Bing A.I. to list the recipe's ingredients and organize them by grocery-store aisle, a task that no recipe website could match. These features suggest that tools like Bing A.I. have the potential to further diminish the traffic and revenue of content creators. Afterward, I asked Bing A.I. to provide me with the most recent news on the unfolding banking crisis, specifically First Republic Bank and SVB. Bing A.I. generated a summary of breaking news, citing articles from NBC, CNN, and the Wall Street Journal, which is behind a paywall. Although the Wall Street Journal has indicated that any A.I. that references its content must pay for a proper license, it may struggle to enforce this requirement for publicly accessible articles, since A.I. search engines, like Google, crawl the entire Web. Then I asked Bing to present the news as a bulleted list in the style of a newsletter, and the result was a somewhat dry but convincing imitation. On another occasion, when I asked Bing for suitable wallpaper options for bathrooms with showers, it provided me with a bulleted list of manufacturers. 
Instead of searching for a listicle on Google, I "co-created" one with the bot. The current design of the Web is heavily centered on aggregation: product recommendations on The Strategist, film reviews on Rotten Tomatoes, restaurant reviews on Yelp. The rise of A.I. tools like Bing A.I. raises questions about the value of these sites in the future. Rather than relying on them for aggregation, we may bypass them entirely and rely solely on A.I. chat summaries. Paradoxically, those summaries still depend on the source material, the very information those sites produce, to generate answers. I believe the widespread adoption of A.I. tools could create a vicious cycle: sites' business models, based on advertising and subscriptions, collapse as direct traffic declines, leaving less content for A.I. tools to aggregate and summarize. As for the impact of A.I.-generated content, Google and Microsoft recently introduced suites of A.I. tools for office workers, including applications that can generate new emails, reports, and slide decks or summarize existing ones. As these tools become more ubiquitous, they will likely extend into other areas of our digital lives. This could lead to "textual hyperinflation," in which it becomes difficult to distinguish meaningful from meaningless content. A.I.-generated spam on an unprecedented scale could inundate us, and telling human content from machine-generated content may become challenging. In such a scenario, "content mills" could use A.I. to create entire articles, publicists might write press releases with it, and cooking sites might use it to generate recipes. The glut of content may demand human help to navigate, but media companies may lack the resources to devote to that need. Yet A.I. may ultimately solve the problem it creates: if tools like Bing A.I.
cause the well of original material online to dry up, all that may remain will be self-referential bots offering generic answers that machines created in the first place. As more and more content online is generated by artificial intelligence, I believe non-automated text will become a sought-after commodity, akin to an unprocessed artisanal product like natural wine. Google recently launched its own A.I. chatbot, Bard, a move in the ongoing competition between tech giants. However, Google has kept Bard separate from its flagship product, with one executive stating that it complements Google Search; this approach acknowledges the threat A.I. poses to Google's current business model. Bing, meanwhile, is enthusiastically leading the charge into the post-search era. The emergence of Bing's artificial intelligence marks the beginning of a new era for the Internet, in which search may no longer be the primary means of finding information. I wonder what significance traditional websites will hold in a world where bots can perform the aggregation for us. We are indeed living in the post-search internet, but let's not forget that non-automated, human-generated text will become a sought-after commodity.

"An American Martyr in Persia" is another fantastic book by Reza Aslan, centering on a chronological narrative and not, for the most part, on moralistic judgment. It is the biography of Howard Baskerville, a 22-year-old Presbyterian missionary from the Black Hills of South Dakota who traveled in 1907 to Tabriz, a town in northern Iran, to do "the Mohammedan work." That is how his church defined the conversion of Muslims to Christianity.
Baskerville died less than two years later in 1909, shot in a battle between pro-democracy rebels—whose "constitutionalist" cause he had embraced—and the forces of the Shah of Persia, who was determined to snuff out all political rebellion.
Before his death, Howard Baskerville had been told by the American consul in Tabriz not to get involved in a war that was not his own. The young man's answer (as told by the author, Reza Aslan) was stirring: "The only difference between me and these people is the place of my birth, and that is not a big difference." On his death, Baskerville's Persian companions granted him a respectful title, "the American Lafayette," after the French soldier who had fought in the American Revolutionary War. Baskerville was a compassionate, even beguiling, fellow, and the book brings flamboyant panache to his story. Bazaars teem with hirsute brigands, and Maxim guns go "takka takka takka." If the writing is often overwrought, it captures the mood and drama of the milieu in which the young American found himself. Armed with a letter of recommendation from no less than Woodrow Wilson, his mentor at Princeton, Baskerville persuaded the Presbyterian Church to send him abroad. (There is a tedious tangent in which the author dwells on Wilson's "unrepentant racism.") Like many of his era, Baskerville wanted to go to China but was posted instead to Persia, regarded by the church as a hardship assignment. A missionary of the time described the Persian character as "that of treachery and falsehood in the extreme." Persia was in the grip of a political revolution when Baskerville arrived in September 1907. Ten months earlier, the Shah, Muzaffar ad-Din of the Qajar dynasty, had yielded to protests and accepted the institution of a parliament and a liberal constitution, new checks on his previously unfettered powers. He was deeply in debt to Russia and Britain, both of whom were using Persia as the "staging ground" (in the author's words) of the Great Game, the term for the Anglo-Russian rivalry over Central Asia. Muzaffar died only days after making his concession and was succeeded by his son Mohammed Ali, an altogether more hardline Shah in thrall to his Russian advisers.
The author describes him as a "pompous, pudgy young man with a ridiculous mustache" who was "incensed with his father for making his God-given authority suddenly contingent upon the will of the people." Mohammed Ali, egged on by his Russian aide-de-camp, cracked down on Parliament, which led to a prolonged standoff and widespread violence in Tehran. Tabriz, to the north, closer to Azerbaijan and Armenia than to the capital, had always been a rebellious city. This multilingual, multireligious border town was as Turkic as it was Persian. Its council had asserted a striking degree of political independence with the coming of the 1906 constitution and wasn't about to surrender its liberty to a young Shah with authoritarian inclinations. Baskerville arrived as Tabriz seethed and soon drifted away from the "tranquility" of the American Memorial School (where he taught and lived) into the company of local intellectuals and "secret societies" that sought to defy the Shah. The book strains to persuade us that Baskerville's adoption of the constitutional cause sprang from a love of liberty and political freedom he had acquired at Princeton (paradoxically, from Woodrow Wilson). More important may be that the earnest young man, who made friends quickly, was heartbroken by the assassination of his best friend, a Persian fellow teacher at the school who was closely involved with the resistance. His friend's death drove him to join the Tabriz rebels, and their leader, a reformed bandit called Sattar Khan, made Baskerville his second-in-command. Sattar was no fool: although Baskerville had little military skill, he was invaluable as a symbol and a magnet for support. "American Defends Tabriz," screamed a headline in the New York Times just days before Baskerville's death. The Shah's forces encircled Tabriz, and Baskerville was killed as he tried to lead a small posse, an "Army of Salvation," to break the siege.
The martyred Baskerville, says the author, became a local hero. For many Iranians, he "embodied" a romantic idea of the U.S.: "youthful, impassioned, a little bit naïve, perhaps, but earnest in the conviction that freedom is inalienable." Yet even as he tells us Baskerville's story, the author can't resist kicking at modern America. Iranians expected America, "a nation of Baskervilles," to support them in their struggle against the Shah in the years before Ayatollah Khomeini, whose revolution Aslan describes with staggering banality as "a different form of tyranny." America, he complains, was more concerned with "its interests than its principles" in Iran. Mr. Aslan tells us Baskerville's story with passion and sweetness; it's a pity he's so sour about the land that gave his family shelter. Baskerville's role in the Persian struggle for an independent and democratic society made him a hero in his adopted country. Back home in America, however, his story is little known and his legacy uncelebrated. "An American Martyr in Persia" highlights the complex historical ties between America and Iran and the potential of a single individual to change the course of history. In this rip-roaring account of his life and death, Aslan offers a powerful parable about the universal ideals of democracy, and about how far Americans are willing to support those ideals in a foreign land. Interwoven throughout is an essential history of the nation we now know as Iran, frequently demonized and misunderstood in the West. Indeed, Baskerville's life and death represent a "road not taken" in Iran. His story, like his life, sits at the center of a whirlwind in which Americans must ask themselves: how seriously do we take our ideals of constitutional democracy, and whose freedom do we support? It is an important question to ask as we watch schoolgirls in Iran today chanting "Woman, Life, Freedom" (Zan, Zendegi, Azadi).
Known as the "Green Heart of Italy," Umbria boasts untouched landscapes across its verdant hills, mountains, and valleys. Etruscans, Romans, and feuding medieval families left an incredible artistic and cultural heritage, while priests and monks gave its towns a fascinating religious imprint. During my visit to Umbria in late summer, I met a couple from New York at their marvelous farmhouse. I had a short yet fascinating conversation with the husband, a distinguished anthropologist and university professor, while my wife and our friends were given a tour by his wife, a famous journalist. The couple were in their mid-80s.
The husband asked about my profession, and I said, "I'm in AI education." He immediately asked, "Can AI understand irony?" That question still puzzles me today. I set the answer aside and focused instead on a more fundamental question I have been thinking about lately: "consciousness," the hardest problem and even harder question in the field of AI. Venturing a bit into philosophy, the hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is "something it is like" for a subject in conscious experience, why conscious mental states "light up" and directly appear to the subject. The usual methods of science explain functional, dynamical, and structural properties: what a thing does, how it changes over time, and how it is put together. But even after we have explained the conscious mind's functional, dynamic, and structural properties, we can still meaningfully ask why it is conscious at all. This suggests that an explanation of consciousness will have to go beyond the usual methods of science. Consciousness presents a hard problem for science, or perhaps it marks the limits of what science can explain. Explaining why consciousness occurs at all can be contrasted with the so-called "easy problems" of consciousness: explaining the function, dynamics, and structure of consciousness. These elements can be described using the usual methods of science. But that leaves the question of why there is something it is like for the subject when these functions, dynamics, and structures are present. This is the hard problem. But let's, for a moment, assume a conscious being is one capable of having a thought and not disclosing it.
This means consciousness would be the prerequisite for irony, for saying one thing while meaning the opposite, which happens constantly in my Persian culture. We know we are being ironic when we realize our words don't correspond with our thoughts. That most of us have this unique capacity, and that most of us regularly convey our unspoken meanings in this way, is something that, I think, should surprise us more often than it does. It seems almost distinctly human. Animals can be funny, but not deliberately so. So what about computers or machines? Can they deceive? Can they keep secrets? Can they be ironic? The truth is that anything related to AI is already being studied by an army of obscenely well-resourced computer scientists and AI researchers. This is also the case with the question of AI and irony, which has recently attracted significant attention in academia and private companies. Of course, since irony involves saying one thing while meaning the opposite, creating an intelligent machine that can detect and generate it is no simple task. But if the AI community could build such a machine, it would have many practical applications, some more sinister than others. In the age of Google online reviews, retailers have become very keen on so-called "opinion mining" and "sentiment analysis," which use AI to map the content and mood of reviewers' comments. Knowing whether your product is being praised or becoming the butt of a joke is valuable information, and this is what Amazon is doing today. Or consider content moderation on social media platforms. If, say, Twitter or Facebook wants to limit online abuse while protecting freedom of speech, would it not be helpful to know when someone is serious and when they are just joking? Or what if someone tweets that they have just done something crazy and illegal? (Don't ever tweet crazy or illegal stuff, by the way.)
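To make "sentiment analysis" concrete, here is a minimal sketch in plain Python. The word lists and the `review_sentiment` function are my own toy illustration, not any retailer's actual system; real opinion-mining tools rely on much larger lexicons or trained models.

```python
# A minimal lexicon-based sentiment scorer, in the spirit of the
# "opinion mining" described above. These toy word lists stand in
# for a real sentiment lexicon.
POSITIVE = {"great", "love", "excellent", "sturdy", "fast", "recommend"}
NEGATIVE = {"broke", "terrible", "slow", "waste", "refund", "disappointed"}

def review_sentiment(text):
    """Return 'positive', 'negative', or 'neutral' for a review string."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(review_sentiment("Great toaster, sturdy and fast. I recommend it!"))  # positive
print(review_sentiment("It broke in a week. Total waste, want a refund."))  # negative
```

Note how easily such a scorer is fooled by irony: `review_sentiment("Just great, I love waiting in line")` comes back "positive," which is precisely the problem this essay circles around.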
Imagine if we could determine instantly whether they are serious or just "being ironic." Given irony's proximity to lying, it's not hard to imagine how the entire shadowy machinery of government and corporate surveillance that has grown up around new communications technologies would find the prospect of an irony-detector extremely interesting. And that goes a long way toward explaining the growing literature on the topic in the AI field. To understand the current research into AI and irony, it helps to know a little about the history of AI in general. That history is often broken into two periods. In the first period, which lasted through the 1990s, AI researchers sought to program computers with handcrafted formal rules for how to behave in predefined environments. If you used Microsoft Word in the 90s, you might remember the annoying office assistant Clippy, endlessly popping up to offer unwanted advice. Since the early 2000s, that model has been replaced by data-driven machine learning and sophisticated neural networks. Enormous caches of examples of a given phenomenon are translated into numerical values, on which computers perform complex mathematical operations to detect patterns no human could ever discover. Moreover, the computer doesn't merely apply a rule; it learns from experience and develops new operations independent of human intervention. The difference between the two approaches is the difference between Clippy and facial recognition technology. To create a neural network that can detect irony, AI scientists focus initially on what some would consider its simplest form: sarcasm. They begin with data stripped from social media. For example, they might collect all tweets tagged "#sarcasm," or Reddit posts labeled "/s," a shorthand Reddit users employ to indicate they are not serious.
The point is not to teach the computer to recognize the two separate meanings of any given sarcastic post; indeed, meaning is of no relevance whatsoever. Instead, the computer is instructed to search for recurring patterns, or what researchers call "syntactical fingerprints": words, punctuation, errors, emojis, phrases, context, and so forth. The dataset is then bolstered with even more streams of examples, other posts in the same threads, for instance, or from the same account. Each new sample is run through a battery of calculations until the machine arrives at a single determination: sarcastic or not sarcastic. Finally, a bot can be programmed to reply to each original poster and ask whether they were being sarcastic, and any reply is added to the machine's growing mountain of experience. So, assuming AI continues to advance at the rate that took us from Clippy to facial recognition technology in less than two decades, can ironic androids be far off? It could be argued that there are qualitative differences between sorting through the "syntactical fingerprints" of irony and understanding it; some might suggest there are not. If a computer can be taught to behave exactly like a human, the argument goes, it is immaterial whether a rich internal world of meaning lurks beneath its behavior. But I would argue that irony is a unique case: it relies on the distinction between external behaviors and internal beliefs. While AI scientists have only recently become interested in irony, philosophers and literary critics have been thinking about it for a very, very long time. Perhaps exploring that tradition would shed old light, as it were, on a new problem. Of the many names one could invoke here, two are indispensable: the German Romantic philosopher Friedrich Schlegel and the post-structuralist literary theorist Paul de Man. For Schlegel, irony does not simply entail a false external meaning and a true internal one.
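The pipeline just described, label posts by their "#sarcasm" or "/s" tags and let the machine find recurring surface patterns, can be sketched in miniature. The code below is my own toy illustration in plain Python, assuming a tiny hand-labeled corpus in place of scraped social-media data; real systems use neural networks trained on millions of posts.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split into word tokens; keep runs of "!" and "?",
    # which are part of sarcasm's "syntactical fingerprint"
    return re.findall(r"[a-z']+|[!?]+", text.lower())

class NaiveBayesSarcasm:
    """Tiny bag-of-words Naive Bayes classifier: 'sarcastic' vs 'literal'."""

    def __init__(self):
        self.word_counts = {"sarcastic": Counter(), "literal": Counter()}
        self.doc_counts = {"sarcastic": 0, "literal": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def predict(self, text):
        vocab = set(self.word_counts["sarcastic"]) | set(self.word_counts["literal"])
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, -math.inf
        for label in ("sarcastic", "literal"):
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for tok in tokenize(text):
                count = self.word_counts[label][tok]
                score += math.log((count + 1) / (total_words + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# A toy corpus standing in for posts scraped with "#sarcasm" / "/s" labels
training = [
    ("oh great another monday i just love meetings", "sarcastic"),
    ("wow what a surprise the train is late again", "sarcastic"),
    ("yeah sure that plan will definitely work", "sarcastic"),
    ("the museum was lovely and the staff were kind", "literal"),
    ("this recipe is easy and the bread came out well", "literal"),
    ("the train arrived on time and the ride was smooth", "literal"),
]
clf = NaiveBayesSarcasm()
for text, label in training:
    clf.train(text, label)

print(clf.predict("oh great the train is late again"))  # "sarcastic"
print(clf.predict("the bread came out well"))           # "literal"
```

Note that the classifier never models the two meanings of a sarcastic remark; like the systems described above, it only counts surface patterns, the "syntactical fingerprints," and picks the likelier label.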
Rather, in irony two opposite meanings are presented as equally valid. And the resulting indeterminacy has devastating implications for logic, most notably the law of non-contradiction, which holds that a statement cannot be simultaneously true and false. De Man follows Schlegel on this score and, in a sense, universalizes his insight. He notes that every effort to define a concept of irony is bound to be infected by the phenomena it purports to explain. Indeed, de Man believes all language is infected by irony and involves "permanent parabasis." Because humans have the power to conceal their thoughts from one another, it will always be possible, permanently possible, that they do not mean what they are saying. Irony, in other words, is not one kind of language among many; it structures, or better, haunts, every use of language and every interaction. And in this sense, it exceeds the order of proof and computation. The question is whether the same is true of human beings in general.
Author: Roozbeh, born in Tehran, Iran (March 1984)