New large language models are set to revolutionize many professions. Whether this change results in widespread prosperity, however, hinges on our actions.
Over the past few months, an artificial intelligence gold rush has begun, fueled by the lucrative business opportunities promised by generative AI models such as ChatGPT, however hallucinatory some of the beliefs surrounding them may be. App developers, startups, and even some of the world's biggest companies are in a frenzy, trying to understand the capabilities of the sensational text-generating bot that OpenAI unveiled last November. One can almost hear the cacophony of voices from executive suites worldwide as they clamor to answer the questions: "What is our ChatGPT strategy? How can we capitalize on this?"

While businesses and executives see a profitable opportunity, the potential impact of generative AI on the workforce and the economy as a whole remains far from clear. Despite their flaws, including their inclination to fabricate information, recently released generative AI models like ChatGPT can automate tasks previously believed to be exclusive to human creativity and reasoning: writing, graphic design, summarizing and analyzing data, even composing music. That leaves economists and many others uncertain about how jobs and overall productivity will be affected.

Despite the remarkable advances in AI and other digital tools over the past decade, their record in enhancing prosperity and stimulating widespread economic growth has been disheartening. While a select few investors and entrepreneurs have amassed great wealth, most people have not reaped the benefits, and some have even been replaced by automation. Since around 2005, productivity growth in the United States and most advanced economies, except for the UK, has been lackluster, dashing hopes of a new wave of wealth and prosperity. The limited expansion of the economic pie has meant stagnant wages for many workers.
The few instances of productivity growth during this time have been restricted to specific sectors and to certain US cities, including San Jose, San Francisco, Seattle, and Boston. Given the alarming income and wealth inequality in the United States and many other nations, will ChatGPT worsen this disparity, or could it alleviate it? Could it provide a much-needed stimulus to productivity?

Large language models like ChatGPT, with its human-like writing capabilities, and OpenAI's DALL-E 2, which generates images on demand, rely on vast amounts of data for their training. Competing models such as Anthropic's Claude and Google's Bard follow the same principle. These foundational models, including OpenAI's GPT-3.5, which ChatGPT uses, and Google's language model LaMDA, which powers Bard, have evolved rapidly in recent years. They grow more powerful as they are trained on ever-increasing amounts of data, and the number of parameters (the variables in the models that are adjusted during training) has increased dramatically. OpenAI's latest release, GPT-4, was unveiled earlier this month. While the exact parameter count has not been disclosed, it is believed to be significantly larger than its predecessor GPT-3, which had around 175 billion parameters and was itself roughly 100 times larger than GPT-2.

The release of ChatGPT in late 2022 transformed the landscape for many users, providing an incredibly easy-to-use tool that can quickly create human-like text, from recipes to workout plans to computer code, surprising many users. For non-experts, especially entrepreneurs and businesspeople, the chat model is a practical, user-friendly demonstration of the AI revolution: not an abstract, technical advance confined to academia and a few high-tech companies, but evidence of real-world impact.
This has led to an inflow of investment from venture capitalists and other investors, with billions poured into companies centered on generative AI. The list of apps and services driven by large language models grows with each passing day. Microsoft has invested $10 billion in OpenAI and is using ChatGPT technology to revive its Bing search engine and add new capabilities to its Office products. Salesforce has announced plans to introduce a ChatGPT app in its popular Slack product (which I use at ReadyAI daily) while establishing a $290 million fund to invest in generative AI startups. From Coca-Cola to GM and Ford, companies across industries are making their own ChatGPT plays, while Google has announced plans to build its new generative AI tools into widely used products like Gmail and Docs.

Despite the rush to find applications for ChatGPT and other generative AI models, no stand-out use has yet emerged. That gives us a unique opportunity to think through how to maximize the benefits of this new technology, and to explore its potential impact on workflows and job prospects. We must ask who will benefit from this technology and who will be left behind. The optimistic view is that generative AI will prove a potent tool for many of us, improving our capabilities and expertise while boosting the economy. The pessimistic view is that companies will use it to destroy jobs once thought automation-proof, jobs requiring creative skills and logical reasoning, leaving a few high-tech companies and tech elites even richer while doing little for overall economic development and prosperity.

Assisting Individuals with Lower-Level Skills

The impact of ChatGPT on the workplace is not merely a theoretical concern.
A recent analysis by OpenAI's Tyna Eloundou, Sam Manning, and Pamela Mishkin found that large language models like GPT could affect 80% of the US workforce in some way. They further estimated that these AI models, including GPT-4 and other forthcoming software tools, would significantly affect 20% of jobs, with at least 55% of tasks in those jobs "exposed." In contrast to previous waves of automation, higher-income jobs would be most affected: writers, web and digital designers, quantitative financial analysts, and even blockchain engineers are among the most vulnerable positions.

There is no question that generative AI will be used; law firms are one early example. It will open up a range of tasks that can be automated. ChatGPT and other generative AI models have changed the game. While AI had automated some office work before, only rote, step-by-step tasks could be coded for a machine. Now AI can perform tasks once viewed as creative, such as writing and producing graphics. It is apparent to anyone paying attention that generative AI opens the door to computerizing many functions we once thought could not easily be automated.

The concern is not that ChatGPT will lead to large-scale unemployment (there are still plenty of jobs in the US) but that companies will replace relatively well-paying jobs with this new form of automation. This could send workers off to lower-paying service employment while only a few individuals exploit the new technology and reap the benefits. If this scenario plays out, individuals and businesses with solid technology skills may adopt generative AI tools, become significantly more efficient, and ultimately dominate their industries, while less technically adept and less skilled workers are left behind, exacerbating existing economic inequalities.
However, there is a more optimistic scenario, in which generative AI enables more people to acquire the skills needed to compete with those with higher education and expertise. In an experiment conducted by two MIT economics graduate students, Shakked Noy and Whitney Zhang, hundreds of college-educated professionals in fields like marketing and HR performed writing tasks; half were asked to use ChatGPT in their daily tasks, while the others were not. The AI tool raised overall productivity and assisted the least skilled and accomplished workers the most, reducing the performance gap between employees. In other words, poor writers improved significantly, while good writers became faster.

These initial findings suggest that ChatGPT and other generative AIs could "upskill" people struggling to find work. Many experienced workers are currently "lying fallow" after being ousted from office and manufacturing positions in recent years. If generative AI can be used as a practical tool to expand their expertise and give them the specialized skills needed in fields like healthcare or teaching, it could revitalize the workforce.

To determine which scenario prevails, we need to make a concerted effort to consider how we want to use the technology. We shouldn't assume the technology is simply out there and that we must adapt to it. Since it is still in development, we have the opportunity to shape how it is used; the key is to design it with intention. In essence, we are at a crossroads: either individuals with fewer skills will be able to take on knowledge work, or those already highly skilled will dramatically expand their advantages. The outcome will largely depend on how employers implement tools like ChatGPT. But the more optimistic scenario is entirely within our grasp.

Beyond Human-Centered Design

Nevertheless, there are reasons for a pessimistic outlook.
AI creators have focused too much on replicating human intelligence rather than on leveraging the technology to empower people to perform new tasks and expand their abilities. The pursuit of human-like capabilities has produced technologies that merely displace human workers with machines, lowering wages and exacerbating wealth and income inequality. It is the single most significant explanation for the increasing concentration of wealth. ChatGPT, with its human-like language outputs, embodies this very concern. At the same time, it has accelerated the conversation about how these technologies can be leveraged to enhance people's capabilities rather than simply displace them.

Despite my concerns about AI developers prioritizing human-like capabilities over extending human abilities, I remain optimistic about artificial intelligence's potential. Businesses can benefit significantly from generative AI by expanding their offerings and increasing productivity. It is a powerful tool for creativity and innovation, not simply a means of doing things more cheaply. As long as developers and companies avoid the mindset that humans are unnecessary, generative AI will be critical.

Within a decade, generative AI could contribute trillions of dollars to the US economy, affecting nearly all types of knowledge workers. However, the timing of this productivity boost remains uncertain; it may require patience. In 1987, the Nobel laureate economist Robert Solow of MIT famously remarked: "You can see the computer age everywhere but in the productivity statistics." Only in the mid to late 1990s did the effects, particularly from semiconductor improvements, show up in productivity data, as businesses learned to harness increasingly affordable computational power and related software advances. The impact of AI on productivity will likewise depend on our ability to use the technology to transform businesses, much as we did in the earlier computer age.
If companies use AI only to incrementally improve existing tasks, efficiency may rise, but the net benefits will be limited. The true potential of AI lies in creating new processes and new value for customers. We still need to figure out how to use generative AI to transform industries like writing and graphic design; once we do, a significant productivity boost will follow, though the timeline for that breakthrough remains to be determined.

The Power Struggle in the Age of Artificial Intelligence

I believe that since ChatGPT and other AI bots automate cognitive work rather than physical tasks requiring investments in infrastructure and equipment, the boost to economic productivity may be larger than in past technological revolutions, and it could arrive much more quickly, by the end of the year or, indeed, by 2024.

Furthermore, the potential for large language models to enhance productivity and drive technological progress extends beyond economics. That potential is already being realized in the physical sciences, as seen in the work of Berend Smit, a chemical engineering researcher at EPFL in Lausanne, Switzerland, whose group uses machine learning to discover new materials. After one of his graduate students demonstrated interesting results using GPT-3, Smit challenged the student to prove that the model was useless for the sophisticated machine-learning studies his group runs to predict compound properties. The student could not. With just a few minutes of fine-tuning on relevant examples, the model performed as well as advanced machine-learning tools explicitly developed for chemistry, accurately answering basic questions about compound properties, such as solubility and reactivity, from a compound's name and structure.
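As a rough illustration of the approach described above, a property-prediction query to a language model can be framed as a handful of worked question-answer examples followed by an unanswered question for the model to complete. The sketch below only builds that prompt text; the compound names, the labels, and the prompt format are all hypothetical placeholders, not Smit's group's actual data or pipeline.

```python
# Minimal sketch: framing compound-property prediction as a few-shot
# prompt for a large language model. The compounds and solubility
# labels below are illustrative placeholders, not real measurements.

def build_few_shot_prompt(examples, query_compound, prop="water solubility"):
    """Format (compound, label) pairs as in-context examples, then
    append the unanswered query for the model to complete."""
    blocks = [f"Q: What is the {prop} of {c}?\nA: {label}" for c, label in examples]
    blocks.append(f"Q: What is the {prop} of {query_compound}?\nA:")
    return "\n\n".join(blocks)

examples = [
    ("compound A", "high"),  # placeholder label
    ("compound B", "low"),   # placeholder label
]
prompt = build_few_shot_prompt(examples, "compound C")
print(prompt)
```

The resulting string would be sent to a model (or used as fine-tuning data in the same Q/A shape); the point is that no chemistry-specific architecture is required, only consistently formatted examples.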
As in other areas of work, large language models have the potential to expand the expertise and capabilities of non-experts, such as chemists with little knowledge of complex machine-learning tools. Kevin Maik Jablonka, the graduate student on Smit's team, notes that making the process as simple as a literature search could bring machine learning to the masses of chemists.

These surprising results show the power of these new forms of AI across creative fields, including scientific discovery, and how easily they can be put to use. But they also raise critical questions: who will define the vision for the design and deployment of these tools, and who will control the future of this remarkable technology as its potential impact on the economy and jobs becomes more apparent?

There is a concern that large language models will be controlled by the same big tech companies that already dominate much of the digital world. Google and Meta offer their own large language models alongside OpenAI's, and the high computational costs of running the software create a barrier to entry for competitors. The result is a risk of uniformity of thought and incentives, a serious concern for a technology with such far-reaching impact.

One possible solution is establishing a publicly funded international research organization for generative AI, modeled after CERN. Such an organization would have the computing power and scientific expertise to develop the technology further, but would sit outside Big Tech, bringing some diversity to the incentives of the models' creators. Although it remains to be determined which public policies would best serve the public interest, it is becoming clear that decisions about using this technology cannot be left to a few dominant companies and the market. Government-funded research has played a pivotal role in developing technologies that have brought widespread prosperity.
For instance, in the late 1960s, the US Department of Defense backed ARPANET, which paved the way for the internet long before the creation of the World Wide Web at CERN. It is essential to steer technological advances in ways that benefit the many, not just the privileged few. Historically, technological advances have created new tasks and jobs, raising wages and decreasing income inequality; by contrast, the recent adoption of manufacturing robots in the American Midwest has brought job loss and regional decline. Rapid progress in AI could affect us all, which makes it all the more important to steer its advances toward broad benefits.

Our society and its powerful gatekeepers must stop being mesmerized by tech billionaires' agendas; the rest of us should have a say in the direction of progress and the future of our society. The creators of AI and the businesspeople bringing it to market deserve credit for their efforts, but we must not blindly accept their vision and aspirations for the technology's future. The assumption that AI is on an inevitable job-destroying path is troubling; it barely acknowledges that generative AI could lead to a creativity and productivity boom for workers beyond the tech-savvy elites.

There are various tools for achieving a more balanced technology portfolio, such as tax reforms and government policies that encourage worker-friendly AI creation. Admittedly, such reforms are a tall order, and redirecting technological change will require a social push. Fortunately, our direction with ChatGPT and other large language models is within our control. As these technologies are rapidly deployed in various applications, businesses and individuals can use them either to enhance workers' abilities or to cut costs by eliminating jobs.
Additionally, open-source projects in generative AI are gaining momentum and could break Big Tech's hold on these models. Last year, for example, more than a thousand international researchers collaborated on an open-source language model called Bloom that can generate text in multiple languages. Increased public funding for AI research could also change the course of future breakthroughs. While I am not entirely optimistic about the outcome, I am enthusiastic about the potential of these technologies: steered in the right direction, they could make this one of the best decades ever. But that outcome is not inevitable.
A year ago, I read an article discussing users' mounting outrage and irritation with Google Search, as automated summaries, sponsored content, advertising, and SEO-centric spam increasingly replaced the informative website results the search engine was designed to produce. Rather than providing the information we were seeking (such as, in my case, the perfect toaster), Google's search algorithm was inundating us with the half-formed recommendations of "content farms." Google Search has maintained its primacy out of habit and the absence of a viable alternative--until now.

On February 7th, Microsoft began the beta rollout of a version of its Bing search engine built around an A.I. chatbot powered by GPT-4, the most recent version of the OpenAI large language model technology that also underlies ChatGPT. Instead of directing us to external websites, the new Bing can generate answers to almost any inquiry. Google perceives this technology as an existential threat to its core enterprise, and for good reason: in late 2022, Google reportedly issued a "code red." Microsoft's vice president of design, Liz Danzico, who contributed to developing Bing AI's interface, recently said that "We're in a post-search experience."
I recently tried the Bing A.I., which combines Microsoft's search index with ChatGPT. Using it is like conversing with an incredibly well-read librarian whose domain encompasses the vast expanse of the Internet. For most internet users like me, searching Google with keywords has become second nature: we enter the relevant keywords, hit "enter," and peruse the list of links on the results page. If we don't find what we want, we return to the search page and adjust our keywords. With Bing A.I., however, websites act as source materials rather than destinations, and the bot collaborates with us to produce results. Bing A.I. filters the information overload by summarizing the summaries and aggregating the aggregators.

For example, I asked for Wirecutter's recommended toaster, and it gave me the Cuisinart CPT-122 2-Slice Compact Plastic Toaster. I then asked it to gather a list of other suggestions, and it pulled them from various outlets, including Forbes, The Kitchen, and The Spruce Eats. Within seconds, I had a digest of reliable devices without leaving the Bing A.I. page. The chatbot did inform me, however, that it could not make my purchasing decision for me, as it was not human.

A Bing A.I. user has greater control than a Google Search user. We must learn to phrase our requests in complete sentences rather than isolated keywords, and we can refine our results with follow-up questions. For example, if we ask for an itinerary for a trip to Portugal and then ask, "What time does the sun set there?", the chatbot understands which "there" we mean. In other ways, though, Bing A.I. limits us, encouraging us to let the machine determine what information is helpful rather than conducting our own searches. The interface for Bing A.I.'s "conversation mode" is intended to be a one-stop shop for all our needs, from travel guides to financial advice.
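The follow-up behavior described above, where the bot resolves "there" to Portugal, typically works because the client resends the entire conversation with every turn, so the model always sees the earlier mention. Here is a minimal sketch of that bookkeeping; the message format loosely mirrors common chat APIs, no real service is called, and the assistant reply is a placeholder.

```python
# Minimal sketch of conversational context in a chat search client.
# Each new question is sent together with the full prior transcript,
# which is how an ambiguous "there" can be resolved to "Portugal".

class ChatSession:
    def __init__(self):
        self.messages = []  # running transcript, oldest first

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        # A real client would send self.messages to a chat model here
        # and append the model's answer; we append a placeholder instead.
        self.messages.append({"role": "assistant", "content": "<model reply>"})
        return self.messages

session = ChatSession()
session.ask("Plan an itinerary for a trip to Portugal.")
transcript = session.ask("What time does the sun set there?")
print(len(transcript))  # 4 messages: two user turns plus two placeholder replies
```

Because the second question travels with its antecedent, the model needs no memory of its own; the "new topic" broom button described below simply empties this transcript list.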
The interface consists of a single chat box atop a subtle gradient of colors, and the chatbot even concludes its responses with a smiling, blushing emoji: "I'm always happy to chat with you. 😊" To the left of the chat box is a "new topic" button with a broom icon that clears the current conversation and starts over. The module was developed with the assistance of the A.I. itself.

Although Bing A.I. and similar tools may provide unprecedented convenience, they could harm content creators. Bing A.I. does provide links to relevant websites, but these are discreetly displayed as footnotes, minimizing the effort we need to make. In a recent public video, Microsoft's Sarah Mody showed how Bing A.I. could reproduce an entire recipe within the chatbox, effectively circumventing the website that originally hosted the content. Mody then asked Bing A.I. to list the recipe's ingredients and organize them by grocery-store aisle, a task no recipe website could match. Such features suggest that tools like Bing A.I. could further diminish the traffic and revenue of content creators.

Afterward, I asked Bing A.I. for the most recent news on the unfolding banking crisis, specifically First Republic Bank and SVB. It generated a summary of breaking news, citing articles from NBC, CNN, and the Wall Street Journal, which sits behind a paywall. Although the Wall Street Journal has indicated that any A.I. referencing its content must pay for a proper license, it may struggle to enforce that requirement for publicly accessible articles, since A.I. search engines, like Google, crawl the entire Web. Then I asked Bing to present the news as a bulleted list in the style of a newsletter, and the result was a somewhat dry but convincing imitation. On another occasion, when I asked Bing for suitable wallpaper options for bathrooms with showers, it provided a bulleted list of manufacturers.
Instead of searching for a listicle on Google, I "co-created" one with the bot. The current design of the Web is heavily centered on aggregation: product recommendations on The Strategist, film reviews on Rotten Tomatoes, restaurant reviews on Yelp. The rise of A.I. tools like Bing A.I. raises questions about the value of these sites in the future. Rather than relying on them for aggregation, we may bypass them entirely and rely solely on A.I. chat summaries, even though, paradoxically, those summaries depend on the very source material such sites produce. I believe the widespread adoption of A.I. tools could create a vicious cycle: sites' business models, based on advertising and subscriptions, collapse as direct traffic falls, leaving less content for A.I. tools to aggregate and summarize.

As for the flood of AI-generated content itself, Google and Microsoft recently introduced suites of A.I. tools for office workers, including applications that can generate new emails, reports, and slide decks, or summarize existing ones. As these tools become ubiquitous, they will likely extend into other areas of our digital lives. The result could be "textual hyperinflation," in which it becomes difficult to distinguish meaningful from meaningless content. A.I.-generated spam on an unprecedented scale could inundate us, and it may become hard to tell human content from machine-generated content. "Content mills" could use A.I. to create entire articles; publicists might write press releases with it; cooking sites may use it to generate recipes. Navigating the glut will require human assistance, but media companies may lack the resources to provide it. However, A.I. may ultimately be undone by the very problem it creates: if tools like Bing A.I.
cause the well of original material online to dry up, all that may remain will be self-referential bots offering generic answers that machines created in the first place. As more and more online content is generated by artificial intelligence, I believe non-automated text will become a sought-after commodity, akin to an unprocessed product like natural wine.

Google recently launched its own A.I. chatbot, Bard, a move in the ongoing competition between the tech giants. But Google has kept Bard separate from its flagship product, with one executive stating that it complements Google Search, an approach that acknowledges the threat A.I. poses to Google's current business model. Bing, meanwhile, is enthusiastically leading the charge into the post-search era. The emergence of Bing's artificial intelligence marks the beginning of a new era for the Internet, one in which search may no longer be the primary means of finding information. What significance will traditional websites hold in a world where bots can perform the aggregation for us? We are indeed living in the post-search internet, but let's not forget that non-automated, human-generated text will become a sought-after commodity.

"An American Martyr in Persia" is another fantastic book by Reza Aslan, centered on a chronological narrative and not, for the most part, on moralistic judgment. It is the biography of Howard Baskerville, a 22-year-old Presbyterian missionary from the Black Hills of South Dakota who traveled in 1907 to Tabriz, a town in northern Iran, to do "the Mohammedan work." That is how his church described the conversion of Muslims to Christianity.
Baskerville died less than two years later in 1909, shot in a battle between pro-democracy rebels—whose "constitutionalist" cause he had embraced—and the forces of the Shah of Persia, who was determined to snuff out all political rebellion.
Before his death, Howard Baskerville had been told by the American consul in Tabriz not to get involved in a war that was not his own. The young man's answer, as recounted by the author, Reza Aslan, was stirring: "The only difference between me and these people is the place of my birth, and that is not a big difference." On his death, Baskerville's Persian companions gave him a respectful title, "the American Lafayette," after the French soldier who had fought in the American Revolutionary War.

Baskerville was a compassionate, even beguiling, fellow, and the book brings flamboyant panache to his story. Bazaars teem with hirsute brigands, and Maxim guns go "takka takka takka." If the writing is often overwrought, it captures the mood and drama of the milieu in which the young American found himself. Armed with a letter of recommendation from no less than Woodrow Wilson, his mentor at Princeton, Baskerville persuaded the Presbyterian Church to send him abroad. (There is a tedious tangent in which the author dwells on Wilson's "unrepentant racism.") Like many of his era, Baskerville wanted to go to China but was assigned instead to Persia, regarded by the church as a hardship posting. A missionary of the time described the Persian character as "that of treachery and falsehood in the extreme."

Persia was in the grip of a political revolution when Baskerville arrived in September 1907. Ten months earlier, the Shah, Muzaffar ad-Din of the Qajar dynasty, had yielded to protests and accepted the institution of a parliament and a liberal constitution, new checks on his previously unfettered powers. He was deeply in debt to Russia and Britain, both of whom were using Persia as the "staging ground" (in the author's words) of the Great Game, the Anglo-Russian rivalry over Central Asia. Muzaffar died only days after making his concession and was succeeded by his son Mohammed Ali, an altogether more hardline Shah in thrall to his Russian advisers.
The author describes him as a "pompous, pudgy young man with a ridiculous mustache" who was "incensed with his father for making his God-given authority suddenly contingent upon the will of the people." Mohammed Ali, egged on by his Russian aide-de-camp, cracked down on Parliament, leading to a prolonged standoff and widespread violence in Tehran. Tabriz, to the north, closer to Azerbaijan and Armenia than to the capital, had always been a rebellious city. This multilingual, multireligious border town was as Turkic as it was Persian. Its council had asserted a striking degree of political independence with the coming of the 1906 constitution and wasn't about to surrender its liberty to a young Shah with authoritarian inclinations.

Baskerville arrived as Tabriz seethed, and he soon drifted away from the "tranquility" of the American Memorial School (where he taught and lived) into the company of local intellectuals and "secret societies" that sought to defy the Shah. The book strains to persuade us that Baskerville's adoption of the constitutional cause sprang from a love of liberty and political freedom he had acquired at Princeton (paradoxically, from Woodrow Wilson). More important, perhaps, is that the genial young man, who made friends quickly, was heartbroken by the assassination of his best friend, a Persian fellow teacher at the school who was closely involved with the resistance. His friend's death drove him to join the Tabriz rebels too, and their leader, a reformed bandit called Sattar Khan, made Baskerville his second-in-command. Sattar was no fool: although Baskerville had little military skill, he was invaluable as a symbol and a magnet for support. "American Defends Tabriz," screamed a headline in the New York Times just days before Baskerville's death. The Shah's forces encircled Tabriz, and Baskerville was killed as he tried to lead a small posse, an "Army of Salvation," to break the siege.
The martyred Baskerville, says the author, became a local hero. For many Iranians, he "embodied" a romantic idea of the U.S.: "youthful, impassioned, a little bit naïve, perhaps, but earnest in the conviction that freedom is inalienable." Yet even as he tells us Baskerville's story, the author can't resist kicking at modern America. Iranians expected America, "a nation of Baskervilles," to support them in their struggle against the Shah in the years before Ayatollah Khomeini, whose revolution Aslan describes, with staggering banality, as "a different form of tyranny." America, he complains, was more concerned with "its interests than its principles" in Iran. Mr. Aslan tells us Baskerville's story with passion and sweetness; it's a pity he's so sour about the land that gave his family shelter.

Baskerville's role in the Persian struggle to become an independent and democratic society made him a hero in his adopted country. Back home in America, however, his story is not well known, and his legacy is not celebrated. An American Martyr in Persia highlights the complex historical ties between America and Iran and the potential of a single individual to change the course of history. In this rip-roaring account of his life and death, Aslan offers a powerful parable about the universal ideals of democracy, and about the degree to which Americans are willing to support those ideals in a foreign land. Interwoven throughout is an essential history of the nation we now know as Iran, so frequently demonized and misunderstood in the West. Indeed, Baskerville's life and death represent a "road not taken" in Iran. His story sits at the center of a whirlwind in which Americans must ask themselves: how seriously do we take our ideals of constitutional democracy, and whose freedom do we support? It is an important question to ask as we watch schoolgirls in Iran today chanting "Woman, Life, Freedom" (Zan, Zendegi, Azadi).
Known as the "Green Heart of Italy," Umbria boasts untouched landscapes across its verdant hills, mountains, and valleys. Etruscans, Romans, and feuding medieval families left an incredible artistic and cultural heritage, while priests and monks gave its towns a fascinating religious imprint. During my visit to Umbria in late summer, I met a couple from New York at their marvelous farmhouse. I had a short yet fascinating conversation with the husband, a distinguished anthropologist and university professor, while my wife and our friends were given a tour by his wife, a famous journalist. The couple were in their mid-80s.
The husband asked about my profession, and I said, "I'm in AI education." He immediately asked: "Can AI understand irony?" That question still puzzles me today. I set the answer aside and focused instead on the question itself, and on a more fundamental question I have been thinking about lately: "consciousness," the hardest problem and even harder question in the field of AI. Dipping into philosophy, the hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is truly the problem of explaining why there is "something it is like" for a subject in conscious experience, why conscious mental states "light up" and directly appear to the subject. The usual methods of science involve explanation of functional, dynamical, and structural properties—an explanation of what a thing does, how it changes over time, and how it is put together. But even after we have explained the conscious mind's functional, dynamic, and structural properties, we can still meaningfully ask why it is conscious at all. This suggests that an explanation of consciousness will have to go beyond the usual methods of science. Consciousness presents a hard problem for science, or perhaps it marks the limits of what science can explain. Explaining why consciousness occurs at all can be contrasted with the so-called "easy problems" of consciousness: the problems of explaining the function, dynamics, and structure of consciousness. These elements can be described using the usual methods of science. But that leaves the question of why there is something it is like for the subject when these functions, dynamics, and structures are present. This is the hard problem. But let's, for a moment, assume a conscious being is one capable of having a thought and not disclosing it.
This would mean consciousness is the prerequisite for irony: saying one thing while meaning the opposite, a habit dear to my Persian culture. We know we are being ironic when we realize our words don't correspond with our thoughts. That most of us have this unique capacity - and that most of us regularly convey our unspoken meanings in this way - is something that, I think, should surprise us more often than it does. It seems almost distinctly human. Animals can be funny, but not deliberately so. So how about computers or machines? Can they deceive? Can they keep secrets? Can they be ironic? The truth is that anything related to AI is already being studied by an army of obscenely well-resourced computer scientists and AI researchers. This is also the case with the question of AI and irony, which has recently attracted significant research in academia and private companies. Of course, since irony involves saying one thing while meaning the opposite, creating an intelligent machine that can detect and generate it is not a simple task. But if the AI community could build such a machine, it would have many practical applications, some more sinister than others. In the age of Google online reviews, among others, retailers have become very keen on so-called "opinion mining" and "sentiment analysis," which use AI to map the content and the mood of reviewers' comments. Knowing whether a product is being praised or becoming the butt of a joke is valuable information, and this is what Amazon is doing currently. Or consider content moderation on social media platforms. If, let's say, Twitter or Facebook wants to limit online abuse while protecting freedom of speech, would it not be helpful to know when someone is serious and when they are just joking? Or what if someone tweets that they have just done something crazy and illegal? (Don't ever tweet crazy or illegal stuff, by the way.)
Imagine if we could determine instantly whether they are serious or whether they are just "being ironic." Given irony's proximity to lying, it's not hard to imagine how the entire shadowy machinery of government and corporate surveillance that has grown up around new communications technologies would find the prospect of an irony-detector extremely interesting. And that goes a long way toward explaining the growing literature on the topic in the AI field. To understand the state of current research into AI and irony, it helps to know a little about the history of AI in general. That history is often broken into two periods. In the 90s, AI researchers sought to program computers with sets of handcrafted formal rules for how to behave in predefined environments. If you used Microsoft Word in the 90s, you might remember the annoying office assistant Clippy, endlessly popping up to offer unwanted advice. Since the early 2000s, that model has been replaced by data-driven machine learning and sophisticated neural networks. Enormous caches of examples of a given phenomenon are translated into numerical values, on which computers can perform complex mathematical operations to detect patterns no human could ever discover. Moreover, the computer doesn't merely apply a rule. Instead, it learns from experience and develops new operations independent of human intervention. The difference between the two approaches is the difference between Clippy and facial recognition technology. To create a neural network that can detect irony, AI scientists focus initially on what some would consider its simplest form: sarcasm. They begin with data stripped from social media. For example, they might collect all tweets labeled with the hashtag #sarcasm, or Reddit posts labeled /s, a shorthand that Reddit users employ to indicate they are not serious.
The point is not to teach the computer to recognize the two separate meanings of any given sarcastic post. Indeed, meaning is of no relevance whatsoever. Instead, the computer is instructed to search for recurring patterns, or what researchers call "syntactical fingerprints" - words, punctuation, errors, emojis, phrases, context, and so forth. On top of that, the dataset is bolstered by adding still more streams of examples - other posts in the same threads, for instance, or from the same account. Each new sample is then run through a battery of calculations until we arrive at a single determination: sarcastic or not sarcastic. Finally, a bot can be programmed to reply to each original poster and ask whether they were being sarcastic; any reply is added to the machine's growing mountain of experience. So, assuming AI continues to advance at the rate that took us from Clippy to facial recognition technology in less than two decades, can ironic androids be far off? It could be argued that there are qualitative differences between sorting through the "syntactical fingerprints" of irony and understanding it. Some might suggest not: if a computer can be taught to behave exactly like a human, then it's immaterial whether a rich internal world of meaning lurks beneath its behavior. But I would argue that irony is a unique case; it relies on the distinction between external behaviors and internal beliefs. While AI scientists have only recently become interested in irony, philosophers and literary critics have been thinking about it for a very, very long time. And perhaps exploring that tradition would shed old light, as it were, on a new problem. Of the many names one could invoke in this context, two are indispensable: the German Romantic philosopher Friedrich Schlegel and the post-structuralist literary theorist Paul de Man. For Schlegel, irony does not simply entail a false external meaning and a true, internal one.
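The pipeline described above - harvest posts that self-label as sarcastic (#sarcasm, /s), reduce each one to numeric "syntactical fingerprints," and train a model to emit a single sarcastic/not-sarcastic verdict - can be sketched in miniature. This is a toy illustration, not any lab's actual system: the features, the training data, and the simple perceptron-style learner are all invented stand-ins for the deep networks and huge datasets real research uses.

```python
# Toy sketch of the sarcasm-detection pipeline: featurize, train, classify.

def fingerprint(post):
    """Map a post to crude "syntactical fingerprint" features."""
    return [
        post.count("!"),                          # exclamation marks
        post.count('"'),                          # scare quotes
        sum(tok.isupper() for tok in post.split()),  # SHOUTED words
        1.0,                                      # bias term
    ]

def train(examples, epochs=20, lr=0.1):
    """Perceptron-style updates on (post, is_sarcastic) pairs."""
    w = [0.0] * 4
    for _ in range(epochs):
        for post, label in examples:
            x = fingerprint(post)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = label - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def is_sarcastic(w, post):
    """Single binary verdict: sarcastic or not."""
    return sum(wi * xi for wi, xi in zip(w, fingerprint(post))) > 0

# Posts scraped with self-labels like #sarcasm or /s (invented here).
data = [
    ("Oh GREAT, another Monday!!! #sarcasm", 1),
    ('Sure, "best" product ever!! /s', 1),
    ("The meeting starts at 3pm.", 0),
    ("I enjoyed the book, thanks for the recommendation.", 0),
]

weights = train(data)
```

In a real system the fingerprint would span thousands of learned features and the classifier would be a neural network, but the shape of the pipeline - label harvesting, featurization, iterative weight updates, a binary verdict - is the same.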
Rather, in irony, two opposite meanings are presented as equally valid. And the resulting indeterminacy has devastating implications for logic, most notably the law of non-contradiction, which holds that a statement cannot be simultaneously true and false. De Man follows Schlegel on this score and, in a sense, universalizes his insight. He notes that every effort to define a concept of irony is bound to be infected by the phenomena it purports to explain. Indeed, de Man believes all language is infected by irony and involves "permanent parabasis." Because humans have the power to conceal their thoughts from one another, it will always be possible - permanently possible - that they do not mean what they are saying. Irony, in other words, is not one kind of language among many; it structures or, better, haunts every use of language and every interaction. And in this sense, it exceeds the order of proof and computation. The question is whether the same is true of human beings in general.

Technology capitalism is the dominant economic establishment of our time, and it is on a crash course with democracy; this is more visible than ever in the Western world. Technology capitalism's giants—Google, Facebook, Amazon, Microsoft, and Apple—now possess, operate, and mediate nearly every aspect of human interaction with global information and communication systems, unconstrained by public law. All roads to economic, social, and even political participation now lead through a handful of unaccountable companies, a state of affairs that has intensified during two years of the COVID-19 pandemic.
The result is a path of social decay.
Rights and laws once codified to defend citizens from industrial capitalism—such as antitrust law and workers' rights—do not shield us from these harms. If the ideal of the people's self-governance is to endure this century, then a democratic counterrevolution is the only solution. U.S. and European lawmakers have finally begun to think seriously about regulating privacy and content. Still, they have yet to consider the far more fundamental question of how to structure and govern information and communication for a democratic digital future. Three principles could offer a starting point. First, the democratic rule of law must govern. There is no so-called cyberspace immune to rights and laws, which must apply to every domain of society, whether populated by people or machines. Publishers, for example, are held accountable for the information they publish. Technology capitalists have no such accountability, even though their profit-maximizing algorithms enable and exploit disinformation. Second, unprecedented harms demand unprecedented solutions. Existing antitrust laws can break up the tech giants, but that won't address the underlying economics. The target must be the secret extraction of human data once considered private. Democracies must outlaw this extraction, end the corporate concentration of personal information, eliminate targeting algorithms, and abolish corporate control of information flows. Third, new conditions require new rights. Our era demands the codification of epistemic rights—the right to know and decide who knows what about our lives. These fundamental rights are not yet codified in law because they have never before come under systemic threat. They must be codified if they are to exist at all. We can be a technology capitalist society or a democracy, but we cannot be both. Democracy is a fragile political condition dedicated to the prospect of self-governance, anchored by the principle of justice and maintained by collective effort.
Each generation's mission is always the same: to protect democracy and keep it moving forward in a relay race against anti-democratic forces that spans centuries. The liberal democracies have the power and legitimacy to lead against technology capitalism, and to do so on behalf of all peoples struggling against a dystopian future. The most influential architect of the U.S. political system, James Madison, was deeply fascinated by the Enlightenment thinkers who saw politics as a science. They imagined a system of checks and balances producing good government almost mechanically, the way a machine with wheels and pulleys produces motion or transfers energy. They did not expect people to be wise or virtuous. "If men were angels," Madison famously wrote in the Federalist Papers, "no government would be necessary." Madison believed he had built a system that did not require virtue to function. "Ambition must be made to counteract ambition," he urged, and from this conflict of interests would come ordered liberty and democracy. This American model became the template for much of the world. In the United States and worldwide, we are now witnessing experiments in politics without angels—and they aren't working so well. Democratic institutions have weakened in many places, broken in others, and feel under stress even where they still function. Those countries that have not faced the full furies of populism and nationalism—Germany and Japan are the most striking examples—have escaped these dangers because of their culture and history rather than some better democratic design. Everywhere, Ralph Waldo Emerson's truth seems to hold: Institutions are merely the lengthened shadows of men. If such men fail and misbehave, venally or irresponsibly, the democratic system is endangered. We enter the 21st century asking one of the oldest political questions, much older than the Enlightenment ideas that democracy was built on.
It is a question the ancient Greeks and Romans debated more than two millennia ago: How do we produce virtue in human beings?

We all know that Zoom (or Google Meet, which I often use) causes fatigue, that social media spreads misinformation, and that Google Maps wipes out our sense of direction. We also know, of course, that Zoom lets us cooperate across continents, that social media (Twitter, Instagram, or TikTok) connects us to our families and friends, and that Google Maps keeps us from getting lost. Most of today's technological criticism asks whether a technology is good or bad, or judges its various applications. But there's an older tradition of criticism that asks a more fundamental and nuanced question: How do these technologies change the people who use them, for good and for bad? And what do the people who use them — all of us, in other words — actually want? Do we even know?
L.M. Sacasas explores these questions in his great newsletter, "The Convivial Society." His work marries the theorists of the 20th century — Hannah Arendt, C.S. Lewis, Ivan Illich, Marshall McLuhan, Neil Postman, and more — to the technologies of the present day. This merging of past thinkers and contemporary concerns is revelatory in an era when we tend to take the shape of our world for granted and forget how it would look to those who stood outside it, or to those who were there at the inception of these tools and mediums. Sacasas recently published a list of 41 questions we should ask of the technologies and tools that shape our lives. What I admire about these questions is how they invite us to think not just about technologies but about ourselves: how we act, what we want, and what, in the end, we actually value. I highly recommend listening to L.M. Sacasas's conversation with Ezra Klein. Here is the list of those 41 questions. I'd love to hear your answers to some of them:
But is this really the only morally relevant question one could ask? For instance, pursuing the example of the hammer, might I not also ask how having the hammer in hand encourages me to perceive the world around me? Or what feelings having a hammer in hand arouses?

As the world's more prosperous and fully vaccinated countries, like the United States, begin to emerge from the pandemic, there's a lot of talk about "the office." I have been thinking about "the office" as I spent most of the week at our Pittsburgh office preparing for one of our artificial intelligence camps at Winchester Thurston by ReadyAI.
According to a recent McKinsey report, many business executives say they expect employees to split time between working from home and the office. Savvy entrepreneurs are even making special speakers so remote workers can feel like they are in the office even when they aren't. They're also sending them care packages and subsidizing part of their childcare services. I agree that the office debate is an essential one; in fact, I am writing this piece from my home. But for billions of people around the world, this debate is simply not relevant to their work and lives, mainly because they have jobs that cannot be done from a distance - giving haircuts, tending to seriously ill or injured patients, or serving food. Other jobs, in occupations like sanitation, farming, deliveries, or transportation, are essential but not confined to any specific space. The International Labor Organization estimates that just 18 percent of the global workforce, or approximately 557 million people, were consistently teleworking during the pandemic. That's triple what it was before COVID, but it still leaves over 2.7 billion people worldwide for whom the "back-to-the-office debate" sounds like something from another planet. Let's not ignore that those 2.7 billion people and their families have been hit hardest by COVID in terms of hours and wages lost, emotional trauma, and destructive unemployment. Today the division between the "Zoom class" and the rest of the world tracks some of the more obvious fault lines of inequality that cut across our communities and societies. Even in prosperous economies like the US, only a small portion of workers can telework consistently - here, about a fifth. The numbers are far lower in middle-income countries, where the size of the "laptop class" or "Zoom class" plummets.
For example, in India, where more than 470 million people work in retail or agriculture, only five percent can Zoom to the job. The numbers in Africa are similar. Let's think about this a bit further. Partly, it is because in-person service jobs are more prevalent in less developed countries - you are five times as likely to be a street vendor in a middle-income nation as in a wealthy one, and 16 times as likely to work in agriculture. It is also about constraints on internet connectivity and internet services: most countries don't have the infrastructure to support massive teleworking populations. On top of that, many of these countries have suffered the additional blow of losing remittances from their citizens working abroad in jobs that often aren't "remote-workable." And high debt obligations and a lack of cash mean that low- and middle-income countries cannot roll out the kinds of unemployment benefits or infrastructure-rebuilding programs we've seen in the US or Europe. According to the IMF, richer countries have allocated up to 30 percent of their GDP to cushion the pandemic blow, but low- and middle-income countries have mustered less than six percent. The bad news is that the pandemic has put decades of poverty reduction into reverse. In 2020 alone, more than 120 million people fell below the poverty line globally, and the number of people living in extreme poverty rose for the first time in 24 years (since 1997). Even in rich countries, non-remote jobs are overwhelmingly in lower-income, economically vulnerable professions. According to a recent Pew study, more than three-quarters of low-income workers in America can't work from home at all. Non-remote jobs also have higher proportions of women, ethnic minorities, and younger people - groups that went into the pandemic at an economic disadvantage and that suffered disproportionate financial losses during the crisis itself.
The pandemic is far from over, and many things need to be done. Globally, more prosperous countries need to look at the question of debt relief for cash-strapped developing nations. But even within more prosperous countries, better compensation and labor protections for "essential workers" are genuinely essential. It is nice to be out on the balcony applauding essential workers and tweeting about them, but unless we start compensating them better, what happens when another pandemic comes around?

On Sunday, January 31st, 2021, I received a text message from my friend, Reza Manesh. Reza informed me that he was "almost done with his book" and would like me to "review with a critical eye and provide feedback to make it better."
I did review the draft. But it was the book that made me better - better at finding joy in life - through this fantastic work that is part reflection and part memoir. Today Reza (as he likes to be called) is a leading voice in medical education. As he tells it in his recently published book - Finding JOY in Medicine - the journey was not easy. But by learning and developing three essential attributes - humanism, humility, and a desire for growth - he has discovered joy in medicine and, I would argue, in life too. (Watch Reza's Journey.) Finding JOY in Medicine is not about any particular destination. Having read the book several times, I am convinced it is not just about medicine. It is about authenticity, humanity, humility, and vulnerability. Reza reminds me of Walt Whitman's line, "I am large, I contain multitudes." This is a story of multitudes - of our shared humanity, humility, and wisdom - told through the story of Reza. The wisdom I noticed most is the kind forged in Reza's own moments of suffering and learning, the kind that comes from compassionate regard for the fragility of others. Finding JOY in Medicine should be required reading for everyone, particularly my generation. It is a reminder of what we should not chase: a job for money, compliments, imitations, job titles, influence, or shortcuts to learning. Reza reminds us to pursue instead: quality time with loved ones, a healthy lifestyle, work that brings joy, ways to help others, opportunities to learn and grow, and gratitude. The main character in the book is not Reza, or his patients, or other doctors. It's AGHA JOON (Persian for grandfather - Reza's grandfather). He was not a doctor. He was an ordinary man, and there was just something extraordinary about him. The book starts with AGHA JOON's poem - a reminder that we read poetry because the world is more than facts, laws, and realities.
Life is indeed way too short, and AGHA JOON and Reza remind us that perhaps poems can make it last a bit longer - so we can all find JOY...

We can all gain from reading great writers. Some of the most successful businesspeople and entrepreneurs do a lot of reading and thinking.
Visionary business minds like Warren Buffett, Bill Gates, and Peter Thiel teach us something most of us ignore: they all read a lot. Compelling ideas from history, philosophy, literature, and the other humanities can be as valuable a source of innovation and success as the more measurable fields of economics, science, and technology. The humanities brim with lessons and models for anyone ready and willing to examine them. Suppose we want to learn more about effective decision-making. We can study how President Lincoln kept the Union together during the Civil War, why Caesar crossed the Rubicon, or how JFK kept the world out of nuclear Armageddon during the Cuban Missile Crisis. Suppose we are looking for a fantastic manual on leadership. We should consider Martin Luther King Jr.'s writing on civil disobedience in his "Letter from Birmingham Jail" or Machiavelli's discussion of the nature of power in The Prince. This summer, I'll be reading and re-reading the following books, and let me tell you why. Plato, Apology (c. 399 BC). I frequently ask myself about my core principles, and how I can develop a guiding set of ethics that no promise of money or power can corrupt. These are some of the questions Plato's Apology answers, as it tells the story of how Socrates faced his accusers at the trial that would end in his death sentence. Socrates teaches us how to craft a thoughtfully considered code of ethics - the ultimate source of what he called the "good life." Machiavelli, The Prince (published in 1532). A fantastic text on political power that can help us all in the modern world. The Prince was written at a time of remarkable upheaval in Florence, as Machiavelli tried to fix a broken political system by whatever means required. It gets to the core of questions like: Is it better to be feared or loved?
Or is it more advantageous actually to be powerful, or merely to seem powerful? Shakespeare, Othello (c. 1603). Shakespeare's legendary Othello is a reminder that to understand anyone, we must first try hearing his or her particular story. Melville, Billy Budd, Sailor (published in 1924). I won't spoil the story, but in life, at some point, we all face a terrible decision like Captain Vere's. Should we follow our hearts? Should we not? These are the four classics I'll be reading this summer. It might seem unusual, even counterintuitive, that each of these books was written decades or centuries ago. But I believe the best books remain indispensable for the long haul. That is why, as forward-thinking global citizens, we should engage with great writing. And that is why Plato believed that "storytellers rule the world." We read great writing because it teaches us humility, curiosity, collaboration, and perspective. That is something we can't learn from Instagram, Twitter, or Netflix.

"What makes me happy?" I ask myself this question frequently. We all do. But what makes a happy country?
For the fourth year in a row, Finland topped the list of the world's happiest countries. I've interacted with Finns through my work with ReadyAI; Finland offers one of the best free introductory AI courses for adults. I completed the course last year and learned a lot - and yes, it was all free. I urge you all to look at the World Happiness Report. The report uses data from interviews with more than 350,000 people in over 95 countries, conducted by the polling company Gallup. The rankings are not based on factors like income or life expectancy but on how people rate their own happiness on a 10-point scale. The questions in the report are fascinating and include: "Did you smile or laugh a lot yesterday?", "Did you learn or do something interesting yesterday?", and "Were you treated with respect all day yesterday?" There are also questions about trust: someone who thought the police or strangers were "very likely" to return his or her lost wallet had a much higher life-evaluation score than someone who believed the opposite. Let me go back to Finland. It is an egalitarian society; people tend not to be fixated on "keeping up with the Joneses," so people fare pretty well in social comparison. And this starts with education: everybody has access to a good education, and income and wealth differences are relatively small. Finns also tend to have realistic expectations for their lives. But when something in life does exceed expectations, people will often react with humility, preferring a self-deprecating joke over bragging... In fact, Finns are pros at keeping their happiness a secret. Once again, I urge you to read the report. All of the countries in the top 10 - including the four other Nordic countries - have different political philosophies than the US, which sits at No. 14 on the list, behind Ireland and Canada. Finland is far from perfect.
As in many countries, far-right nationalism is on the rise, and unemployment stands at 8.1 percent, higher than the EU average of 7.5 percent. But there is a lot about the country that is genuinely great. The public school system, which rarely tests kids, is among the best in the world. College is free. There is an excellent universal healthcare system, and child care is affordable. And Finland has been one of the European countries least affected by the pandemic, which is attributed to high trust in government and little resistance to following restrictions. Yes, trust... People trust each other. Each morning in Helsinki, it is common to see kids as young as seven walking by themselves with their backpacks to school, feeling completely secure. That epitomizes Finnish happiness. There is something they've done right, and we can all learn from it.
Author: Roozbeh, born in Tehran, Iran (March 1984)