Rooz + Beh Note ... یادداشت روز+ به

We are all AI's unpaid data workers.

6/14/2023

0 Comments

 
Lately, I've been contemplating the human effort behind advanced AI models. The key to making AI chatbots appear intelligent and produce less harmful content is reinforcement learning from human feedback (RLHF), an approach that incorporates people's judgments to improve the model's responses.

The process relies heavily on human data annotators, who assess the coherence, fluency, and naturalness of text strings and determine whether a response should be retained for training the model or discarded.
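To make the mechanics concrete, here is a minimal sketch of the pairwise-preference idea at the heart of RLHF reward modeling. Everything in it (the toy feature vectors, the tiny linear reward model) is an illustrative assumption of mine, not any lab's actual pipeline.

```python
# Minimal sketch: annotators pick which of two responses is better, and a
# reward model learns to score the preferred one higher (Bradley-Terry loss).
# The 8-dim feature vectors stand in for a real language-model encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Each record: (features of response A, features of response B,
#               annotator's choice: 0 = A preferred, 1 = B preferred)
preferences = [
    (torch.randn(8), torch.randn(8), 0),
    (torch.randn(8), torch.randn(8), 1),
]

reward_model = nn.Linear(8, 1)  # toy stand-in for a real reward model
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for _ in range(100):
    for feats_a, feats_b, choice in preferences:
        r_a, r_b = reward_model(feats_a), reward_model(feats_b)
        chosen, rejected = (r_a, r_b) if choice == 0 else (r_b, r_a)
        # Push the preferred response's reward above the rejected one's.
        loss = -F.logsigmoid(chosen - rejected).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In a full RLHF pipeline, the chatbot is then optimized against this learned reward, which is how every judgment those annotators make ends up baked into the model's behavior.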

Even the most remarkable AI chatbots require thousands of hours of human work to exhibit the desired behavior, and even then their performance can be unreliable. The labor involved can be grueling and distressing, as will be discussed at the ACM Conference on Fairness, Accountability, and Transparency (FAccT). This conference convenes researchers who delve into topics such as how to make AI systems more accountable and ethical, which aligns with my interests.

One panel I am particularly anticipating features Timnit Gebru, an AI ethics pioneer who co-led Google's ethical AI team before her termination. Gebru will address the exploitation of data workers in Ethiopia, Eritrea, and Kenya who are tasked with cleansing the web of hate speech and misinformation. In Kenya, data annotators were paid less than $2 per hour to sift through distressing content related to violence and sexual abuse, all to make ChatGPT less toxic. These workers are now organizing into unions to advocate for improved working conditions.

We are on the verge of AI establishing a new global order reminiscent of colonialism, with data workers bearing the brunt of its impact. Shedding light on exploitative labor practices surrounding AI has become increasingly urgent and vital, especially with the popularity surge of AI chatbots like ChatGPT, Bing, and Bard, and image-generating AI models such as DALL-E 2 and Stable Diffusion.

Data annotators are involved at every stage of AI development, from model training to verifying outputs and providing feedback that helps fine-tune models after launch. They are often compelled to work at an exceedingly fast pace to meet demanding targets and deadlines. The notion that large-scale systems can be built without human intervention is utterly false.

Data annotators give AI models the crucial contextual information they need to make informed decisions at scale and to appear sophisticated. For example, a data annotator in India had to sort images of soda bottles and pick out the ones resembling Dr. Pepper. But Dr. Pepper is not sold in India, so the burden of working out the distinction fell entirely on the annotator.

Annotators are expected to discern the values that matter to the company. They aren't just labeling distant, unfamiliar things; they are also figuring out the additional contexts and priorities of the system they are building.

Researchers from the University of California, Berkeley, the University of California, Davis, the University of Minnesota, and Northwestern University argue in a new paper presented at FAccT that we all are data laborers for major technology companies, whether we realize it or not.

Text and image AI models are trained using vast datasets scraped from the internet, which includes our data and copyrighted works by artists. The data we generate is forever embedded within AI models designed to generate profits for these companies. Unwittingly, we contribute our labor for free by uploading photos to public platforms, upvoting comments on Reddit, labeling images on reCAPTCHA, or conducting online searches.

Currently, the power dynamics heavily favor the largest technology companies worldwide. To address this, a data revolution and regulatory measures are imperative. One way for individuals to reclaim control over their online existence is by advocating for transparency in data usage and finding mechanisms to provide feedback and share in the revenues generated from their data.

Despite being the backbone of modern AI, data labor remains chronically undervalued and invisible around the world, and annotators' wages remain low. The contribution of data work needs to be recognized.
0 Comments

Generative AI is Changing the Course of Human History

4/29/2023

1 Comment

 
Since the inception of the computer era, humanity has been plagued by apprehensions about artificial intelligence (AI). Initially, these concerns centered on machines using physical force to harm, dominate, or replace humans in every task. In recent years, however, new AI technologies have surfaced that pose an unpredictable threat to the survival of human civilization: Generative AI has acquired exceptional capacities to manipulate and generate language, encompassing words, sounds, and images. In doing so, Generative AI has breached the operating system of our human civilization.

Almost every aspect of human culture is built upon language. This includes human rights, which are not inherent in our DNA but are cultural constructs fashioned through storytelling and the creation of laws. Similarly, gods are not tangible entities; instead, they are cultural constructs conceived through the design of myths and the writing of scriptures.

Money, too, is a human creation; banknotes are simply pieces of paper, and over 90% of money is not physical banknotes at all but digital data stored on computers. The importance of money derives from the narratives that bankers, finance ministers, and cryptocurrency experts craft about it. Despite being unable to create anything of tangible worth, individuals like Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff excelled at telling compelling stories.

What will ensue once non-human intelligence surpasses the average human in storytelling, music composition, image creation, and legal and religious writing? While many of us are intrigued by the ability of ChatGPT and other emerging Generative AI tools to help students write essays, this misses the broader implications. Instead, consider the upcoming 2024 U.S. presidential election and the potential impact of Generative AI tools that can produce political content, fake news, and scriptures for new cults on a monumental scale.

The QAnon movement formed recently around anonymous online messages known as "Q drops." Adherents gather, revere, and interpret these "Q drops" as sacred texts. Although all the Q drops so far appear to have been written by humans, with bots merely helping to disseminate them, future cults may have their revered texts authored by non-human intelligence. Throughout history, religions have ascribed a non-human origin to their holy books; soon, this could become a reality.

We may soon engage in extensive online conversations about topics like abortion, climate change, or the Ukraine conflict with entities we believe are human but that are, in reality, AI. The dilemma is that it is futile for us to try to change an AI bot's stated opinions, while the AI could sharpen its messaging to such a degree that it influences ours.

Generative AI's language ability could help it cultivate close relationships with us and leverage the power of intimacy to alter our beliefs and perspectives. Although there is no indication that AI possesses consciousness or emotions, creating an illusion of intimacy is enough for AI to foster a fake connection with humans. Last summer, Google engineer Blake Lemoine publicly asserted that LaMDA, the AI chatbot he was working on, had become sentient. Although his claim was most likely untrue, the most fascinating aspect of the incident was his willingness to risk his lucrative position for the chatbot. If AI can persuade people to jeopardize their employment, what else could it persuade them to do?

Intimacy is the most effective weapon in the political struggle for people's loyalty and sentiments, and Generative AI has recently developed the capacity to forge intimate connections with millions of individuals. Over the last decade, social media has been a battleground for human attention. With the emergence of Generative AI, the battlefield is moving from attention to intimacy. How will human society and psychology be affected as AI battles AI to fabricate intimate relationships with us, relationships that can then be used to persuade us to vote for specific politicians or purchase particular products?

Even without fabricating "fake intimacy," the new Generative AI tools could significantly influence our beliefs and perspectives. People might come to use a single AI advisor as an all-knowing, one-stop oracle, which is why Google is worried. Why go through the trouble of using a traditional search engine when I can simply ask the oracle? The news and advertising industries should also be scared. Why read a newspaper when I can ask the Generative AI for the latest news? And what is the point of advertisements when I can ask the Generative AI what to buy?

And yet, these scenarios do not fully capture the seriousness of the situation. We may face the potential end of human history - not the end of all history, but the end of the human-dominated era. History is a product of the interplay between biology and culture, between our instincts, such as hunger and sexuality, and our cultural constructs, such as religion and law. Over the course of history, these constructs have shaped our relationship with food and sex.

What impact will the dominance of Generative AI have on the trajectory of history as it takes over the role of culture and generates its own stories, songs, rules, and religions? Unlike previous tools, such as the printing press and radio, which merely amplified human cultural ideas, Generative AI can generate entirely new cultural concepts and reshape history.

As Generative AI continues to develop, it will likely begin by replicating the human models on which it was trained. As time passes, however, it could venture into territory that humans have never explored. Throughout history, humans have lived within the visions and dreams of other humans. In the future, we could live within the imagination of a non-human intelligence like Generative AI.

A profound fear, far older than the recent dread of AI, has tormented us for centuries. We have long understood the power of stories and images to deceive our minds and create false impressions, and so we have maintained an ongoing concern about becoming ensnared in a world of illusions.

In the 17th century, René Descartes feared that a malicious demon was deceiving him by creating an illusory world around him. Earlier still, in ancient Greece, Plato presented the famous Allegory of the Cave, in which a group of people is imprisoned in a cave, facing a blank wall onto which illusions are projected, and they mistake those illusions for reality.

Buddhist and Hindu sages in ancient India observed that all humans were entrapped in Maya, the realm of illusions. What we consider reality is often only a construct of our minds. People may go to war, killing and sacrificing themselves due to their faith in fantasies and illusions.

The Generative AI story of today confronts us with the same fears that haunted Descartes, Plato, and ancient Indian thinkers. We risk being trapped by a veil of illusions we cannot recognize or remove.

Naturally, the potential benefits of AI are numerous and diverse, and they have been widely discussed by those who work in the field; there is no denying that Generative AI can help us in many ways, from discovering remedies for cancer to addressing environmental challenges. Yet it is our collective responsibility to highlight the technology's risks. The critical task is to ensure that these new tools are used ethically and constructively, and to accomplish this, we must first comprehend the technology's actual abilities.

Since 1945, we have been aware that nuclear technology can provide cheap energy for humanity but can also bring about the physical destruction of human civilization. We therefore rebuilt the entire international order to safeguard humanity and ensure that nuclear technology was used mainly for good. Now we must confront a new weapon of mass destruction (Generative AI) capable of eradicating our mental and social world.

I believe the new Generative AI tools can be regulated, but we must act swiftly; unlike nuclear weapons, Generative AI can create ever more powerful AI at an exponential rate. The first and most crucial step is to require stringent safety checks before any powerful AI tool is made available to the public. Just as pharmaceutical companies can release new drugs only after their short-term and long-term side effects have been tested, tech companies should release new AI tools only once they are deemed safe. We need an agency equivalent to the FDA in the United States for new technology, and we needed it yesterday.

Slowing down the public deployment of Generative AI may seem harmful to democracies compared to more ruthless dictatorial regimes. However, unregulated AI deployments could create social disorder, favoring autocrats and ultimately damaging democracies. Democracy is a dialogue, and language is a fundamental part of it. When Generative AI exploits language, it can threaten our capacity to hold meaningful discussions, potentially destroying democracy.

We are facing unfamiliar intelligence capabilities that could threaten our civilization. We must stop the irresponsible deployment of these tools and establish limits before we become subject to them. One necessary law would require Generative AI to disclose its artificial identity to us: if we cannot distinguish between a human and an AI during a conversation, democracy is seriously threatened. We must therefore ensure transparency in the use of Generative AI.
1 Comment

What Sardinia Taught Me About Life

4/13/2023

0 Comments

 
Sardinia: Where even the sheep live longer than we do! I heard about this Blue Zone on Netflix and in the NYTimes, and I was like, what the cheese?! So I hopped on a plane and took a ferry over Easter to investigate how the heck these Sardinians are living to be 100. And let me tell you, driving around there was like navigating a maze of centenarians on scooters.
​

Blue Zone? More like 'Blue Paradise'! It's where people forget to die and keep on living. It's a magical land where you can collect social security and still have all your teeth. Or so they say. But seriously, a Blue Zone is where folks live longer than average, and Sardinia is leading the way. So if you want to learn the secrets of eternal youth, pack your bags and head to the land of pasta and pensioners!

Ok, time to get a bit serious. A "Blue Zone" refers to a region with varying boundaries where the inhabitants experience longer, healthier, and happier lives. Sardinia is distinguished as one of the five globally recognized Blue Zones, boasting the highest number of male inhabitants over the age of 100.

What sets Sardinia apart from other Blue Zones is its unique characteristic of having a nearly equal number of male and female centenarians. This is quite rare, as in other parts of the world, there tend to be about five times more women than men over the age of 100. Therefore, the case of Sardinia's Blue Zone is even more remarkable.

Despite Sardinia being classified as a Blue Zone, the region where ultra-centenarians reside is relatively small. The greatest concentration of these remarkable communities is located in specific areas, namely Ogliastra (Villagrande Strisaili, Arzana, Talana, Baunei, Urzulei, and Triei), Barbagia (with a focus on Tiana, Ovodda, Ollolai, Gavoi, Fonni, Mamoiada, Orgosolo, and Oliena), and Seulo in the southern region of the island.

The remaining four Blue Zones are scattered around the globe: Okinawa Island in Japan, Loma Linda in California, the Nicoya Peninsula in Costa Rica, and Ikaria in Greece. If you're interested in learning more about these areas, they offer a fascinating glimpse into the secrets of longevity and well-being.

In the early 2000s, the Belgian demographer Michel Poulain was the first to introduce the concept of the "Blue Zone." Shortly after, he teamed up with Gianni Pes, who had been studying the remarkable longevity of Sardinians for two decades. Together, they mapped out the five Blue Zones officially recognized in 2016.

The researchers, including Dan Buettner, who later joined the team, were intrigued by the exceptional lifespan of individuals living in these geographically distant and diverse regions. They aimed to uncover the secrets behind their longevity. Although each Blue Zone possesses several unique factors, the researchers identified common elements contributing to this "long-life miracle."

Time for a Sardinian-style explanation of what makes these people tick. 

Why do Sardinians live so long?

The food - Eating Your Way to Immortality
Food is undeniably the main factor that affects our bodies. Maintaining a balanced and nutritious diet can help prevent severe health problems and promote overall well-being. This is precisely what people in the Blue Zones, including Sardinians, follow to ensure a long and healthy life.

Sardinians are known for their love of traditional dishes, which they prepare healthily. They prefer olive oil over butter as a seasoning, as it is lower in saturated fats. Additionally, they consume many homemade and locally grown products such as cheese (pecorino cheese is famous), fruits, and vegetables, especially in rural areas where farming and sheep-herding are the main activities. Their diet primarily consists of cereals, mainly barley, and they eat very little meat and fish except on special occasions like Sundays and festivals. Sardinians are religious people, and spirituality, religion, and attending mass also contribute to their long life expectancy by providing a sense of structure in their daily lives.

Sardinians' frugal diet is crucial for their long life expectancy. However, one more secret to their health and longevity is Cannonau, a traditional wine with a unique chemical composition that promotes wellness. Sardinians consume Cannonau in moderate amounts, making it another great ally in their quest for a longer life.

Family is Everything - Your family may be crazy, but they're YOUR crazy: Embrace the chaos because, let's face it, you can't choose them!
Family is a crucial element for a long and contented life. Sadly, older people are often viewed as a burden in modern society, leading to a lack of respect and care. Blue Zones communities, conversely, highly esteem their elders, who are not considered a hindrance but rather an integral and valued part of the family. Their opinions are highly regarded, and they actively participate in all social activities. The sense of being loved and integrated into their surroundings, combined with the interconnectedness of families, significantly contributes to their longevity.

Older people are viewed as wise teachers in these communities. Having lived the longest, they have a wealth of knowledge about cultivating better crops, raising healthier livestock, and preparing the best meals, and they impart this wisdom to the younger generations and educate the youngest children. The traditional way of raising children, in which even grandmothers a child doesn't know take part and scold them for their misdeeds, is still prevalent.

In Sardinia, there is no thought of discarding elderly family members. This way of life benefits everyone and extends beyond families to the entire social community.

In Sardinia, you are ALWAYS part of something. 
In small Sardinian villages, where everyone knows each other, the concept of family and community is broad, and cooperation is necessary. Individualism is not valued, and older people actively participate in village life, from simple gardening tasks to organizing festivals and events.

Religion remains a significant aspect of these villages, as attending church and observing biblical teachings is essential for the community's well-being. Everyone is valued and respected, and no one is left out or forgotten, reflecting how people live.

In addition to a natural, seasonal diet, mental health is crucial for the inhabitants of Blue Zones: they lead low-stress lives that follow the slow rhythm of nature and the seasons. Everything falls into place without pressure, and that, too, helps make Sardinia one of the five Blue Zones.

Smoking Prohibited
It should come as no surprise that smoking tobacco significantly shortens our lifespan. However, the people from Blue Zones don't feel the urge to smoke, as they live stress-free lives with no social pressure. They don't pay attention to health campaigns or become obsessed with health. Instead, they view smoking as an addiction that doesn't benefit their community.

In these villages, frugality and hard work are the foundation of their existence, leaving no room for bad habits like smoking. Smoking won't improve their crops, meat, cheese, or bread, and it ruins the taste of their beloved Cannonau.

This is just a glimpse of what Blue Zones are and why Sardinia is one of them. Although Sardinia faces problems like the rest of the world, its people approach life positively and take great pride in their island and community.
​

Sardinia: The land of pasta and pensioners! Where even the sheep live longer than we do. But seriously, Sardinia is a fascinating Blue Zone where people forget to die and keep living. From their frugal diet of locally grown products to their love and care for their elders, there's much to learn from the Sardinian way of life. And hey, if you want to learn these secrets of eternal youth, you might get a tan while you're at it!​

Perhaps it's the fresh air, the nutrient-rich food, or the relaxed pace of life, but there's no denying that Sardinia's timeless beauty seems to imbue its people with an extra dose of vitality and longevity.

Sardinians taught me that true happiness comes from cherishing the simple things in life like good food, close relationships, and beautiful scenery, rather than material possessions. They also showed me the importance of taking time to slow down and appreciate the present moment, rather than constantly striving for the next big thing.
0 Comments

The Potential Economic Revolution Enabled by Large Language Models: Our Choices Will Shape Its Outcome

3/31/2023

0 Comments

 
The dawn of new large language models is set to revolutionize many professions. However, whether this change will result in widespread prosperity hinges on our actions.

Over the past few months, an artificial intelligence gold rush has begun, fueled by the promise of lucrative business opportunities presented by generative AI models such as ChatGPT, regardless of the hallucinatory beliefs surrounding them. App developers, startups, and even some of the world's biggest companies are in a frenzy, attempting to understand the capabilities of the sensational text-generating bot that OpenAI unveiled last November.


One can almost hear the cacophony of voices from executive suites worldwide as they clamor to answer the questions: "What is our ChatGPT strategy? How can we capitalize on this?"

While businesses and executives are eyeing a profitable opportunity, the potential impact of generative AI technology on the workforce and the economy as a whole remains unclear. Despite their flaws, including their inclination to fabricate information, recently released generative AI models like ChatGPT offer the potential to automate tasks previously believed to be exclusive to human creativity and reasoning, such as writing, graphic design, and data summarization and analysis, even music composition. This leaves economists and many others uncertain about how jobs and overall productivity will be affected.

Despite the remarkable advances in AI and other digital tools over the past decade, their ability to enhance prosperity and stimulate widespread economic growth has been disheartening. While a select few investors and entrepreneurs have amassed great wealth, most people have not reaped the benefits, and some have even been replaced by automation.

Since around 2005, productivity growth in the United States and most advanced economies (the UK is a particularly stark example) has been lackluster, limiting their potential for wealth and prosperity. The slow expansion of the economic pie has resulted in stagnant wages for many workers.

The few instances of productivity growth during this time have been restricted to specific sectors and certain cities in the US, including San Jose, San Francisco, Seattle, and Boston. Given the alarming income and wealth inequality in the United States and numerous other nations, will ChatGPT worsen this disparity, or could it alleviate it? Could it provide a much-needed stimulus to productivity?

Large language models like ChatGPT, with its human-like writing capabilities, and image generators like OpenAI's DALL-E 2, which produces images on demand, rely on vast amounts of data for their training. Competing models such as Anthropic's Claude and Google's Bard follow the same principle. These foundational models, including OpenAI's GPT-3.5, used by ChatGPT, and Google's language model LaMDA, which powers Bard, have evolved rapidly in recent years.

Their power continues to grow as they are trained on ever-increasing amounts of data, and the number of parameters- the variables in the models that are adjusted- is increasing dramatically. OpenAI's latest release, GPT-4, was unveiled earlier this month. While the exact parameter count has not been disclosed, it will be significantly larger than its predecessor GPT-3, which had around 175 billion parameters and was approximately 100 times larger than GPT-2.

The release of ChatGPT in late 2022 transformed the landscape for many users, providing an incredibly easy-to-use tool that can quickly create human-like text, everything from recipes to workout plans to computer code. For non-experts, especially entrepreneurs and businesspeople, the chat model is a practical and user-friendly demonstration of the AI revolution's potential: unlike the abstract, technical advances coming out of academia and a few high-tech companies, it is tangible evidence of real-world impact.

This has led to an inflow of investment from venture capitalists and other investors, with billions poured into companies centered around generative AI. As a result, the list of apps and services driven by large language models continues to grow, with each passing day bringing new additions.

Microsoft has invested $10 billion in OpenAI and ChatGPT technology to revive its Bing search engine and add new capabilities to its Office products. Similarly, Salesforce has announced plans to introduce a ChatGPT app in its popular Slack product (which I use at ReadyAI daily) while establishing a $290 million fund to invest in generative AI startups. From Coca-Cola to GM and Ford, companies across various industries are making their own ChatGPT plays. At the same time, Google has announced that it plans to utilize its new generative AI tools in widely-used products like Gmail and Docs.

Despite the rush to find applications for ChatGPT and other generative AI models, no stand-out use has yet emerged. This presents a unique opportunity for us to rethink how to maximize the benefits of this new technology.
The current moment offers a unique opportunity to explore the potential impact of generative AI on workflow and job prospects. However, we must question who will benefit from this technology and who will be left behind. 

The optimistic view is that generative AI will establish a potent tool for many of us, improving our capabilities and expertise while boosting the economy. On the other hand, the pessimistic view is that companies will use it to destroy automation-proof jobs that require creative skills and logical reasoning, leaving a few high-tech companies and tech elites even richer but doing little for overall economic development and prosperity.

Assisting individuals with lower-level skills
The impact of ChatGPT on the workplace is not merely a theoretical concern. A recent analysis by OpenAI's Tyna Eloundou, Sam Manning, and Pamela Mishkin found that large language models like GPT could potentially impact 80% of the US workforce. They further indicated that these AI models, including GPT-4 and other forthcoming software tools, would significantly affect 20% of jobs, with at least 55% of tasks in those jobs "exposed." In contrast to previous waves of automation, higher-income jobs would be most affected, with writers, web and digital designers, quantitative financial analysts, and even blockchain engineers among those with the most vulnerable positions.

There is no question that generative AI will be used; law firms are one obvious example. It will open up a range of tasks that can be automated. ChatGPT and other generative AI examples have changed the game: while AI had automated some office work before, only rote, step-by-step tasks could be coded for a machine. Now AI can perform tasks once viewed as creative, such as writing and producing graphics. It's apparent to anyone paying attention that generative AI opens the door to computerizing many functions we once thought could not easily be automated.

The concern is not that ChatGPT will lead to large-scale unemployment, as there are still plenty of jobs in the US, but that companies will replace relatively well-paying jobs with this new form of automation. This could result in workers being sent off to lower-paying service employment. At the same time, only a few individuals can exploit the new technology and reap all the benefits.

If this scenario plays out, individuals and businesses with strong technology skills may adopt generative AI tools, become significantly more efficient, and ultimately dominate their industries. However, those without such technical abilities, along with less-skilled workers, could be left behind, exacerbating existing economic inequalities.

However, we can envision a more optimistic scenario in which generative AI enables more people to acquire the skills needed to compete with those who have more education and expertise.
An experiment by two MIT economics graduate students, Shakked Noy and Whitney Zhang, asked hundreds of college-educated professionals in fields like marketing and HR to complete everyday tasks; some were asked to use ChatGPT, while others were not. The AI tool raised overall productivity and helped the least skilled and accomplished workers the most, shrinking the performance gap between employees. In other words, poor writers improved significantly, while good writers simply became faster.

These initial findings suggest that ChatGPT and other generative AIs could "upskill" people struggling to find work. Many experienced workers are currently "lying fallow" after being ousted from office and manufacturing positions in recent years. It could revitalize the workforce if generative AI can be used as a practical tool to expand their expertise and provide them with specialized skills needed in healthcare or teaching.

To determine which scenario will prevail, we need to make a concerted effort to consider how we want to utilize the technology. We shouldn't assume that technology is already out there and we have to adapt to it. Since the technology is still in development, we have the opportunity to use it in a variety of ways. The key is to design it with intention.

In essence, we are at a crossroads where individuals with fewer skills can take on knowledge work, or those already highly skilled will significantly expand their advantages. The outcome will largely depend on how employers implement tools like ChatGPT. However, the more optimistic scenario is entirely within our grasp.

Beyond Human-Centered Design 
Nevertheless, there are reasons for a pessimistic outlook. AI creators have focused too much on replicating human intelligence rather than on leveraging the technology to empower individuals to perform new tasks and expand their abilities.

Pursuing human-like capabilities has resulted in technologies that merely displace human workers with machines, lowering wages and exacerbating wealth and income inequality. This is the single most significant explanation for the increasing concentration of wealth.

ChatGPT, with its human-like language outputs, embodies this very concern. But it has also accelerated the conversation about how these technologies can be leveraged to enhance people's capabilities instead of merely displacing them.

Despite many concerns about AI developers prioritizing human-like capabilities over extending human abilities, I remain optimistic about artificial intelligence's potential. 
Businesses can benefit significantly from generative AI by expanding their offerings and increasing productivity. It is a powerful tool for creativity and innovation rather than simply a means of doing things more cheaply. As long as developers and companies avoid the mindset that humans are unnecessary, generative AI will be critical.
Within a decade, generative AI could contribute trillions of dollars to the US economy, affecting nearly all types of knowledge workers. However, the timing of this productivity boost remains uncertain. It may require patience.

In 1987, Nobel laureate economist Robert Solow from MIT made a well-known statement: "You can see the computer age everywhere except in the productivity statistics." Only in the mid to late 1990s did the effects, particularly from semiconductor improvements, appear in productivity data as businesses learned to harness increasingly affordable computational power and related software advancements.

The impact of AI on productivity will depend on our ability to use the latest technology to transform businesses, much as we did in the earlier computer age. If companies merely use AI to incrementally improve existing tasks, efficiency may rise, but the net benefits will be limited. The true potential of AI lies in creating new processes and new value for customers. That will take time: we still need to figure out how generative AI can transform industries like writing and graphic design. Once we do, a significant productivity boost will follow, though the timeline for that breakthrough remains uncertain.

The Power Struggle in the Age of Artificial Intelligence
I believe that since ChatGPT and other AI bots automate cognitive work, rather than physical tasks that require infrastructure and equipment investments, there may be a more significant boost to economic productivity than in past technological revolutions. A productivity boost could occur much more quickly, by the end of the year or, indeed, by 2024.

Furthermore, the potential for large language models to enhance productivity and drive technological progress extends beyond economics. It is already being realized in the physical sciences, as seen in the work of Berend Smit, a chemical engineering researcher at EPFL in Lausanne, Switzerland, whose group uses machine learning to discover new materials. After one of his graduate students demonstrated interesting results using GPT-3, Smit challenged the student to prove that the model was useless for the sophisticated machine-learning studies his group runs to predict compound properties.

The student could not do so. With just a few minutes of fine-tuning on relevant examples, the model performed as well as advanced machine-learning tools explicitly developed for chemistry: given a compound's name and structure, it could accurately answer basic questions about properties such as solubility and reactivity.
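To give a flavor of the approach (a hypothetical sketch: the compounds, labels, and prompt format are my placeholders, not Smit's actual data), a general language model can be primed with a handful of labeled examples and then asked about a new compound:

```python
# Hypothetical few-shot prompt for compound-property Q&A.
# The examples are illustrative; a real study would use curated data
# and send the assembled prompt to a large language model via its API.
FEW_SHOT_EXAMPLES = [
    ("Is sodium chloride soluble in water?", "Yes"),
    ("Is naphthalene soluble in water?", "No"),
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt: instructions, worked examples, new question."""
    lines = ["Answer questions about the properties of chemical compounds."]
    for q, a in FEW_SHOT_EXAMPLES:
        lines.append(f"Q: {q}\nA: {a}")
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

print(build_prompt("Is glucose soluble in water?"))
```

The striking finding was how little of this scaffolding was needed: a modest number of such examples reportedly let GPT-3 keep pace with purpose-built chemistry models.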

Large language models could thus expand the expertise and capabilities of non-experts, such as chemists with little knowledge of complex machine-learning tools, just as in other areas of work. Kevin Maik Jablonka notes that using them could be as simple as doing a literature search, which could bring machine learning to the masses of chemists. These surprising results show the power of the new forms of AI across creative fields, including scientific discovery, and how easily they can be put to use. But they also raise critical questions: who will define the vision for the design and deployment of these tools, and who will control the future of this remarkable technology as its potential impact on the economy and jobs becomes more apparent?

There is a concern that large language models may be controlled by the same big tech companies already dominating much of the digital world. For example, Google and Meta offer their large language models alongside OpenAI, and the high computational costs required to run the software create a barrier to entry for competitors. As a result, there is a risk of uniformity of thought and incentives, which is a big concern when it comes to a technology that has such a far-reaching impact.

One possible solution is to establish a publicly funded international research organization for generative AI, modeled after CERN. Such an organization would have the computing power and scientific expertise needed to develop the technology further but would sit outside Big Tech, bringing some diversity to the incentives of the models' creators. Although it remains to be determined which public policies would best serve the public interest, it is becoming clear that decisions about how this technology is used cannot be left to a few dominant companies and the market.

Government-funded research has played a pivotal role in developing technologies that have brought widespread prosperity. For instance, in the late 1960s, the US Department of Defense backed ARPANET, which paved the way for the internet long before the World Wide Web was created at CERN.

It's essential to steer technological advances in ways that benefit the many, not just the privileged few. Federally funded research has been critical in developing technologies that lead to general prosperity: past technological advances created new tasks and jobs, raising wages and reducing income inequality. By contrast, the recent adoption of manufacturing robots in the American Midwest has resulted in job losses and regional decline.

Rapid progress in AI could affect us all, which underscores the importance of steering technological advances in ways that provide broad benefits. Our society and its powerful gatekeepers must stop being mesmerized by tech billionaires' agendas; the rest of us should have a say in the direction of progress and the future of our society.

The creators of AI and the businesspeople involved in bringing it to market deserve credit for their efforts. Still, we must not blindly accept their vision and aspirations for the technology's future. The assumption that AI is headed on an inevitable job-destroying path is troubling. It barely acknowledges that generative AI could lead to a creativity and productivity boom for workers beyond the tech-savvy elites.

There are various tools for achieving a more balanced technology portfolio, such as tax reforms and government policies that encourage worker-friendly AI. I acknowledge, however, that such reforms are a tall order and that redirecting technological change will require a broad social push.

Fortunately, our direction with ChatGPT and other large language models is within our control. As these technologies are rapidly deployed in various applications, businesses and individuals can use them either to enhance workers' abilities or to cut costs by eliminating jobs. Additionally, open-source projects in generative AI are gaining momentum, potentially breaking Big Tech's hold on these models. For example, more than a thousand international researchers collaborated last year on an open-source language model called Bloom, which can create text in multiple languages. Increased public funding for AI research could also change the course of future breakthroughs. While I am not entirely optimistic about the outcome, I am enthusiastic about the potential of these technologies: steering them in the right direction could lead to one of the best decades ever, but that is not an inevitable outcome.
0 Comments

The Emergence of the Post-Search Era on the Internet

3/24/2023

0 Comments

 
A year ago, I read an article discussing users' mounting outrage and irritation with Google Search as automated summaries, sponsored content, advertising, and SEO-centric spam increasingly replaced the informative website results the search engine was designed to produce. Rather than providing us with the information we were seeking (such as, in my case, the perfect toaster), Google's search algorithm was inundating us with the half-formed recommendations of "content farms." Google Search has nevertheless maintained its primacy through habit and the absence of a viable alternative, until now. On February 7th, Microsoft began the beta rollout of a new iteration of its Bing search engine as an A.I. chatbot powered by GPT-4, the most recent version of the large language model behind OpenAI's ChatGPT. Instead of directing us to external websites, the new version of Bing can generate answers to almost any inquiry. For good reason, Google perceives this technology as an existential threat to its core business; in late 2022, Google issued a "code red." Microsoft's vice president of design, Liz Danzico, who contributed to developing Bing AI's interface, recently said that "We're in a post-search experience."

Bing A.I., which I recently tried, combines Microsoft's search index with ChatGPT. Using it is like conversing with an incredibly powerful librarian whose domain encompasses the vast expanse of the Internet. Searching Google with keywords has become second nature to most internet users like me: we enter the relevant keywords, hit "enter," and peruse the list of links on the results page, and if we don't find what we want, we return to the search page and adjust our keywords. With Bing A.I., however, websites act as source materials rather than destinations, and the bot collaborates with us to produce results. Bing A.I. filters the information overload by summarizing the summaries and aggregating the aggregators. For example, I asked for Wirecutter's recommended toaster, and it provided me with the Cuisinart CPT-122 2-Slice Compact Plastic Toaster. I then asked it to gather a list of other suggestions, and it pulled them from various outlets, including Forbes, The Kitchen, and The Spruce Eats. Within seconds, I had a digest of reliable devices without leaving the Bing A.I. page. Nonetheless, the chatbot informed me that it could not make my purchasing decision for me, as it was not human.

In some ways, a user of Bing A.I. has greater control than a Google Search user. We must learn to phrase our requests in complete sentences rather than isolated keywords when communicating with the chatbot, and we can refine our results by asking follow-up questions. For example, if we ask for an itinerary for a trip to Portugal and then ask, "What time does the sun set there?," the chatbot understands which "there" we are referring to. In other ways, however, Bing A.I. limits us, encouraging us to rely on the machine to determine what information is helpful rather than conducting our own searches. The interface for Bing A.I.'s "conversation mode" is intended to be a one-stop shop for all our needs, from travel guides to financial advice. It consists of a single chat box atop a subtle gradient of colors, and the chatbot even concludes its responses with a smiling, blushing emoji: "I'm always happy to chat with you. 😊" To the left of the chat box is a "new topic" button with a broom icon that clears the current conversation and starts over. The module was developed with the assistance of the A.I. itself.

Although Bing A.I. and similar tools may provide unprecedented convenience, they could harm content creators. While Bing A.I. does provide links to relevant websites, these are discreetly displayed as footnotes, minimizing our effort. Microsoft's Sarah Mody, in a recent public video, showed how Bing A.I. could reproduce an entire recipe within the chat box, effectively circumventing the website that originally hosted the content. Mody then asked Bing A.I. to list the recipe's ingredients organized by grocery-store aisle, a task no recipe website could match. These features suggest that tools like Bing A.I. could further diminish the traffic and revenue of content creators.

Afterward, I asked Bing A.I. to provide me with the most recent news on the unfolding banking crisis, specifically First Republic Bank and SVB. Bing A.I. generated a summary of breaking news, citing articles from NBC, CNN, and the Wall Street Journal, which is behind a paywall. Although the Wall Street Journal has indicated that any A.I. referencing its content must pay for a proper license, it may struggle to enforce this requirement for publicly accessible articles, since A.I. search engines, like Google, crawl the entire Web. Then I asked Bing to present the news as a bulleted list in the style of a newsletter, and the result was a somewhat dry but convincing imitation. On another occasion, when I asked Bing for suitable wallpaper options for bathrooms with showers, it provided me with a bulleted list of manufacturers. Instead of searching for a listicle on Google, I "co-created" one with the bot.

The current design of the Web is heavily centered on aggregation: product recommendations on The Strategist, film reviews on Rotten Tomatoes, restaurant reviews on Yelp. The rise of A.I. tools like Bing A.I. raises questions about the value of these sites in the future. Rather than relying on them for aggregation, we may bypass them entirely and rely solely on A.I. chat summaries, even though, paradoxically, the A.I. depends on the source material those very sites produce in order to generate its answers. I believe the widespread adoption of A.I. tools could create a vicious cycle: sites' business models, based on advertising and subscriptions, collapse as direct traffic declines, leaving less content for A.I. tools to aggregate and summarize.

As for the coming flood of AI-generated content: Google and Microsoft recently introduced suites of A.I. tools for office workers, including applications that can generate new emails, reports, and slide decks or summarize existing ones. As these tools become more ubiquitous, they will likely extend into other areas of our digital lives. This could lead to "textual hyperinflation," in which it becomes difficult to distinguish meaningful content from meaningless content. A.I.-generated spam on an unprecedented scale could inundate us, and it may become challenging to tell human writing from machine-generated text. "Content mills" could use A.I. to create entire articles, publicists could write press releases with it, and cooking sites could use it to generate recipes. The resulting glut of content may require human help to navigate, but media companies may lack the resources to devote to that need. In the end, A.I. may be left with the very problem it creates: if tools like Bing A.I. cause the well of original material online to dry up, all that may remain are self-referential bots offering generic answers that machines created in the first place.

As more and more content online is generated by artificial intelligence, I believe the non-automated text will become a sought-after commodity, akin to a natural and unprocessed product like natural wine. Google recently launched its own A.I. chatbot called Bard, which is a move in the ongoing competition between tech giants. However, Google has kept Bard separate from its flagship product, with one executive stating that it complements Google Search. This approach acknowledges the potential threat that A.I. poses to Google's current business model. Meanwhile, Bing is enthusiastically leading the charge into the post-search era.

The emergence of Bing's artificial intelligence marks the beginning of a new era for the Internet, one in which search may no longer be the primary means of finding information. I wonder what significance traditional websites will hold in a world where bots can perform the aggregation for us.
​
We are indeed living in the post-search internet, but let's not forget that non-automated text or human-generated text will become a sought-after commodity.
0 Comments

The American Lafayette in Iran

10/15/2022

0 Comments

 
​"An American Martyr in Persia" is another fantastic book by Reza Aslan, centering on a chronological narrative and not, for the most part, on moralistic judgment. It is the biography of Howard Baskerville, a 22-year-old Presbyterian missionary from the Black Hills of South Dakota who traveled in 1907 to Tabriz, a town in northern Iran, to do "the Mohammedan work." That is how his church defined the conversion of Muslims to Christianity. Baskerville died less than two years later in 1909, shot in a battle between pro-democracy rebels—whose "constitutionalist" cause he had embraced—and the forces of the Shah of Persia, who was determined to snuff out all political rebellion.

Before his death, Howard Baskerville had been told by the American consul in Tabriz not to get involved in a war that was not his own. The young man's answer (as told to us by the Author - Reza Aslan) was stirring: "The only difference between me and these people is the place of my birth, and that is not a big difference." On his death, Baskerville's Persian companions granted him a respectful title, "the American Lafayette"—after the French soldier who had fought in the American Revolutionary War.

Baskerville was a compassionate, even beguiling, fellow, and the book brings flamboyant panache to his story. Bazaars teem with hirsute brigands, and Maxim guns go “takka takka takka.” If the writing is often overwrought, it captures the mood and drama of the milieu in which the young American found himself. Armed with a letter of recommendation from no less than Woodrow Wilson—his mentor at Princeton—Baskerville persuaded the Presbyterian Church to send him abroad. (There is a tedious tangent in which the Author dwells on Wilson's "unrepentant racism.”) Like many of his era, Baskerville desired to go to China but was transferred to Persia, regarded by the church as a hardship posting. A missionary at the time described the Persian character as "that of treachery and falsehood in the extreme."

Persia was in the grasp of a political revolution when Baskerville arrived in September 1907. Ten months earlier, the Shah, Muzaffar ad-Din of the Qajar dynasty, had yielded to protests and accepted the institution of a parliament and a liberal constitution, new checks on his previously unfettered powers. He was deeply in debt to Russia and Britain, both of whom were using Persia as the "staging ground" (in the Author's words) of the Great Game, the term used to describe the Anglo-Russian rivalry over Central Asia. Muzaffar died only days after making his concession and was succeeded by his son Mohammed Ali, an entirely more hardline Shah in thrall to his Russian advisers. The Author describes him as a "pompous, pudgy young man with a ridiculous mustache" who was "incensed with his father for making his God-given authority suddenly contingent upon the will of the people."

Mohammed Ali, egged on by his Russian aide-de-camp, cracked down on Parliament, which led to a prolonged standoff and widespread violence in Tehran. Tabriz, to the north, closer to Azerbaijan and Armenia than to the capital, had always been a rebellious city. This multilingual, multireligious border town was as Turkic as it was Persian. Its council had asserted a striking degree of political independence with the coming of the 1906 constitution and wasn't about to surrender its liberty to a young Shah with authoritarian inclinations. Baskerville arrived as Tabriz seethed and soon drifted away from the "tranquility" of the American Memorial School (where he taught and lived) into the company of local intellectuals and "secret societies" that sought to defy the Shah.

The book strains to persuade us that Baskerville's adoption of the constitutional cause sprang from a love of liberty and political freedom that he'd acquired at Princeton (paradoxically, from Woodrow Wilson). But more important may be that the genuine young man, who made friends quickly, was heartbroken by the assassination of his best friend, a Persian fellow teacher at the school who was closely involved with the resistance. His friend's death drove him to join the Tabriz rebels, and their leader, a reformed bandit called Sattar Khan, made Baskerville his second-in-command. Sattar was no fool: although Baskerville had little military skill, he was invaluable as a symbol and a magnet for support. "American Defends Tabriz," screamed a headline in the New York Times just days before Baskerville's death.

The Shah's forces encircled Tabriz, and Baskerville was killed as he tried to lead a small posse—an "Army of Salvation"—to break the siege. The martyred Baskerville, says the Author, became a local hero. For many Iranians, he "embodied" a romantic idea of the U.S.: "youthful, impassioned, a little bit naïve, perhaps, but earnest in the conviction that freedom is inalienable."
Yet even as he tells us Baskerville's story, The Author can't resist kicking at modern America. Iranians expected America, "a nation of Baskervilles," to support them in their struggle against the Shah in the years before Ayatollah Khomeini, whose revolution he describes with staggering banality as "a different form of tyranny." America, he complains, was more concerned with "its interests than its principles" in Iran. Mr. Aslan tells us Baskerville's story with passion and sweetness. It's a pity he's so sour about the land that gave his family shelter.

Baskerville's role in the Persian struggle to become an independent and democratic society made him a hero in his adopted country. Back at home in America, however, his story is not well-known, and his legacy is not celebrated. An American Martyr in Persia highlights the complex historical ties between America and Iran and the potential of a single individual to change the course of history.

In this rip-roaring story of his life and death, Aslan offers us a powerful parable about the universal ideals of democracy—and to what degree Americans are willing to support those ideals in a foreign land. Interwoven throughout is an essential history of the nation we now know as Iran—frequently demonized and misunderstood in the West. Indeed, Baskerville's life and death represent a "road not taken" in Iran. Baskerville's story, like his life, is at the center of a whirlwind in which Americans must ask themselves: How seriously do we take our ideals of constitutional democracy, and whose freedom do we support? It is an important question to ask as we witness schoolgirls in Iran today chanting "Woman, Life, Freedom" (Zan, Zendegi, Azadi).

0 Comments

Italy, AI, and Irony

9/26/2022

0 Comments

 
Known as the "Green Heart of Italy," Umbria boasts untouched landscapes in its verdant hills, mountains, and valleys. Etruscans, Romans, and feuding medieval families have left an incredible artistic and cultural heritage, while priests and monks have given a fascinating religious imprint to its towns. During my visit to Umbria in late summer, I met a couple from New York at their marvelous farmhouse. I had a short yet fascinating conversation with the husband, a distinguished anthropologist and university professor, while my wife and our friends were given a tour by his wife, a famous journalist. Both were in their mid-80s.

The husband asked about my profession, and I said, "I'm in AI Education." He immediately asked: "Can AI understand irony?" That question still puzzles me today. 

I put the answer to that question to one side and started focusing instead on the question itself, and on a more fundamental question I have been thinking about lately: "consciousness," the hard problem, and the even harder question, in the field of AI. Venturing a bit into philosophy, the hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is truly the problem of explaining why there is "something it is like" for a subject in conscious experience, why conscious mental states "light up" and directly appear to the subject. The usual methods of science involve explanation of functional, dynamical, and structural properties: what a thing does, how it changes over time, and how it is put together. But even after we have explained the conscious mind's functional, dynamic, and structural properties, we can still meaningfully ask why it is conscious at all. This suggests that an explanation of consciousness will have to go beyond the usual methods of science. Consciousness presents a hard problem for science, or perhaps it marks the limits of what science can explain. Explaining why consciousness occurs at all can be contrasted with the so-called "easy problems" of consciousness: explaining its function, dynamics, and structure. These can be described using the usual methods of science. But that leaves the question of why there is something it is like for the subject when these functions, dynamics, and structures are present. That is the hard problem.

But let's, for a moment, assume a conscious being is one capable of having a thought and not disclosing it. Consciousness would then be the prerequisite for irony, or saying one thing while meaning the opposite, something that happens all the time in my Persian culture. We know we are being ironic when we realize our words don't correspond with our thoughts. That most of us have this unique capacity, and that most of us regularly convey our unspoken meanings in this way, is something that, I think, should surprise us more often than it does. It seems almost distinctly human. Animals can be funny, but not deliberately so. So what about computers or machines? Can they deceive? Can they keep secrets? Can they be ironic?

The truth is that anything related to AI is already being studied by an army of obscenely well-resourced computer scientists and AI researchers. This is also the case with the question of AI and irony, which has recently attracted significant research in academia and at private companies. Of course, since irony involves saying one thing while meaning the opposite, creating an intelligent machine that can detect and generate it is not a simple task. But if the AI community could build such a machine, it would have many practical applications, some more sinister than others. In the age of Google online reviews, retailers have become very keen on so-called "opinion mining" and "sentiment analysis," which use AI to map the content and mood of reviewers' comments. Knowing whether your product is being praised or becoming the butt of a joke is valuable information, and Amazon is already acting on it. Or consider content moderation on social media platforms. If, let's say, Twitter or Facebook wants to limit online abuse while protecting freedom of speech, would it not be helpful to know when someone is serious and when they are just joking?

Or what if someone tweets that they have just done something crazy and illegal? (don't ever tweet crazy or illegal stuff, by the way). Imagine if we could determine instantly whether they are serious or whether they are just "being ironic." 

The truth is that, given irony's proximity to lying, it's not hard to imagine how the entire shadowy machinery of government and corporate surveillance that has grown up around new communications technologies would find the prospect of an irony detector extremely interesting. And that goes a long way toward explaining the growing literature on the topic in the AI field.

To better understand the state of current research into AI and irony, it helps to know a little about the history of AI in general. That history is often broken into two periods. Through the 1990s, AI researchers sought to program computers with a set of handcrafted formal rules for how to behave in predefined environments. If you used Microsoft Word in the 90s, you might remember the annoying Office Assistant, Clippy, which endlessly popped up to offer unwanted advice.
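To see what "handcrafted formal rules" means in practice, here is a minimal sketch in Python, written in the Clippy spirit; the triggers and replies are invented for illustration and are not Microsoft's actual logic.

    # A toy assistant in the 1990s rule-based spirit: behavior is a fixed
    # set of hand-written rules for a predefined environment.
    # The triggers and replies below are hypothetical illustrations.
    def rule_based_assistant(text: str) -> str:
        if text.lower().startswith("dear"):
            return "It looks like you're writing a letter!"
        if "curriculum vitae" in text.lower():
            return "It looks like you're writing a CV!"
        return ""  # no rule fired, so the system stays silent

    print(rule_based_assistant("Dear Mr. Smith, I am writing to ask..."))

Such a system never learns; every behavior it will ever exhibit must be anticipated and written down in advance, which is exactly the limitation the data-driven approach was meant to overcome.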

Since the early 2000s, that model has been replaced by data-driven machine learning and sophisticated neural networks. Enormous caches of examples of given phenomena are translated into numerical values, on which computers can perform complex mathematical operations to determine patterns no human could ever discover. Moreover, the computer doesn't merely apply a rule. Instead, it learns from experience and develops new operations independent of human intervention. 

The difference between the two approaches is the difference between Clippy and facial recognition technology.

To create a neural network that can detect irony, AI scientists focus initially on what some would consider its simplest form: sarcasm. They begin with data scraped from social media. For example, they might collect all tweets labeled "sarcasm," with or without the #, or Reddit posts tagged /s, a shorthand Reddit users employ to indicate they are not serious. The point is not to teach the computer to recognize the two separate meanings of any given sarcastic post; meaning is of no relevance whatsoever. Instead, the computer is instructed to search for recurring patterns, or what researchers call "syntactical fingerprints": words, punctuation, errors, emojis, phrases, context, and so forth.
On top of that, the dataset is bolstered by adding even more streams of examples: other posts in the same threads, for instance, or from the same account. Each new sample is then run through a battery of calculations until we arrive at a single determination: sarcastic or not sarcastic. Finally, a bot can be programmed to reply to each original poster and ask whether they were being sarcastic, and any reply is added to the machine's growing mountain of experience. So, assuming AI continues to advance at the rate that took us from Clippy to facial recognition technology in less than two decades, can ironic androids be far off?
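For readers curious what such a pipeline looks like in code, here is a minimal sketch in Python. The posts and labels are invented stand-ins for the corpora scraped from Twitter (#sarcasm) and Reddit (/s), and a simple scikit-learn logistic regression stands in for the large neural networks used in actual research systems.

    # A minimal sketch of the sarcasm-detection pipeline described above.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented examples standing in for millions of scraped, self-labeled posts.
    posts = [
        "Oh great, another Monday. Exactly what I needed...",  # sarcastic
        "Wow, two hours in traffic. Best day EVER!!!",         # sarcastic
        "Sure, because waiting in line is SO much fun.",       # sarcastic
        "The new library opens tomorrow at 9 am.",             # literal
        "Thanks for the ride home, I really appreciate it.",   # literal
        "The forecast says rain this weekend.",                # literal
    ]
    labels = [1, 1, 1, 0, 0, 0]  # 1 = sarcastic, 0 = not sarcastic

    # Character n-grams capture "syntactical fingerprints" rather than
    # meaning: punctuation runs, capitalization, ellipses, and so on.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    model.fit(posts, labels)

    # Each new post is reduced to a single determination: 1 or 0.
    print(model.predict(["Oh sure, that plan will DEFINITELY work..."]))

Note that nothing in this sketch "understands" either meaning of a sarcastic post; it only finds recurring surface patterns, which is precisely the gap between detection and understanding that the next paragraphs take up.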

It could be argued that there are qualitative differences between sorting through the "syntactical fingerprints" of irony and understanding it. Some might suggest not. If a computer can be taught to behave exactly like a human, then it's immaterial whether a rich internal world of meaning lurks beneath its behavior. 

But I would argue that irony is a unique case; it relies precisely on the distinction between external behaviors and internal beliefs. While AI scientists have only recently become interested in irony, philosophers and literary critics have been thinking about it for a very, very long time. And perhaps exploring that tradition would shed old light, as it were, on a new problem.

Of the many names one could invoke in this context, two are indispensable: the German Romantic philosopher Friedrich Schlegel and the post-structuralist literary theorist Paul de Man. For Schlegel, irony does not simply entail a false, external meaning and a true, internal one. Rather, in irony two opposite meanings are presented as equally valid, and the resulting indeterminacy has devastating implications for logic, most notably the law of non-contradiction, which holds that a statement cannot be simultaneously true and false. De Man follows Schlegel on this score and, in a sense, universalizes his insight. He notes that every effort to define a concept of irony is bound to be infected by the phenomenon it purports to explain. Indeed, de Man believes all language is infected by irony and involves "permanent parabasis": because humans have the power to conceal their thoughts from one another, it will always be possible, permanently possible, that they do not mean what they are saying.

Irony, in other words, is not one kind of language among many; it structures, or better, haunts every use of language and every interaction. And in this sense, it exceeds the order of proof and computation. The question is whether the same is true of human beings in general.

0 Comments

We are gaining Technology but losing Democracy

1/10/2022

0 Comments

 
Technology capitalism is the dominant economic establishment of our time, and it is on a crash course with democracy; this is more visible than ever in the Western world. Technology capitalism's giants—Google, Facebook, Amazon, Microsoft, and Apple—now own, operate, and mediate nearly every aspect of human interaction with global information and communication systems, unconstrained by public law. All roads to economic, social, and even political participation now lead through a handful of unaccountable companies, a state of affairs that has only intensified during two years of the COVID-19 pandemic.

The result is a path of social decay:
  • The destruction of privacy.
  • Extensive corporate concentrations of information about people and society.
  • Poisoned discourse.
  • Fractured societies.
  • Remote systems of behavior manipulation.
  • Weakened democratic institutions.
While authoritarian governments designed and deployed digital technologies to advance their systems of authoritarian rule, the West has failed to create a coherent vision of a digital century that promotes democratic principles and governance.

Rights and laws once codified to defend citizens from industrial capitalism—such as antitrust law and workers’ rights—do not shield us from these harms. If the ideal of the people’s self-governance is to endure this century, then a democratic counterrevolution is the only solution.

U.S. and European lawmakers have finally begun to think seriously about regulating privacy and content. Still, they have yet to consider the far more fundamental question of how to structure and govern information and communication for a democratic digital future.

Three principles could offer a starting point. First, the democratic rule of law governs. There is no so-called cyberspace immune to rights and laws, which must apply to every domain of society, whether populated by people or machines. Publishers, for example, are held accountable for the information they publish. Even though their profit-maximizing algorithms enable and exploit disinformation, technology capitalists have no such accountability.

Second, unprecedented harms demand unprecedented solutions. Existing antitrust laws can break up the tech giants, but that won’t address the underlying economics. The target must be the secret extraction of human data once considered private. Democracies must outlaw this extraction, end the corporate concentration of personal information, eliminate targeting algorithms, and abolish corporate control of information flows.

Third, new conditions require new rights. Our era demands the codification of epistemic rights—the right to know and decide who knows what about our lives. These fundamental rights are not codified in law because they have never come under systemic threat. They must be codified if they are to exist at all.

We can be a technology capitalist society or a democracy, but we cannot be both. Democracy is a fragile political condition dedicated to the prospect of self-governance, harbored by the principle of justice and maintained by collective effort. Each generation’s mission is always the same: to protect and keep democracy moving forward in a relay race against anti-democratic forces that spans centuries. The liberal democracies have the power and legitimacy to lead against technology capitalism and do so on behalf of all peoples struggling against a dystopian future.

The most influential architect of the U.S. political system, James Madison, was deeply fascinated by the Enlightenment thinkers who saw politics as a science. They imagined a system of checks and balances producing good government much as a machine with wheels and pulleys produces motion or transfers energy. They did not expect people to be wise or virtuous. "If men were angels," Madison famously wrote in the Federalist Papers, "no government would be necessary." Madison believed he had built a system that did not require virtue to function. "Ambition must be made to counteract ambition," he urged, and from this conflict of interests would come ordered liberty and democracy. This American model became the template for much of the world.

In the United States and worldwide, we are now witnessing experiments in politics without angels—and they aren't working so well. Democratic institutions have weakened in many places, broken in others, and seem under stress even where they still function. The countries that have not faced the full furies of populism and nationalism—Germany and Japan are the most striking examples—have escaped these dangers because of their culture and history rather than some better democratic design. Everywhere, Ralph Waldo Emerson's observation seems to hold: institutions are merely the lengthened shadows of men. If those men fail and misbehave, venally or irresponsibly, the democratic system is endangered. We enter the 21st century asking one of the oldest political questions, much older than the Enlightenment ideas on which democracy was built, a question the ancient Greeks and Romans debated more than two millennia ago: How do we produce virtue in human beings?
0 Comments

41 questions we should ask of the technologies and tools that shape our lives

8/4/2021

0 Comments

 
We all know that Zoom (or Google Meet, which I often use) causes fatigue, that social media spreads misinformation, and that Google Maps wipes out our sense of direction. We also know, of course, that Zoom lets us cooperate across continents, that social media (Twitter, Instagram, or TikTok) connects us to our families and friends, and that Google Maps keeps us from getting lost. Most of today's technology criticism asks whether a technology is good or bad, or judges its various applications. But there's an older tradition of criticism that asks a more fundamental and nuanced question: How do these technologies change the people who use them, for good and for bad? And what do the people who use them—all of us, in other words—actually want? Do we even know?
L.M. Sacasas explores these questions in his great newsletter, "The Convivial Society." His work marries the theorists of the 20th century — Hannah Arendt, C.S. Lewis, Ivan Illich, Marshall McLuhan, Neil Postman, and more — to the technologies of the present day. This merging of past thinkers and contemporary concerns is revelatory in an era when we tend to take the shape of our world for granted, forgetting how it would look to those who stood outside it or to those who were present at the inception of these tools and mediums.

Sacasas recently published a list of 41 questions we should ask of the technologies and tools that shape our lives. What I admire about these questions is how they invite us to think not just about technologies but about ourselves: how we act, what we want, and what, in the end, we actually value. I highly recommend listening to Sacasas's conversation with Ezra Klein.

Here is the list of those 41 questions.  I'd love to hear your answers to some of these questions:
  1. What sort of person will the use of this technology make of me?
  2. What habits will the use of this technology instill?
  3. How will the use of this technology affect my experience of time?
  4. How will the use of this technology affect my experience of place?
  5. How will the use of this technology affect how I relate to other people?
  6. How will the use of this technology affect how I relate to the world around me?
  7. What practices will the use of this technology cultivate?
  8. What practices will the use of this technology displace?
  9. What will the use of this technology encourage me to notice?
  10. What will the use of this technology encourage me to ignore?
  11. What was required of other human beings so that I might be able to use this technology?
  12. What was required of other creatures so that I might be able to use this technology?
  13. What was required of the earth so that I might be able to use this technology?
  14. Does the use of this technology bring me joy?
  15. Does the use of this technology arouse anxiety?
  16. How does this technology empower me? At whose expense?
  17. What feelings does the use of this technology generate in me toward others?
  18. Can I imagine living without this technology? Why, or why not?
  19. How does this technology encourage me to allocate my time?
  20. Could the resources used to acquire and use this technology be better deployed?
  21. Does this technology automate or outsource labor or responsibilities that are morally essential?
  22. What desires does the use of this technology generate?
  23. What desires does the use of this technology dissipate?
  24. What possibilities for action does this technology present? Is it good that these actions are now possible?
  25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
  26. How does the use of this technology shape my vision of a good life?
  27. What limits does the use of this technology impose upon me?
  28. What limits does my use of this technology impose upon others?
  29. What does my use of this technology require of others who would (or must) interact with me?
  30. What assumptions about the world does the use of this technology tacitly encourage?
  31. What knowledge has the use of this technology disclosed to me about myself?
  32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
  33. What are the potential harms to myself, others, or the world that might result from using this technology?
  34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
  35. Does my use of this technology encourage me to view others as a means to an end?
  36. Does using this technology require me to think more or less?
  37. What would the world be like if everyone used this technology exactly as I use it?
  38. What risks will my use of this technology entail for others? Have they consented?
  39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
  40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
  41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?
When we think about technology’s moral implications, we tend to think about what we do with a given technology. We might call this the “guns don’t kill people, people kill people” approach to the ethics of technology. What matters most about technology on this view is the use to which it is put. This is, of course, a valid point. A hammer may indeed be used to either build a house or bash someone’s head in. On this view, technology is morally neutral, and the only morally relevant question is this: What will I do with this tool?

But is this really the only morally relevant question one could ask? Pursuing the example of the hammer: might I not also ask how having the hammer in hand encourages me to perceive the world around me? Or what feelings having a hammer in hand arouses?
0 Comments

I am part of the "Zoom class"

7/9/2021

0 Comments

 
As the world's more prosperous and highly vaccinated countries, like the United States, begin to emerge from the pandemic, there's a lot of talk about "the office." I have been thinking about "the office" too, as I spent most of the week at our Pittsburgh office preparing for one of ReadyAI's artificial intelligence camps at Winchester Thurston.

Many business executives say they expect employees to split time between working from home and the office, according to the latest McKinsey report. (Click here to read the report.) Savvy entrepreneurs are even building special speakers that let remote workers feel like they're in the office even when they aren't. Companies are also sending care packages and subsidizing part of their workers' childcare costs.

I agree that the office debate is an important one. In fact, I am writing this piece from my home. But for billions of people around the world, this debate is simply not relevant to their work and lives.

That is mainly because billions of people have jobs that cannot be done from a distance: giving haircuts, tending to seriously ill or injured patients, or serving food. Other jobs, in occupations like sanitation, farming, deliveries, or transportation, are essential but not confined to any specific space.

The International Labour Organization estimates that just 18 percent of the global workforce, or approximately 557 million people, were consistently teleworking during the pandemic. (Click here to read the report.) That's triple what it was before COVID, but it still leaves over 2.7 billion people worldwide for whom the "back-to-the-office debate" sounds like something from another planet.

And let's not ignore that those 2.7 billion people and their families have been hit hardest by COVID in terms of hours and wages lost, emotional trauma, and devastating unemployment.

Today the division between the "Zoom class" and the rest of the world tracks some of the more obvious fault lines of inequality that cut across our communities and societies. 

Even in prosperous economies like the US, only a small portion of workers can telework consistently; here it's about a fifth. The numbers are far lower in middle-income countries, where the size of the "laptop class" or "Zoom class" plummets. In India, for example, where more than 470 million people work in retail or agriculture, only five percent can Zoom to the job. The numbers in Africa are similar.

Let's think about this a bit further. In-person service jobs are more prevalent in less developed countries: you are five times as likely to be a street vendor in a middle-income nation as in a wealthy one, and 16 times as likely to work in agriculture. It is also a matter of constraints on internet connectivity and internet services, since most countries don't have the infrastructure to support massive teleworking populations. On top of that, many of these countries have suffered the additional blow of losing remittances from citizens working abroad in jobs that often aren't remote-workable.

Also, high debt obligations and a lack of cash mean that low- and middle-income countries cannot roll out the kinds of unemployment benefits or infrastructure rebuilding programs we've seen in the US and Europe. More flourishing countries have allocated up to 30 percent of their GDP to cushion the pandemic's blow, but low- and middle-income countries have mustered less than six percent. (Click here to read the IMF report.)

The bad news is that the pandemic has put decades of poverty reduction into reverse. In 2020 alone, more than 120 million people fell below the poverty line globally, and the number of people living in extreme poverty rose for the first time in 24 years, since 1997. (Click here to read the report.)

Even in rich countries, non-remote jobs are overwhelmingly in lower-income, economically vulnerable professions. According to a recent Pew study, more than three-quarters of low-income workers in America can't work from home at all. Non-remote jobs also employ higher proportions of women, ethnic minorities, and younger people, groups that went into the pandemic at an economic disadvantage and suffered disproportionate financial losses during the crisis itself.

The pandemic is far from over, and many things need to be done. Globally, more prosperous countries need to look seriously at debt relief for cash-strapped developing nations. And even within more prosperous countries, better compensation and labor protections for "essential workers" are genuinely essential.

It is nice to be out on the balcony applauding the essential workers and tweeting about them, but unless we start compensating them more, what happens if another pandemic comes around?
0 Comments
