Everywhere I travel—whether in classrooms where children are learning to build their first robots or in conference halls where policymakers debate the future of technology—I sense the same quiet fear: we are losing trust. People trust one another less. Citizens trust their institutions less. Nations no longer trust the international order that once offered some measure of stability. And just when the world needs shared leadership to confront climate change and the rise of artificial intelligence, the loudest voices are turning inward, insisting that making themselves stronger—even at others’ expense—is both natural and necessary.
I have seen where that mindset leads. Coming from the Middle East, I recognize the old worldview resurfacing: the strong dominate, the weak obey. If the weak resist, they are blamed for the conflict. It is an imperial instinct, deeply ingrained in history.

Yet for a few brief decades after World War II, the world attempted something different—an order built more on law than on force. That order was never perfect, but it did something extraordinary: it made many nations feel safe enough to invest in people instead of weapons. For the first time in history, global military budgets shrank to around seven percent of government spending, while investments in health, education, and welfare rose. That was not utopia—it was the best humanity had achieved.

Now, that progress is unraveling. The Russian invasion of Ukraine shattered a vital taboo: the norm that strong nations do not invade weaker ones to conquer territory. When such taboos fall, fear spreads. Across Europe and East Asia, governments are rearming, convinced that only military strength can guarantee survival. America’s retreat from global leadership and its inward turn have deepened the anxiety. If everyone acts out of fear, an arms race is inevitable—and trust, once broken, becomes harder to rebuild.

This erosion of trust between nations mirrors what is happening within societies themselves. Over the past two decades, we have handed the architecture of our public conversation to artificial intelligence. Fifty years ago, editors decided what stories appeared on the front page; today, algorithms decide what billions of people see, read, and believe. Their objective is not truth or civic health—it is engagement. If amplifying anger keeps us scrolling, anger will spread. Democracy is built on human conversation; dictatorship is built on dictation. Yet the global democratic conversation is now managed by nonhuman agents that no one truly controls or understands.
The irony is striking: humanity created the most powerful information networks in history—and those same networks are making it harder for people to talk, to listen, or to trust.

Part of the confusion comes from mistaking freedom of speech for freedom of algorithms. Human speech—however offensive or misguided—is protected because it represents a conscious mind expressing thought. Algorithmic decisions are not speech; they are engineered outcomes designed to optimize behavior. When a platform chooses to spread falsehoods or rage because those emotions hold attention longer, it is not exercising freedom; it is manipulating us for profit. Bots do not have rights. And allowing them to masquerade as people erodes the foundation of trust on which democracy depends. Governments should treat counterfeit humans online as they treat counterfeit currency: a direct threat to public order.

The deeper issue, though, is that truth itself is becoming harder to find. Truth is costly, complicated, and often painful. Fiction is cheap, simple, and flattering. In an environment flooded with information, the lighter falsehoods float to the top. Societies used to have institutions—newsrooms, universities, courts—dedicated to the slow, expensive work of verification. Now, speed and novelty dominate. The result is a digital world where fact and fabrication mix freely, and most people lack the tools to tell the difference. This is not the end of truth, but it is the end of taking truth for granted.

To adapt, we need time—and time is precisely what the AI race is stealing from us. Whenever I speak with AI researchers and executives, I hear the same paradox. Everyone admits that it would be wiser to slow down development and invest in safety, but no one dares to stop. “If we pause,” they tell me, “our competitors won’t, and they’ll dominate the future.” Fear of others, not faith in progress, drives the acceleration.
It’s a tragic irony: we distrust other humans so profoundly that we are rushing to build machines we trust far too easily. We have centuries of experience managing human power through checks and balances, elections, and laws. We have almost no experience controlling nonhuman power. This is not a race for innovation—it is a race against our own fear.

Yet, even amid this turbulence, I try not to be either an optimist or a pessimist. Pessimism paralyzes; optimism anesthetizes. Realism is the only responsible position. It recognizes that our challenges—climate change, war, AI—are serious but solvable. Humanity has overcome existential problems before. We can again, if we act with humility and cooperation.

And cooperation begins with trust. Building trust is not an abstract virtue; it is a practical necessity. It starts with restoring honesty in how we communicate and transparency in how our technologies operate. Algorithms that shape public discourse should be accountable and understandable. People should always know when they are speaking with a machine, not a person. Education systems must train the next generation not only to write code but to recognize manipulation, to question information, and to think critically about its sources. Democracies must strengthen their self-correcting mechanisms—independent courts, free media, open universities—so that mistakes can be recognized and corrected before they calcify into crises. And globally, we need cooperation to set shared standards for AI development: agreements on safety, transparency, and the prohibition of autonomous systems that make life-or-death decisions without human oversight. These are not utopian dreams; they are the basic scaffolding of a stable civilization in the age of intelligent machines.

Rebuilding trust also means relearning the art of generous interpretation. It means not assuming the worst about others unless they’ve given us reason to.
After the attacks of October 7, many Israelis said, “We can never trust Arabs again.” But look closer: Egypt and Jordan, which have peace treaties with Israel, honored them. The Palestinian Authority did not attack. The Arab citizens of Israel, more than two million people, did not rise in violence. In many towns, they protected their Jewish neighbors. The lesson isn’t that evil disappears; it’s that trust, when nurtured through agreements and dialogue, holds. Even in moments of horror, cooperation endures.

History supports this. A hundred thousand years ago, humans lived in small tribes that trusted only their kin. Over millennia, we built systems—laws, markets, religions—that allowed millions, then billions, of strangers to cooperate. Today, we entrust our lives daily to people we will never meet: the engineers who design our aircraft, the farmers who grow our food, the scientists who create our medicines. That web of trust is the most astonishing achievement in human history. Yet the very technologies that once connected us are now straining those bonds, inserting nonhuman intermediaries into every human interaction. We are learning, painfully, what it means to build trust in a world mediated by machines.

The heart of this transformation is a simple question: what makes humans valuable in the age of AI? Intelligence alone cannot be the answer. Intelligence solves problems and achieves goals, but it does not feel. Consciousness—the ability to experience pain, joy, love, and grief—is what gives life moral weight. A superintelligent system can outthink us, but as far as we know, it cannot feel. It can simulate emotion, but it does not experience it. That distinction matters. Ethics begins with empathy, not efficiency. Education, therefore, must focus not only on intellect but on emotional depth, moral reasoning, and the capacity to care.
If we train our children to compete with machines rather than to understand what makes them human, we will lose both the race and our reflection.

I often say that fear keeps individuals alive, but trust keeps civilizations alive. Every breath we take is an act of trust in the air around us. Every meal we eat depends on strangers across continents. Every idea that enriches our minds was born somewhere else. If we close ourselves off—politically, technologically, emotionally—we suffocate. The way forward is not blind faith, but informed trust: trust built on transparency, empathy, and shared responsibility.

Artificial intelligence is unprecedented. But our capacity for cooperation is also unprecedented. Before we can decide whether to trust machines, we must relearn how to trust each other. Only then will we have not just the intelligence, but the wisdom, to guide the future we are building.
Author: Roozbeh, born in Tehran, Iran (March 1984)
October 2025