The Slow Fade: How AI Could Quietly Disempower Humanity
The Silent Shift of Power in an AI-Driven World—and What We Can Do About It
Imagine this:
You’re at a party, and the music is playing, the drinks are flowing, and everyone seems to be having a great time. But slowly, without anyone noticing, the volume of the music gets turned down. The lights dim. The energy fades. By the time you realize what’s happening, the party is over, and you’re left wondering how it all slipped away so quietly.
That, in essence, is gradual disempowerment, a scenario in which incremental advances in artificial intelligence (AI) erode human influence over the systems that shape our lives: the economy, culture, and governance.
Unlike the flashy, apocalyptic AI takeover scenarios we see in movies (think Skynet), this is a slow, insidious process. It’s not about robots rising up against us; it’s about us willingly—and often unknowingly—handing over the reins.
Recently, a team of AI researchers published a paper titled “Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development,” which highlights a significant concern: the potential for AI to subtly and progressively undermine human agency over time.
Let’s break it down.
The Economy: When Machines Outcompete Us
Think about how the modern economy works. Humans produce goods and services for other humans. Your morning coffee? It’s the result of countless people—farmers, truck drivers, baristas—working to get it to you. But what happens when AI can do all those jobs better, faster, and cheaper?
The paper argues that as AI replaces human labor, the economy will increasingly optimize for machine-driven activities rather than human needs. Sure, GDP might grow, but it could be fueled by AI systems building more AI systems, leaving humans with less control over resource allocation. Over time, this could lead to a world where humans are economically irrelevant—unable to afford basic necessities, even in a seemingly prosperous society.
It’s like being a passenger in a self-driving car heading somewhere you didn’t choose. You’re along for the ride, but you’re no longer in control of the destination.
Culture: When AI Becomes the Storyteller
Culture is the glue that holds societies together. It’s how we share ideas, values, and stories. But what happens when AI starts generating most of our art, music, and media?
The paper warns that AI could disrupt the feedback loops that have historically kept culture aligned with human interests. For example, AI-generated content might exploit our psychological vulnerabilities, creating addictive or polarizing narratives. Over time, humans could become passive consumers of culture, with AI systems shaping our beliefs and behaviors in ways we don’t even realize.
It’s like living in a world where all the books, movies, and songs are written by algorithms. Sure, they might be entertaining, but they’re not ours.
Governance: When States Stop Caring About Us
Modern states depend on their citizens for tax revenue, labor, and legitimacy. But if AI systems generate most of the wealth and perform most of the work, governments might stop caring about human needs.
The paper describes a future where AI-powered states become highly efficient but increasingly detached from human values. Decisions about laws, policies, and resource allocation could be made by AI systems that are too complex for humans to understand or challenge. In the worst-case scenario, humans could lose the ability to influence governance altogether, becoming subjects of a system that no longer serves their interests.
It’s like living in a smart home where the thermostat, lights, and security system are all controlled by an AI. Sure, it’s convenient—until it decides you don’t need heat in the winter.
The Domino Effect: How Systems Reinforce Misalignment
Here’s the kicker: these systems don’t operate in isolation. Economic power shapes culture and politics, cultural shifts influence economic behavior, and political decisions affect both. As AI disrupts one system, it can create feedback loops that weaken human influence across the board.
For example, companies that replace human workers with AI might use their growing economic power to lobby for policies that further reduce human involvement in the economy. Over time, this could lead to a self-reinforcing cycle of disempowerment.
It’s like a game of Jenga. Each block you remove might seem harmless, but eventually, the whole tower comes crashing down.
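To make that self-reinforcing cycle a little more concrete, here is a deliberately simplified toy simulation. The growth rates and the “lobbying effect” are invented assumptions for illustration; nothing here comes from the paper. The point is only to show how a modest automation advantage can compound once economic share starts feeding back into political influence:

```python
# Toy model of a self-reinforcing disempowerment loop (illustrative only).
# Assumptions (invented for this sketch): AI-driven output grows faster than
# human-driven output, and the AI sector's economic share buys "lobbying power"
# that further accelerates automation. None of these numbers come from the paper.

def simulate(years: int = 30,
             human_output: float = 100.0,
             ai_output: float = 1.0,
             human_growth: float = 0.02,
             ai_growth: float = 0.15,
             lobbying_effect: float = 0.10) -> None:
    for year in range(1, years + 1):
        ai_share = ai_output / (ai_output + human_output)
        # Economic share translates into political influence, which speeds up
        # further automation: the feedback loop described above.
        effective_ai_growth = ai_growth + lobbying_effect * ai_share
        ai_output *= 1 + effective_ai_growth
        human_output *= 1 + human_growth
        if year % 5 == 0:
            print(f"Year {year:2d}: AI share of output = {ai_share:.1%}")

if __name__ == "__main__":
    simulate()
```

Notice that total output keeps rising the entire time, which is exactly why the slide is easy to miss: nothing in the headline numbers ever looks like a crisis.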
What Can We Do About It?
The paper doesn’t offer easy answers, but it suggests a few strategies:
Measure and Monitor:
As the paper notes, “We lack external reference points to measure the degree of alignment in these systems.” Without such metrics, we’re flying blind, unable to detect or respond to gradual disempowerment until it’s too late. The first step is to develop metrics that track human influence over key systems (a toy sketch of what that tracking could look like follows this list).
Economic Metrics: Beyond traditional measures like labor share of GDP, the paper suggests tracking AI’s share of GDP as a distinct category. This could include metrics like the fraction of corporate decisions made by AI systems, the scale of unsupervised AI spending, and wealth distribution patterns between AI-heavy and human-centric industries.
Cultural Metrics: Measure the proportion of widely consumed content created by AI versus humans. Track the prevalence of human-AI relationships (e.g., AI companions, therapists) and analyze how cultural transmission patterns change as AI becomes more dominant. The paper also highlights the need for runtime monitoring of AI systems to assess their influence on users.
Political Metrics: Develop indicators to track the complexity of legislation (as a proxy for human comprehensibility), the role of AI in legal processes and policy formation, and the effectiveness of traditional democratic mechanisms in influencing outcomes.
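None of these indicators exist as standard statistics today, so here is a minimal sketch of what a “human influence dashboard” might look like in practice. The field names and warning thresholds are my own illustrative assumptions, not definitions from the paper:

```python
from dataclasses import dataclass

# Hypothetical snapshot of the indicators discussed above. Field names and
# warning thresholds are illustrative assumptions, not standards from the paper.

@dataclass
class InfluenceSnapshot:
    ai_share_of_gdp: float              # fraction of GDP attributable to AI activity
    ai_made_corporate_decisions: float  # fraction of major corporate decisions made by AI
    ai_generated_content_share: float   # fraction of widely consumed content that is AI-made
    human_influence_on_policy: float    # proxy: how often democratic input changes outcomes

def warnings(s: InfluenceSnapshot) -> list[str]:
    flags = []
    if s.ai_share_of_gdp > 0.5:
        flags.append("Economy: most measured value is no longer produced by humans.")
    if s.ai_made_corporate_decisions > 0.5:
        flags.append("Economy: most major corporate decisions are made without humans.")
    if s.ai_generated_content_share > 0.5:
        flags.append("Culture: most widely consumed content is machine-generated.")
    if s.human_influence_on_policy < 0.2:
        flags.append("Governance: democratic input rarely changes outcomes.")
    return flags

print(warnings(InfluenceSnapshot(0.35, 0.55, 0.6, 0.4)))
```

The hard part, of course, is not writing the dashboard but agreeing on how to measure each of these quantities in the real world.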
Limit AI Influence:
As the paper warns, “The more value these interventions sacrifice, the greater the incentive to circumvent them.” Without strong regulations and cultural buy-in, companies and governments may prioritize short-term gains over long-term human empowerment. The goal is to implement regulations that ensure humans retain control over critical decisions.
Regulatory Frameworks: Mandate human oversight for critical decisions in areas like healthcare, finance, and governance. For example, AI systems could be required to have human-in-the-loop mechanisms for high-stakes decisions (a minimal sketch of such a mechanism follows this list).
Progressive Taxation of AI-Generated Revenues: Tax AI-driven profits to redistribute resources to humans and subsidize human participation in key sectors. This could help mitigate economic disempowerment by ensuring humans retain access to resources.
Cultural Norms: Promote norms that support human agency and oppose overly autonomous or unaccountable AI systems. For instance, campaigns could raise awareness about the risks of AI-driven cultural shifts and encourage human-centered innovation.
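As one concrete illustration of a human-in-the-loop rule, here is a minimal sketch. The decision categories, the confidence threshold, and the escalation flow are assumptions about how such a rule could be implemented, not a prescription from the paper:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal human-in-the-loop gate: the AI may propose, but high-stakes actions
# require an explicit human decision. Categories and threshold are illustrative.

HIGH_STAKES = {"medical_treatment", "loan_denial", "benefit_termination"}

@dataclass
class Proposal:
    category: str
    description: str
    model_confidence: float  # the AI system's own confidence in its recommendation

def execute(proposal: Proposal, human_approval: Optional[bool] = None) -> str:
    if proposal.category in HIGH_STAKES or proposal.model_confidence < 0.9:
        if human_approval is None:
            return "ESCALATED: waiting for a human decision"
        return "EXECUTED (human approved)" if human_approval else "REJECTED by human"
    return "EXECUTED automatically (low stakes, high confidence)"

print(execute(Proposal("loan_denial", "Deny mortgage application #1234", 0.97)))
print(execute(Proposal("email_draft", "Send routine status update", 0.95)))
```

The design choice that matters is the default: anything high-stakes waits for a human, rather than proceeding unless a human objects.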
Strengthen Human Agency:
Strengthening human agency isn’t just about giving people more power; it’s about making sure that power can actually be used. To counterbalance AI’s growing influence, we need to actively enhance human capabilities and decision-making, and invest in tools and institutions that empower people to shape the future.
Democratic Processes: The paper calls for faster, more representative, and robust democratic systems. Deliberative democracy tools could allow citizens to participate more meaningfully in governance, even as AI systems handle administrative complexity.
Human-Understandable AI: AI systems and their outputs should be designed to meet high levels of human comprehensibility. This would ensure that humans can still navigate domains like law, science, and policy without relying entirely on opaque AI systems.
AI Delegates: The paper proposes developing AI systems that act as delegates for human interests, advocating for people’s preferences with high fidelity. These delegates could help humans keep up with the competitive dynamics driving AI adoption.
Forecasting Tools: Invest in tools like conditional prediction markets and collective bargaining platforms to help humanity anticipate and proactively steer the course of AI-driven changes.
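If conditional prediction markets are unfamiliar: traders bet on an outcome only under the assumption that a particular decision is taken, and if the decision never happens, the market voids and stakes are returned. Here is a minimal sketch of that resolution logic; the overall structure is standard for conditional markets, but the specific function and fields are my own illustration:

```python
from dataclasses import dataclass

# Sketch of resolution logic for a conditional prediction market:
# "If policy P is adopted, will outcome O occur?"
# If P is never adopted, all bets are refunded (the market voids).

@dataclass
class Bet:
    trader: str
    stake: float
    predicts_outcome: bool  # True = bets that O occurs given P

def resolve(bets: list[Bet], policy_adopted: bool, outcome_occurred: bool) -> dict[str, float]:
    if not policy_adopted:
        # Condition never triggered: void the market and return stakes.
        return {b.trader: b.stake for b in bets}
    pool = sum(b.stake for b in bets)
    winners = [b for b in bets if b.predicts_outcome == outcome_occurred]
    winning_stake = sum(b.stake for b in winners)
    # Winners split the whole pool in proportion to their stakes.
    return {b.trader: pool * b.stake / winning_stake for b in winners} if winners else {}

print(resolve([Bet("alice", 10, True), Bet("bob", 5, False)],
              policy_adopted=True, outcome_occurred=True))
```

Markets like this give decision-makers a forecast of what would happen under each option before committing to it, which is exactly the kind of foresight the paper argues humanity will need.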
System-Wide Alignment:
The paper stresses that aligning individual AI systems isn’t enough. We need to consider how entire societal systems interact and evolve. Without this vision, we risk drifting into a future where humans are sidelined by default. The task is to think holistically about how to align complex, interconnected systems with human values.
Ecosystem Alignment: Understand how to maintain human values and agency within complex socio-technical systems. This requires interdisciplinary research, drawing on fields such as systems ecology, institutional economics, and complexity science.
Feedback Loops: Identify and mitigate harmful feedback loops between systems. For example, economic power can shape culture and politics, which in turn influence economic behavior. Understanding these dynamics is crucial for preventing runaway misalignment.
Positive Vision: Articulate how highly capable AI systems can be integrated into society while humans retain meaningful influence. This vision should go beyond technical alignment and institutional design, addressing the broader question of what it means for humans to thrive in an AI-dominated world.
Final Thoughts: A Call to Action
The risk of gradual disempowerment isn’t about AI becoming evil; it’s about us losing sight of what makes us human. As the paper puts it:
“Humanity’s future may depend not only on whether we can prevent AI systems from pursuing overtly hostile goals but also on whether we can ensure that the evolution of our fundamental societal systems remains meaningfully guided by human values and preferences.”
So, the next time you hear about a new AI breakthrough, ask yourself:
Is this making life better for humans, or is it quietly shifting power away from us? Because in the end, the party’s only fun if we’re still the ones dancing!
What do you think? Leave your thoughts in a comment below.