AGI - Additional Perspectives

This note explores how Artificial General Intelligence (AGI) intersects with economic systems, labor structures, environmental sustainability, and global power dynamics.

The emergence of AGI could be as economically and socially significant as the Industrial Revolution (likely even more so), since it targets the cognitive realm that underpins virtually all sectors.


Capital and Wealth Concentration

The pursuit of AGI is occurring within a capitalist framework, raising questions about who will own and benefit from such powerful intelligence. Without intervention, AGI could greatly concentrate wealth and power. Imagine a company or nation controlling an AGI that can out-invent, out-strategize, and out-negotiate any human: it would attain a near-monopoly on innovation and productivity.

Economic models suggest that if AGI can essentially replace human labor and operate at near-zero marginal cost, the traditional relationship between labor and capital shifts dramatically. One study argues that AGI could push the value of human labor to near zero, with capital owners reaping most gains, leading to extreme inequality and a crisis of demand (people can't earn to buy goods) [1]. To avoid systemic collapse, ideas like Universal Basic Income (UBI) or public ownership of AI are floated [1]. In other words, AGI might force a renegotiation of the social contract: if machines produce all wealth, how is that wealth distributed?
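
To see the mechanism behind that argument, consider a toy production model; this is an illustrative sketch, not the formalism of the cited study. Treat AGI compute A as a perfect substitute for human labor L:

```latex
% Toy model (illustrative only): AGI compute A substitutes for labor L.
Y = F(K,\; L + A), \qquad
w = \frac{\partial Y}{\partial L} = \frac{\partial Y}{\partial A}
% If an additional unit of A can be deployed at marginal cost c,
% no employer pays a human more than the machine substitute costs,
% so w <= c. As c -> 0 (near-zero marginal cost), wages w -> 0 and
% income accrues almost entirely to the owners of K and A -- the
% "crisis of demand" described above.
```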

We already see how digital technology tends to yield "winner-takes-most" markets (a few big tech companies dominate due to network effects and high fixed / low marginal cost economics). AGI could amplify this. If one company gets an AGI that can drive all sorts of innovations, it could enter any industry and outcompete incumbents. In a sense, an AGI could become the ultimate "productive asset." Seth Baum's survey noted 72 active AGI projects [2], but it's likely only a handful have the scale to succeed. If one of those strikes gold, the first-mover advantage might be enormous.

This raises concerns about monopolies unlike any seen before: perhaps an "AGI-Microsoft" or "AGI-Google" controlling key infrastructure of the economy. Traditional antitrust might not help if the AGI advantage is too decisive. Some, like economist Tyler Cowen, argue that market competition or diffusion will eventually make AGI widespread. But others fear the scenario depicted in sci-fi works (the Tyrell Corporation in Blade Runner, Weyland-Yutani in Alien), where a single megacorporation or state essentially has all the advanced AI and thus calls the shots globally.

Some advocate treating AGI (or its output) as a public utility rather than a private asset, to prevent an economic dystopia of "AI overlords."

Labor and Employment

AGI is often envisioned as automating not just physical or routine jobs (as AI does now) but also cognitive and creative jobs (potentially any job). This raises the prospect of mass unemployment or a work revolution.

Optimistic scenarios foresee a world where automation leads to a "post-scarcity" economy: humans are freed from drudgery to pursue education, leisure, and creativity, aided by AI, with wealth redistributed via mechanisms like UBI. Some imagine "fully automated luxury communism," where AI and robots provide abundance and society is reoriented toward the common good.

Pessimistic scenarios worry about technological unemployment: if our economic system isn't restructured, millions could be jobless and excluded. One fear is a neo-feudal outcome in which a tiny AI-owning elite enjoys extreme wealth while a large underclass goes unemployed. The pace matters too: if AGI breakthroughs come rapidly, society might not adapt in time, causing economic shocks [3].

Historically, technological unemployment has been mitigated by new job creation (the tractor displaced farm workers, but new jobs appeared in manufacturing and services). The crucial difference with AGI is the fear that all human skills could eventually be matched. If that's true, there may simply be fewer jobs that need humans. Some tasks might always require a human touch (therapy, artisanal craft), but these might be niche.

Human-AI Collaboration

A more optimistic labor scenario: humans might still do much of the work, augmented by AI, becoming "centaurs" (as in centaur chess, where human-plus-AI teams initially outperformed either alone). Every professional might have AI assistants boosting their productivity drastically. In that case, perhaps we transition into jobs that are more supervisory or creative, using AI as a tool. But if the AI gets too good, the human might become the junior partner or even unnecessary.

Work Ethic and Societal Values

Since the industrial era, identity and societal contribution have been tied to work. If AGI breaks that link, we might need new ways to value people beyond their economic output. Some propose a shift to an economy in which volunteering, creativity, and caregiving are socially rewarded even when not tied to survival via wages. This is a deep cultural shift.

Scandinavian countries and others are exploring shorter work weeks and decoupling income from full employment, approaches that might become more mainstream if AI shrinks labor demand.

Climate and Environmental Impact

AGI could influence climate change and the environment in two opposite ways.

The Energy Problem

Training advanced AI models today is already energy-intensive: large neural networks require huge computing clusters that consume electricity (often from fossil fuels) and water for cooling data centers [4]. GPT-3 was estimated to consume 1,287 MWh for training, emitting approximately 550 tons of CO2 [5]. If reaching AGI requires hundreds or thousands of times more computing, energy usage could skyrocket.
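
Those two baseline figures make the scaling worry easy to quantify. Below is a minimal back-of-envelope sketch, assuming (hypothetically) that energy scales linearly with compute and that the grid's carbon intensity stays fixed; real deployments would bend these curves with more efficient hardware and cleaner power.

```python
# Back-of-envelope scaling of AI training energy and emissions.
# Baseline figures are the GPT-3 estimates cited above; the scale
# factors are hypothetical, not predictions.

GPT3_ENERGY_MWH = 1_287   # estimated GPT-3 training energy
GPT3_CO2_TONNES = 550     # estimated emissions for that run

carbon_intensity = GPT3_CO2_TONNES / GPT3_ENERGY_MWH  # ~0.43 t CO2/MWh

for scale in (100, 1_000, 10_000):  # "AGI needs X times more compute"
    energy_mwh = GPT3_ENERGY_MWH * scale
    co2_tonnes = energy_mwh * carbon_intensity
    print(f"{scale:>6}x compute: {energy_mwh:>12,.0f} MWh, "
          f"{co2_tonnes:>11,.0f} t CO2")
```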

Critics note that an arms race for AGI could be an environmental nightmare if not powered by renewable energy. One analysis noted AI is "directly responsible for carbon emissions and millions of gallons of water consumption" at data centers [4].

AGI as Climate Solution

On the other hand, AGI might become the ultimate tool for solving environmental problems. A superintelligent system could potentially:

  • Design better solar cells
  • Optimize energy grids globally
  • Invent carbon capture methods
  • Model climate with unparalleled accuracy
  • Coordinate large-scale environmental projects

Sam Altman and others have suggested advanced AI will help "fix the climate" [6]. There is also hope that smarter systems will accelerate the discovery of clean energy or even geoengineering solutions.

There’s an analogy to nuclear tech: it could power cities or destroy them. AGI might likewise either help solve climate change or worsen it, depending on how it’s used and developed. Some suggest making AGI alignment not just about not harming humans, but also valuing the biosphere: aligning AGI with environmental sustainability too.

Global Geopolitics and Security

Nations see leadership in AI as a strategic asset. The advent of AGI could massively shift the balance of power internationally. A country (or alliance) that develops AGI first might gain decisive advantages in economics, military, and technological supremacy.

The AGI Arms Race

This drives a quasi-arms-race mentality: the U.S., China, Russia, and the EU are all investing heavily in AI. This competition can spur rapid progress but also raises the risk of reduced cooperation and safety shortcuts (rushing to beat rivals could mean less testing or international dialogue). There's fear of a Thucydides Trap in AI: tensions between an AI-leading superpower and others could escalate conflicts.

We already see moves like the US imposing export controls on advanced chips to China (because those chips are needed for training cutting-edge AI). This is essentially treating AI progress as a matter of national security (akin to restricting nuclear tech). If AGI development continues, such tech restrictions may intensify, potentially leading to an AI “cold war.”

Military AGI

A big concern is military AGI: an AI that could strategize in war, control autonomous weapons, or even launch cyberattacks. In a warfare context, an AGI might act faster than human decision loops, potentially leading to accidental conflicts if not properly checked.

Autonomous weapons already pose a risk of faster conflict escalation (a drone might retaliate in seconds, giving humans little time to intervene). An AGI controlling cyber operations might launch extremely sophisticated attacks or defenses at blinding speed. This challenges strategic stability. Former Google CEO Eric Schmidt has warned that AI will disrupt the military balance, advocating for dialogues akin to nuclear arms talks.

International Governance

This has led to calls for international agreements: perhaps a global treaty on AGI development akin to nuclear non-proliferation. Some have proposed an “AGI NPT (Non-Proliferation Treaty)” where countries agree to monitor and prevent any single project from running unchecked.

The difficulty is verification and trust. Unlike nukes, AI is soft: you can hide code more easily than a missile silo. This uncertainty can fuel mistrust. In 2023, we saw initial steps like the US and allies discussing common AI principles, and the UK hosting a global AI safety summit.

Global South and Development

If AGI automates manufacturing and services, countries that rely on labor cost advantage could see their development model upended (why outsource work to a low-wage country if an AI can do it cheaper at home?). This could exacerbate global inequality unless there’s technology transfer or new economic models.

If AGI amplifies productivity, ironically it could either flatten differences (since labor cost differences matter less if machines do everything) or increase them (the country/firm with AGI gets all production). If manufacturing becomes fully automated, companies might relocate factories back to their home country (since cheap labor abroad is irrelevant), potentially hurting developing economies.

On a positive note, AGI delivered via cloud could in theory provide expertise anywhere: a small village could have access to the best diagnostics, education, etc., via AI. But will it be accessible or behind paywalls? The digital divide could widen if AGI requires infrastructure only rich countries have.

These concerns suggest that global governance should consider equitable access to AGI’s benefits, perhaps via international organizations ensuring it’s used for UN Sustainable Development Goals.

Capitalism vs. Other Economic Systems

The combination of AGI + capitalism is especially uncertain. Some thinkers argue that advanced AI could either collapse capitalism or turbocharge it.

Collapse scenario: if profit-seeking eliminates consumer incomes (through job loss) and degrades the environment, the system becomes unsustainable and implodes, necessitating a new one (perhaps some form of socialism or a resource-based economy). If people can't earn, they can't consume; if they can't consume, profit can't be realized.

Turbocharge scenario: Companies with AI might find all sorts of new profit avenues. There’s speculation of AI-driven corporations that operate largely autonomously: an AGI “CEO” could optimize a corporation’s every move, potentially outcompeting human-led firms.

AGI and Post-Capitalism

The idea of AGI forcing post-capitalism is intriguing. Marxist theory predicted that at some point, automation would reduce the need for human labor so much that the labor-based economy would crumble, requiring a new mode of distribution. Some modern Marxists see AGI as the final automation that could validate that prediction.

Already we see that productivity gains from automation haven't translated into fewer working hours or broadly shared prosperity; often they have gone to capital owners. Without policy changes, AGI might continue that trend until, perhaps, the system breaks.

New Economic Models

Some foresee the need for economic redesign via mechanisms such as the following (a back-of-envelope funding sketch appears after the list):

  • Wealth redistribution mechanisms (UBI)
  • Data dividends
  • Collectively owned AI cooperatives
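
To make the first item concrete, here is a back-of-envelope UBI funding sketch; every number is a hypothetical assumption for illustration, not an estimate from the cited sources.

```python
# Back-of-envelope UBI arithmetic (all figures are hypothetical).

population = 330e6    # assumed population
annual_ubi = 12_000   # assumed stipend per person, USD/year
gdp = 27e12           # assumed GDP, USD/year

ubi_cost = population * annual_ubi
print(f"UBI cost: ${ubi_cost / 1e12:.1f}T/yr "
      f"({ubi_cost / gdp:.0%} of GDP)")

# If AGI lifted GDP by a fraction g, taxing the gain at rate t
# funds the UBI when t * g * gdp >= ubi_cost.
for g in (0.10, 0.25, 0.50):  # hypothetical AGI-driven growth
    t = ubi_cost / (g * gdp)
    print(f"growth {g:.0%}: requires taxing {t:.0%} of the gain")
```

Under these assumptions, a 10% AGI growth dividend cannot fund the stipend on its own (the required rate exceeds 100%), while 50% growth would need roughly a 30% levy on the gain; the point is only that the funding question is tractable arithmetic once the political choices are made.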

For example, if a government created a "national AGI" and distributed its services freely, that's a very different outcome than AGI under the control of a private monopoly. AI-run enterprises also raise legal and ethical puzzles: do we treat an AI-run company differently? Do antitrust laws apply if one AGI-enabled company can do the work of ten and underprice all competition?

Human Dignity and Purpose

Beyond material aspects, AGI raises questions of purpose. Work is not only income; it’s identity and meaning for many. If AGI takes over many roles, society will need to adjust how people find purpose.

Historically, industrial automation moved humans to more cognitive jobs. If AGI even handles cognitive and creative tasks, what is left for humans?

Optimistic view: This could herald a renaissance of leisure and art (as 19th-century utopians envisioned machine liberation leading humans to lives of culture, learning, and play). Maybe AGI handles the tedious work and humans focus on human-to-human care, relationships, and pursuits AIs don't directly fulfill. Even if AIs can simulate companionship, human authenticity might still be valued.

Pessimistic view: A crisis of meaning. In a world where your contributions are not needed, finding fulfillment becomes difficult. Idle populations can suffer psychological issues or social unrest, especially if inequality remains high.

An oft-mentioned need is refocusing education and culture toward lifelong learning, creativity, and social connection rather than equating worth with productivity, because AGI might decouple the two.

Political Systems and Governance

Authoritarian regimes might use advanced AI/AGI to strengthen surveillance and control (AI monitoring of citizens, predictive policing, censorship with AI). A worrying scenario is a totalitarian state powered by AGI that perfectly monitors dissent and manipulates public opinion with deep fakes and targeted propaganda: a 1984-like state with AI as the all-seeing eye.

Conversely, democracies might use AI to enhance citizen services or direct democracy (imagine AI that helps write laws reflecting people’s preferences optimally). So AGI could tilt the balance between open and closed societies depending on who harnesses it better.

AGI as Risk Multiplier

An often-overlooked perspective is AGI as a multiplier of existential risks beyond AI itself: in the hands of a malicious actor, it could be used to develop bioweapons or other catastrophic technologies much faster. So even if AGI itself is aligned, its use as a tool could amplify other risks (like an advanced AI designing a super-virus that a human terrorist then deploys).

This ties back to governance: we might need global oversight on how AGI is used in sensitive domains.

Policy Implications

AGI is not just a technological milestone; it's a force multiplier that will interact with every facet of human society and the planet. Its arrival (gradual or sudden) could challenge our economic system's assumptions, shake up labor markets permanently, either degrade or help restore our environment, and reconfigure international power hierarchies.

This is why discussions of AGI increasingly involve not just computer scientists, but economists, sociologists, ethicists, and policymakers. The stakes are as broad as they can be: ensuring that the “general” in AGI means general benefit, not just general capability.

To prepare, some advocate scenario planning and proactive policy:

  • Experiments with UBI or reduced work weeks now
  • Developing global AI governance frameworks early
  • Heavily investing in renewable energy for computing needs
  • Encouraging multi-stakeholder dialogue (private sector, governments, civil society) on AGI’s development

The hope is to avoid being caught off-guard by a technology that could otherwise exacerbate current crises (inequality, climate, conflict) if left solely to market or nationalist forces. Instead, with wise handling, AGI might become a tool that helps solve those crises: essentially, a double-edged sword that we must collectively decide how to wield.

Democratic Alternatives and Counter-Movements

Beneath the dominant AI discourse—shaped by Silicon Valley capital and state military interests—alternative visions persist. These counter-movements imagine AI development guided by different values: collective benefit rather than private accumulation, precaution rather than acceleration, democratic deliberation rather than technocratic control [7].

Indigenous Data Sovereignty

Indigenous communities worldwide are asserting control over data about their peoples, lands, and knowledge. Organizations like the First Nations Information Governance Centre in Canada and Te Mana Raraunga in Aotearoa New Zealand have developed frameworks for indigenous data sovereignty: the right of indigenous peoples to govern the collection, ownership, and application of data pertaining to them [7].

These frameworks challenge the extractive model underlying much AI development, where data is harvested from communities without consent or compensation. If AI systems are trained on indigenous knowledge—traditional ecological practices, linguistic patterns, cultural expressions—indigenous governance principles demand that communities retain authority over how that knowledge is used. This represents not just resistance to AI harms but a fundamentally different conception of information’s relationship to collective life.

Applied to AGI, indigenous data sovereignty raises profound questions: Who should control an intelligence system trained partly on humanity’s accumulated knowledge? What consent structures could govern such a creation? What forms of collective ownership might apply?

Worker-Led AI Governance

Labor movements are developing responses to algorithmic management and automation that go beyond resistance to job loss. Unions in various sectors have begun negotiating for algorithmic transparency—the right to understand how AI systems make decisions affecting workers—and for limits on surveillance and automated discipline [7].

Some initiatives go further, imagining worker ownership of AI tools:

  • Worker cooperatives developing AI systems for member benefit rather than investor return
  • Union-negotiated data trusts holding workplace data collectively
  • Demands for worker representation on AI governance boards

These approaches challenge the assumption that AI systems must be owned by capital. If workers collectively developed and governed the AI tools in their industries, the relationship between automation and employment would look quite different. Rather than machines replacing workers to increase profits for owners, worker-owned AI could reduce drudgery while distributing productivity gains equitably.

Community-Governed AI

Beyond the workplace, experiments in community AI governance are emerging. Some models include:

  • Data trusts where communities collectively hold and govern data, licensing it to AI developers only under agreed conditions
  • Municipal AI developed by and for local governments, focusing on public benefit rather than commercial extraction
  • Participatory AI auditing where affected communities evaluate AI systems for bias and harm

Cities like Barcelona have explored municipal data infrastructure as an alternative to corporate platforms. Academic projects have developed tools for community members to audit AI systems without technical expertise. These experiments remain small compared to corporate AI development, but they demonstrate that alternatives are possible [7].

Degrowth-Oriented Technology

Environmental movements increasingly question whether larger, more powerful AI systems are desirable at all. The degrowth technology perspective asks: What if we developed AI that required less energy, less computing power, less infrastructure? [7]

This might mean:

  • Low-power AI models designed for efficiency rather than maximum capability
  • Community-hosted AI systems running on local hardware rather than distant data centers
  • AI that augments rather than replaces human decision-making, reducing the drive toward ever-more-powerful systems
  • Moratoriums or limits on AI development that consumes unsustainable resources

From this perspective, the race toward AGI is not just potentially dangerous but actively destructive—consuming resources, concentrating power, and accelerating unsustainable economic growth. A different kind of AI development, constrained by ecological limits and oriented toward sufficiency rather than expansion, might look unrecognizable to those immersed in Silicon Valley’s growth imperative.

Feminist and Disability-Led Technology

Feminist technology projects often emphasize care, relationality, and situated knowledge over the abstraction and optimization that characterize mainstream AI. These approaches might prioritize:

  • AI that supports care work rather than replacing it
  • Systems designed with and for marginalized users
  • Attention to the embodied and emotional dimensions of intelligence
  • Resistance to surveillance and control applications

Similarly, disability-led technology initiatives center the needs and perspectives of disabled people, who are often excluded from AI development but disproportionately affected by its deployment. These movements have developed frameworks for accessible AI that might inform broader democratic approaches.

Global South Initiatives

While AI development concentrates in wealthy nations, communities in the Global South are developing alternative approaches that prioritize local needs:

  • Localized AI for healthcare, agriculture, and education in resource-limited settings
  • South-South collaboration on AI development outside Northern corporate frameworks
  • Indigenous and traditional knowledge integration into AI systems
  • Resistance to AI-enabled surveillance and border control

These initiatives challenge both the assumption that AI must be developed in Silicon Valley and exported globally, and the narrative that the Global South should simply await AGI's arrival. They demonstrate that communities can shape AI development to serve their own purposes [7].

The Precautionary Framework

Perhaps the most fundamental alternative is simply slowing down. The precautionary principle—requiring evidence of safety before deployment rather than evidence of harm before restriction—offers a framework for AI governance that inverts current practice [7].

Applied seriously, precaution would mean:

  • No deployment of AI systems until harms are assessed
  • Affected communities’ consent before systems impact them
  • Liability for AI harms resting with developers and deployers
  • Resources for ongoing monitoring and potential withdrawal

This approach conflicts fundamentally with Silicon Valley’s move-fast-and-break-things ethos. It assumes that societies can choose not to develop certain technologies, or to develop them slowly, rather than accepting technological change as an inevitable force to be managed.

Democratic Deliberation on AI Futures

Finally, some movements call for democratic deliberation on AI's trajectory: citizen assemblies, public referenda, or participatory governance processes that bring ordinary people into decisions currently made by corporate executives and technical experts [7].

Ireland’s Citizens’ Assembly on constitutional questions and France’s Citizens’ Convention on Climate offer models: randomly selected citizens deliberating on complex issues with expert input, producing recommendations that reflect informed public judgment. Applied to AI, such processes might address questions like:

  • Should we develop AGI at all?
  • Who should own advanced AI systems?
  • What limits should apply to AI surveillance?
  • How should automation’s benefits be distributed?

These questions are too important to leave to those who profit from particular answers.

Reframing “Intelligence”

At the deepest level, democratic alternatives challenge the concept of "general intelligence" itself. The dominant paradigm imagines intelligence as abstract, quantifiable, and individual—a property that can be measured and maximized. But many traditions conceive intelligence differently: as relational, contextual, and collective [7].

Indigenous knowledge systems often locate intelligence in relationships between beings rather than in individual minds. Feminist epistemologies emphasize situated knowledge emerging from particular standpoints. Disability perspectives highlight the diverse forms intelligence takes across different minds and bodies.

From these perspectives, the project of creating “artificial general intelligence”—a single system maximizing some abstract metric—may be fundamentally misguided. Intelligence is not a ladder to climb but a garden to cultivate: diverse, interdependent, adapted to particular places and purposes.

This reframing doesn’t mean abandoning AI development, but it suggests different goals: AI systems that enhance collective intelligence rather than replacing human cognition, that serve particular communities rather than claiming universal applicability, that remain accountable to democratic governance rather than pursuing autonomous optimization.

The alternatives are real. They exist in indigenous communities asserting data sovereignty, in unions negotiating algorithmic transparency, in municipalities exploring public data infrastructure, in environmental movements questioning growth imperatives, in feminist and disability communities centering marginalized perspectives. These movements offer not just resistance to AI harms but positive visions of different technological futures [7].

Whether these alternatives can scale to challenge corporate AI development remains uncertain. They face immense resource disadvantages and operate against powerful institutional inertia. But they demonstrate that the current trajectory is a choice, not destiny—and that different choices remain possible.


References

  1. Economic analysis of AGI impact on labor value

  2. Artificial general intelligence - Wikipedia

  3. Studies on technological unemployment and pace of change

  4. Data center energy and water consumption analysis

  5. AI carbon footprint studies

  6. Sam Altman, OpenAI

  7. James O'Sullivan, "The Politics of Superintelligence," Noema Magazine, 2025
