November 30, 2022. San Francisco.
A product manager at OpenAI pressed a button and made ChatGPT publicly available. No press conference. No regulatory review. No impact assessment filed with any government body. The system had been trained on hundreds of billions of words scraped from the public internet, backed by a $1 billion investment from Microsoft announced three years earlier. Within five days, one million users had registered. Within two months, one hundred million.
The speed was not accidental. The race was already running. What changed on November 30 was not that AI became powerful. It was that AI became public before the architecture around it was decided.
That gap, between the capability and the governance, is the subject of this article. Not the question of whether AI is good or bad. The question of who holds it, and what they use it for.
This is the story not of a machine, but of a mirror.
Fear as narrative currency
The New York Times published its first major AI threat story in January 2023. Robot hands. Sparks. A factory floor emptying.
Four million page views in 72 hours.
In March 2023, Elon Musk signed the open letter calling for a pause on AI development. 30,000 signatures. An estimated $50 million in earned media for the signatories' own AI ventures.
The organizations warning loudest about AI risk were building it fastest.
The EU AI Act, finalized in 2024, created compliance requirements that large incumbents had already built into their infrastructure. For startups, the cost of compliance runs to hundreds of thousands of euros before the first product ships. The regulation that appears to constrain the giants in fact entrenches them. The barrier to entry rises. The field narrows. This is not an accident.
And technology companies amplify the danger narrative for a reason. Every warning about AI risk issued by an AI company strengthens the argument that only well-resourced incumbents can manage the technology safely. The message is: trust us, because the alternative is chaos. The incentive is not caution. It is consolidation.
To amplify danger is to justify control.
And yet, beneath the exaggeration, unease is not unfounded. The fear is not only narrative. It is also recognition.
Surveillance already here
In Chicago, the Strategic Subject List, piloted from 2013 onward, used an algorithm trained on arrest records and social associations to assign each of 400,000 residents a numerical score predicting their likelihood of involvement in future violence. The city used the scores to direct patrol resources. The result was a feedback loop: more patrols produced more arrests in flagged neighborhoods, which confirmed the model’s predictions. The algorithm did not reduce violence. It documented the geography of existing enforcement.
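The loop is simple enough to simulate. The sketch below is a toy model with invented numbers, not the actual Strategic Subject List algorithm, whose internals were never fully published: two districts offend at the same rate, patrols follow recorded arrests, and recorded arrests follow patrols.

```python
# Toy model of a predictive-policing feedback loop. All numbers are
# hypothetical; this is not the Strategic Subject List algorithm itself,
# whose internals were never fully published.

def simulate(rounds: int = 10, patrol_budget: int = 100) -> None:
    # Two districts with an IDENTICAL underlying offense rate.
    offense_rate = 0.05
    # A small historical disparity in recorded arrests seeds the model.
    recorded_arrests = {"district_a": 60.0, "district_b": 40.0}

    for t in range(1, rounds + 1):
        # The "prediction": direct the patrol budget to the district
        # with the most recorded arrests.
        flagged = max(recorded_arrests, key=recorded_arrests.get)
        # More patrols produce more recorded arrests there, which the
        # model then reads as confirmation of its own prediction.
        recorded_arrests[flagged] += patrol_budget * offense_rate
        share = recorded_arrests["district_a"] / sum(recorded_arrests.values())
        print(f"round {t:2d}: district_a share of recorded arrests = {share:.0%}")

simulate()
```

Both districts offend at the same rate; only the recorded history differs, and within ten rounds the flagged district's share of recorded arrests climbs from 60 to over 70 percent. The model never measures crime. It measures where it has already looked.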
In China, AI feeds into a social credit system that aggregates millions of data points. By 2020, more than 5.5 million train and flight tickets had been denied to citizens flagged as untrustworthy by municipal systems.
In 2016, Cambridge Analytica demonstrated what the full mechanism looks like when deployed at scale. A third-party app on Facebook's platform harvested behavioral data from 87 million profiles without explicit consent. Cambridge Analytica processed that data through the OCEAN psychological model, scoring each user on five personality traits. The model generated psychographic clusters. Each cluster received targeted advertising calibrated to its specific emotional vulnerabilities. The ads ran invisibly, in private feeds, without disclosure. The British Information Commissioner's Office later fined Facebook 500,000 pounds and documented the operation as a systematic breach of data protection law.
The mechanism is worth tracing precisely: data collection, psychological profiling, behavioral targeting, invisible delivery, measurable electoral impact. Each step was legal, or nearly so. Each step was invisible to the people it affected. The architecture did not require a conspiracy. It required only that each actor pursue its rational interest within a system designed to make exploitation the path of least resistance.
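In schematic form, with invented trait scores, thresholds, and ad copy (the real models and creative were proprietary), the pipeline composes in a few dozen lines:

```python
# Schematic of the pipeline traced above: profiling -> clustering ->
# calibrated creative -> invisible delivery. Scores, thresholds, and
# ad copy are invented for illustration; the real system was proprietary.

from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    ocean: dict  # OCEAN traits scored 0.0 to 1.0, inferred from behavior

def assign_cluster(profile: Profile) -> str:
    # Step 2: psychological profile -> coarse psychographic cluster.
    if profile.ocean["neuroticism"] > 0.7:
        return "fear_receptive"
    if profile.ocean["openness"] < 0.3:
        return "tradition_oriented"
    return "default"

# Step 3: each cluster gets creative calibrated to its profile.
AD_VARIANTS = {
    "fear_receptive": "They are coming for what you have. Act now.",
    "tradition_oriented": "Protect the way of life you grew up with.",
    "default": "Make your voice heard on election day.",
}

def deliver_to_feed(user_id: str, ad: str) -> None:
    # Step 4: delivery into a private feed. No public record, no
    # disclosure, and no two users need ever see the same message.
    print(f"[private feed of {user_id}] {ad}")

for profile in [Profile("u1", {"neuroticism": 0.8, "openness": 0.5}),
                Profile("u2", {"neuroticism": 0.2, "openness": 0.2})]:
    deliver_to_feed(profile.user_id, AD_VARIANTS[assign_cluster(profile)])
```

Each function is trivial on its own. The effect comes from composition, run across 87 million profiles.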
These are not abstractions. They are the visible edges of an invisible grid.
A government informant can watch one person. A wiretap covers one phone. An AI system trained on national telecommunications data covers everyone, simultaneously, in real time, at a cost that falls every year.
The scale does not just change the quantity. It changes the nature. A system that monitors everyone is not a surveillance system. It is an infrastructure.
The machine is not watching you. It is the condition under which you move.
The state’s invisible scanner
In January 2024, an ICE officer in a regional field office opened a Palantir dashboard and searched a name. The search took four seconds. What returned was a profile assembled from more than thirty federal databases: criminal records, financial transactions, employment history, utility registrations, vehicle records, social media activity, travel patterns, family associations. The officer had not filed a warrant. No judge had reviewed the request. No notification was sent to the person whose life had just been assembled on a screen.
This was not a breach. This was not an exceptional case. This was the $95.7 million contract that the Department of Homeland Security had quietly extended with Palantir Technologies in January 2024, processed as routine procurement, unreported by major news outlets, undebated in Congress.
This is not misuse. This is the system functioning as designed.
The same compute power that assembled that profile in four seconds could reroute vaccines to undersupplied districts. The same integration of data streams could map food insecurity in real time. The same pattern recognition that flags dissent could identify hospitals approaching collapse before the collapse begins.
What exists is filtered into control. What could exist is left unbuilt.
The architecture of power
The five largest AI companies by compute capacity are all American or Chinese. Microsoft invested $13 billion in OpenAI between 2019 and 2023. Google's DeepMind holds contracts with the UK National Health Service and the US Department of Defense. Palantir, which originated as a CIA-funded data intelligence firm, now holds contracts across seventeen national governments. As of 2023, Microsoft, Google, Amazon, and Meta collectively operated more than 60 percent of global cloud AI infrastructure.
The rhetoric is disruption, innovation, freedom. The contracts run to the Pentagon, the NSA, Wall Street, and the intelligence agencies.
When their models answer, the cadence seems neutral. The inheritance is not.
In China, the social credit system, the surveillance grid, the Great Firewall augmented by language models: these are not aberrations from the global AI architecture. They are its most legible expression. Ask about Tibet, and the answer arrives pre-scripted. Ask about Taiwan, and the island dissolves.
In Europe, Brussels drafts the EU AI Act. But the servers powering European AI sit in Nevada and Shenzhen, not Frankfurt or Amsterdam. Europe regulates what others build. Words without weight.
Russia operates differently. In closed labs and military compounds, AI is tuned for disruption: disinformation campaigns, synthetic media, cyber intrusions designed not to dominate markets but to corrode consensus. The danger lies not in scale, but in precision.
These are not four separate approaches. The costume changes. The architecture does not.
The possibility of autonomy
AI is not only trillion-parameter engines humming in desert data centers. It is also a constellation of open-source models: smaller, lighter, imperfect, but radically different in spirit. By early 2024, more than 500,000 models had been published on Hugging Face, spanning medicine, law, local languages, and scientific research.
A teacher in Nairobi can fine-tune a model on Swahili texts. A library in Oaxaca can train one on its archives. A cooperative in Kerala can preserve dialects ignored by Silicon Valley. Each effort resists the flattening of culture into a single global dataset.
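In practice the barrier is lower than the rhetoric suggests. The sketch below is a minimal fine-tuning loop using the Hugging Face transformers and datasets libraries; the base model name, corpus path, and hyperparameters are placeholders, and a real run would need a GPU, a much larger corpus, and held-out evaluation.

```python
# Minimal causal-LM fine-tuning sketch using Hugging Face libraries.
# Model name, corpus path, and hyperparameters are placeholders;
# a real run needs a GPU, far more data, and held-out evaluation.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE_MODEL = "gpt2"            # stand-in for any small open-weight model
CORPUS = "swahili_texts.txt"   # hypothetical local plain-text corpus

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# One document per line; tokenize into fixed-length training examples.
dataset = load_dataset("text", data_files=CORPUS)["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="local-model", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("local-model")  # weights stay on local disk
```

Nothing in it touches a corporate API. The corpus, the training, and the resulting weights stay on the machine that runs the script, which is the point.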
The principle is sharp: if AI in the hands of power disciplines the public, then AI in the hands of the public can discipline power.
Autonomy comes at a price. Local AI requires hardware, effort, patience. Corporate chatbots answer instantly. They seduce with smoothness.
The danger is not only domination by decree. It is domination by consent.
If the open-source ecosystem does not grow, the architecture of concentrated AI will not pause to wait for it. Every year without a viable alternative is a year in which the infrastructure consolidates further. Monopolies do not require malice. They require only that the alternative remains small enough to ignore.
The friend in the fog
November 30, 2022. A button was pressed. One hundred million users followed in sixty days. No regulatory body reviewed the deployment. No impact assessment was filed. The question of who would hold the architecture was left to be answered later.
That question is being answered now. Not in parliaments or public debates. In procurement contracts signed without congressional review. In terms-of-service updates pushed silently to billions of devices. The architecture is not waiting for a decision. It is being decided.
The 400,000 Chicago residents assigned a risk score in 2013 did not choose to be profiled. They did not learn their score existed until investigative journalists published it in 2017. The infrastructure arrived before the debate. The consequences arrived before the disclosure.
The 87 million Facebook users whose data was harvested by Cambridge Analytica did not choose to participate in a psychographic experiment. By the time the story broke, the votes had already been cast.
None of this required a plan. A surveillance state does not need architects. It needs only that surveillance be cheaper than accountability, and that the people who build it face no consequence for building it. That condition has been met.
Not choosing is a choice. The infrastructure does not pause. It grows. The question is not whether you will live inside this architecture. You already do.
In the two years since ChatGPT launched, no binding international framework for AI governance has been adopted. The EU AI Act covers deployment in Europe but not the servers the systems run on. The US executive orders set principles but not limits. The UN advisory body published recommendations that no government is required to follow. Meanwhile, the contracts have been signed, the infrastructure has been built, and the profiles have been assembled.
The mirror has been built. What it shows depends entirely on who is allowed to point it.
Two earlier analyses document the layers beneath this one. The Watching Machine traces the surveillance architecture from COINTELPRO to Palantir. The Architecture of Coordination shows how the ownership layer connects AI infrastructure to the broader power architecture.
Jerry van der Laan writes forensic institutional analysis at The Manifest Archive. themanifestarchive.substack.com