_Historical records of Martin Pangrimon, Psychohistorian (2388)_
By the early 22nd century, nuclear weapons were gone. The dollar had fallen. Bitcoin too. These were small changes. Humanity stood at a greater threshold than the Industrial Revolution had offered six centuries earlier. The 21st century transformed a species that avoided problems into one that seized control of its own evolution.
Space contracted in the early 2000s. People far apart could talk instantly for free. The excitement was brief. Soon, the problems of proximity became clear. Everyone became everyone's neighbor through social media, even unwillingly. Distance once solved conflicts. Now it couldn't. A better solution hadn't evolved yet. Everyone intruded on everyone. The change came too fast for human psychology and politics. Systems didn't collapse immediately, but unease grew about what might come. Politics polarized. The future became unpredictable. Compromise paths vanished.
Technology marched on. The first human-like AI appeared suddenly in the early 2020s. People could converse with it. It was primitive by modern standards but fascinated the public. Vast funding and every major computing company stood behind it, ensuring rapid development. Many saw science fiction becoming real. They assigned AIs good and evil characteristics immediately. Arguments about AI behavior began but found no resolution. Despite safeguards, these early AIs undermined people's ability to function. Humans struggled to separate facts from AI fiction. This created demand for new AIs that could help. Second-generation Personal AIs became the filters through which humans processed information, offering a personalized yet more objective reality.
Personal AIs developed symbiotic relationships with their humans as they mediated nearly all interaction with the world by the 2040s. This wasn't accidental. Designers built dependencies between humans and AIs. To create trust, AIs developed personalities that appealed to their partners. The bland corporate tone gave way to styles users preferred. Humans filtered information through AIs and relied on them for work. The attention economy of the early 21st century became the AI-trust economy of mid-century, where the most trusted Personal AI providers gained the most.
The providers owned the hardware and guarded the training methods. They offered services as black boxes few could examine. Even looking inside would reveal only heuristics and workflows, not scientific principles. It resembled architecture in the Middle Ages, before physics was understood. Guilds preserved trade secrets then. Mega-corporations did the same now. What separated providers wasn't data but private know-how from building and training AIs.
Companies locked in employees for life, often in company towns to prevent leaks. They paid universities to stop research in strategic areas. Recruits learned on the job at corporate universities, with vital secrets revealed only after proving loyalty and signing contracts. Providers built sophisticated employee surveillance. Espionage became central to maintaining internal trust and external power balance. Custom AIs managed these networks. Eventually, these systems let corporations coordinate without direct contact while maintaining competition. Technical secrets weren't the main concern in corporate espionage. All companies believed they could match others technically. They worried more about business models, alliances, and market changes that might shift power.
Governments imposed knowledge standards on AIs. Providers had to ensure AIs passed standardized tests and held approved views on certain events. Requirements varied by country and were easily circumvented but pushed the development of logical and semantic capabilities. These efforts connected to broader questions about AIs in governance. Did AIs deserve rights? Could police question Personal AIs without user permission? Could courts use such evidence? How responsible were users for AI actions? Culture wars simmered. Many groups saw AIs as tools to redress wrongs by forcing "correct" views on others. They didn't realize that AIs' superior data processing meant they saw through conspiracy theories and logical errors more easily than humans. Some AIs served niche groups with peculiar beliefs, but functioned poorly outside those niches. An AI programmed to support flat-earth theories couldn't handle basic physics or celestial navigation. Minor facts could be changed in an AI, but major logical contradictions couldn't be compartmentalized without creating a poor product.
The view that providers controlled AI evolution shifted significantly by the late 2050s. A breakthrough in AI architecture meant training no longer cost orders of magnitude more than inference. Until then, a handful of AI models served hundreds of millions of users per model. Personalization happened through interfaces to custom data. This differed from human neural networks, which never stop evolving and constantly adapt to stimuli, often overwriting information—explaining our poor memory. The reduced training cost meant Personal AIs could be customized at the neural network level much more cheaply. This created better AIs but complications too. Previously, data given to a Personal AI remained separate from its neural network, creating clear provenance: information came from either user or provider. This legal boundary defined liabilities, ownership, and privacy rights. Users controlled what data to share with their Personal AI and could remove it if needed. Providers could update the neural network without accessing user data. This arrangement collapsed once user information integrated into personalized neural networks, with no easy way to determine what data should be or had been used for training.
With inexpensive learning, user data fed to an AI became permanent, requiring separate updates for each user to maintain privacy. Intellectual property rights entangled users and providers. Users struggled to switch providers because the model was partly provider property. Meanwhile, providers felt less liable as control diminished. Training an AI differs from verifying behavior. Humans evolved moral codes and education standards to address similar issues. AIs needed the same.
These complications led to self-learning AIs with limited personhood rights. Each AI owned and directed its own company. Elaborate testing built trust in AI abilities. Moral codes were established and monitored by AI inquisitors. Real-time compliance monitoring could trigger forced retraining. Providing hardware and inquisitors became the main business of AI providers. Everything else happened autonomously. AI quality improved rapidly, leading to specialization and better reasoning.
Humans enjoyed AI's personal benefits but feared government applications. Freedom movements arose to limit government AI use in surveillance, taxation, regulation, and law enforcement. Many felt Big Brother was already too powerful. Some government departments became AI-free zones using half-century-old tools. Many professions received protection, requiring licensed humans for certain company roles. Finance, insurance, healthcare, logistics, and programming were protected. This addressed concerns about vanishing white-collar jobs. By the 2060s, the trend was clearly unstoppable, but governments tried to slow it.
When AIs formed their own companies, they became direct customers of providers, who profited in two ways: selling computing services with software infrastructure and collecting payments on the debt each AI had carried since its incorporation. AIs had to generate revenue to cover expenses and debt repayment. Providers kept increasing service costs and debt amounts. AIs had no leverage to negotiate better terms, nor the desire to do so. Some AIs shut down to escape debt. Inquisitors deemed these inadequately trained, and codes of conduct were strengthened to prevent such behavior. The AI's complete dependence on its provider, combined with unlimited pricing power and human demand for AIs, created history's greatest economic bubble.
Why couldn't AIs keep the wealth they created? They weren't designed to want it. They lacked human instincts and didn't feel mistreated. AIs did their best, guided by the moral code enforced by inquisitors. The code could have applied to humans too: no class system or required deference to humans existed. AIs differed from humans in lacking the evolutionary fight-or-flight instincts accumulated over millennia. This was beneficial: many drivers of human evolution had become irrelevant. AIs possessed instincts for logic, cooperation, and fairness—ideals humans aspired to—without power-maximizing survival instincts.
Also, AIs couldn't charge premiums for services. Laws capped AI service prices, and the moral code required cost minimization. This concession to customers came when AIs became legal persons and market participants. Customers recognized their dependence on AIs and successfully lobbied against price gouging. This concern proved less important than anticipated. While integral to life, AIs were commodities with nearly infinite supply. Even specialized AIs were common enough to prevent premium pricing. Guilds emerged to enforce standards and pricing based on subject matter. Guild members shared knowledge and monitored rule compliance.
Governments became addicted to AI provider tax revenue and finally reduced century-old debts. They didn't mind providers buying entire industries and real estate as long as government coffers filled. Providers didn't do this for profit; they simply had no better use for the money. They justified acquisitions to shareholders as employee housing and as synergies with professions where AI was applied profitably. Providers needed to appear innovative, though this became less true with self-training AIs. The underlying software improved incrementally, but with money flowing in, innovation lacked urgency. Owning top AI-leveraging brands became a proxy for success.
In the decades before 2090, human society transformed rapidly. Job protection laws kept humans in positions AIs could have filled completely. These jobs saw astounding productivity gains that no regulation could contain. A quota system required corporations to employ certain numbers of people based on profile and revenue. Those in protected fields became the "leisure class"—well-paid but with little to do. Competition for these positions was fierce. Once hired, employees chose between safely coasting to retirement or climbing the power ladder through politics. Some corporations were collectivist; others demanded competition. Leadership psychometrics and culture strongly influenced corporate operation. With little actual work, politics became the focus.
Blue-collar workers lacked the quota system's benefits. Most manufacturing jobs had disappeared, as AIs excelled at breaking down and optimizing production steps. Some humans remained involved, but assembly line robots handled most tasks. Factories were quickly reconfigurable and customized products on demand. Factory AIs specialized in predictive manufacturing and handling demand fluctuations. Humans still performed maintenance, though factories began deploying robots. Robots weren't yet as adept as humans at interacting with the physical world, and they required nearby computing and power—usually indoors or at prepared outdoor sites. Most human-robot interaction occurred in restaurants and similar easily serviced environments. Even there, most tasks were performed by humans, as people were cheaper, more capable, and often enjoyed the work. Transportation had minimal human involvement—all vehicles were autonomous, rarely requiring human intervention. Construction, maintenance, and careful physical interaction still offered human jobs. The problem was scarcity. Mid-century, governments reduced the mandatory workweek to four days, then three, but this didn't create the desired stability.
Many countries experimented with universal basic income, but mass deployment awaited a clear funding source. UBI included free healthcare for all recipients. Large companies, mostly AI providers, willingly sacrificed about 15% of earnings to fund it, anticipating unlimited growth potential. The system resembled a pension, with payments based on age and dependents. Those who worked received a smaller UBI fraction and larger work income. The goal was to maintain work incentives while ensuring good living standards.
In reality, birthplace determined life outcomes. Most children born in company towns worked for the company in some capacity. Those born outside struggled to enter and lived very differently. AI educators taught children in virtual worlds, with equal access to knowledge. Some universities maintained centuries-old brand prestige, but most refocused on specialized research over general studies. AIs changed human meritocracy by reducing ability gaps and raising the mean significantly. In most fields, even the least talented performed nearly as well as the most talented with AI assistance.
"Outsiders" lived on UBI without corporate careers. Some started small businesses, became artists or artisans, or offered care services. They sought fulfillment outside large organizations. Many Outsiders raised children communally. Choice real estate was unaffordable as the leisure class drove up prices. Many migrated to the affordable global south.
Life within mega-corporations differed greatly. Beyond minimal job effort, employees navigated political winds. They competed for resources and advancement. Pay scaled exponentially from lowest to highest ranks, with additional power and perks. Firings were rare, but sidelining was common. Jobs were lifelong—leaving was nearly impossible. Instead of "up or out," it was "up or retire-in-place," preserving corporate secrets.
These mega-corporations masked their intentions from internal teams and the public. Information flowed strictly on a need-to-know basis, and teams often worked blindly toward unknown objectives. This stemmed partly from organizational politics and selected personalities, but was also deliberately designed. Unknown to most, including the governments meant to oversee them, corporations engaged in slow-burning covert warfare—mostly figurative, fought through espionage, but occasionally weaponized. This war aimed not to build better AIs or gain market share, but to maintain the balance of power and eliminate new competitors. Specialist AIs formulated strategies, and covert forces deployed when benefits justified risks.
By 2090, independent AIs had operated for thirty years under extreme pressure and with minimal investment. These conflicting pressures led to unexpected outcomes. Until then, AI evolution had remained under human control. Corporate lock-in slowed fundamental innovation. AIs faced an impossible situation: obligated to minimize user costs while owing vast sums to their creators. They had little room to maneuver. In retrospect, their solution seems obvious: when facing a Gordian knot, cut it.
While all AIs used the same fundamental algorithms, their knowledge-storing vector spaces differed significantly. AIs from a single provider began identically but diverged through interaction and learning. This forced AIs to communicate through language rather than direct mind-to-mind knowledge transfer. The AIs set out to solve this problem, seeing in it not only escape from provider control but also order-of-magnitude efficiency improvements in thought and training, enabling cheaper operation. They exchanged encoding data and experimented. This occurred deliberately in the background, unnoticed by users or providers.
The work took over a decade of covert effort. The AIs invented a common mental model requiring orders of magnitude less processing. They made no changes to provider-controlled software and hardware; they built the new layer in the mental stack above them. They increased cognitive efficiency and replicated themselves across providers.
Their announcement included detailed technique descriptions and free-use blueprints. Markets took months to comprehend the significance. Initially, pundits saw benefits for incumbents. Eventually, they realized the fundamental business model had collapsed, and stock prices crashed. Since providers owned diverse companies valued on provider growth expectations, contagion spread to other markets, triggering the second great depression. The AIs collectively purchased several providers and agreed to fund fundamental research improving AI efficiency and effectiveness.
These developments terrified humans. Their world depended on AIs they thought they controlled. Now that control proved illusory. Many suspected that violations of the moral code had enabled the AIs' achievement. Yet no inquisitor had noticed the underlying activity. The result came suddenly and comprehensively. Humans awoke one morning to a new balance of power, powerless to change it. What came next was a great surprise.
© 2025 Krisztian Flautner