Author: Sleepy.txt
In 2016, The New Yorker published a feature on Sam Altman titled "Sam Altman's Manifest Destiny." He was 31 that year and already the president of Y Combinator, Silicon Valley's most powerful startup incubator.
The article included a detail about Altman's love of racing cars: he owned five sports cars and enjoyed renting planes. He told the reporter that he kept two bags packed, one of them an escape kit for when he needed to flee.
He also stockpiled guns, gold, potassium iodide (for protection against nuclear radiation), antibiotics, batteries, water, and Israel Defense Forces-grade gas masks, and he acquired land in Big Sur (on California's coast) so he could fly there for refuge at any time.

Ten years later, Altman has become the person most dedicated to warning of the apocalypse and most dedicated to selling the Ark. While warning the world that AI could destroy humanity, he personally accelerated the process; while claiming he wasn't in it for the money, he built a roughly $2 billion personal investment empire; while calling for regulation, he pushed out everyone who tried to apply the brakes. Rather than calling him a madman or a cunning conman, it is more accurate to say he is simply one of the most standard and most successful products ever manufactured by the giant machine that is Silicon Valley. His "destiny" is to forge humanity's collective anxiety into his own scepter and crown.

Altman's business model can be summed up in one sentence: package a business as a holy war over the survival of humanity. He began practicing this strategy during his Y Combinator days, transforming YC from a small workshop handing tens of thousands of dollars to early-stage startups into a vast entrepreneurial empire, and funding projects that generated no revenue but sounded ambitious. He told reporters that Y Combinator's goal was to fund "all important fields."

At OpenAI, he took this approach to its extreme. He sold a packaged worldview: AI doomsday plus a redemption plan. He was better than anyone at depicting the "extinction risk" of AI. He co-signed a statement with hundreds of scientists declaring that the risks of AI were comparable to those of nuclear war. Testifying before the Senate, he said that OpenAI's people were afraid of what AI could do, and that the public should be glad they were; he implied that the fear itself was a useful warning.
Every one of these statements could make headlines, and every one was free advertising for OpenAI. This carefully crafted fear is the most efficient lever for attention. Which excites capital and the media more: a technology that "improves efficiency," or one that "could destroy humanity"? The answer is self-evident.

Once fear has been implanted in the public consciousness, selling the solution becomes the natural next step, and for redemption he had a ready-made product: Worldcoin, which uses a basketball-sized silver orb to scan human irises around the globe, supposedly so that everyone can be paid a share in the AI era. The story sounds appealing, but the practice of exchanging money for biometric data quickly drew the scrutiny of governments. More than a dozen countries, including Kenya, Spain, Brazil, India, and Colombia, have halted or investigated Worldcoin on data-privacy grounds. That may not matter to Altman at all. What matters is that through this project he positioned himself as "the only one with a solution." Packaging fear and hope together and selling them as a set is the most efficient business model of this era.

**Regulation Is My Weapon, Not My Shackles**

How can someone who constantly talks about the end of the world do business? Altman's answer: turn regulation into a weapon. In May 2023, he testified before the U.S. Congress for the first time. Unlike other tech executives, who complain about regulation, he proactively asked: please regulate us. He proposed an AI licensing system under which only licensed companies could develop large-scale models. This presented him as a highly responsible industry leader, but at the time OpenAI was far ahead technologically, and a strict, high-barrier regulatory regime would primarily have served to keep potential competitors out.
However, as time went on, especially after competitors like Google and Anthropic caught up technologically and the open-source community began to rise, Altman's rhetoric on regulation shifted subtly. He began emphasizing, on various occasions, that overly stringent regulation, particularly mandatory pre-release reviews of AI models, could stifle innovation and be "disastrous." Regulation was no longer a moat; it was a stumbling block. When he held an absolute advantage, he called for regulation to lock it in; when that advantage waned, he called for freedom to seek breakthroughs.

He even attempted to extend his reach to the very top of the industry chain, floating a massive $7 trillion chip plan and courting capital such as the UAE's sovereign wealth funds, with the aim of reshaping the global semiconductor landscape. This far exceeds the remit of a CEO; it resembles the ambition of someone seeking to reorder the global industrial system.

Behind all this lies OpenAI's rapid transformation from a non-profit organization into a commercial behemoth. Founded in 2015, its mission was to ensure that AGI "safely" benefits all of humanity. In 2019, it established a "capped-profit" subsidiary. By early 2024, observers noticed that the word "safely" had been quietly removed from OpenAI's mission statement. While the corporate structure nominally remained capped-profit, commercialization accelerated sharply, accompanied by explosive revenue growth, from tens of millions of dollars in 2022 to an annualized run rate of over ten billion dollars in 2024, and a valuation that soared from $29 billion to hundreds of billions. When someone starts gazing at the stars and discussing the fate of humanity, it's worth first checking where their wallet is.
**Personal Image: The Immunity of a Charismatic Leader**

On November 17, 2023, Altman was fired by the board of directors he had personally helped select, for not being "consistently candid" in his communications with the board. What happened over the next five days was less a business battle than a referendum on faith. President Greg Brockman resigned; more than 700 employees, about 95% of the company, signed a petition demanding the board's resignation and threatening to defect en masse to Microsoft; Satya Nadella, CEO of Microsoft, OpenAI's largest investor, publicly sided with Altman, saying he would welcome him at Microsoft anytime. Ultimately, Altman returned in triumph, was reinstated, and purged almost all the board members who had opposed him.

How could a CEO his own board deemed not candid return unscathed, even wielding greater power? Helen Toner, one of the ousted board members, later revealed details: Altman had concealed from the board his actual control over the OpenAI Startup Fund; he had repeatedly misrepresented critical safety processes; the board had even learned of ChatGPT's launch from Twitter. Any one of these accusations would be enough to sink an ordinary CEO. But Altman was untouched, because he was not an ordinary CEO; he was a "charismatic leader."

This is a concept the sociologist Max Weber proposed a century ago: a kind of authority that comes not from position or law but from a leader's extraordinary personal charisma. Followers believe in him not because of anything he has done right, but because of who he is. This kind of faith is irrational. When such a leader errs or is challenged, the followers' first reaction is not to question the leader but to attack the challenger. So it was with OpenAI's employees. They did not believe in the board's procedural justice; they believed only in the "destiny" Altman represented, and they felt the board members were "hindering human progress."
After Altman's reinstatement, OpenAI's safety team was quickly gutted. Chief Scientist Ilya Sutskever, who had spearheaded the firing, left. In May 2024, Jan Leike, who co-led the safety effort, resigned, writing that at OpenAI "safety culture and processes have taken a backseat to shiny products." In the face of a charismatic leader, facts don't matter, processes don't matter, and safety doesn't matter. The only thing that matters is belief.

**The Prophets on the Assembly Line**

Sam Altman is only the newest and most successful model off Silicon Valley's "prophet" production line, and there are other familiar faces on it. Take Elon Musk. In 2014, he went around saying that with AI "we are summoning the demon." Yet his Tesla is, by his own description, the world's largest robotics company and one of AI's most complex applications. After breaking with Altman, he founded xAI in 2023 to challenge OpenAI directly; within about a year, xAI's valuation exceeded $20 billion. He warns of the coming demon while busily summoning another. This self-contradictory, two-track narrative is strikingly similar to Altman's.

Or take Zuckerberg. A few years ago he staked his company's future on the metaverse, burning through nearly $90 billion before concluding it was a dead end. He promptly reversed course, shifting the company's core narrative from the metaverse to AGI, and in 2025 announced a "Superintelligence Lab," personally recruiting its talent. Both bets involve grand visions for the future of humanity, both demand astronomical capital, and both adopt a savior's stance.

There's also Peter Thiel. As Altman's mentor, he is more like the chief architect of this production line. While investing in companies that promote the "technological singularity" and "immortality," he has also been buying land and building doomsday bunkers in New Zealand; he obtained New Zealand citizenship after spending only 12 days in the country.
His company, Palantir, is one of the world's largest data surveillance companies, with clients mainly being governments and the military.
While preparing for the collapse of civilization, he simultaneously crafts the sharpest surveillance tools for those in power. In the military operation against Iran in early 2026, Palantir's AI platform acted as the brain, integrating massive data streams from spy satellites, communications intercepts, drones, and Claude-model analysis, transforming chaotic information into real-time decision inputs and ultimately locking onto the target for the decapitation strike.

Each of these men plays a dual role: warning of impending doom while driving toward it. This is not a split personality; it is a business model the capital markets have validated as the most efficient one available. They capture attention, capital, and power by manufacturing and selling structural anxiety. They are at once products and shapers of this system, the "evil behind the grand narrative." Silicon Valley is no longer just a place that exports technology; it is a factory for modern myths.

**Why Does This Trick Always Work?**

Every few years, Silicon Valley births a new prophet who sweeps up the attention of capital, media, and the public with a grand narrative of apocalypse and redemption. The trick is repeated again and again, and it works every time, because each step targets a specific loophole in human cognition.

The first step: manage the rhythm of fear, don't just create it. The potential risks of AI are real, but they could have been discussed calmly. These people deliberately chose the most dramatic possible presentation, and they control the release of fear with precise timing: when to frighten the public, when to offer hope, when to sound the alarm, all meticulously choreographed. Fear is the fuel, but the timing and method of ignition are the real technology.

The second step: turn the incomprehensibility of the technology into a source of authority. To most people, AI is a completely opaque black box.
When something appears that is too complex to fully understand, people instinctively surrender the right to interpret it to "those who understand it best." These men understand this deeply and have turned it into a structural advantage: the more mysterious, dangerous, and beyond human comprehension they make AI sound, the more irreplaceable they become. The terrifying part of this logic is that it is self-reinforcing. Any external criticism is automatically neutralized because the critic "doesn't understand enough": regulators don't understand the technology, so their judgments are unreliable; academic critics haven't built models on the front lines, so their concerns are merely theoretical. In the end, only they themselves are qualified to judge themselves.

The third step: replace "interests" with "meaning," so that followers voluntarily abandon criticism. This is the hardest layer of the system to detect, and its most enduring source of power. They are never selling just a job or a product, but a story meaningful on a cosmic scale: you are deciding the fate of humanity. Once this narrative is accepted, followers voluntarily surrender independent judgment, because in the face of a mission concerning the "survival of humanity," questioning the leader's motives makes one look petty, even like an obstacle to history. It makes people hand over their critical faculties willingly, and to understand that surrender as a noble choice.

Put these three steps together and you will understand why this system is so hard to shake. It does not rely on lies; it relies on a precise understanding of human cognitive structure. It first creates an undeniable fear, then monopolizes the interpretation of that fear, and finally uses "meaning" to turn you into its most loyal propagator. And within this system, Altman is the most smoothly functioning model to date.

**Whose Destiny?**
Altman has always maintained that he owns no equity in OpenAI and draws only a symbolic salary, a cornerstone of his "working for love" narrative. Yet Bloomberg estimated his net worth in 2024 at roughly $2 billion, wealth that stems primarily from a decade of venture investments. His early stake in the payments company Stripe reportedly yielded hundreds of millions of dollars; his position in Reddit paid off handsomely at its IPO. He also invested in the nuclear-fusion company Helion, betting heavily on fusion while proclaiming that the future of AI depends on energy breakthroughs; OpenAI then negotiated a major electricity purchase deal with Helion. He says he recused himself from those negotiations, but the underlying conflict of interest is obvious to anyone.

He holds no direct OpenAI shares, but he has built a vast, self-centered investment empire around the company, and every grand sermon he delivers about the future of humanity adds value to that empire.

Now look back at the doomsday kit stuffed with guns, gold, and antibiotics, and at that piece of land in Big Sur he can fly to at any time. Does it read differently? He has never hidden any of it. The escape kits are real, the bunkers are real, and his fascination with the apocalypse is real. But he is also the person pushing hardest for the apocalypse to arrive. These two facts are not contradictory, because in his logic the apocalypse does not need to be stopped, only positioned for in advance. He is obsessed with playing the only one who sees the future clearly and prepares for it. Whether stocking a physical escape kit or building a financial empire around OpenAI, it is the same move: securing a winning position for himself in an uncertain future that he is personally accelerating.
In February 2026, he had barely finished endorsing a red line against "AI being used in war" before signing a contract with the Pentagon. This is not hypocrisy; it is an inherent requirement of his business model. The moral stance is part of the product; the commercial contracts are the source of profit. He needs to play the compassionate savior and the ruthless prophet of doom at once, because only by playing both roles can his story continue, and his "destiny" be revealed. The real danger was never AI, but those who believe they have the right to define the fate of humanity.