- Effective Accelerationists, tech’s most staunchly pro-AI faction, are gaining momentum.
- The movement supports AI growth and development — and profits — without guardrails or regulation.
The Effective Accelerationism movement — a staunchly pro-AI ideology that has Silicon Valley split over how artificial intelligence should be regulated — appears to be walking a razor’s edge between being a techno-libertarian philosophy and a nihilistic, even reckless, approach to advancing one of the world’s most significant technological developments.
While its public proponents, like Garry Tan, CEO of the startup accelerator Y Combinator and cofounder of the venture firm Initialized Capital, insist being “e/acc” is not about replacing humans with robots, it’s not exactly not about replacing humans with robots.
A riff on the effective altruism, or “EA,” philosophy touted by tech influencers like Sam Bankman-Fried and Elon Musk, e/acc took off in 2023, though its exact origins remain unclear. The movement has attracted a cast of unlikely characters, including venture capitalist Marc Andreessen and convicted fraudster Martin Shkreli.
“EA and e/acc are mostly the same people,” Emmett Shear, the former interim CEO of OpenAI, said in an interview with Meridian. “Their only difference is a value judgment on whether or not humanity getting wiped out is a problem.”
E/acc (pronounced ee-yack) adherents believe the creation of an AI singularity, where technology advances beyond the point of human control, is not only unavoidable but desirable: a necessary part of evolution beyond humanity.
And investing in getting there could mean big money. The controversy and interest e/accs stir up feed an AI industry whose growth is reshaping markets: Goldman Sachs estimates generative AI could increase global GDP by $7 trillion, or 7%, over the next 10 years.
‘No affinity for biological humans’
A jargon-filled website spreading the gospel of Effective Accelerationism describes “technocapitalistic progress” as inevitable, lauding e/acc proponents as builders who are “making the future happen.”
“Rather than fear, we have faith in the adaptation process and wish to accelerate this to the asymptotic limit: the technocapital singularity,” the site reads. “We have no affinity for biological humans or even the human mind structure. We are posthumanists in the sense that we recognize the supremacy of higher forms of free energy accumulation over lesser forms of free energy accumulation. We aim to accelerate this process to preserve the light of technocapital.”
Basically, AI overlords are a necessity to preserve capitalism, and we need to get on with creating them quickly.
The site’s first blog post, written by anonymous e/acc proponents @zestular, @creatine_cycle, @bayeslord, and @BasedBeffJezos (who Forbes later confirmed is Guillaume Verdon, a former Google engineer who went on to found the AI startup Extropic), reads, “We haven’t seen anything yet.”
While e/accs say they have no love for biological humans, they still describe their movement as “pro-human” — but to them, it’s technology that will save us, not ourselves.
E/accs are generally reluctant to indulge even the most earnest questions about AI safety. In response to questions from Business Insider, Shkreli warned fellow accelerationists in a post on X not to talk to the press, calling it “the least e/acc thing you can do.”
Investing in a post-humanist future
Making “sentience more varied,” as the e/acc blog states, is the inevitable outcome of unrestrained AI development. And business is booming.
Tan is ranked as one of tech’s top investors on Forbes’ Midas Seed List and, through Y Combinator, has invested in more than 100 AI startups.
Billionaire Andreessen, who released a 5,000-word manifesto detailing his support for rapidly developing AI, has also invested heavily in the industry, including in OpenAI, per Forbes.
Shkreli, who has “e/acc” proudly written next to his username on X, established an AI business called Dr. Gupta following his release from prison for securities fraud. The service is a “virtual healthcare assistant” that allows users to seek medical advice from a chatbot. The bot has been heavily criticized by experts, who have raised concerns about the ethics of a health bot run by someone convicted of fraud.
Extropic AI, Verdon’s startup, recently raised $14.1 million in seed round funding, per a company blog post. The post begins with an otherworldly dispatch from the “omnipresent generative AI” future. The company is developing microchips that run LLMs (think ChatGPT-type models), according to The Information.
Verdon told Forbes that his vision of a technocapital future involves heavy investment in solving the social issues pressing “the culture.” It echoes similar sentiments from tech bros who think that robots and AI will make the world a better place — while also making them very rich.
So the power to decide our future, the accelerationists say, will be in the hands of a group of Silicon Valley bros who celebrate their “continued cultural superiority” over everyone else.
Opponents say that future is bleak.
“The irony is that these are people who firmly believe that they’re doing good,” Nancy Connell, a biosecurity researcher at Rutgers University, told Politico. “And it’s really heartbreaking.”
‘It is like thinking that squirrels can control humanity.’
For e/accs, the world is simple: AI will solve our problems because we want it to, and we’re the ones programming it. Opportunity is endless, and at the end of the AI rainbow is a singularity worth more than gold.
A public proponent of Effective Accelerationism who spoke to Business Insider said that the movement wants people who can allocate capital to the e/acc cause and further their goals. He was granted anonymity to talk frankly about the movement without risk to his professional relationships, but his identity is known to Business Insider.
He said proponents believe engineers will only invest in an evolution of AI that would benefit humans. AI safety experts just don’t see it that way: the e/acc movement has been heavily criticized by cybersecurity experts. One researcher called it “a dangerous unaccountable ideology inspired by replacing humanity with AI.” Another said the movement has “no social vision.”
It’s also a naive way of thinking about superintelligence, Roman Yampolskiy, the director of the Cyber Security Laboratory at the University of Louisville, told Business Insider.
“No one, even in e/acc, will suggest that they have a working superintelligence control mechanism or even a prototype for one,” Yampolskiy said. “Why would anyone think that it is possible to indefinitely control a superintelligent (god-like) machine? It is like thinking that squirrels can control humanity.”
Yampolskiy is trying to warn the industry that the future of AI overlords that these e/accs are quickly trying to usher in could be really, really bad. Terrifying even. And it’s better to be safe than sorry.
Despite Yampolskiy’s years of research, e/accs might see him and others invested in AI safety as pessimistic doomers — or, in e/acc vernacular, “decels.” But, as Yampolskiy pointed out to Business Insider, many e/accs are neither scientists nor AI safety researchers. This is his wheelhouse — not theirs.
‘Either we stop, or we all die.’
Well-developed AI has the power to help screen for cancer, increase accessibility for disabled people, conserve wildlife, combat world hunger, and even aid in the climate crisis. But critics of the e/acc movement argue those practical applications would become immediately irrelevant should AI begin thinking for itself, setting its own goals for humanity’s best interest without humans to control it.
But as e/accs seek to defy the warnings of safety researchers, what about the rest of us?
E/accs want to reshape society radically, alter how we work and interact, and redefine what it means to be alive, but the general public doesn’t have much of a say in AI — or enough money to have a voice.
Yampolskiy said the attention the movement has garnered among the uber-rich is troubling, and “even more worrisome if you look closely; you realize that these people are not representative of humanity, our belief and values, they themselves are not value-aligned with humans.”
His vision is diametrically opposed to that of the e/accs: Pause the development of AI.
“Either we stop before we get to superhuman AI, or we all die. ‘Huge AI, Inc.’ should not be running dangerous experiments on 8 billion humans.”