Entrepreneur Matt Shumer’s essay, “Something Big Is Happening,” is going mega-viral on X, where it’s been viewed 42 million times and counting.
The piece warns that rapid advancements in the AI industry over the past few weeks threaten to change the world as we know it. Shumer specifically likens the present moment to the weeks and months preceding the COVID-19 pandemic, and says most people won’t hear the warning “until it’s too late.”
We’ve heard warnings like this before from AI doomers, but Shumer wants us to believe that this time the ground really is shifting beneath our feet.
“But it’s time now,” he writes. “Not in an ‘eventually we should talk about this’ way. In a ‘this is happening right now and I need you to understand it’ way.”
Unfortunately for Shumer, we’ve heard warnings like this before. We’ve heard it over, and over, and over, and over, and over, and over, and over. In the long run, some of these predictions will surely come true — a lot of people who are a lot smarter than me certainly believe they will — but I’m not changing my weekend plans to build a bunker.
The AI industry now has a massive Chicken Little problem, which makes it hard to take dire warnings like this seriously. Because, as I’ve written before, when an AI entrepreneur tells you that AI is a world-changing technology on the order of COVID-19 or the agricultural revolution, you have to take this message for what it really is — a sales pitch.
Why people are so worried about AI right now
Shumer’s essay claims that the latest generative AI models from OpenAI and Anthropic are already capable of doing much of his job.
“Here’s the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We’re not making predictions. We’re telling you what already occurred in our own jobs, and warning you that you’re next.”
The post clearly struck a nerve on X. Across the political spectrum, high-profile accounts with millions of followers are sharing the post as an urgent warning.
To understand Shumer’s post, you need to understand big concepts like AGI and the Singularity. AGI, or artificial general intelligence, is a hypothetical AI program that “possesses human-like intelligence and can perform any intellectual task that a human can.” The Singularity refers to a threshold at which technology becomes self-improving, allowing it to progress exponentially.
Shumer is correct that there are good reasons to think that progress has been made toward both AGI and the Singularity.
OpenAI’s latest coding model, GPT-5.3-Codex, helped create itself. Anthropic has made similar claims about recent product launches. And there’s no denying that generative AI is now so good at writing code that it’s decimated the job market for entry-level coders.
It is absolutely true that generative AI is progressing rapidly and that it will surely have big impacts on everyday life, the labor market, and the future.
Even so, it’s hard to believe a weather report from Chicken Little. And it’s harder still to believe everything a car salesman tells you about the amazing new convertible that just rolled onto the sales lot.
Indeed, as Shumer’s post went viral, AI skeptics joined the fray.
It’s not time to panic yet
There are a lot of reasons to be skeptical of Shumer’s claims. In the essay, he provides two specific examples of generative AI’s capabilities — its ability to conduct legal reasoning on par with top lawyers, and its ability to create, test, and debug apps.
Let’s look at the app argument first:
I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.
I’m not exaggerating. That is what my Monday looked like this week.
Is this impressive? Absolutely!
At the same time, it’s a running joke in the tech world that you can already find an app for everything. (“There’s an app for that.”) That means coding models can pattern their work on tens of thousands of existing applications. Is the world really going to be irrevocably changed because we can now create new apps more quickly?
Let’s look at the legal claim, where Shumer says that AI is “like having a team of [lawyers] available instantly.” There’s just one problem: Lawyers all over the country are getting censured for actually using AI. One lawyer tracking AI hallucinations in the legal profession has documented 912 cases so far.
It’s hard to swallow warnings about AGI when even the most advanced LLMs are still incapable of reliably fact-checking themselves. According to OpenAI’s own documentation, its latest model, GPT-5.2, has a hallucination rate of 10.9 percent. Even when given access to the internet to check its work, it still hallucinates 5.8 percent of the time. Would you trust a person who hallucinated six percent of the time?
Yes, it’s possible that a rapid leap forward is imminent. But it’s also possible that the AI industry will rapidly reach a point of diminishing returns. And there are good reasons to believe the latter is likely. This week, OpenAI introduced ads into ChatGPT, a tactic it previously called a “last resort.” OpenAI is also rolling out a new “ChatGPT adult” mode to let people engage in erotic roleplay with Chat. That’s hardly the behavior of a company that’s about to unleash AI super-intelligence onto an unsuspecting world.
This article reflects the opinion of the author.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.