
Something Big Is Happening in AI - and What It Means for the Rest of Us

  • Feb 10

(February 2026)

If you've been following AI news even casually over the past year, you've probably heard versions of "things are accelerating" or "capabilities are improving fast." But most public conversations still soften the edges. People in the field — the ones building, testing, and deploying frontier models every day — often give measured, safe answers when friends or family ask what's really going on.


I'm not in that inner circle, but I've been paying close attention, reading the technical reports, following benchmark trends, and watching how tools are already reshaping work. Recently, Matt Shumer published a raw, unfiltered piece titled "Something Big Is Happening" that cuts through the hedging. He wrote it for the people he cares about who aren't immersed in AI daily — family, friends, regular professionals — because the honest version "sounds insane," but the gap between polite answers and reality has grown too wide.


Here's the core of what he's describing, adapted and expanded for anyone wondering whether (and how) to take this seriously in 2026.


The Speed of Change Feels Like February 2020 All Over Again


Think back to early 2020. A few experts were raising alarms about a novel virus spreading in Wuhan. Most people dismissed it or downplayed the risk. Then March hit, and everything changed in weeks.


Shumer argues we're in a similar "February 2020" moment with AI. The capabilities crossing into public view right now — GPT-5.3-level systems that can improve their own code, chain long reasoning steps, use tools reliably, and complete multi-hour professional tasks — are already here or arriving imminently.


A key piece of evidence comes from METR (a group measuring real AI agent progress). Their work tracks the "time horizon": the length of task, measured in human professional time, that an AI can complete autonomously with ~50% reliability.


- Since around 2019, this time horizon has doubled roughly every 7 months.

- By early 2026, frontier models are handling tasks that take humans 50–100+ minutes reliably.

- Extrapolating the trend (with some acceleration visible in 2024–2025 data) puts us on track for AI agents completing month-long human tasks within roughly 5 years.


That's not science fiction. It's a straight-line projection from public benchmark data.
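The extrapolation is easy to check yourself. A minimal back-of-envelope sketch, using illustrative round numbers (a 7-month doubling time, a ~100-minute horizon in early 2026, and a "working month" of 160 hours) rather than METR's exact dataset:

```python
import math

# Illustrative assumptions, not METR's exact figures:
DOUBLING_MONTHS = 7        # time horizon doubles roughly every 7 months
CURRENT_HORIZON_MIN = 100  # ~100-minute tasks reliable in early 2026
WORK_MONTH_MIN = 160 * 60  # one "month-long" task ~= 160 working hours

def months_until(target_min, current_min=CURRENT_HORIZON_MIN,
                 doubling=DOUBLING_MONTHS):
    """Months until the time horizon reaches target_min,
    assuming steady exponential doubling."""
    doublings = math.log2(target_min / current_min)
    return doublings * doubling

months = months_until(WORK_MONTH_MIN)
print(f"~{months / 12:.1f} years to month-long tasks")  # roughly 4 years
```

With these assumptions the trend reaches month-long tasks in just under four years, which is where the "within roughly 5 years" figure comes from; nudging the starting horizon or doubling time shifts the answer by months, not decades.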


The Job Impact: Entry-Level White-Collar Automation Is Coming Fast


Shumer's most direct warning: within the next 1–5 years, AI could automate large portions (he cites up to ~50%) of entry-level white-collar work — research, writing, basic analysis, customer support, junior coding, marketing copy, data entry/cleanup, and more.


This isn't "AI will replace programmers" in the cartoonish sense. It's "AI agents will handle the boring/repetitive/legwork parts so fast and cheaply that companies restructure teams around them." The people most at risk aren't senior experts; they're the early-career roles that companies use to scale headcount.


We've seen analogs before: COVID forced remote work and digital tools overnight. Companies that adapted thrived; those that didn't struggled. AI adoption will feel similar — sudden for some industries, uneven for others — but the direction is clear.


What Regular People Can Actually Do About It


Shumer's piece isn't doom-posting. It's a call to act while the window is still open. His practical advice for non-technical people:


1. Start paying for good AI tools today — Not free tiers. Invest in Claude Pro, ChatGPT Plus/Teams, Gemini Advanced, Perplexity Pro, Cursor, or similar. Use them daily for real work. The gap between casual free users and daily paid-tool power users is already huge and widening.


2. Build a financial buffer now — Cut unnecessary spending, boost savings, pay down high-interest debt. If your role faces disruption in 2–4 years, having 12–18 months of runway changes everything.


3. Rethink what "learning" means — For yourself and especially for students/kids:

- Prioritize curiosity, asking good questions, and systems thinking.

- Learn to collaborate with AI as a co-pilot — prompt engineering, tool chaining, verifying outputs, iterating fast.

- Focus less on memorizing facts or basic procedures that models already do better/faster.


The people who thrive won't be the ones who know the most trivia; they'll be the ones who stay relentlessly curious, adapt tools creatively, and direct AI toward meaningful goals.


Final Thought: This Isn't Hype — It's Already in Motion


You don't have to believe every exponential curve continues forever. But the trend lines from independent groups like METR are hard to dismiss. Capabilities are scaling fast enough that ignoring them is riskier than preparing early.


Something big really is happening. The polite version of the story is getting harder to tell because reality is moving faster than the disclaimers. The good news? Individuals can still get ahead of the curve — not by becoming coders overnight, but by integrating powerful tools into daily life starting today.


If this resonates, share it with someone who keeps asking, "So… what's actually going on with AI?" They deserve the unfiltered version.


What do you think — too alarmist, or about right for 2026?



*(Inspired by and drawing heavily on Matt Shumer's February 2026 article "Something Big Is Happening.")*



 
 
 
