Who Loves AI and Who Hates It? It's Not About Personality. It's About Paychecks.

It's not optimists versus pessimists, and it's not a question of temperament. It's about paychecks. We analyzed 130 sources across 9 historical technology panics, and the answer is brutally simple.

The One-Sentence Answer

The people who love AI are the people it makes richer, faster, or more powerful without threatening their identity. The people who hate it are the people whose lunch is getting eaten.

This isn't a new pattern. When Gutenberg's printing press appeared in 1440, monks who hand-copied manuscripts didn't celebrate the democratization of knowledge. They saw their livelihood evaporating.

When Uber arrived, taxi drivers with $250,000 medallions didn't applaud the efficiency gains. They watched their retirement fund collapse overnight.

The people who resist a new technology are almost never "afraid of change." They're afraid of a very specific, very rational thing: becoming economically irrelevant.

Four Reasons People Hate AI

Ranked by frequency across our ai-resistance brain (130 entries, 718 claims).

#1. Economic Threat

24 entries, legitimacy score 6-8 out of 10. This is the big one. Not abstract fear of robots. Concrete fear of not being able to pay rent. Writers who watched their per-word rate drop 60% in 18 months. Illustrators who lost ongoing contracts to Midjourney. Translators replaced by DeepL + a junior editor at 1/4 the cost.

The legitimacy score isn't a 10 because some of these jobs were already eroding before AI. But a 6-8 means the fear is substantially justified by real-world data.

#2. Loss of Control and Agency

Jaron Lanier nailed this one: "The problem isn't that AI is too smart. The problem is that the people deploying it don't care about the people affected by it."

When your employer mandates an AI tool that monitors your keystrokes, summarizes your meetings without asking, and generates your performance review, you're not "afraid of technology." You've lost agency over your own work life.

#3. The Enshittification Experience

Visit r/ClaudeAI on any given Tuesday. The most common post: "They nerfed it." Users who rely on AI tools daily experience a pattern where capabilities quietly degrade, guardrails tighten, and the product they're paying for becomes measurably worse.

This isn't technophobia. It's consumer frustration from people who actually use AI more than most.

#4. Existential Risk

The least common complaint in the data but the most publicized. Hinton puts catastrophic risk at 10-20%. Amodei says 25%. These aren't cranks or Luddites. They're the people who built the technology.

But here's the thing: existential risk concerns are overrepresented in media coverage relative to how often they appear in actual public discourse. Most people who hate AI aren't worried about Skynet. They're worried about making their mortgage payment.

Notice what's NOT on this list: "Fear of new things." The technophobia framing is a convenient fiction that lets AI companies dismiss legitimate structural concerns as psychological weakness. The data doesn't support it.

Who Loves AI

Tier 1: Pure Upside

Solopreneurs

One person doing what used to require a team of 10. AI eliminates their biggest bottleneck (labor cost) while amplifying their biggest asset (speed and taste). No employees to displace. No guild to protect. Pure leverage.

Workers Bottlenecked by Grunt Work

The senior consultant who spends 60% of her week on slide formatting. The researcher buried in literature reviews. The marketer writing the 47th variation of the same email. AI handles the drudgery; they do the thinking.

People with Disabilities and Neurodivergence

AI as equalizer. Voice interfaces for motor impairments. Real-time captioning for the deaf. Executive function scaffolding for ADHD. The technology closes gaps that the traditional workplace refused to accommodate.

Investors and Capital Owners

AI reduces labor costs. Capital loves anything that reduces labor costs. If you own the company rather than work at it, every productivity gain flows directly to your bottom line. This is the uncomfortable structural truth.

The Chronically Underserved

People who couldn't afford lawyers, therapists, tutors, or medical specialists. AI won't replace a great doctor, but it's infinitely better than no doctor at all. The biggest quality-of-life gains are at the bottom, not the top.

Tier 2: Conditional Love

Senior Professionals Who Direct Work

They love AI as long as it amplifies their judgment rather than replacing it. The moment AI can do the "directing" part too, their enthusiasm will cool rapidly. Current sweet spot: AI does the execution, they do the taste.

Academics Studying AI

Fascinated by the technology, concerned about its deployment. Simultaneously the most informed and the most anxious. Their love is intellectual; their worry is practical. Career incentives: AI as a research topic is a goldmine.

Early-Career Opportunists

Young professionals who see AI as a way to leapfrog seniority. They can now produce senior-quality output with junior-level experience. The love is genuine but fragile. If AI eliminates the junior roles entirely, the ladder disappears.

How AI Actually Spreads

Adoption doesn't flow from enterprise down to individuals. It flows the other way.

Solopreneurs → Young & Curious → Mid-Market → Enterprise

Adoption speed: fastest on the left, slowest on the right.

The key insight: Solopreneurs and individual adopters create the proof points that make corporate adoption inevitable. A one-person agency that outperforms a 20-person team doesn't stay a secret for long. That case study becomes the board presentation that triggers enterprise procurement.

McKinsey's data confirms it: enterprise AI adoption lags individual adoption by 18-24 months. By the time your company rolls out an "AI strategy," the freelancer competing with your division has been using it for two years.

Who Hates AI

Creative Professionals

Artists, writers, musicians, designers. Their objection isn't irrational. AI was trained on their work without consent or compensation. The economic impact is real and measurable. And unlike factory workers in the 1800s, they have social media platforms to organize their resistance.

Junior Knowledge Workers

Entry-level analysts, associate copywriters, junior developers. The "apprenticeship" layer of white-collar work is being compressed or eliminated. AI doesn't threaten the partner. It threatens the associate whose entire role was "do the partner's grunt work."

Gatekeepers and Credentialed Professionals

People whose value proposition is "I have a certification that says I can do this." When AI can pass the bar exam, the CPA exam, and medical boards, the credential stops being a moat. Decades of education and licensing feel like a depreciating asset.

Humanist Intellectuals

Not threatened economically but philosophically. If a machine can write a sonnet, does the sonnet still mean anything? This is the deepest and most legitimate form of AI resistance, and it's the one the tech industry is least equipped to answer.

Power Users Who Feel "Nerfed"

Paradoxically, some of AI's harshest critics are its heaviest users. They've experienced what AI can do at its best and they're furious about safety-driven degradation. They don't hate AI. They hate what AI companies do to AI.

Organized Labor

Unions exist to protect workers from capital replacing labor with machines. AI is the most powerful labor-replacing machine ever built. The SAG-AFTRA and WGA strikes weren't just about wages. At their core, they were about whether human labor would remain structurally necessary.

586 Years of the Same Pattern

1440 — Printing Press. Monks who copied manuscripts protested. The church tried to regulate printing. Within 50 years, literacy rates exploded and the monks found new work. The regulation failed.

1810s — Power Loom. The Luddites smashed weaving machines. They weren't idiots. They were skilled craftsmen whose lifetime of expertise became worthless in 18 months. They were right about the pain and wrong about the permanence.

1830s — Railways. Canal operators, coaching inns, and turnpike trusts all lobbied against trains. "Railway mania" collapsed in a speculative bust. The trains kept running. The canal operators did not.

1890s — Electricity. Gas lamp lighters, ice delivery companies, and steam engine operators resisted electrification. Edison and Westinghouse fought publicly over safety (the "War of Currents"). Within a generation, every home had power.

1990s — Internet. Travel agents, encyclopedia publishers, classified ad sales teams, and record stores fought the digital transition. Every one of them was correct that the internet would destroy their business. None of them could stop it.

2020s — AI. Creative professionals, junior knowledge workers, and gatekeepers resist. The economic objections are legitimate. The outcome will be the same. The question isn't whether AI wins. It's how fast, and who gets compensated during the transition.

The script never changes: A new technology appears. It threatens specific jobs. Those workers organize resistance. The resistance is morally justified. And the technology wins anyway. Every single time. What changes is how well we handle the transition. So far, our track record is terrible.

The Wilson Problem

Robert Anton Wilson proposed a useful framework: neophobes (people who fear the new) vs. neophiles (people who love the new). It sounds like it should explain the AI divide perfectly. It doesn't.

The data suggests the divide is more structural than dispositional. The same graphic designer who loves AI coding tools hates AI art generators. The same lawyer who embraces AI legal research opposes AI contract drafting. People aren't consistently pro- or anti-AI. They're pro-AI when it helps them and anti-AI when it threatens them.

This is why the "personality" framing is dangerous. It lets us pretend that AI resistance is a psychological problem ("they're just afraid of change") when it's actually an economic one ("their income is disappearing").

Don't ask whether someone is a neophobe or a neophile. Ask: does this specific AI capability threaten their income, their identity, or their sense of control? That question predicts their reaction with far more accuracy than any personality test.
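That question can be sketched as a toy decision rule. This is an illustrative sketch only, not the article's actual methodology: the function name, inputs, and weights are all assumptions, chosen to show that the same person can land on opposite sides for different AI capabilities.

```python
def predict_reaction(threatens_income: bool,
                     threatens_identity: bool,
                     threatens_control: bool,
                     boosts_leverage: bool) -> str:
    """Predict a reaction to one specific AI capability from structural
    factors, not personality. Income is weighted heaviest, matching the
    article's #1 ranked reason for resistance."""
    threat = 2 * threatens_income + threatens_identity + threatens_control
    benefit = 2 * boosts_leverage
    if threat > benefit:
        return "resist"
    if benefit > threat:
        return "embrace"
    return "ambivalent"

# The same graphic designer, two capabilities, two opposite reactions:
art_generators = predict_reaction(True, True, False, False)   # → "resist"
coding_tools = predict_reaction(False, False, False, True)    # → "embrace"
```

The point of the sketch is that "pro-AI" or "anti-AI" is not an attribute of the person; it's an output computed per capability, which is exactly why personality tests predict so poorly.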

Get the Full Analysis

We publish original research on AI, economics, and the patterns hiding in plain sight. No spam, no hype, just data-driven analysis you won't find anywhere else.
