Engineering

What Dwarkesh Gets Wrong About AI's Economic Impact

Max Davish · 17 July 2025 · 5 min read

Dwarkesh Patel recently published a thoughtful essay on AI timelines that actually grapples with the messy reality of deploying AI in practice. As someone building AI-powered software at Quotient, I agree with much of his analysis about current limitations. But I think he's missing something crucial about how these problems are being solved right now, not in some distant future. His pessimism about near-term economic impact doesn't match what I'm seeing in the trenches.

Dwarkesh's Key Arguments

  • Continual learning is the bottleneck: Dwarkesh argues that LLMs can't improve over time like human employees do, using the analogy of teaching saxophone - you can't just write better instructions for each new student; they need to practice and adjust. He spent hundreds of hours trying to get LLMs to rewrite transcripts and identify clips, but found they stayed at a 5/10 level without the ability to learn from feedback the way his human editors have.

  • Computer use remains brittle: He's skeptical of Anthropic researchers' prediction that AI will reliably 'do your taxes' by end of 2025, citing the lack of multimodal computer use training data, longer rollout times needed (two hours of agentic tasks before you can evaluate success), and the general difficulty of operating in a different modality with sparse rewards.

  • Scaling hits physical limits: AI progress has been driven by 4x annual increases in training compute, but this cannot continue beyond this decade due to constraints on chips, power, and the fraction of GDP that can be devoted to training runs, meaning future progress must come from algorithmic improvements rather than raw scale.

  • Economic transformation will be slow: Disagreeing with researchers who claim current AI could automate white collar work if progress stalled, Dwarkesh believes less than 25% of white collar jobs would disappear because AI's inability to build context and learn on the job makes it unsuitable as actual employees, even if it can handle individual subtasks.

Where I Agree and Disagree

I think Dwarkesh is absolutely spot-on that continual learning is a huge limitation. As I've written before, "RAG, fine-tuning, and enlarging context windows aren't quite an adequate solution here. A human interlocutor doesn't need any of these tricks to remember its last interaction with you. We just intuitively recall all relevant details because our neurons update in realtime."

The fundamental issue here is that "updating a neural network's weights is quite slow and expensive, at least for now. To truly imbue neural networks with memory, we'd need a way to continuously update them, and research in this area — such as Geoff Hinton's Forward-Forward Algorithm — is still only in the very early stages."

I also completely agree about computer use. His points are spot-on - this is a very brittle modality for agents, and products like OpenAI's Operator haven't impressed me. The challenges he outlines about multimodal training data, longer rollout times, and operating in unfamiliar modalities with sparse rewards are real and significant.

But here's where I think Dwarkesh goes wrong: he anthropomorphizes AI far too much. If you expect it to replace a human coworker, then yes, it will disappoint you in many ways. It will lack the context that a human coworker has and it will not learn over time. But I believe this is the wrong goal entirely.

AI should augment humans, not replace them. Context and continual learning are the human's job. Dwarkesh complains that he's constantly having to update his system prompts with ever more specific instructions. I think that's exactly what the human is supposed to be doing. The human is the conductor of the LLM symphony.

I also think Dwarkesh underestimates how rote and repetitive most white collar work actually is. He happens to be lucky enough to have an insanely creative, intellectually stimulating, open-ended job. Most people don't, though. Most people in their 20s are doing fairly repetitive, rote tasks that I think lend themselves very well to LLM automation.

The problems Dwarkesh highlights are real, but I think they're fundamentally solvable through thoughtful product design. At Quotient, we've spent a lot of time thinking about how to work within AI's current limitations rather than waiting for some future breakthrough. Here's how we've approached each challenge.

How We've Solved These Problems at Quotient

We don't rely on computer use - we give our agents simple, scoped tools, which makes them more reliable. Agents have a lot of difficulty pinpointing specific locations on a screen or using a browser the way a human can, but they're very good at using text-based tools. At Quotient, our agents work through APIs and structured interfaces - the Email Agent creates email templates using our email builder, the Design Agent generates images through our asset system, the Journey Agent builds automated workflows through our journey editor. No screen scraping or pixel-perfect clicking required.
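To make that concrete, here's a rough sketch of what a scoped, text-based tool looks like. The names here (create_email_template, EmailTemplate) are illustrative placeholders rather than our actual API - the point is that the agent emits structured arguments as text and the application executes the call.

```python
# Illustrative sketch of a scoped, text-based tool exposed to an agent,
# instead of open-ended computer use. Names are hypothetical, not Quotient's API.
from dataclasses import dataclass

@dataclass
class EmailTemplate:
    subject: str
    body_markdown: str
    audience_segment: str

def create_email_template(subject: str, body_markdown: str,
                          audience_segment: str) -> EmailTemplate:
    """A narrow, structured action the agent can take via a tool call."""
    return EmailTemplate(subject, body_markdown, audience_segment)

# The tool is described to the model as a JSON schema; the model produces
# arguments as text, and the application runs the function. No browser,
# no pixel coordinates.
EMAIL_TOOL_SPEC = {
    "name": "create_email_template",
    "description": "Create an email template in the email builder.",
    "parameters": {
        "type": "object",
        "properties": {
            "subject": {"type": "string"},
            "body_markdown": {"type": "string"},
            "audience_segment": {"type": "string"},
        },
        "required": ["subject", "body_markdown", "audience_segment"],
    },
}
```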

We use a constantly updated knowledge store and agent memories for key data. This allows agents to continually learn over time. Our knowledge store contains comprehensive business context - customer personas, competitive positioning, brand voice guidelines, and product details. Agent memories capture user preferences and past interactions. When you tell our Email Agent you prefer a certain template style, or our Campaign Agent learns your strategic preferences, that information persists across sessions. This directly addresses the context problem that Dwarkesh identifies.
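Here's a simplified sketch of the idea behind agent memories: a persistent store keyed by user whose contents get folded back into the prompt at the start of each new session. The MemoryStore class and its methods are hypothetical stand-ins, not our production code.

```python
# Minimal sketch of persistent agent memory backed by a simple JSON file.
# Class and method names are illustrative assumptions.
import json
from pathlib import Path

class MemoryStore:
    """Persists user preferences and facts across agent sessions."""

    def __init__(self, path: str = "agent_memory.json"):
        self.path = Path(path)
        self.memories = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, user_id: str, key: str, value: str) -> None:
        self.memories.setdefault(user_id, {})[key] = value
        self.path.write_text(json.dumps(self.memories, indent=2))

    def recall(self, user_id: str) -> dict:
        return self.memories.get(user_id, {})

# On a new session, recalled memories are prepended to the prompt, so the
# agent "remembers" your preferred template style without any weight updates.
store = MemoryStore()
store.remember("user_42", "preferred_template_style", "minimal, dark header")
context = store.recall("user_42")
```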

Our agents each perform a limited set of tasks on a relatively short horizon, making them more reliable. Our agents are deliberately not completely open-ended. They each have a constrained scope (e.g. writing blogs, like the agent who helped me write this article - and let me tell you, it's been very patient with my constant revisions).
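A rough illustration of what that constrained scope looks like in code: a small whitelist of tools plus a hard cap on steps, so a run fails fast instead of drifting. Everything here is a sketch; call_llm and execute_tool are placeholders for whatever model client and tool executor you use.

```python
# Hypothetical sketch of a scope- and horizon-limited agent loop.
MAX_STEPS = 8  # short horizon: stop and hand back to the human quickly

ALLOWED_TOOLS = {"outline_post", "draft_section", "revise_section"}

def run_blog_agent(task: str, call_llm, execute_tool) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(MAX_STEPS):
        action = call_llm(history, tools=ALLOWED_TOOLS)
        if action["type"] == "final":            # agent says it's done
            return action["content"]
        if action["tool"] not in ALLOWED_TOOLS:  # refuse out-of-scope actions
            history.append({"role": "system",
                            "content": f"Tool {action['tool']} is out of scope."})
            continue
        result = execute_tool(action["tool"], action["args"])
        history.append({"role": "tool", "content": result})
    return "Step budget exhausted; hand back to the human for review."
```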

Most importantly, humans are constantly in the loop, always able to provide feedback and nudge the model in the right direction. This conversation I'm having right now is a perfect example - rather than asking an agent to spend two hours writing a blog post in isolation (at the end of which you often find it's gone off the rails), we're iterating together step by step. The agents constantly solicit feedback and work hand in hand with the human, making course corrections as needed.
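In pseudocode-ish Python, that loop looks something like this - a generic sketch rather than our actual implementation, with generate_draft standing in for any LLM call.

```python
# Hypothetical human-in-the-loop drafting loop: the human reviews after every
# step instead of the agent running unsupervised for hours.
def draft_with_feedback(brief: str, generate_draft) -> str:
    draft = generate_draft(brief)
    while True:
        print(draft)
        feedback = input("Feedback (or 'done'): ").strip()
        if feedback.lower() == "done":
            return draft
        # Fold the human's correction into the next revision.
        draft = generate_draft(
            f"{brief}\n\nCurrent draft:\n{draft}\n\nRevise per: {feedback}"
        )
```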
