The Case That A.I. Is Thinking

This essay challenges the reflexive dismissal of AI as “not really thinking” by examining what we actually mean when we use that word.

If we can’t clearly define what thinking is in humans — and philosophers have struggled with this for millennia — how can we be so confident AI isn’t doing it?

Key insights:

  • Thinking may not require consciousness: We conflate thinking with subjective experience, but these might be separable. A system could reason, plan, and problem-solve without “feeling” anything.

  • Behavior vs. mechanism: We judge human thinking by its outputs (speech, decisions, problem-solving), not by inspecting neurons. Why apply a different standard to AI?

  • The “stochastic parrot” critique cuts both ways: If AI is “just” predicting the next token, humans are “just” firing neurons based on prior patterns. Neither description captures the emergent complexity.

  • Functional equivalence matters: If an AI and a human solve a novel problem through similar reasoning steps, the substrate difference (silicon vs. carbon) may be less meaningful than we assume.

Rather than asking “Is AI thinking like us?”, the essay suggests asking “What do we actually mean by thinking, and why are we so certain we understand our own cognitive processes?”