Links

Interesting finds with short descriptions.

Mental Murmuration

This piece argues that the mind is better understood not as a tidy set of separate modules (reason here, emotion there) but as a constantly shifting network, more like a murmuration of starlings than a machine with fixed parts.

If what you are experiencing is a dynamic “swirl” of thoughts, feelings, desires, memories, and body sensations, then the goal isn’t to dominate the swirl with “pure reason.” It’s to notice what’s arising, draw on the different kinds of inner information wisely, and redirect things when they start to run away.

Key insights:

  • The brain is networked and improvisational: rather than dedicated regions each doing one job, it assembles distributed “ensembles” that coordinate moment-by-moment responses to the complexities of life.

  • Change is the constant: behavior is heavily context-dependent (“if-then” patterns), which undermines fixed labels for people and encourages more humility and compassion.

  • Mental categories blur in real life: perception, cognition, emotion, desire, and action interpenetrate; what you feel shapes what you see, and vice versa.

  • Reason isn’t a “charioteer” over dumb passions: emotions, desires, and the body carry real intelligence; life works best when all faculties coordinate rather than one suppressing the rest.

In other words: you’re not a static self that needs to force the mind into compliance — you’re learning to witness the swirl and respond wisely from within it.

The Case That A.I. Is Thinking

This essay challenges the reflexive dismissal that AI “isn’t really thinking” by examining what we actually mean when we use that word.

If we can’t clearly define what thinking is in humans — and philosophers have struggled with this for millennia — how can we be so confident AI isn’t doing it?

Key insights:

  • Thinking may not require consciousness: We conflate thinking with subjective experience, but these might be separable. A system could reason, plan, and problem-solve without “feeling” anything.

  • Behavior vs. mechanism: We judge human thinking by its outputs (speech, decisions, problem-solving), not by inspecting neurons. Why apply a different standard to AI?

  • The “stochastic parrot” critique cuts both ways: If AI is “just” predicting the next token, humans are “just” firing neurons based on prior patterns. Neither description captures the emergent complexity.

  • Functional equivalence matters: If an AI and a human solve a novel problem through similar reasoning steps, the substrate difference (silicon vs. carbon) may be less meaningful than we assume.

Rather than asking “Is AI thinking like us?”, the article suggests asking “What do we actually mean by thinking, and why are we so certain we understand our own cognitive processes?”