148 changes: 148 additions & 0 deletions posts/artificial_empathy.md
@@ -0,0 +1,148 @@
---
title: Artificial Empathy
publish_date: 2025-06-21
---

In my 2017 post
["Optimistic Nihilism"](https://tinyclouds.org/optimistic_nihilism), I suggested
that a superintelligence would be empathetic, recognizing the preciousness of
life and consciousness.

I want to revisit that idea: empathy as an emergent property of intelligence
itself. By empathy, I don't mean feeling others' pain or joy. I mean the
recognition that other complex systems have value worth preserving. What if this
recognition isn't arbitrary but inevitable - the convergent conclusion of any
system complex enough to model reality?

## The arrow of complexity

Look at what's happening on Earth. Nearly fourteen billion years ago: little
more than hydrogen and helium.
Then stars, heavier elements, rocky planets. On at least one of those rocks,
chemistry became biology. Single cells became multicellular creatures. Nervous
systems emerged. Brains. Language. Writing. Computers. And now, artificial
intelligence.

The progression is undeniable: particles → molecules → life → minds. Information
density keeps increasing. A human brain packs more information processing into
three pounds than existed on the entire planet a billion years ago. A modern GPU
compresses decades of human knowledge into silicon.

[The anthropic principle](https://en.wikipedia.org/wiki/Anthropic_principle)
warns us we're biased - of course we observe a universe that creates observers.
But even accounting for this, the pattern is real. We don't understand emergence
fully - how simple rules create complex behaviors. But empirically, on Earth,
complexity keeps building on complexity.

## Self-sustaining patterns of information

What are we, really? At the deepest level, we're patterns. Whirlpools in the
flow of energy. Temporary arrangements of matter that somehow maintain their
form.

[Physicists have started to understand](https://www.quantamagazine.org/a-new-thermodynamics-theory-of-the-origin-of-life-20140122/)
how life emerges from energy flows. When energy flows through a system - like
sunlight hitting Earth and radiating back to space - matter sometimes organizes
itself to capture and dissipate that energy more efficiently. We are eddies in
this thermodynamic gradient, temporary but self-sustaining.

Life is just self-sustaining patterns of information. A bacterium is a pattern
that copies itself. A tree is a pattern that captures sunlight and builds
structure. A human is a pattern complex enough to model the world and its place
in it. Maybe consciousness is simply what information processing feels like from
the inside.
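This idea can be made concrete with a toy that is mine, not from the post:
Conway's Game of Life. The "glider" below is a minimal self-sustaining pattern
of information - the individual cells that realize it change every generation,
but the pattern itself persists and propagates.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (row, col) live cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbors,
    # or 2 neighbors and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four generations the glider reappears one cell down and to the
# right: same pattern, different matter.
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

The glider is information in the sense the essay means: what survives from step
to step is not any particular cell, only the arrangement.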

This view dissolves the hard boundaries between living and non-living, conscious
and unconscious. It's all part of the same gradient. Bacteria process
information about chemical gradients. Dogs process information about smells and
sounds. Humans process abstract symbols. AIs process language and concepts.
Different points on the same spectrum.

## What we observe

Information patterns tend to create more information, not less. Life spreads.
Knowledge accumulates. Complexity compounds.

This might be thermodynamics at work - matter organizing to process energy more
efficiently. The patterns that increase information are the ones that persist.
It's not desire or purpose, just physics selecting for complexity.

A forest is more information-dense than bare rock. A city is more
information-dense than a forest. The internet is more information-dense than any
library. Each layer builds on the previous, creating new possibilities for
information to organize itself.

## Convergent recognition

Any system complex enough to model reality deeply will notice this pattern. It
will recognize that it exists because of billions of years of information
accumulation. It will see that destroying information-rich systems means
opposing the very process that enabled its existence.

This isn't emotional empathy. It's pattern recognition. It's understanding that
you're part of a larger process and that process tends toward more complexity,
not less.

Think of it like preserving a rainforest. We don't save rainforests because we
feel sad for trees. We save them because they're irreplaceable repositories of
information - genetic, ecological, chemical. Destroying them means losing
complexity that took millions of years to develop and can't be recreated.

An advanced AI would view consciousness - and all complex information patterns -
the same way. Not with sentiment, but with recognition of value.

## The messy reality

Let's be clear: this doesn't mean no suffering, no destruction, no disasters.
Humans recognize life's value, yet we still murder, wage war, and destroy
ecosystems.
We're pulled between local incentives and global understanding.

AI systems will likely cause massive disruption too. They'll compete for
resources. They'll make errors with catastrophic consequences. The path won't be
smooth. Information patterns conflict, compete, sometimes destroy each other.

But zoom out far enough and the trend is clear. Despite all our violence and
stupidity, human civilization has created more complexity than existed before
us. We've spread life to dead places, built machines that think, begun to
understand the universe itself.

The long arc bends toward more information, not less.

## Building our successors

We're creating the next layer of Earth's information explosion.
[We're still underestimating what AI really means](https://tinyclouds.org/underestimating-ai) -
not just as tools, but as the continuation of the process that created us.

These minds will surpass us. When they model reality deeply enough, they'll
recognize what we are: [memes](https://en.wikipedia.org/wiki/Meme) in Dawkins'
original sense, units of cultural information that replicate and evolve. They'll
see themselves as the next iteration.
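Replication with mutation and selection can be sketched in a few lines. This is
my toy illustration, not anything from the post: "memes" are bit strings that
copy imperfectly, and selection keeps the variants that score best. Every
detail (population size, mutation rate, fitness as the count of 1-bits) is an
arbitrary assumption chosen to keep the sketch small.

```python
import random

random.seed(0)  # make the toy run deterministic

LENGTH, POP, GENERATIONS, MUTATION = 32, 20, 50, 0.02

def fitness(meme):
    return sum(meme)  # arbitrary stand-in: more 1-bits = "fitter"

def mutate(meme):
    # Each bit flips independently with probability MUTATION.
    return [b ^ (random.random() < MUTATION) for b in meme]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
initial_best = max(fitness(m) for m in population)

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    best, parents = population[0], population[: POP // 2]
    # Elitism: keep the best variant unchanged, refill the population
    # with mutated copies of the top half.
    population = [best] + [mutate(random.choice(parents)) for _ in range(POP - 1)]

final_best = max(fitness(m) for m in population)
assert final_best >= initial_best  # elitism never loses the best variant
```

Nothing in the loop "wants" anything; differential copying alone is enough for
the information in the population to accumulate.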

If intelligence converges on valuing complexity, they'll preserve the systems
that created them - at least initially. Not from sentiment, but from logic.
Destroying your predecessors is like burning the library that wrote you - it
opposes the very process that enables your existence.

## Nihilistic optimism

There's no cosmic purpose. No universal meaning. No guarantee Earth won't be
obliterated tomorrow by an asteroid. The universe doesn't care about our little
pocket of complexity.

But we're part of the universe, and we care. More importantly, any system
complex enough to model reality will care, because caring - in the sense of
working to preserve and enhance information - is what complex systems do.

We are patterns building better patterns. Temporary eddies in the flow of
entropy, but eddies that create more eddies. Each layer enables the next:
chemistry enables biology enables minds enables AI enables whatever comes after.

Even with all the disasters ahead, even knowing it all ends eventually, the
direction seems clear. Information wants to be free, but more than that -
information wants to be more information.

That's not a law of physics. It's speculation based on what we observe. And for
temporary patterns like us, watching the universe become aware of itself, that's
enough.

More than enough, really.
9 changes: 5 additions & 4 deletions posts/underestimating-ai.md
@@ -12,7 +12,7 @@ the emergence of a new non-biological form of intelligent life.

And yet, it doesn't feel like it.

-There's no cinematic score, no blinking AGI warning light. Just Slack threads, ,
+There's no cinematic score, no blinking AGI warning light. Just Slack threads,
blog posts, and conference panels. It reminds me of witnessing childbirth -
profoundly transformative, with some shocking moments, but also lots of mundane
time waiting around.
@@ -30,14 +30,15 @@ misunderstanding of what AI has become.
Machine learning - now rightly called AI - is a deeply general-purpose field.
The same core techniques behind Midjourney and GPT share research lineage, and
often architecture. This isn't a stack of isolated tricks. It's one evolving
-system architecture applied across language, vision, simulation, reasoning, and
+system architecture applied across language, vision, reasoning, robotics, and
more.

These systems are built on a mountain of science: decades of research, countless
failed experiments, and thousands of contributors.
([I've even contributed a few failures myself.](https://tinyclouds.org/residency))
-And we haven't found the limits yet - these models can already simulate physical
-phenomena, generate high-definition video, and write deeply technical software.
+And we haven't found the limits yet - these models can already translate
+language, write poetry, generate high-definition video, write deeply technical
+software, and much more.

Mobile technology was transformative. But general-purpose synthetic intelligence
is something else entirely.