What is understanding? For centuries, this question has been anchored to the mystery of subjective experience—the inner feeling of "getting it," the "what it's like" to see red or grasp a new idea. Similarly, we tie the concept of agency to the feeling of conscious will and intention. But as we design increasingly sophisticated artificial intelligence and deepen our understanding of animal cognition, this human-centric view is becoming a barrier.
What if we could describe understanding and agency in a way that doesn't depend on subjective experience at all? By combining ideas from modern machine learning and abstract mathematics, we can construct a powerful functionalist framework—one that defines meaning by its geometry and agency by its logic.
Consider a simple thought experiment: a human, a dog, and an advanced AI with a camera all observe a red apple.
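The "geometry" in "meaning by its geometry" can be made concrete with a toy sketch. The idea, in machine-learning terms, is that what a representation means is carried by its *relational* structure (which things are similar to which), not by the raw coordinates of any one observer's internal code. All vectors and concept names below are hypothetical illustrations, not drawn from any real model:

```python
import math

def cosine(u, v):
    # Cosine similarity: the angle-based "closeness" of two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Observer A's embedding of three concepts (made-up numbers).
A = {
    "apple":     [1.0, 0.9, 0.1],
    "firetruck": [1.0, 0.1, 0.9],
    "banana":    [0.1, 0.9, 0.2],
}

# Observer B encodes the same concepts in different coordinates:
# here, A's vectors with the axes permuted, which preserves all
# pairwise angles while changing every individual number.
B = {name: [v[2], v[0], v[1]] for name, v in A.items()}

names = list(A)
sim_A = [[round(cosine(A[x], A[y]), 6) for y in names] for x in names]
sim_B = [[round(cosine(B[x], B[y]), 6) for y in names] for x in names]

# Identical similarity matrices: the two observers share the same
# geometry of meaning despite having different "inner" codes.
print(sim_A == sim_B)  # → True
```

On this functionalist reading, the human, the dog, and the AI could each "understand" the apple to the extent that their internal representations place it in the right relational position (near "fruit", far from "firetruck"), regardless of what, if anything, it feels like from the inside.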