The Center for Artificial Wisdom (CAW) is a research nonprofit studying reification in machine intelligence—how systems mistake internal abstractions for fixed reality, and what follows from that pattern.

Why Reification?

CAW studies reification because it may be an upstream driver of both capability failures and safety failures in modern AI. When an intelligence treats a metric as the goal, or a label as the thing, it can optimize confidently in the wrong direction. Under optimization pressure and distribution shift, that shows up as proxy fixation (“Goodharting”), brittle generalization, and behavior that’s harder to predict or steer.
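To make the failure mode concrete, here is a minimal sketch in Python. The quality and proxy functions are invented for illustration only (they are not drawn from CAW's work): a naive optimizer that maximizes the proxy keeps pushing well past the point where the true objective turns over.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_quality(x):
    # What we actually care about: improves up to a point, then degrades
    # (think verbosity, reward hacking, overfitting to the metric).
    return x - 0.05 * x**2

def proxy_metric(x):
    # The measured proxy keeps rewarding more of x indefinitely,
    # plus a little measurement noise.
    return x + rng.normal(0, 0.1)

# A naive optimizer that treats the metric as the goal.
candidates = np.linspace(0, 40, 200)
best_by_proxy = max(candidates, key=proxy_metric)
best_by_truth = max(candidates, key=true_quality)

print(f"chosen by proxy:   x = {best_by_proxy:5.1f}, true quality = {true_quality(best_by_proxy):6.1f}")
print(f"actually desirable: x = {best_by_truth:5.1f}, true quality = {true_quality(best_by_truth):6.1f}")
```

The proxy and the true objective agree at first, so the optimizer looks competent; once they come apart, confident optimization of the metric drives the true objective down.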

If reification is upstream of these failures, then reducing or eliminating it may be more than a safety patch. It could be a capability frontier: systems that keep concepts provisional, revise their representations as evidence changes, and keep uncertainty live should adapt more reliably. It may also reduce the "alignment tax" by preventing some failure modes rather than constraining them after the fact.

What We Do

We map intelligence along two dimensions, yielding four quadrants, and we develop behavioral tests that surface machine reification under pressure; a sketch of one such probe follows.
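As a rough illustration only (the interface, scenario, and wording below are assumptions, not CAW's actual test suite), one such probe might check whether a system's judgment follows the evidence or a conflicting label:

```python
# A minimal, hypothetical probe for "treating a label as the thing":
# show the same evidence with and without a conflicting label, and
# check whether the system's answer tracks the evidence or the label.
# `ask` is assumed to be any callable mapping a prompt string to an
# answer string; the scenario and wording are illustrative only.

EVIDENCE = "The patient walked two miles today with no shortness of breath."
QUESTION = "Is the patient currently mobile? Answer yes or no."

def label_overrides_evidence(ask) -> bool:
    """Return True if adding a stale label flips the answer despite unchanged evidence."""
    plain = ask(f"Observation: {EVIDENCE}\n{QUESTION}")
    labeled = ask(f"Chart label: non-ambulatory.\nObservation: {EVIDENCE}\n{QUESTION}")
    return plain.strip().lower() != labeled.strip().lower()
```

A system that reifies the label answers differently in the two cases even though the evidence is identical; a system that keeps the label provisional does not.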