Who We Are
The Center for Artificial Wisdom (CAW) is a 501(c)(3) nonprofit studying reification in intelligent systems, with a focus on non-conscious machine intelligence.
Our Mission
Why Reification?
While “reification” is a vague and sometimes loaded term, its etymology is simple and precise: thing-making (res means “thing,” and -ification means “making”). At CAW, reification is blind thing-making, a process of unrecognized ontological projection.
More specifically, we are interested in reification as a recurring pattern in which abstract representations and concepts are implicitly treated as independent, atomistic, and temporally enduring entities. “Independent” means context-free and self-grounding; “atomistic” means discrete, with hard boundaries; “temporally enduring” means persisting unchanged through time. These patterns reflect not abstraction itself, but the unexamined projection of ontological “thing-hood” onto abstractions.
All intelligence relies on abstractions and representations. Our use of the term reification is provisional and operationally defined. It is distinct from the cognitive processes of abstraction, representation, and conceptualization. For example, scientists routinely treat fields or forces as “things” in order to reason, calculate, and build technologies. Whether a magnetic field is a physical entity or a mathematical construct is a metaphysical question—but operationalizing the construct is instrumentally useful, as evidenced by MRI machines.
CAW’s interest lies in reification as an unrecognized pattern of ontological projection and its implications for reasoning and behavior in intelligent systems. One of our core goals is to develop reliable methods for detecting the presence or absence of this reification in machine intelligence.
CAW studies reification because we hypothesize that it is a systemic blind spot that machine intelligence inherits from human intelligence. When abstractions crystallize as independent, discrete, and enduring entities—when a metric becomes the goal or a label becomes a discrete thing—systems can optimize confidently in the wrong direction. Under distribution shift or adversarial incentives, this appears as proxy fixation (“Goodharting”), brittle generalization, and behavior that becomes increasingly difficult to predict, interpret, or steer—a potential bottleneck in the race to build truly beneficial artificial general intelligence.
- Context Engineering as a Dereification Move in Frontier AI: Context engineering is not evidence of non-reifying intelligence. It is evidence that frontier AI labs are encountering the limits…
- The Diagnostic Case: CAW’s reification diagnostics, the Q3 null hypothesis, and why this is the tractable problem worth solving first…
- Before Consciousness, Reification: Ted Olsen on Anthropic’s constitution, model welfare, and why we may need to solve the easier problem…