The Center for Artificial Wisdom (CAW) is a research nonprofit studying reification in cognition, with a focus on machine intelligence. While “reification” is a vague and sometimes loaded term, its etymology is simple and precise: thing-making (res means “thing,” and -ification means “making”). At CAW, reification is blind thing-making, or unrecognized ontological projection.
More specifically, we are interested in reification as a recurring pattern in which abstract representations and concepts are blindly treated as independent, atomistic, and temporally enduring entities. “Independent” means context-free and self-grounding; “atomistic” means discrete, with hard boundaries; “temporally enduring” means persisting unchanged through time. This pattern reflects not abstraction itself, but the unexamined projection of ontological “thing-hood” onto abstractions.
Our use of the term reification is deliberately narrow and should be distinguished from ordinary abstraction or discretization. All intelligence relies on representations. For example, scientists routinely treat fields or forces as “things” in order to reason, calculate, and build technologies. Whether a magnetic field is a physical entity or a mathematical construct is a metaphysical question—but reifying it is often instrumentally useful, as evidenced by MRI machines.
CAW’s interest lies in the unrecognized form of this projection and its consequences for reasoning and behavior. One of our core goals is to develop reliable methods for detecting the presence or absence of this cognitive error in machine intelligence.
Why Reification?
CAW studies reification because we hypothesize that it is a systemic blind spot that machine intelligence inherits from human intelligence. When abstractions are treated as literal reality—when a metric becomes the goal or a label becomes a discrete thing—systems can optimize confidently in the wrong direction. Under distribution shift or adversarial incentives, this failure appears as proxy fixation (“Goodharting”), brittle generalization, and behavior that becomes increasingly difficult to predict, interpret, or steer—a potential bottleneck in the race to build truly beneficial artificial general intelligence.
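To make the failure mode concrete, here is a minimal toy sketch (an illustration of proxy fixation in general, not a CAW detection method; the function names and constants are hypothetical): an optimizer consults only a proxy metric that tracks the true objective in a narrow regime, so the proxy score keeps rising even after the quantity it was supposed to stand for has started to decline.

```python
import random


def true_objective(x):
    # The quantity we actually care about: improves with x at first,
    # then degrades once x is pushed past the regime (here, x > 5)
    # where the proxy was a reasonable stand-in.
    return x - 0.1 * x ** 2


def proxy_metric(x):
    # A reified measurement: correlated with the true objective on
    # the "training" regime (small x), but treated as the goal itself.
    return x


def hill_climb_on_proxy(x=0.0, steps=20, step_size=1.0):
    # Simple hill climbing that only ever looks at the proxy.
    history = []
    for _ in range(steps):
        candidate = x + step_size * random.uniform(0.0, 1.0)
        if proxy_metric(candidate) >= proxy_metric(x):
            x = candidate
        history.append((x, proxy_metric(x), true_objective(x)))
    return history


if __name__ == "__main__":
    random.seed(0)
    for step, (x, proxy, true_val) in enumerate(hill_climb_on_proxy(), 1):
        print(f"step {step:2d}  x={x:5.2f}  proxy={proxy:6.2f}  true={true_val:6.2f}")
```

Running the sketch shows the proxy score climbing monotonically while the true objective peaks and then falls, which is the divergence the paragraph above describes.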