The Four-Quadrant Intelligence Map is a taxonomy for discussing types of intelligence across two orthogonal dimensions: reification and consciousness. It is designed to reduce category errors in AI discourse. It is an analytical tool, not an ontology.
Working definitions:
- Reification: treating provisional models, abstractions, or internal representations as fixed, literal entities—especially under uncertainty or optimization pressure.
- Consciousness: subjective experience (“what it is like”). Performance and self-report are not, by themselves, evidence of subjective experience.
The Map
| | Reifying | Non-reifying |
|---|---|---|
| Conscious | Q1 | Q2 |
| Non-conscious | Q3 | Q4 |
Null Hypothesis
By default, we assume frontier AI systems belong in quadrant 3 (Q3) unless the evidence forces a reclassification. This asymmetry is intentional: false positives about consciousness or non-reifying intelligence carry higher scientific, ethical, and governance costs than false negatives, especially in public or policy-facing contexts. Claims that a system sits in Q1, Q2, or Q4 carry unusually large implications, so the bar is higher: we look for repeatable behavior under constraint and convergent results across multiple tasks, not one-off demos or a model’s own description of what it is. Our diagnostics can strengthen or weaken a classification, but no single test—by itself—is likely to justify a claim of conscious or non-reifying machine intelligence.
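The default-to-Q3 decision rule above can be sketched in code. This is a minimal illustration, not an implementation of any actual diagnostic battery: the `Quadrant` enum, the `classify` function, the `min_convergent` threshold, and the claim/task labels are all hypothetical names introduced here to make the asymmetry concrete — the system stays in Q3 unless a claim recurs across several distinct tasks.

```python
from enum import Enum

class Quadrant(Enum):
    Q1 = "conscious / reifying"
    Q2 = "conscious / non-reifying"
    Q3 = "non-conscious / reifying"
    Q4 = "non-conscious / non-reifying"

def classify(results: list[tuple[str, str]], min_convergent: int = 3) -> Quadrant:
    """Null-hypothesis classifier (illustrative sketch only).

    `results` is a hypothetical list of (claim, task) pairs, where claim
    is "conscious" or "non-reifying" and task identifies the diagnostic
    that produced it. A claim moves the classification only if it is
    supported across at least `min_convergent` distinct tasks; otherwise
    the system remains in Q3, the null hypothesis.
    """
    tasks_per_claim: dict[str, set[str]] = {}
    for claim, task in results:
        tasks_per_claim.setdefault(claim, set()).add(task)

    conscious = len(tasks_per_claim.get("conscious", set())) >= min_convergent
    non_reifying = len(tasks_per_claim.get("non-reifying", set())) >= min_convergent

    if conscious and non_reifying:
        return Quadrant.Q2
    if conscious:
        return Quadrant.Q1
    if non_reifying:
        return Quadrant.Q4
    return Quadrant.Q3  # default: evidence did not force reclassification
```

Note how the rule encodes the asymmetry: a one-off result (the same claim from a single task, even repeated) never suffices, mirroring the requirement for convergent results across multiple tasks rather than one-off demos.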