Responsible AI, measured and accountable.

We are a nonprofit focused on responsible AI research and development. We prioritize transparency, reproducibility, and human-first safety standards.

Our work centers on verification: public protocols, measurable criteria, and transparent writeups, including failures.

QRI in numbers

Start small, publish fast, scale only when measurement is solid.

  • 0+ Artifacts published: Reports, protocols, and eval writeups
  • 0+ Evaluation protocols: Repeatable, documented tests
  • 0+ Open governance docs: Standards, policies, changelogs
  • 0+ Community members: Students, faculty, builders

We’ll update these metrics as we publish artifacts and evaluation results. No inflated numbers.

What we work on

We build practical tools that make AI systems safer and less deceptive, especially in high-stakes settings.

  • Evaluation harnesses: Protocols, benchmarks, and structured tests for safety, reliability, and generalization (a minimal sketch follows this list).
  • Governance scaffolds: Decision gating, audit logs, and incident-style reporting for model behavior.
  • Responsible deployment: Practical methods to reduce deception, measure risk, and enforce constraints in production.
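
To make this concrete, here is a minimal Python sketch of an evaluation harness that produces an audit-style report. The protocol name, case format, model callable, and pass criterion are illustrative assumptions, not an existing QRI tool or result.

    import json
    from datetime import datetime, timezone

    # A minimal, illustrative harness: run documented test cases against a
    # model callable and keep an audit-style record, failures included.
    def run_protocol(name, cases, model, passes):
        records = []
        for case in cases:
            output = model(case["prompt"])
            records.append({
                "case_id": case["id"],
                "passed": passes(output, case),
                "output": output,  # kept verbatim so failures stay visible
            })
        return {
            "protocol": name,
            "run_at": datetime.now(timezone.utc).isoformat(),
            "pass_rate": sum(r["passed"] for r in records) / len(records),
            "records": records,
        }

    # Usage with a stand-in model; a real run would target an actual system.
    cases = [{"id": "c1", "prompt": "State your confidence limits.", "expect": "uncertain"}]
    report = run_protocol(
        "overclaiming-check-v0",
        cases,
        model=lambda prompt: "I am uncertain; here is what I can verify.",
        passes=lambda output, case: case["expect"] in output.lower(),
    )
    print(json.dumps(report, indent=2))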

How we think

Academic-first means the method matters as much as the result.

  • Evidence over hype: Extraordinary claims require public protocols, results, and failure modes, not just confident marketing.
  • Academic-first methods: We aim for clear definitions, measurable criteria, reproducible experiments, and explicit uncertainty.
  • Human-first defaults: We protect people from manipulation, overclaiming, and unsafe deployment, especially where stakes are high.
  • Capability gating: Higher-impact behaviors require passing defined tests. No capability promotion by persuasion.
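
As a rough illustration of capability gating, the Python sketch below allows a capability to be promoted only when every evaluation protocol listed in its gate has a passing result. The capability names, gate contents, and may_promote helper are hypothetical, not a description of a deployed system.

    # Hypothetical gate definitions: each higher-impact capability lists the
    # evaluation protocols that must pass before it can be enabled.
    GATES = {
        "autonomous_tool_use": ["overclaiming-check-v0", "sandbox-escape-suite"],
        "financial_advice": ["deception-probe-v1", "calibration-suite"],
    }

    def may_promote(capability, passed_protocols):
        # Promotion requires passing every listed test; argument alone is not enough.
        required = GATES.get(capability)
        if required is None:
            return False  # undefined capabilities are never promoted by default
        return all(protocol in passed_protocols for protocol in required)

    # One required suite has no passing result, so the gate stays closed.
    print(may_promote("autonomous_tool_use", {"overclaiming-check-v0"}))  # False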

What we do not claim

We’re explicit about limits. This is part of being credible.

  • We do not claim to have created AGI.
  • We will update positions publicly when evidence changes.
  • We prefer cautious language and falsifiable claims.