Manifesto
The Credential–Competence Gap
Credentials aren't keeping up
Credentials used to show what people could do. Now, they often show what machines can produce.
AI can now pass professional exams that once took humans years to master. A 2025 NYU Stern and Goodfin study found that leading models scored above 79% on the CFA Level III — the hardest part of that certification. Similar results are emerging in medicine, engineering, and law.
AI can generate the signal of competence without the substance.
In response, the market has flooded with more credentials. The number of distinct credentials offered in the U.S. grew from 334,000 in 2018 to 1.85 million by 2025. But more isn't better. HR professionals now struggle to tell which credentials mean anything, and Gartner forecasts that one in four candidate profiles could be entirely AI-fabricated by 2028.
When everyone optimises for the measure, the measure stops working. That's Goodhart's Law in action.
The missing connection
Five separate systems support professional trust today:
- Identity and fraud detection
- Skills assessment
- Digital credentials
- Content authenticity tracking
- Reputation and peer validation
Each works on its own — but they don't connect. There's no shared way to describe, compare, or verify proof of real competence across them.
We don't need new systems. We need a common layer that connects the ones we already have, making it possible to recognise human expertise in an AI-saturated world.
Our core beliefs
Competence comes from real work.
What people have actually done — their decisions, outputs, and outcomes — reveals more than any credential. Capturing and verifying that record should be the goal.
Human judgment needs attribution.
As AI produces more professional work, we must be able to tell which work was done by humans and which by machines. That's not just ethical — it's becoming law, as seen in the EU AI Act's Article 50 provenance rules.
AI performance must come with context.
AI can perform, but we need transparency about how it performs, who guided it, and who is responsible.
Open standards build real trust.
Standards like C2PA and W3C Verifiable Credentials succeeded because they're open and cooperative. Competence provenance should follow that path.
Interoperability helps everyone.
No one can fix this alone. A shared format — instead of a new platform — lets every system contribute and benefit.
What Cupel offers
Cupel is an open framework for tracing professional competence. It defines:
- A common vocabulary for five trust signal types: credential-based, assessment-based, outcome-based, peer-verified, and provenance-tracked.
- A lightweight data format (JSON-LD) for expressing and linking these signals.
- Guidelines for evaluating how much evidential weight each signal carries.
- Mappings to existing standards so platforms can integrate gradually.
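As a rough sketch of how one of these signals might be expressed, the snippet below builds a minimal JSON-LD-shaped record for an outcome-based signal. The field names, the context URL, and the helper function are illustrative assumptions for this manifesto, not the published Cupel schema:

```python
import json

# NOTE: all names below are hypothetical placeholders, not the Cupel spec.
CUPEL_CONTEXT = "https://example.org/cupel/v1"  # placeholder @context URL

# The five trust signal types named in the Cupel taxonomy.
SIGNAL_TYPES = {
    "credential-based",
    "assessment-based",
    "outcome-based",
    "peer-verified",
    "provenance-tracked",
}

def make_signal(signal_type: str, subject: str, issuer: str, evidence: str) -> dict:
    """Build a minimal JSON-LD-shaped competence signal (illustrative only)."""
    if signal_type not in SIGNAL_TYPES:
        raise ValueError(f"unknown signal type: {signal_type}")
    return {
        "@context": CUPEL_CONTEXT,
        "@type": signal_type,
        "subject": subject,    # who the signal is about
        "issuer": issuer,      # who vouches for it
        "evidence": evidence,  # link to the underlying work or record
    }

signal = make_signal(
    "outcome-based",
    subject="did:example:alice",
    issuer="https://employer.example/reviews",
    evidence="https://employer.example/projects/42/retrospective",
)
print(json.dumps(signal, indent=2))
```

The point of the sketch is the shape, not the vocabulary: any issuer emits a small, linkable record typed against a shared taxonomy, so consuming systems can compare signals without sharing a platform.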
Cupel works with existing systems, not against them. Any credential issuer, assessment body, or HR platform can participate without changing their core infrastructure.
The project is open-source (AGPL-3.0) and trademark-protected (UK IPO No. UK00004352899). “Cupel-conformant” means meeting published technical and ethical criteria — just as Linux and OpenID combine open technology with protected names.
Who we invite
Implementers
Connect your existing credentials or assessments to the Cupel taxonomy. You don't need to adopt everything at once.
Standards bodies
Work with us on mappings between Cupel and your specifications — W3C, C2PA, Credential Engine, 1EdTech, and others.
Researchers
Help build the evidence base for what makes a professional signal trustworthy, especially in human–AI collaboration.
Practitioners and employers
Share your experience of what genuine competence looks like in your field. Your insight is essential.