Independent • Non-commercial • Governance-focused

Trustworthy AI and identity integrity
for systems that really matter.

The Lumen is an independent, London-based initiative working on trustworthy AI, identity integrity and governance frameworks for high-impact public-interest systems – from higher education to clinical environments and critical public services.

We explore how AI, identity and data can be governed in ways that are transparent, accountable and proportionate, without losing sight of human dignity and institutional trust.

AI Governance · Identity Integrity · Public-Interest Systems

Three Pillars of Work

The Lumen’s work is organised into three core pillars that together address the question: How can high-impact digital systems remain trustworthy when AI and automation are introduced?

Pillar I

AI Trustworthiness & Assurance

We look at AI not only as technology, but as part of institutional decision-making. Our interest is in assurance models, oversight structures and governance patterns that make AI usage explainable, auditable and appropriately constrained.

  • Patterns for AI-related decision oversight
  • Explainability and transparency as governance tools
  • Risk-sensitive deployment in high-stakes environments

Pillar II

Identity Integrity & the Certus Concept

Identity is more than a login. The emerging Certus concept focuses on integrity, misuse safeguards and trust around identities and claims in education and public services – without turning every step into surveillance.

  • Integrity of identities and credentials
  • Safeguards against impersonation and misuse
  • Respect for proportionality and human dignity

Pillar III

Data Provenance & Accountability

High-impact decisions require traceable, accountable data flows. We explore models where provenance, auditability and responsible stewardship are built into system design, not treated as afterthoughts.

  • Traceability of digital processes and datasets
  • Accountability structures across institutions
  • Practical models for public trust

Cross-cutting

Human-Centred Governance

Across all pillars, The Lumen starts from people: students, patients, staff, families, vulnerable groups. Governance must make sense at human scale and remain open to scrutiny.

  • Human-centred design in governance questions
  • Participation, clarity and explainability
  • Avoiding both overreach and under-protection

Methodology & Governance Thinking

The Lumen draws on traditions of governance research, systems thinking and risk analysis. The aim is not to add more slogans to the AI debate, but to produce structures that can be implemented, scrutinised and improved.

Layered Assurance

Rather than relying on single checks or static policies, we look at how assurance can be layered across:

  • Technology (models, data, infrastructure)
  • Institutions (roles, responsibilities, escalation paths)
  • Regulation & oversight (internal and external)

Smart-Ledger & Traceability Thinking

Inspired by work on smart ledgers and auditable records, we are interested in ways to make key decisions and data flows more traceable – not to control people, but to maintain trust and accountability.
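As a purely illustrative sketch (not part of any Lumen specification), a minimal hash-chained audit log shows the kind of traceability this thinking points to: each record commits to the hash of its predecessor, so altering any past record is detectable without any party controlling the data itself. The record fields and actor names below are hypothetical.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 over the entry's canonical JSON form."""
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()

def append_record(log: list, actor: str, action: str) -> dict:
    """Append a record that commits to the hash of the previous one."""
    prev = entry_hash(log[-1]) if log else "0" * 64
    record = {"actor": actor, "action": action, "prev_hash": prev}
    log.append(record)
    return record

def verify(log: list) -> bool:
    """Re-walk the chain; altering any earlier record breaks a later
    link. (The final record would be protected by publishing its hash.)"""
    prev = "0" * 64
    for record in log:
        if record["prev_hash"] != prev:
            return False
        prev = entry_hash(record)
    return True

# Example: two hypothetical records, then a simulated tampering attempt.
log = []
append_record(log, "registrar", "credential issued")
append_record(log, "examiner", "result confirmed")
assert verify(log)

log[0]["action"] = "credential revoked"  # tamper with history
assert not verify(log)
```

The point of the sketch is the design property, not the implementation: accountability comes from making alterations evident, not from surveilling the people involved.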

Proportionality & Independence

As a non-commercial initiative, The Lumen aims to explore measures that are proportionate to the risks involved, without defaulting to maximal data collection or maximal automation.

Early Advisory Input

The Lumen is exploring light-touch advisory relationships with experienced governance and research organisations to keep its concepts grounded and reviewable from the outset.

Key Concepts – including Certus

The Lumen focuses on a small number of core concepts that bring together AI, identity and accountability in practical ways. One of them is the Certus concept, currently in early exploration.

The Certus Concept (Identity Integrity)

Certus is an emerging framework idea for identity integrity in education and public-interest systems. It is not a product and not a tracker. Instead, it aims to:

  • strengthen the trustworthiness of identities and claims,
  • provide structured safeguards against impersonation,
  • avoid unnecessary exposure of personal information.

Details are kept at a conceptual level and are shared only in controlled settings or under appropriate confidentiality.

Trustworthy AI in Institutions

The Lumen does not chase “AI for everything”. It focuses on a narrow question: how can institutions use AI tools in ways that are explainable, governed and open to challenge?

  • Support for decision-making, not automated authority
  • Clear boundaries of use, especially in education & health
  • Alignment with UK and international governance guidance

Projects & Early Initiatives

The Lumen is in an early exploratory phase, forming collaborations with universities, public-sector bodies and selected partners. Initial strands include:

AI Assurance for Higher Education

Exploring how universities can introduce AI-based tools and processes while maintaining academic integrity, fairness and transparency.

  • Assurance models for assessment and credentialing
  • Governance patterns for AI use among students and staff
  • Alignment with sector guidance and best practice

Identity Integrity in Education & Public Services

Investigating how identity and credential integrity can be strengthened, including the Certus concept, without overburdening individuals or institutions.

  • Safeguards against identity misuse
  • Proportionate verification structures
  • Trusted flows across institutions

Accountable AI in Public-Interest Systems

Early-stage thinking on AI usage in health, social care and other public-interest domains, focusing on accountability and safety rather than optimisation alone.

Standards & Governance Patterns

The Lumen is examining how practical governance patterns might evolve into standardisation efforts, in dialogue with recognised governance and research organisations.

Governance Principles & Collaboration Modes

The Lumen is not a vendor and not a lobbying body. It is a governance-oriented initiative with a small number of clear principles and a set of collaboration modes that keep roles transparent.

Governance Principles

  • Transparency – processes and rationales should be explainable, not opaque.
  • Accountability – it should be clear who is responsible for what, across technology and institutions.
  • Human-Centred Design – people, not tools, are the starting point for design.
  • Proportionality – measures should fit the risks and avoid overreach.
  • Independence – as a non-commercial initiative, The Lumen aims to think freely and critically.

A more detailed formulation of our ethical stance can be found under Ethical Principles.

Collaboration Modes

  • Academic Partners – universities and research groups exploring joint projects and pilots.
  • Public-Sector Exploratory Partners – bodies interested in governance and assurance models.
  • Advisory Collaborators – organisations providing light-touch methodological or governance insight.
  • Oversight Contributors – selected institutions, including industry, participating in early review of concepts.

Collaboration is structured and non-intrusive, and aims to create shared understanding rather than proprietary lock-in.

Contact

Knut Herbst
Director – The Lumen

Email: office@the-lumen.org
Location: London WC2R, United Kingdom

Short concept notes and further details – including the Certus concept at a high level – are available on request and can be shared under appropriate confidentiality where necessary.