Trust Scan

Responsible AI must be visible – that’s exactly what the AIGN Public Trust Scan makes possible.

The AIGN Public Trust Scan – Our Active Contribution to a More Accountable AI Future

Responsible AI needs public accountability – and we don’t wait for permission. At AIGN, we take responsibility seriously. That’s why we proactively evaluate organizations and companies around the world – whether they ask us to or not.
With the AIGN Public Trust Scan, we offer transparency by analyzing:
Public websites, policy documents, ESG reports, ethics declarations, press releases, and research publications.

Our mission is clear: to make responsibility in AI visible – for the public, for stakeholders, for the future.
Because in a world shaped by artificial intelligence, people deserve to know:
Who leads responsibly – and who just talks about it.

Our vision is clear:

In the future, organizations will not only be judged by what they do with AI – but by how responsibly they govern it.

Each organization is evaluated using a structured internal assessment model, covering six key dimensions of Responsible AI:

🌐 Governance & Leadership
🌐 Transparency & Explainability
🌐 Ethics & Human Rights
🌐 Data Quality & Privacy
🌐 Robustness & Security
🌐 Participation & Accountability

Each category is scored internally using a 0–100 point system.
Organizations that reach a minimum of 80 points are automatically awarded the AIGN Trust Label – as a public signal of strong AI governance.
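The scoring rule above can be sketched in code. Note that this is an illustrative assumption only: the dimension names come from the list above, but the equal weighting (a simple mean across the six dimensions) is hypothetical, since AIGN's actual weighting and evaluation logic are intentionally not fully public.

```python
# Illustrative sketch of the Trust Scan scoring rule.
# ASSUMPTION: equal weighting across dimensions -- AIGN's real
# weighting factors are not publicly disclosed.

DIMENSIONS = [
    "Governance & Leadership",
    "Transparency & Explainability",
    "Ethics & Human Rights",
    "Data Quality & Privacy",
    "Robustness & Security",
    "Participation & Accountability",
]
TRUST_LABEL_THRESHOLD = 80  # minimum score for the AIGN Trust Label


def overall_score(scores: dict[str, float]) -> float:
    """Aggregate per-dimension scores (each 0-100) into one overall score."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    for name, value in scores.items():
        if not 0 <= value <= 100:
            raise ValueError(f"{name}: score {value} outside 0-100")
    # Equal weighting is an assumption, not AIGN's published method.
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)


def earns_trust_label(scores: dict[str, float]) -> bool:
    """True if the overall score meets the 80-point threshold."""
    return overall_score(scores) >= TRUST_LABEL_THRESHOLD
```

An organization scoring, say, 85 in every dimension would clear the threshold under this equal-weighting assumption, while one scoring 70 across the board would not.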

The AIGN Trust Label is awarded exclusively based on a formal, independent AIGN Trust Scan – not via self-assessment or self-declaration.

The Trust Label reflects:

🌐 Leadership in Responsible AI
🌐 Trust in governance, ethics, and data integrity
🌐 Transparency towards customers, partners, regulators, and the public

It is not requested or applied for – it is awarded proactively by AIGN, based on objective analysis and verifiable public data.

We believe in measured transparency. Our methodology is grounded in internationally recognized standards, including the OECD AI Principles, the EU AI Act, and leading cross-sectoral best practices.

At the same time, we recognize that full disclosure of all weighting factors and evaluation logic could:

🌐 Encourage gaming the system (e.g., AI washing through surface-level communications)
🌐 Compromise the integrity and objectivity of our Trust Scans
🌐 Undermine the independence of our evaluation framework

That’s why we’ve chosen a deliberate middle path:

“The logic behind the AIGN Trust Scan is consistent, principled, and aligned with global frameworks – yet intentionally not fully public. This ensures transparency without compromising the independence and effectiveness of our methodology.”

We are open to expert-level dialogue around our approach – including panel discussions, research collaborations, or direct exchanges with regulatory bodies and institutions. Our goal is not only to assess responsible AI, but to actively shape its evolution.

All Trust Scan results – including score, year of issuance, and a summary justification – are published in the public AIGN Trust Index.
The Index is continuously updated and offers a global, cross-sector overview of organizations that demonstrate responsible AI use – whether they are corporates, SMEs, startups, or public institutions.

🌐 Visibility builds trust – with customers, partners, investors, and regulators
🌐 Comparability drives innovation – responsibility becomes measurable
🌐 Public accountability inspires change – and motivates continuous improvement

📩 Want to be included in the AIGN Trust Index or initiate a reassessment?
Reach out to us at message@now.digital