Trust, but verify.

We help companies make their AI systems responsible and trustworthy. By operationalizing risk assessment and risk mitigation for AI products, we enable companies to align their AI systems with the required or desired level of trust.

Let's get you started!

Who you are.

You are a company, an AI start-up, or a machine learning engineer building, testing, and serving an AI product. Your objective is to make your AI system transparent, accountable, fair, reliable, and privacy-compliant. You could also be an end consumer, a government body, or a regulatory organization that wants to assess the trustworthiness and reliability of a given AI system.

What we do.

We facilitate the alignment of your AI system within its socio-technical context. Using an assessment tool, our product estimates and certifies compliance with an established set of orthogonal trust values. Mapped into a report, the assessment becomes an integral part of the requirements definition, development process, test suite, and release audits of your AI systems and products.

Your answer to the EU AI Act.

Our process is driven by a value-based assessment model rooted in VDE SPEC 90012 V1. It allows you to identify risks, provides actionable improvement opportunities, and ensures compliance with EU AI Act requirements.

We are enthusiastic about making it hard for you to make bad decisions when developing, releasing, and serving AI systems and products.

Mark Schutera

Mark is our Solution Architect: he creates the overall vision for our techno-ethical answers to deploying trustworthy AI systems.

Luca Rettenberger

Luca is our Responsible AI Engineer: he builds assessment modules and system interfaces for socio-technical AI components.