Our work follows the principles of the European Network of Living Labs (ENoLL).
This ensures that every Living Lab engagement is grounded in real use, real people, and measurable outcomes.
Technology adoption depends on people and organisations, not systems alone.
We embed Agile People principles across our engagements.
These principles allow risks to surface early, when they can still be addressed responsibly.
We study not only what AI systems do, but how people perceive, trust, and relate to them.
This pillar integrates these human dimensions into design and evaluation. The result is technology that is explainable, governable, and aligned with human expectations.
[Phase 1 – Framing]
– Stakeholder co-creation
– Risk & power mapping
– AI role definition
[Phase 2 – Experimentation]
– Real-life pilots
– Trust & usability monitoring
– Human oversight
[Phase 3 – Scale]
– AI Act positioning
– TRL & SRL (technology and societal readiness level) confirmation
– Regulatory gap analysis
[Cross-phase safeguards]
– Fairness Gates
– Human Oversight Points
– Harms Taxonomy
Our methodologies have been applied across AI HealthTech, EdTech, workplace technologies, and social innovation, generating independent evidence for EU funding, regulation, and adoption decisions.