Human-Centric Ethical Innovation Methodologies

The GoodTech Living Lab applies a human-centric, ethics-by-design methodology to the validation of AI and digital technologies. Our methodologies are designed to generate real-world evidence on trust, safety, inclusion, and governance — not just technical performance.

They combine Living Lab practice, organisational psychology, and ethical AI frameworks to support responsible adoption, EU funding readiness, and scaling in regulated environments.

THE 3 PILLARS

ENoLL Living Lab Best Practices

Our work follows the principles of the European Network of Living Labs (ENoLL):

  • Real-life experimentation in authentic contexts
  • Multi-stakeholder co-creation and validation
  • Openness, transparency, and independence
  • Evidence-based assessment of impact and adoption

This ensures that every Living Lab engagement is grounded in real use, real people, and measurable outcomes.

Agile People Principles

Technology adoption depends on people and organisations, not systems alone.
We embed Agile People principles to ensure:

  • Psychological safety for users, professionals, and teams
  • Inclusion, participation, and shared ownership
  • Continuous learning and feedback loops
  • Ethical decision-making under uncertainty

These principles allow risks to surface early, when they can still be addressed responsibly.

Ethical AI & Psychology of AI

We study not only what AI systems do, but also how people perceive, trust, and relate to them.

This pillar integrates:

  • Ethical AI frameworks and the EU AI Act's risk-based logic
  • Psychology of trust, reliance, and over-reliance
  • Emotional and cognitive impact analysis
  • Human-AI interaction dynamics

The result is technology that is explainable, governable, and aligned with human expectations.

THE 3-PHASE MODEL

[Phase 1 – Framing]
– Stakeholder co-creation
– Risk & power mapping
– AI role definition

[Phase 2 – Experimentation]
– Real-life pilots
– Trust & usability monitoring
– Human oversight

[Phase 3 – Scale]
– AI Act positioning
– TRL & SRL confirmation
– Regulatory gap analysis

What we validate:

Transparent Explanations

Clear, human-understandable explanations of:
  • what the AI does and does not do
  • how decisions are made
  • where human oversight applies

Fairness Gates

  • detect bias and exclusion risks
  • assess disproportionate impact
  • prevent harm before scale (see the illustrative sketch below)
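
To make the idea of a fairness gate concrete, the minimal Python sketch below checks whether the selection rate for the least-favoured group stays within a chosen ratio of the most-favoured group, a disparate-impact style test. The group names, the 0.8 threshold, and the data are illustrative assumptions, not GoodTech's actual tooling.

    # Minimal illustrative sketch of a fairness gate.
    # Group names, the 0.8 threshold, and the pilot data are hypothetical.

    def selection_rates(outcomes: dict) -> dict:
        """Share of positive outcomes (1s) per group."""
        return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

    def fairness_gate(outcomes: dict, min_ratio: float = 0.8) -> bool:
        """Pass only if the lowest group's selection rate is at least
        min_ratio of the highest (a disparate-impact style check)."""
        rates = selection_rates(outcomes)
        return min(rates.values()) / max(rates.values()) >= min_ratio

    # Example: pilot outcomes split by a monitored attribute (hypothetical data).
    pilot = {"group_a": [1, 1, 0, 1, 1], "group_b": [1, 0, 0, 0, 1]}
    if not fairness_gate(pilot):
        print("Fairness gate failed: review for bias before scaling.")

In a pilot, a failing gate of this kind would typically prompt qualitative review and redesign rather than an automatic verdict on the technology.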

Human Oversight Points

We define the points in each pilot where:

  • human judgement is required
  • automated decisions are reviewed or overridden
  • accountability remains clearly assigned (see the sketch below)
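
As an illustration only, the sketch below shows how such an oversight point might be expressed in code: decisions that are high-impact or low-confidence are routed to a human reviewer before any action is taken. The field names, confidence floor, and example decision are hypothetical assumptions.

    # Minimal illustrative sketch of a human oversight point.
    # Field names, the confidence floor, and the example decision are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        subject: str
        outcome: str
        confidence: float
        high_impact: bool  # e.g. affects health, employment, or access to services

    def needs_human_review(d: Decision, confidence_floor: float = 0.9) -> bool:
        """Escalate if the decision is high-impact or the model's
        confidence falls below the agreed floor."""
        return d.high_impact or d.confidence < confidence_floor

    # Example decision from a pilot (hypothetical data).
    decision = Decision(subject="case-017", outcome="decline",
                        confidence=0.72, high_impact=True)
    if needs_human_review(decision):
        print(f"{decision.subject}: escalated for human review before any action.")

In practice, the thresholds, impact criteria, and reviewer roles would be agreed with stakeholders during Phase 1 framing.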

Harms Taxonomy

A structured classification of potential harms, used to assess and track risk before and during each pilot.

Explore the Evidence

Our methodologies have been applied across AI HealthTech, EdTech, workplace technologies, and social innovation, generating independent evidence for EU funding, regulation, and adoption decisions.