1. Start with Human Needs, Not Data
Anchor every project in real user problems, not just available datasets.
Ask: “Whose problem am I solving, and how does AI make it easier?”
2. Design for Understanding, Not Obedience
Build AI that explains itself.
Users should understand recommendations, not just follow them. Include plain-language rationales or “why” features.
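As a minimal sketch of what a "why" feature might look like (the names, factors, and weights below are illustrative, not from any particular library), the key idea is to return the rationale alongside the recommendation rather than bolting it on later:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A recommendation paired with a plain-language rationale."""
    item: str
    score: float
    rationale: str

def recommend_with_reason(item: str, score: float,
                          top_factors: dict[str, float]) -> Recommendation:
    # Turn the highest-weighted factors into a short "why" sentence
    # a non-expert can read alongside the suggestion itself.
    strongest = sorted(top_factors, key=top_factors.get, reverse=True)[:2]
    return Recommendation(
        item=item,
        score=score,
        rationale=f"Suggested mainly because of {' and '.join(strongest)}.",
    )

# Hypothetical usage; the factor names and weights are made up.
rec = recommend_with_reason("evening yoga class", 0.87,
                            {"your past bookings": 0.6, "time of day": 0.3})
print(rec.rationale)  # Suggested mainly because of your past bookings and time of day.
```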
3. Include Diverse Perspectives
Involve people from varied backgrounds and disciplines in design and testing.
Diversity reduces blind spots and surfaces hidden biases.
4. Keep Humans in Control
Ensure users can override, question, or pause AI decisions.
Make “human-in-the-loop” a design principle, not an afterthought.
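One way to make the override concrete is a confidence gate: below a set threshold the model only proposes, and a person decides. This is a sketch under assumed names (`ask_human` could be anything from a CLI prompt to a review queue), not a prescribed design:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    confidence: float
    decided_by: str  # "model" or "human"

def gated_decision(proposal: str, confidence: float,
                   ask_human: Callable[[str], str],
                   threshold: float = 0.9) -> Decision:
    """Act autonomously only when confident; otherwise defer to a person."""
    if confidence >= threshold:
        return Decision(proposal, confidence, decided_by="model")
    # Below the threshold the model only proposes; a human confirms,
    # edits, or rejects before anything is executed.
    chosen = ask_human(f"Model proposes '{proposal}' "
                       f"({confidence:.0%} confident). Confirm or replace: ")
    return Decision(chosen, confidence, decided_by="human")

# Hypothetical usage: in a CLI, the human channel can simply be input().
decision = gated_decision("flag claim for fraud review", 0.62, ask_human=input)
```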
5. Prioritize Transparency
Be honest about what the system can and cannot do — and how it makes decisions.
Explain data sources, limitations, and intended use clearly.
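One lightweight way to keep those statements honest is to ship them as a machine-readable fact sheet next to the model, loosely in the spirit of the model cards mentioned in the next point. The field names and example values here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    """Plain statements of what a system is for and where it falls short."""
    name: str
    intended_use: str
    data_sources: list[str]
    known_limitations: list[str]
    out_of_scope: list[str] = field(default_factory=list)

sheet = ModelFactSheet(
    name="triage-assistant",  # hypothetical system
    intended_use="Rank support tickets for human agents; never auto-close.",
    data_sources=["historical tickets", "public product documentation"],
    known_limitations=["weaker on languages other than English"],
    out_of_scope=["legal or medical advice"],
)
```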
6. Audit for Fairness and Bias
Regularly test outputs for bias and unequal performance across groups.
Use frameworks like model cards, bias audits, or your Bias Buster checklist.
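A bias audit can start very simply: compare outcomes across groups and flag large gaps for investigation. The sketch below computes each group's positive-prediction rate and accuracy, plus a demographic-parity-style gap; the record shape is an assumption for this sketch, not a standard API:

```python
from collections import defaultdict

def audit_by_group(records):
    """Compare outcomes across groups.

    `records` is an iterable of (group, y_true, y_pred) tuples with
    binary labels; this shape is assumed for the sketch.
    """
    stats = defaultdict(lambda: {"n": 0, "pos": 0, "correct": 0})
    for group, y_true, y_pred in records:
        s = stats[group]
        s["n"] += 1
        s["pos"] += y_pred
        s["correct"] += int(y_true == y_pred)
    report = {g: {"positive_rate": s["pos"] / s["n"],
                  "accuracy": s["correct"] / s["n"]}
              for g, s in stats.items()}
    rates = [r["positive_rate"] for r in report.values()]
    # A large spread in positive rates is a flag worth investigating,
    # not an automatic verdict of unfairness.
    parity_gap = max(rates) - min(rates)
    return report, parity_gap
```

Run an audit like this on every release, not once: unequal performance across groups often appears only after the data drifts.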
7. Design for Emotional Intelligence
Consider tone, empathy, and trust in interactions.
Avoid overly robotic or manipulative communication styles.
8. Protect Privacy and Dignity
Minimise data collection and use only what’s essential.
Treat user data as borrowed, not owned.
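"Borrowed, not owned" translates naturally into an allowlist: anything not explicitly justified is never stored. A minimal sketch, with hypothetical field names:

```python
# Data minimisation as an allowlist: fields absent from this set are
# never persisted. Each entry should trace to a documented need.
ESSENTIAL_FIELDS = {"user_id", "language", "accessibility_prefs"}

def minimise(raw_profile: dict) -> dict:
    """Keep only the fields the feature genuinely requires."""
    return {k: v for k, v in raw_profile.items() if k in ESSENTIAL_FIELDS}

profile = minimise({
    "user_id": "u-123",
    "language": "en-GB",
    "accessibility_prefs": {"large_text": True},
    "precise_location": (51.5, -0.12),  # not essential: dropped
})
```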
9. Reflect Human Values
Align AI behaviour with the social, ethical, and cultural values of the context it serves.
Embed ethics reviews early, not post-launch.
10. Continuously Learn from Users
Use feedback loops to refine models and interfaces.
Human-centric AI is a moving target — keep adapting as people and contexts change.
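A feedback loop does not have to be elaborate. The sketch below keeps a rolling window of thumbs-up/down signals and surfaces features whose approval has slipped, so each model or interface refresh starts from what users actually said; the class name and thresholds are illustrative:

```python
from collections import Counter, deque

class FeedbackLoop:
    """Rolling record of user feedback, reviewed before each refresh."""

    def __init__(self, window: int = 500):
        # Only recent signals matter; people and contexts change.
        self.recent = deque(maxlen=window)

    def record(self, feature: str, helpful: bool) -> None:
        self.recent.append((feature, helpful))

    def trouble_spots(self, min_votes: int = 20,
                      threshold: float = 0.6) -> list[str]:
        """Features whose recent approval rate fell below `threshold`."""
        votes, ups = Counter(), Counter()
        for feature, helpful in self.recent:
            votes[feature] += 1
            ups[feature] += helpful
        return [f for f in votes
                if votes[f] >= min_votes and ups[f] / votes[f] < threshold]
```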