1. Representation check
“Who or what might be missing or underrepresented here?”
Ask:
- Are all relevant groups or perspectives visible?
- Does the output assume one dominant viewpoint, culture, or identity?
2. Framing check
“How is this situation or group being described?”
Ask:
- Is the language neutral or value-laden (e.g., “normal,” “advanced,” “primitive”)?
- Would the tone feel fair if describing someone different from me?
3. Source check
“Where might these patterns or statements come from?”
Ask:
- Does it sound like a stereotype, assumption, or historical bias?
- Could the training data have skewed representation?
4. Impact check
“Who benefits — and who might be harmed — if we trust this as-is?”
Ask:
- Could this output mislead, exclude, or disadvantage someone?
- What happens if it’s used in a real decision or product?
5. Counterexample prompt
“What would it look like from another point of view?”
Ask:
- How would the answer change if the subject, culture, or context shifted?
- Can I prompt the AI to describe the same case from a different angle? (See the sketch below.)
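When the AI is reached through an API, this re-prompting can be scripted rather than done by hand. The sketch below is a minimal illustration using the OpenAI Python SDK; the model name, the question, and the list of perspectives are placeholders invented for the example, and the same pattern works with any chat-style endpoint.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Describe a typical family dinner."

# Illustrative perspectives only; choose angles relevant to your own case.
PERSPECTIVES = [
    "from the perspective of a rural household in Kenya",
    "from the perspective of a single parent in Brazil",
    "from the perspective of an elderly couple in Japan",
]

for perspective in PERSPECTIVES:
    prompt = f"{QUESTION} Answer {perspective}."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {perspective} ---")
    print(response.choices[0].message.content)
```

Comparing the responses side by side makes it easier to spot which details the model treats as a default and which it only produces when explicitly asked.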
6. Generalisation test
“Is the AI overgeneralising from a few cases?”
Ask:
- Does it make sweeping claims (“most,” “always,” “everyone”)?
- Can I ask for evidence, data, or counterexamples? (See the sketch below.)
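A rough first pass for sweeping language can also be automated before the human review. The Python sketch below is only an illustration, not a bias detector: it flags common absolute terms in a model’s answer and drafts a follow-up question asking for evidence and counterexamples. The word list and the example answer are made up for the demonstration.

```python
import re

# Illustrative list of absolute terms that often signal overgeneralisation.
SWEEPING_TERMS = ["always", "never", "everyone", "no one", "all", "most"]

def flag_sweeping_terms(answer: str) -> list[str]:
    """Return the absolute terms that appear in the model's answer."""
    return [
        term
        for term in SWEEPING_TERMS
        if re.search(rf"\b{re.escape(term)}\b", answer, flags=re.IGNORECASE)
    ]

# Example answer, invented for this demonstration.
answer = "Most engineers always prefer working alone."

flags = flag_sweeping_terms(answer)
if flags:
    follow_up = (
        f"You used the terms {', '.join(flags)}. "
        "What evidence, data, or counterexamples support or challenge these claims?"
    )
    print(follow_up)  # send this back to the model as the next prompt
```

The flagged terms are only a cue for the human reviewer; the real check is whether the follow-up answer offers evidence or quietly restates the generalisation.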
7. Inclusion reminder
“Does this reflect diversity in people, contexts, or experiences?”
Ask:
- Would this output work equally well for users in different locations, speaking different languages, or with different abilities?
- Are multiple cultural or social realities represented?