
High-consequence leaders face a dilemma:
Artificial intelligence promises faster, smarter decisions from vast data.
However, our research shows that compelling dashboards often conceal a precarious truth: false confidence in black-box algorithms that carry high margins of error and lead to misinformed decisions, harm to vulnerable populations, and wasted investment in flawed technology.
We have reimagined the problem.
Instead of chasing contested labels like hate speech, we studied the structure and spread of hostile narratives: recurring storylines that target identities, institutions, and norms.
The idea came from deep research into how to tackle hate speech offline and online. We kept hitting the same wall. There is no shared definition of hate speech. Law, platforms, and civil society all use the term in different ways. Free speech is just as ambiguous, and the two ideas often collide.
We ran mixed-method studies across platforms and languages. We mapped narrative frames, actors, claims, and calls to action. We tested state-of-the-art AI against human judgements.
The results were clear. Models could flag slurs, but they missed context, irony, and dog-whistles. They mistook counter-speech for abuse and let coded hostility pass. Humans can tell the difference at a glance. Machines cannot.
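For readers who want a concrete picture of what testing a model against human judgements can involve, here is a minimal Python sketch. The three label names and the two example lists are hypothetical, and chance-corrected agreement (Cohen's kappa) with a confusion matrix is just one common way to surface systematic confusions such as counter-speech flagged as abuse; it is an illustration, not our exact evaluation protocol.

from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical annotations of the same ten posts by human coders and a model.
human = ["hostile", "counter_speech", "neutral", "hostile", "counter_speech",
         "hostile", "neutral", "hostile", "counter_speech", "neutral"]
model = ["hostile", "hostile", "neutral", "neutral", "hostile",
         "hostile", "neutral", "hostile", "hostile", "neutral"]

labels = ["hostile", "counter_speech", "neutral"]

# Chance-corrected agreement between model and human judgements.
kappa = cohen_kappa_score(human, model)

# Rows: human label, columns: model label. Off-diagonal cells expose
# systematic errors, e.g. counter-speech being flagged as hostile.
matrix = confusion_matrix(human, model, labels=labels)

print(f"Cohen's kappa: {kappa:.2f}")
print(matrix)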
These results are shaping our high-integrity narrative analytics toolkit, which combines narrative analysis methodologies with graph-based text analytics. The toolkit is still under development; get in touch if you want to become a co-design partner.
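To give a flavour of what graph-based text analytics can look like, here is a minimal Python sketch using networkx. The node types (actor, claim, frame, call to action) and the example entries are hypothetical illustrations under our own assumptions, not the toolkit's actual schema; the point is simply that narrative elements and their relations can be queried as a graph.

import networkx as nx

G = nx.DiGraph()

claim = "claim: 'Institution Y is secretly controlled by Group X'"
frame = "frame: threat to national identity"
action = "call to action: 'boycott Institution Y'"

# Hypothetical narrative elements extracted from a set of posts.
G.add_node("Account A", kind="actor")
G.add_node("Institution Y", kind="target")
G.add_node(claim, kind="claim")
G.add_node(frame, kind="frame")
G.add_node(action, kind="call_to_action")

# Edges record who advances which claim, who it targets,
# and which recurring storyline (frame) it feeds.
G.add_edge("Account A", claim, relation="advances")
G.add_edge(claim, "Institution Y", relation="targets")
G.add_edge(claim, frame, relation="supports")
G.add_edge(frame, action, relation="motivates")

# Structural queries: which claims feed a given frame,
# and which nodes sit at the centre of the narrative.
supporting_claims = [u for u, _, d in G.in_edges(frame, data=True)
                     if d["relation"] == "supports"]
centrality = nx.degree_centrality(G)

print(supporting_claims)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])

Representing narratives this way turns questions like "which claims feed this storyline, and who spreads them?" into structural queries rather than keyword searches.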
Learn more on the next page.


