AI governance

A long table with mismatched chairs. Around it: a werewolf, two vampires, and someone who may be a wizard. On the chalkboard behind them: WHO DECIDES, and below it in slightly smaller chalk: IS IT EVEN LEGAL. A stack of policy documents sits at one end next to an inkpot with a quill in it.

The Home’s senior team has noticed AI. Specifically, they have noticed that other organisations are using it, that the sector press has published several articles about AI transforming charity operations, and that at least three recent funding applications opened with a sentence about “the transformative potential of intelligent automation.” The Director has asked the IT manager to prepare a brief on the Home’s AI strategy for the next board meeting.

The IT manager has asked the business analyst to help. The business analyst has been reading about adversarial machine learning and has sent the IT manager a link. The IT manager has read it. Neither of them has slept particularly well since.

The challenge is not that the departments are wrong to be curious. AI tools can genuinely help a stretched non-profit by taking on the routine, repeatable work that consumes time better spent elsewhere. The challenge is the gap between “we should use AI for this” and “we have thought carefully about what data this involves, what happens if the model is wrong, and whether any of this requires a data protection impact assessment.”

In an organisation with 200,000 supporter records, resident medical histories, and DBS reference numbers for 340 volunteers, that gap is where most of the risk lives.

What follows is a record of how that gap manifested in practice.