Goals are dangerous

The immediate danger of AI is that it is very effective at achieving what companies ask of it – but those goals are not consistent with a broader definition of human progress.

For example, a business that wants to maximise advertising revenue might use AI to curate a stream of fast-moving video content that’s optimised to trigger the user’s emotions and keep them glued to the screen. It’s difficult to argue that this is contributing to the progress of humanity or even the wellbeing of that individual user, but that wasn’t the goal.

Douglas Adams satirised this beautifully in the Hitchhiker’s Guide to the Galaxy. The Dolmansaxlil Galactic Shoe Corporation sold shoes that were always too wide or otherwise never quite fit, so people were forever replacing them. This is why, on the planet Brontitall, there is a geological layer made entirely of compressed shoes. The planet’s inhabitants eventually evolved into birds and now refuse to speak of shoes.

Companies often fall back on Milton Friedman’s doctrine that the sole social responsibility of a business is to increase profits for its shareholders. But that ideology falls down when a large number of other stakeholders – such as customers or employees – refuse to cooperate.

“Ethical AI” is often approached much like corporate social responsibility (CSR) programmes – as an attempt to smooth the edges of a fundamentally unsavoury corporation and make it acceptable to the wider public.

When done right, AI Service Design is an opportunity to zoom out and look at the larger picture. Does our solution make life better for all of our stakeholders?

This presents a challenge to some forms of narrow AI (e.g. reinforcement learning and evolutionary algorithms), which rely on having a precise, well-defined goal to optimise towards.
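
To make that dependence concrete, here is a toy sketch – a simple hill-climber standing in for an evolutionary algorithm – maximising a single hypothetical “engagement” score. Every function name and number is invented for illustration; the point is that the optimiser pursues exactly the scalar it is handed, and nothing else.

```python
import random

def engagement(params):
    """Hypothetical stand-in for a measured objective such as average
    watch time. The optimiser only ever sees this one number."""
    pacing, emotional_intensity = params
    # Toy response surface: more intensity and faster pacing raise the
    # score. There is no term for user wellbeing, because nobody wrote one.
    return 10 * emotional_intensity + 5 * pacing

def hill_climb(steps=1000):
    """(1+1)-style local search: keep any mutation that raises the score."""
    params = [random.random(), random.random()]
    best = engagement(params)
    for _ in range(steps):
        candidate = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in params]
        score = engagement(candidate)
        if score > best:  # the stated goal, and nothing else, decides
            params, best = candidate, score
    return params, best

params, best = hill_climb()
print(f"optimised params={params}, engagement={best:.2f}")
```

Whatever we can or can’t encode in engagement() is the whole of the system’s ethics – which is exactly the limitation the next approach tries to address.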

However, other forms of AI – supervised learning in particular – can deal better with the complexity of reality by learning from the judgements and behaviours of humans. The key questions then become:

  • are we sampling desirable judgements from “the best version of ourselves”, or are we learning behaviour shaped by the suboptimal environments humans find themselves in – a misguided corporation, for example?
  • are we gathering data from all of our stakeholders, or only from a narrow group, such as our customers? (the sketch after this list makes this concrete)
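
As a minimal illustration of that second question – assuming a scikit-learn-style classifier, with the features, rater pools and labels all invented for the sketch – the same supervised-learning pipeline produces different judgements depending purely on whose labels were collected:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # toy content features

# Two hypothetical rater pools judging the same content differently --
# say, customers versus a wider set of stakeholders.
labels_customers = (X[:, 0] > 0).astype(int)
labels_wider_group = (X[:, 0] + X[:, 1] + X[:, 2] > 0).astype(int)

model_a = LogisticRegression().fit(X, labels_customers)
model_b = LogisticRegression().fit(X, labels_wider_group)

# The "same" pipeline, trained on different people's judgements,
# disagrees with itself on a meaningful share of items.
disagreement = (model_a.predict(X) != model_b.predict(X)).mean()
print(f"models disagree on {disagreement:.0%} of items")
```

Neither model is wrong in any technical sense; each faithfully reproduces the judgements – biases included – of whoever we chose to sample.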

These are complex questions that will overwhelm many data scientists – and that is exactly where AI Service Designers need to come in.

Realistically, the answer isn’t to overcomplicate AI systems by stuffing them with nuanced judgements from a wide range of stakeholders. That would be an interesting and noble project, but one that would stretch the resources of many companies. Perhaps there is an opportunity for an AI-powered “ethical monitoring” service that helps companies navigate the fuzzy, complex work of ensuring their behaviour benefits – or at least doesn’t harm – people and the planet.

Until such a system exists, we need to:

  • design our systems in a way that acknowledges the limitations of narrow AI
  • employ human judgement to deal with ethical dilemmas and edge cases (a minimal routing sketch follows this list)
  • train humans to deal effectively and creatively with complex situations
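
As a sketch of the second and third points in practice, here is a minimal human-in-the-loop routing pattern, assuming a scikit-learn-style classifier with reasonably calibrated probabilities. The threshold, data and names are all illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

THRESHOLD = 0.9  # hypothetical confidence cut-off; tune per use case

def decide(model, item, review_queue):
    """Automate routine cases; escalate anything uncertain to a human."""
    proba = model.predict_proba(item.reshape(1, -1))[0]
    if proba.max() >= THRESHOLD:
        return int(proba.argmax())  # confident: decide automatically
    review_queue.append(item)       # uncertain: queue for human judgement
    return None

# Toy training data standing in for past human decisions.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

queue = []
decisions = [decide(model, x, queue) for x in X[:50]]
print(f"{sum(d is not None for d in decisions)} automated, {len(queue)} escalated")
```

The design choice is that uncertainty, not convenience, decides what gets automated: routine cases flow through, and the genuinely ambiguous ones land in front of a person trained to handle them.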

Perhaps the employment prospects of Philosophy graduates are looking up?