
Our view: the UK AI roadmap

The AI Council published its guidance today on what should constitute the UK's AI strategy.

The executive summary can be found here:

What is the AI council in the UK?

There are many government and civil society organisations dedicated to understanding AI, and navigating them can be a bit of a minefield. The AI council is part of the Office for AI which is: “a joint BEIS-DCMS unit responsible for overseeing implementation of the AI and Data Grand Challenge.”

Many countries are publishing (or have published) AI strategies, and they tend to share a similar framing. The opening paragraph typically states that the country believes AI will be a critical part of its future economy, and that it wants to harness the benefits of AI whilst managing the risks it presents. Most strategies state that they want to attract top AI talent to the country. As far as I have seen, there's very little diversity in the aims of countries, though I'm happy to be convinced otherwise.

The AI roadmap

The AI Council is made up of a number of individuals with expert knowledge of AI ethics and a real desire to ensure that AI is ethically driven. When reading the report, I got the impression the authors believed that an AI workforce would also include ethicists, anthropologists, and other professionals trained in humanities-based methodologies. For example, the report calls for the Alan Turing Institute to have a national presence, and the Alan Turing Institute comprises excellent ethics researchers as well as computer scientists and the like.

The report states that researchers are working across disciplines to meet AI challenges, which you would assume includes the humanities as well as the sciences. However, this was not explicitly called out. Even though no particular profession was singled out, I think it is still important to state explicitly that AI ethics researchers should be part of the future workforce, as this point is not part of the wider narrative around AI. The average person would likely assume an AI researcher is a computer scientist, not a philosopher.


Diversity and inclusion was a key focus of the second pillar. It is well known in the ethics world that a diverse workforce is one of the key ways to ensure that AI works for all of society. However, a diverse workforce should also be listened to, not just be there for show. The recent firing of Timnit Gebru at Google is a perfect example of a talented individual being edged out of a company for not 'toeing the company line'. She was valued as a diverse member of the organisation, but only so long as she stayed silent about the company's discriminatory practices.




An AI literacy programme was also alluded to. In the future, many of us will need to be AI literate even though we won't work directly in AI. Consider that today most of us can read and write, but few of us are professional writers. It was also encouraging to see health, social care and climate change recommended as clear areas where the UK could thrive. However, defence and security were also named as sectors where the UK could specialise. Some very troubling technologies are being developed in defence, so ethical oversight there is crucial.

What happens next?

The government is not obliged to act on recommendations from bodies like the Office for AI, the Alan Turing Institute and the AI Council. However, we can be sure that relevant and senior members of government have been briefed on them. I just hope that future iterations make clearer that AI development is a scientific, technological AND humanistic practice.

Stay up to date with the world of tech ethics by signing up to our newsletter here
