The impact of artificial intelligence (AI) on health, safety and wellbeing is gaining ever more attention.
AI solutions are crunching data from a huge range of sources – everything from the structured information in method statements and sickness records through to unstructured big data – to generate predictive risk assessments. You might be using drones that rely on sophisticated AI for commercial operations such as window cleaning, mitigating the risk of humans working at height. You’ll almost certainly find AI being used in surveillance cameras for access control, or to monitor worker behaviour for compliance purposes.
And perhaps less obviously – but just as pervasively – you may well find it’s being used in your organisation’s workflow management, journey planning or accounting systems to automate processes that were once done by a human being.
This last point is important. Technology is now embedded so deeply in our lives that we barely notice its influence. Or its cost. According to Eurostat, almost one-fifth of people aged 16-74 in the EU had used smart watches, fitness bands, connected goggles or headsets, safety trackers and similar devices in 2020.
So what’s the problem? Actually, there are several.
For a start, the use of machine learning algorithms to monitor worker performance is damaging people’s physical and mental health, according to legislators in the UK. And it may see people’s jobs and livelihoods either displaced by lower-skilled, lower-paid work, or replaced entirely.
Its use in video surveillance can be troubling. Not everyone is necessarily aware that they are being filmed, what the purpose of the surveillance is, or what happens to the data that is processed. For example, video surveillance used on a construction site to monitor PPE use will process a mass of surplus data that is not relevant to its purpose, making its use invasive. And when used for access control it can be discriminatory, as the datasets used to train the AI often under-represent people of colour, leaving the systems struggling to recognise them.
The EU’s proposed AI Act (2021/0106 (COD)) has a great deal to say about the ethical use of AI and its governance, including the use of video surveillance in public places for law enforcement purposes. It is ambitious in scope, though perhaps as a result it is making slow legislative progress.
Finally, there is the cost to the planet. Vendors talk of ‘cloud’ solutions, but as renowned academic Kate Crawford explains, they are not fluffy, and they are not harmless: “In reality, it takes a gargantuan amount of energy to run the computational infrastructures of Amazon Web Services or Microsoft’s Azure, and the carbon footprint of the AI systems that run on those platforms is growing.”
I am currently studying for a Master’s in AI Ethics and Society and would like to talk with safety practitioners for a research project about the AI solutions they employ, whether they relate to software programs or hardware such as the occupational use of cobots and robots. Please get in touch with me at [email protected] if you’d like to find out more.
Reference: Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021).
Writer – David Sharp FCIM FIWFM TechIOSH is CEO of International Workplace, a digital learning provider specialising in health and safety training. He is a student on the Master’s programme in AI Ethics and Society at the University of Cambridge, England.