Innovation in Artificial Intelligence offers a wealth of transformative growth opportunities across sectors. From healthcare to education, automotive to retail, employment to financial services, no sector is untouched by the potential of AI-enhanced intelligent automation. Yet the potential for unintended harm is ever present, impacting individuals, organisations, communities, and the environment. To navigate this wave of rapid AI innovation, we must critically assess how to harness its power while controlling the risks, many of which emerge as the surge rolls on. Responsible innovation has become a business imperative for AI innovators.
The discipline of responsible innovation has its roots in academia, dating back to early preparations for the European Commission’s Horizon programme for science funding. In recent years, adoption of and demand for responsible innovation have soared, as awareness has grown of the profound intersection between technological progress and its ethical and societal consequences. With AI’s power to augment, automate, and accelerate, businesses are recognising the need to ensure their innovations are both trusted and trustworthy.
The need for responsible AI
While the applications and benefits of AI are numerous, headlines and research highlighting harm or disparity perpetuated by AI follow closely behind. In healthcare, AI technologies can revolutionise diagnosis and treatment, accelerating sluggish processes, increasing accuracy, and freeing doctors to provide excellent patient care. Yet these benefits may not be equally distributed: evidence suggests AI may exacerbate existing health inequalities experienced by minority ethnic groups, with lower efficacy for Black and Asian patients than for white patients.
Privacy requires significant consideration, not only in healthcare but in any AI application that processes personal data. Clearview AI, a facial recognition system designed to support law enforcement and government agencies, has been fined for breaches of GDPR across the EU and in the UK, and has been subject to binding orders for privacy violations in Canada. While the UK fine issued by the Information Commissioner’s Office (ICO, the UK authority for data privacy) was overturned on appeal, the ICO has noted that it retains the ability to act against companies that violate UK GDPR, with penalties of up to 4% of total annual worldwide turnover.
Unfair or biased outcomes pose a further risk. AI tools are increasingly used in employment contexts, streamlining time-consuming CV screening, assessments, and hiring decisions, conducting surveillance, and even firing employees, but these new approaches can reinforce and amplify human bias. In a well-known example from 2018, Amazon scrapped an AI recruitment tool that was found to preferentially select male candidates, having erroneously learned from the company’s primarily male workforce to discriminate based on gender cues.
The explosion of Generative AI adds further potential adverse impacts, with ethical questions surrounding the copyright of authors and artists, and harmful outputs posing significant risks to the safety of vulnerable users. Businesses, whether developers or deployers of AI, must become aware of their responsibilities for the actions of their AI systems, with Air Canada recently held liable for incorrect information shared by its customer care chatbot.
How do adverse impacts arise?
In the example of Amazon’s recruitment tool, the system learned bias from its training data. If an AI algorithm is trained on datasets that embed bias, it will produce biased outputs that can lead to unintended consequences. Bias may originate in data collection, arising from real-world disparities in society or from sampling bias in the choices of data collected. Cognitive biases can creep in through data labelling processes. Bias can also become embedded at the stage of problem formulation, through AI model design and testing, and through biased deployment decisions.
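As a concrete illustration of how such learned bias can be detected, the minimal Python sketch below computes per-group selection rates from a screening system’s decisions and summarises them with the disparate impact ratio, a simple fairness metric linked to the “four-fifths rule” used in US hiring guidance. The data and group labels here are hypothetical, and a real bias audit would go far beyond a single metric.

```python
# Minimal sketch: one symptom of learned bias in a screening system's
# outputs is unequal selection rates between demographic groups.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [n_selected, n_total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.
    Values below ~0.8 (the 'four-fifths rule' from US hiring
    guidance) are a common flag for potential adverse impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening decisions: (group, was the candidate shortlisted?)
decisions = [
    ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False),
]
rates = selection_rates(decisions)
print(rates)                          # approx. {'male': 0.67, 'female': 0.33}
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, flags a disparity
```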
Across this vast range of AI applications, risks, and impacts, it becomes clear that there is no one-size-fits-all solution, nor is any solution purely technical; the risks and mitigations are highly context dependent, and require human oversight and governance.
Responsible AI brings business benefits
Trustworthy AI makes business sense. It builds brand value, bolsters ESG initiatives, and helps attract and retain valuable employees, who increasingly expect ethical corporate practices. In a 2023 survey of IT professionals by BCS, The Chartered Institute for IT, 90% of respondents said they would consider a potential employer’s reputation for ethical use of AI and other emerging technologies.
In the same survey, 88% of participants believed the UK government should take the lead in shaping global ethical standards. Yet, despite 39% of UK companies consistently using AI technology in daily operations, employers’ responses indicated that many are not yet prepared for AI governance, with little support in place to deal with ethical issues.
Principles of Responsible AI
To become a trusted developer, provider, or deployer of AI systems, establishing a set of Responsible AI principles provides a foundation on which to guide responsible innovation practices throughout the innovation lifecycle.
The UK white paper on AI regulation, published in March 2023, proposes five cross-sectoral principles for responsible AI innovation:

- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Emerging legal and governance frameworks
Around the globe, nations are designing AI governance legislation, with many, like the UK, publishing white papers, national AI strategies, and guidelines ahead of regulation. The EU Artificial Intelligence Act is the first comprehensive regulation on AI and was approved by the European Parliament in March 2024. Most of its provisions will apply after a 24-month implementation period following its publication and entry into force.
The EU AI Act aims to protect fundamental rights, democracy, and the environment while boosting innovation, and will apply to all developers and deployers of AI systems that are marketed or used in the EU. It adopts a risk-based approach, establishing obligations based on potential risks and levels of impact. AI applications that threaten citizens’ rights are banned, such as predictive policing, emotion recognition in the workplace and schools, or any AI that exploits people’s vulnerabilities. High-risk uses of AI are those with significant potential for harm, including uses in critical infrastructure, employment, education, healthcare, and finance. Developers and deployers of these systems must meet governance obligations such as risk assessment and management, human oversight, monitoring and logging, and transparency.
Frameworks that support businesses in adopting responsible AI practices are emerging alongside these regulations. The U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework is a prominent example, providing a flexible resource that equips organisations to identify and manage the unique risks associated with AI systems and to adapt as the AI landscape develops. ISO, the International Organization for Standardization, has released ISO/IEC 42001, a standard for establishing an Artificial Intelligence Management System, addressing policies, objectives, and processes related to the responsible development, provision, or use of AI.
Putting responsible innovation principles into practice
Putting responsible innovation principles into practice will enable your organisation to develop and deploy AI systems that are safe and trusted, and to maximise their positive impact.
It is important to recognise that AI innovation is not a purely technical challenge but a socio-technical one. In this rapidly changing AI landscape, there are complex interactions between data, technology, and impacts on individuals, organisations, society, and the environment, with inherent uncertainties and risks.
To develop an effective AI governance framework, start from your principles of responsible AI. Identify which of your AI innovations are high risk, and engage diverse stakeholder groups to provide input and feedback on your approach. Assign accountability and responsibility, considering whether new roles, responsibilities, skills, and structures are needed to implement effective governance.
With good governance and effective guardrails, AI innovations can deliver safe and equitable outcomes that unlock untapped growth potential.