This article does not deal with Artificial General Intelligence – a form of artificial intelligence that could perform a wide variety of cognitive tasks much as a human can – but rather with weak or narrow AI, the type used within business to carry out specific, defined tasks. It is necessary to distinguish between the methods by which AI algorithms learn, the forms those algorithms take, and the business use cases to which they are then applied.
I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. - Alan Turing, 1950
Machine learning – AI algorithms do not follow explicitly preprogrammed instructions but rather discover patterns in data and can then make self-determined decisions based on those patterns.
Deep learning – deep learning is an advanced form of machine learning that uses multiple layers of machine-learning models, typically neural networks. In these algorithms it becomes increasingly difficult to decipher how data inputs drive decision-making results.
Supervised learning – the AI algorithm is trained using labelled training data matched to labelled output data; it then learns to suggest outcomes for new data sets based on the original training data.
Unsupervised learning – the algorithm uses unstructured or unlabelled data sets to find patterns and suggested outcomes.
Reinforcement learning – an AI algorithm automatically determines the ideal behaviour within a specific context in order to maximise its performance and outputs. Algorithms are not given specified outcomes but are instead forced to learn the best outcomes by trial and error.
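The distinction between supervised and unsupervised learning above can be made concrete with a deliberately simple sketch. All of the data and thresholds here are hypothetical, and the methods (a one-nearest-neighbour classifier and a two-centre k-means clustering) are minimal illustrations of each paradigm, not production techniques.

```python
# --- Supervised learning: labelled inputs matched to labelled outputs ---
# Hypothetical training data: transaction amounts labelled "low" or "high" risk.
training = [(10, "low"), (15, "low"), (900, "high"), (1200, "high")]

def predict(amount):
    """1-nearest-neighbour: label a new amount by its closest training example."""
    nearest = min(training, key=lambda pair: abs(pair[0] - amount))
    return nearest[1]

# --- Unsupervised learning: no labels; the algorithm finds the groups itself ---
def cluster(values, passes=10):
    """Two-centre k-means: repeatedly assigns each value to its nearest
    centre, then moves each centre to the mean of its group."""
    lo, hi = min(values), max(values)
    group_lo, group_hi = [], []
    for _ in range(passes):
        group_lo = [v for v in values if abs(v - lo) <= abs(v - hi)]
        group_hi = [v for v in values if abs(v - lo) > abs(v - hi)]
        if group_lo:
            lo = sum(group_lo) / len(group_lo)
        if group_hi:
            hi = sum(group_hi) / len(group_hi)
    return sorted(group_lo), sorted(group_hi)

print(predict(20))                          # -> 'low' (closest example is 15)
print(cluster([1, 2, 3, 100, 110, 95]))     # -> ([1, 2, 3], [95, 100, 110])
```

The supervised model can only label new data because a human labelled the training data first; the clustering function receives no labels at all and discovers the two groups on its own.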
Speech recognition and natural language processing – these forms of AI algorithm can both understand and generate large amounts of natural language data and / or human speech.
Machine vision and image recognition processing – these algorithms have the ability to understand and identify objects and activities in images and from live, real-time environmental interactions.
Facial recognition processing – these algorithms have the ability to recognise human features and, in certain cases, emotional responses.
Robotics – algorithms control, direct and automate movement in mechanical machines and devices.
Robotic process automation – AI algorithms automate process tasks and interface with front, middle and back office systems.
Business use cases
Algorithmic trading – algorithmic trading is used extensively in the funds sector to trade autonomously in equities and other securities without the need for human intervention.
Facial and image recognition processing – machine vision algorithms are being used in diverse sectors ranging from advertising and sales to law enforcement.
Robo-advisers – robo-advisers or chatbots are used in a variety of customer-facing industries. In the UK, beyond sectors such as retail, they are increasingly being deployed in more regulated sectors such as law, banking, finance and insurance.
Big data predictive analytics – these algorithms range from predictive health analytics and patient diagnosis to preventative maintenance in the energy, mining and construction industries, as well as providing marketing and sales insights in digital advertising and sales, and policy pricing in the insurance market.
Autonomous vehicles – a combination of AI technologies automate driverless vehicles such as cars, drones, ships and trains.
What distinguishes AI from other technologies?
There are a number of unique and inherent properties which distinguish AI from other forms of technology, and which demand that the risks associated with its use and deployment be managed differently:
- AI learns through access and exposure to data. Algorithms are trained on historic and existing data sets, whether they be data inputs or data outputs. Accordingly, the quality of those data sets will determine the effectiveness or otherwise of the AI.
- AI continues to learn not only from historic and existing data sets but also from interactions with new data in live use and in real time. These interactions can include new structured or unstructured data sets drawn from the environmental stimuli in which the AI operates: natural language or mathematical data, human interactions, environmental conditions and images.
- Perhaps crucially, AI algorithms can make decisions independently and also determine the basis upon which those decisions are made.
- As AI is exposed to these environmental inputs it continues to learn, and so its decision-making capabilities and outputs also continue to evolve and develop over time.
It is the combination of all four elements that makes AI unique as a technology. It can be autonomous in the way that it learns, interacts with its environment, makes decisions and determines the basis for those decisions. It is this autonomy, coupled with the ubiquitous deployment and use of AI in so many industries and markets, many of which are highly regulated, which gives rise to an equally unique set of ethical, legal and regulatory compliance risks.
Success in creating AI could be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. - Stephen Hawking
Why is AI important for the UK?
The UK Government’s Industrial Strategy, published on 27 November 2017, identified AI and data as one of the four “Grand Challenges” facing the UK. The Department for Business, Energy and Industrial Strategy’s (BEIS) stated ambition is “to put the UK at the forefront of the AI and data revolution”, predicting that UK GDP will be 10 per cent higher – an additional £232bn per year – by 2030 as a direct result of AI.
BEIS announced the UK Artificial Intelligence Sector Deal between the Government and the private sector1, which outlines a package of £603m in new private and public sector funding for AI and up to £324m from existing government funding. The sector deal focuses on five areas:
- People – growing digital skills by investing in key skills training and retraining, and widening the scope to attract talent from abroad.
- Infrastructure – creating new data sharing frameworks to address the barriers of sharing publicly and privately held data to allow for “fair and equitable data sharing between organisations in the private sector and between the private and public sectors”, in addition to over £1bn being invested in digital infrastructure.
- Ideas – boosting Research and Development (R&D) spending in the private sector to 2.4 per cent of GDP by 2027 and rising to 3 per cent in the longer term.
- Business environment – the launch of the Sector Deal itself to create a partnership between government and industry, the creation of a new AI Council bringing together respected leaders from academia and industry, and the creation of a new government delivery body – the Office for Artificial Intelligence – as well as a new Centre for Data Ethics and Innovation.
- Places – ensuring that businesses around the UK grow by adopting AI, supported by local industrial strategies.
If Government and business can achieve these goals there will be a growing investment and acquisition market in AI technologies and companies within the UK.
The new Centre for Data Ethics and Innovation will keep the UK at the forefront of tech evolution.
However, the legal landscape relating to the use and deployment of AI is uncertain and still developing. Because of this lack of legal certainty, those investing in or acquiring AI technologies or companies will need a new approach to due diligence to mitigate risk and to ensure that value, whether realised through deployment or upon exit, is maintained.
This approach will require investors to consider a number of questions:
- What is the ethical and legal basis upon which AI makes decisions?
- In regulated markets has “compliance by design” been built into that decision making?
- Is decision making transparent? Transparency in certain forms of AI will be technically difficult or impossible to achieve, and in others, developers may not wish to disclose what is perceived to be valuable intellectual capital.
- How can bias in decision making be minimised? AI is trained using data, but if there is inherent bias in that data, or in the business processes or systems the AI is replicating, it should come as no surprise when that bias is duplicated.
- How will liability be allocated, and what types of loss might be suffered, when AI causes damage by operating outside of its parameters? AI does not have a separate legal personality – despite the efforts of the European Parliament last year to attribute “Electronic Personhood” to certain types of AI and make them liable for their own actions.
- Does liability lie with the developers of the code, the providers of the training data, or the owners or users of AI?
- What impact will AI have on a company’s intellectual property strategy?
- Who will be liable if AI infringes a third party’s IP?
- How will AI impact a company’s workforce, supply chain and customers?
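The bias question above can be illustrated with a deliberately simple sketch. All of the data here is hypothetical: a model trained on historic lending decisions that systematically rejected one postcode group will reproduce that bias for new applicants, even where the applicants are financially identical.

```python
# Hypothetical historic loan decisions: (income, postcode_group, approved).
# Group "B" applicants were historically rejected regardless of income.
history = [
    (50, "A", True), (60, "A", True), (40, "A", True),
    (50, "B", False), (60, "B", False), (40, "B", False),
]

def predict(income, group):
    """Majority vote among historic cases from the same postcode group –
    a toy stand-in for a model trained on this biased data."""
    outcomes = [approved for _, g, approved in history if g == group]
    return outcomes.count(True) > outcomes.count(False)

# Two financially identical applicants receive different outcomes:
print(predict(55, "A"))  # -> True  (approved)
print(predict(55, "B"))  # -> False (historic bias replicated)
```

The model has done exactly what it was trained to do; the bias lies in the training data, which is why diligence on data provenance matters as much as diligence on the algorithm itself.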
To operate in an efficient and secure manner, business and markets need the certainty of law and regulation. If Britain is to be a world leader in AI technology, Parliament and the courts will have to provide that certainty and do so now. Analogue law needs to evolve far more quickly to maintain relevance in our new digital world.
1 UK Government (2018) Industrial Strategy, Artificial Intelligence Sector Deal [online]. Available at: https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal