AI (Artificial Intelligence) Regulation Needs to Take HI (Human Intelligence) Into Account

September 27, 2021

Last April, the European Commission published its proposal for the Artificial Intelligence Act (AIA), an EU regulation governing artificial intelligence. The proposal is a lengthy document, but its crux is that AI systems should be regulated according to the level of risk they pose to humans.

From a legal industry perspective, this manifests in a couple of ways. First, the regulations could spark yet another litigation and compliance frenzy, in which service providers – law firms, law companies, tech companies, and others – help AI providers defend against the inevitable claims.

Another implication is a possible impact on legal business process tools that employ AI. Here’s the good news: most, if not all, legal process tools that use AI (contracting, eDiscovery, matter management, early case assessment, eBilling) operate within a technology-enabled ecosystem in which humans use the tools to do their work ‘better, faster, cheaper.’ As such, the risk of these tools independently doing something ‘risky to humans’ seems minimal.

Stakeholders involved in crafting these regulations acknowledge that different kinds of AI systems need to be viewed differently:

‘There will probably be lots of discussions around what is AI and what do you do when humans are contributing certain pieces versus when an algorithm makes its own decisions without any assistance from humans…’

For now, let’s keep our eye on how these regulations develop and make sure this distinction between ‘turnkey AI systems’ and ‘AI supporting human decision-making’ is maintained.

‘AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being. Rules for AI available in the Union market or otherwise affecting people in the Union should therefore be human centric…’

https://www.law.com/legaltechnews/2021/09/21/for-ai-re..
