Artificial Intelligence and Law – An Overview and History

Written by Altlaw

Guest Speaker: Harry Surden, Associate Professor of Law, University of Colorado; Affiliated Faculty, CodeX

Location: Stanford Law School

First up was the general question: ‘What is AI?’

AI usually describes using computers to solve problems. The phrase has no universally agreed definition and can mean different things to different people.

A useful definition was given as a basis:

“AI is using technology to solve problems or make automated decisions or predictions for tasks that when people do them are thought to require intelligence. Computers make decisions very differently than people. The end decision is reached via a different journey…”

Harry repeatedly stressed that ‘we do not have strong AI today’. This surprised me, given the amount of work I have seen expedited with artificially intelligent assistance. Harry went on to explain that lay people think AI means computers that think like people; the idea of strong AI is perpetuated by films in which humans converse with robots.

The most advanced AI systems available cannot think or replicate abstract reasoning; they are not thinking machines. A lot of this perception is due to media exaggeration. Thankfully, we seem a long way from computers taking over the world.

‘A 2-year-old human has better cognitive ability than the most advanced AI today’.

The term AI is a misnomer, given the state of today’s intelligence. It is not human-level intelligence, and it looks nothing like what media portrayals would lead you to expect.

So, what is the AI that we speak of?

Realistically, the AI we have is pattern-based: what is known as ‘machine learning’. Examples in daily use include self-driving cars and language translation. In limited domains, AI can do great things, including knowledge representation.

“We have to be realistic about AI. We have to understand what it is capable of”

In the words of my previous write-up, it is important to ‘Demystify AI’ in order to manage our own expectations. Law and policy changes will not be made by computers for a long time, if ever.

What are some major AI techniques?

  • Computer logic: a rules-based approach.
  • Logic, rules and knowledge-representation-based AI.
  • Modelling real-world processes or systems using logical rules: top-down rules are written for computers and used to automate processes.

An interesting use of this approach is TurboTax, which aims to faithfully represent the laws and meaning of tax legislation. Its programmers, in consultation with lawyers and accountants, examine the tax code and translate its rules into computer rules to be followed.

Why is this important?

You can now engage in computer deductive reasoning, a process humans cannot carry out alone at scale. Computers can process rules in complex chains, reaching non-obvious conclusions; an example is calculating deductible expenses on business trips.
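To make the rules-based idea concrete, here is a minimal sketch of forward chaining in Python. The rules, thresholds and facts are all invented for illustration; they are not real tax law, nor TurboTax’s actual implementation.

```python
# Minimal forward-chaining rule engine. The facts, rules and numbers
# below are invented for illustration -- they are not real tax law.

facts = {"trip_is_business": True, "meal_cost": 80.0, "days_away": 3}

def rule_meals_deductible(f):
    # Hypothetical rule: business meals are 50% deductible.
    if f.get("trip_is_business") and "meal_cost" in f:
        return {"meal_deduction": 0.5 * f["meal_cost"]}

def rule_per_diem(f):
    # Hypothetical rule: a flat per-diem allowance per day away.
    if f.get("trip_is_business") and "days_away" in f:
        return {"per_diem": 60.0 * f["days_away"]}

def rule_total(f):
    # Chained conclusion: depends on facts derived by the earlier rules.
    if "meal_deduction" in f and "per_diem" in f:
        return {"total_deduction": f["meal_deduction"] + f["per_diem"]}

rules = [rule_meals_deductible, rule_per_diem, rule_total]

# Keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for rule in rules:
        derived = rule(facts) or {}
        new = {k: v for k, v in derived.items() if k not in facts}
        if new:
            facts.update(new)
            changed = True

print(facts["total_deduction"])  # 40.0 + 180.0 = 220.0
```

Each pass derives new facts from existing ones until nothing changes, which is how long chains of rules can reach conclusions that no single rule states directly.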

A lot of early AI was built for these purposes.

Machine Learning, ‘the dominant mode of AI today’…

In the last 20 years, machine learning has become very popular. Algorithms find patterns in data and infer rules on their own: for example, Netflix recommending films by comparing you to other viewers ‘like you’, or a computer identifying a specific email as a scam. The patterns are already there in the data; the machine must be able to pull them out and make them applicable, which requires large amounts of data.

You may not have noticed it given the number of emails we receive and how quickly we read or discard them, but email spam filters have improved drastically. They have evolved and can now distinguish types of spam.
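As a toy illustration of how such a filter infers rules from examples rather than being handed them, here is a minimal sketch using scikit-learn’s naive Bayes classifier. The training messages and labels are invented; a real filter would train on millions of examples.

```python
# A toy spam filter: the model infers patterns from labelled examples
# rather than following hand-written rules. Training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",
    "Claim your lottery winnings today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the draft contract by Friday?",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Free prize, claim now"]))       # likely ['spam']
print(model.predict(["Contract review on Friday?"]))  # likely ['ham']
```

Nobody wrote a rule saying ‘prize’ means spam; the model learned the association from the examples, which is the essence of the pattern-based approach.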

‘We have intelligent results without real intelligence’.

When it comes to which AI approach to use, you don’t have to choose one or the other. Many successful AI systems are hybrids: self-driving cars, for example, employ both approaches.

Human intelligence + AI Hybrids

Many successful AI systems work best in combination with human intelligence, keeping ‘humans in the loop’ to give them an advantage over fully automated designs.

Chess played this way, with a human in the loop, has proved superior: the human player is enhanced by the intelligence generated by the computer.

We often talk about AI being fully autonomous, making key decisions on its own and eventually resulting in a Terminator-style Armageddon with the machines taking over. The reality is that many AI systems ask humans for help when a judgement call on an abstract concept is needed. Humans frequently get a self-driving car out of trouble and then hand control back to the machine.

‘Enhancing, not replacing’, as per my previous write-ups, is a crucial takeaway from this lecture.

Law and technology have always been linked, going all the way back to Leibniz in the 1600s. More recently, the 1960s to 80s were the era of knowledge representation in legal tech, with machine learning discussed and developed from 2000 to the present.

The lecture then explored AI and its use in law with very interesting areas of discussion. This has prompted me to explore this area in more detail. It makes for fascinating discovery.

3 categories of AI and law use

1. Administrators of law: judges, legislators, government officials and regulators.

Judges are starting to rely on bail reports produced by machine learning, for example to estimate a defendant’s chances of re-offending. Predictive policing uses AI to predict where police resources should be deployed, and facial recognition to identify suspects. The science fiction of ‘Minority Report’ may have to be revisited.

2. Practitioners of law: lawyers.

For lawyers, the prime example is Technology-Assisted Review (TAR). Document review is pattern detection, and machine learning does it far faster, with humans still in the loop. In document review we are ‘shrinking the haystack’ so that lawyers look at the documents most likely to be relevant. As a Review Manager, I have seen how crucial this process is to litigation and to other settings involving large amounts of data, such as due diligence for mergers.
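To illustrate ‘shrinking the haystack’, here is a minimal sketch of relevance ranking: a model trained on a small reviewer-labelled seed set scores the unreviewed documents so that humans read the likeliest-relevant ones first. The documents and labels are invented, and this is a simplification of any real TAR workflow.

```python
# Toy relevance ranking for document review ("shrinking the haystack").
# A reviewer labels a small seed set; the model scores the rest so
# humans read the likeliest-relevant documents first. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "merger agreement signed with indemnity clause",
    "share purchase terms and closing conditions",
    "office party photos from december",
    "cafeteria menu for next week",
]
seed_labels = [1, 1, 0, 0]  # 1 = relevant, 0 = not relevant

unreviewed = [
    "draft indemnity schedule for the merger",
    "parking permit renewal reminder",
]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(seed_docs), seed_labels)

# Score unreviewed documents by predicted probability of relevance,
# then present them to reviewers highest-scoring first.
scores = clf.predict_proba(vec.transform(unreviewed))[:, 1]
for score, doc in sorted(zip(scores, unreviewed), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The humans stay in the loop: their ongoing review decisions can be fed back in as new training labels, steadily sharpening the ranking.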

3. Users of law: individuals, businesses and other bodies that must comply with the law.

Businesses are using AI for compliance. Key examples are chatbots and forms of automated legal self-help.

Limitations of AI today

We should not exaggerate what legal AI can do. Many things remain beyond its capability: AI still requires data, which is expensive to collect, and no computer can carry out abstract reasoning; rules-based systems still need human-written rules to follow.

Human oversight is always needed on some scale. There are still accuracy errors. Harry justifiably argued that you wouldn’t want Google Translate to translate your billion-pound merger into French and then have to rely on this translation at a later date!

Current policy topics with AI and Law

Will AI take people's jobs?

‘Where lawyers are acting like computers today, those tasks will be replaced by computers tomorrow’.

A key point to recognise here is that more jobs will be created for humans that do not currently exist. Again, I have touched on this in my previous write-ups from my lecture at the London School of Economics. Lawyers do a mixture of tasks, yet traditionally they charge the same rate for all of them.

For law students and our future lawyers, the advice was to build up your skills in these areas. Data analysis and computer programming did not exist as professions until relatively recently. We no longer spend time on basic calculations as we once did: the abacus gave way to the electronic calculator, and this pattern will keep repeating. Human beings are good at imagining the bad but sometimes hesitant to imagine the good in scenarios where professions are created and lost.

Lawyers do a lot of things that AI is bad at, including:

  • Abstract thinking
  • Problem-solving
  • Advocacy
  • Human emotional intelligence
  • Policy analysis, big-picture strategy and creative thinking.

AI used in Government Decision-making

In the US, some judges use reports created by AI when sentencing. The best-known system, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), has been heavily criticised. It gives a numerical score from 1 to 10 for the defendant’s risk of re-offending, and judges frequently defer to it. These reports are created by private companies, and often we do not know what data has gone into them, which is clearly problematic. A further key issue is bias in the data: if the model is trained on police data and police practices are biased, that bias can be subtly embedded in the data training the model and might cause us to overestimate the risk of re-offending for certain groups. This is a major concern and could form the basis of a miscarriage of justice.

There is also the concern that these models can be hard to interpret, and that they give an illusion of objectivity. If a system says someone has an 88% chance of re-offending, that sounds damning; people tend to defer to a number even when its basis is questionable, because it carries an air of false precision. In reality, a huge number of factors and a lot of subjectivity lie behind a seemingly objective score.

This excellent article from January 2020 by Stephanie Condon explores this area in detail.

The issues around the use of AI in law by officials were also discussed, as were privacy issues, although these were not expanded on, as the speaker wished to leave room for the very interesting Q&A that followed.

The Q&A begins at 55:00, if you would like to listen to it on YouTube.

Find out more information on AI in eDiscovery.