Why AI is very good at some things, and not so good at others
AI is going to change the world. Of course it is. It just has to.
Won’t it? 🤷
In this blog post, I want to look at some of the myths around AI, and how we’ve navigated them at Avail. As a risk-averse group, lawyers have a difficult relationship with AI, and to be honest, it’s pretty clear why!
A recent book by a prominent AI practitioner, Erik J. Larson (The Myth of Artificial Intelligence), claims that AI cannot become what we (really, really) want it to be because of the way it is designed. It’s inherently flawed. Hmm 😟, that’s a worrying starting point.
The problem with AI ‘changing the world’
The key problem is that AI embodies only a narrow definition of ‘intelligence’. It’s the product of supercomputers applying logic – deductive and inductive reasoning – to large datasets, while crucially missing out abductive reasoning. Let’s break these down:
- Deductive Reasoning – 2️⃣ ➕ 2️⃣ = 4️⃣
- Inductive Reasoning – I’ve been fed (🥓 🥞) at 9am every day this week, so tomorrow should be the same
- Abductive Reasoning – This could also be called intuition: you don’t objectively know why the student didn’t do their homework, but you are pretty sure the dog didn’t eat it
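The distinction above can be sketched in a few lines of toy Python (the function names and data are entirely hypothetical, purely for illustration). Notice that only the first two kinds of reasoning reduce neatly to code:

```python
# Deductive reasoning: apply a fixed rule to get a guaranteed answer.
def deduce(a, b):
    return a + b  # 2 + 2 is always 4

# Inductive reasoning: generalise from past observations.
def induce(past_breakfast_times):
    # "Fed at 9am every day this week, so tomorrow should be the same" –
    # predict the most frequent past observation.
    return max(set(past_breakfast_times), key=past_breakfast_times.count)

print(deduce(2, 2))                   # 4
print(induce(["9am", "9am", "9am"]))  # 9am

# Abductive reasoning: infer the most plausible *explanation* for an
# observation ("the homework is missing, but the dog probably didn't
# eat it"). There is no tidy function to write here – and that gap is
# exactly the point.
```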
When you think about it, AI was never designed to reason this way! This is why grand predictions about AI regularly fall short. For example, Elon Musk has promised self-driving cars ‘next year’ every year since 2014. He says they’re coming next year, which they feasibly could 🔮. But what does your gut tell you…?
We may be waiting a while. But, you know who isn’t waiting about…? The police, who get better at issuing speeding tickets every single year. No wonder really – spotting a speeding car is a narrow, well-defined problem. After all, there can only ever be one Jeremy Clarkson, driving an orange supercar with a personalised ‘POWER’ numberplate and power ballads on the radio.
So, this is our starting point – AI is not magic, AI cannot replicate the human brain, and no matter how hard someone tries to persuade you otherwise, it needs to be used with caution! AI works when there is a narrow objective or a limited number of variables.
The reality of AI
In every training session we give for Avail, we make a point of saying that “AI is very good at some things, and not so good at others”. Working with a defined, consistent dataset, you can build a product that is incredibly accurate and best-in-class. At Avail, we are 9️⃣7️⃣-9️⃣9️⃣% accurate because we apply our AI only to objective questions, like “Is there a charge registered against this title? If so, what does it say?”.
Avail does not go beyond this brief, and in doing so avoids the problem of asking AI to use abductive reasoning. To go deeper and opine on commercial risk as well as legal risk would mean trying to replicate the intuition of an experienced real estate fee earner. There’s a reason clients pay lawyers – commercial risk analysis involves a vortex of variables and ill-defined information.
At the end of the day, in law in particular, commercial issues are not always purely technical points. They often involve judgement based on personal risk matrices and appetite. You are dealing with people: their vested interests, emotional ties and, sometimes, irrationality.
This is perhaps best expressed by the famous Mike Tyson quote: “everybody has a plan until they get punched in the mouth”. 👊⁉️
So, where does this leave us?
In a standard real estate transaction, there is quite simply no substitute for practical experience when it comes to understanding what a client is willing to accept, or which risks they consider red-flag commercial risks. No machine or programming can replicate that experience. AI should be used to help on transactions but, in its current form, it should never be expected to replace that experience.
And that’s what we do! Avail is an efficiency tool that uses AI to help real estate fee earners surface risk faster, identify red flags on day 1️⃣ rather than day 5️⃣0️⃣, and ultimately supercharge their final report on title 🚀. It remains objective in identifying possible legal risk and never pre-supposes commercial risk.
Nothing more, nothing less. 😌