Artificial intelligence has grown at an extraordinary pace in 2025: global spending on AI is forecast to surpass $2 trillion in 2026, rising at an annual rate of almost 28%. Adoption is equally remarkable.
When launched, ChatGPT reached one million users in just five days and 100 million users within two months, a pace far outstripping previous platforms.
By comparison, it took LinkedIn over seven years to reach 100 million users, Facebook four and a half, WhatsApp three and a half, and even TikTok nine months. Against this backdrop of unprecedented growth, AI tools now offer instant, detailed answers to countless questions.
But when these answers intersect with the legislation, regulation and over 100 years of case law that underpin the commercial property system in England and Wales, the risks quickly become evident.
David Thomas, Occupier Advisory partner in our Reading office, spoke to Ethical Reading to explain.
AI in the real world: accuracy is not guaranteed
Several recent cases demonstrate the potential dangers of relying on AI-generated legal or quasi-legal information:
- A litigant in person had her UK tax tribunal case dismissed after unknowingly submitting nine fabricated historic tribunal decisions created by ChatGPT.
- The High Court has issued warnings to solicitors after fake legal citations were found in submissions generated using AI.
- Another solicitor blamed Google for incorrect case citations in an appeal, having failed to verify them before filing, and risked professional sanction.
These examples highlight a simple truth: a tool is only fit for purpose if the user understands how to use it. And in the context of commercial property, that understanding must be deep.
As the old saying goes, if all you have is a hammer, everything looks like a nail. AI models generate answers from patterns in their training data, so their outputs can be skewed, incomplete, taken out of context or simply wrong.