Artificial intelligence is no longer a sci‑fi cameo on construction sites; it’s now quietly reshaping how projects are designed, priced, scheduled and delivered.
The opportunity is real: better decisions, fewer defects, safer construction. But alongside the benefits come governance and legal issues that deserve careful consideration.
Where AI fits on a project
AI can help with design optimisation and generative options in BIM. It can sharpen cost estimating, tender analysis and risk pricing using historical data, and it can automate rebar take-offs and tonnage calculations to reduce costs.
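To make the take-off point concrete, here is a minimal sketch of the kind of calculation such a tool might automate: converting a bar schedule extracted from a model into an estimated tonnage. The schedule data and field layout are illustrative assumptions rather than the output of any particular product; the unit-mass formula (d²/162.2 kg/m for a steel bar of diameter d in mm) is the standard approximation.

```python
# Illustrative sketch only: a bar schedule as it might be extracted from a model.
# The values and layout are hypothetical assumptions for this example.
bar_schedule = [
    # (bar diameter in mm, bar length in m, number of bars)
    (16, 6.0, 120),
    (20, 4.5, 80),
    (25, 9.0, 40),
]

def unit_mass_kg_per_m(diameter_mm: float) -> float:
    """Standard approximation for steel rebar: mass per metre = d^2 / 162.2 (kg/m)."""
    return diameter_mm ** 2 / 162.2

def total_tonnage(schedule) -> float:
    """Sum the mass of every bar mark and convert kilograms to tonnes."""
    total_kg = sum(
        unit_mass_kg_per_m(dia) * length_m * count
        for dia, length_m, count in schedule
    )
    return total_kg / 1000.0

print(f"Estimated rebar tonnage: {total_tonnage(bar_schedule):.2f} t")
```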
On site, combining cameras with AI tools can help spot PPE non-compliance, track progress against the programme and flag quality issues as they arise. Predictive models can help forecast delays, plant failures and safety incidents. None of this replaces professional judgment, but it can make that judgment faster and more consistent.
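On the predictive side, a delay-forecasting model is, at its simplest, a classifier trained on historical activity data. The sketch below uses scikit-learn with toy, assumed features purely to illustrate the shape of such a model; a real deployment would need far richer data, proper validation and ongoing monitoring.

```python
# Minimal, illustrative sketch of a delay-risk classifier.
# Feature names and data are hypothetical; real projects would use their own history.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per activity: [planned duration (days), % design complete, crew size, rainfall days]
X = np.array([
    [10, 100, 6, 1],
    [25,  60, 4, 8],
    [14,  90, 5, 3],
    [30,  40, 3, 10],
    [7,  100, 8, 0],
    [21,  70, 4, 6],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = the activity finished late on past projects

model = LogisticRegression().fit(X, y)

# Score an upcoming activity and flag it for planner review if the risk is high.
upcoming = np.array([[28, 55, 4, 7]])
risk = model.predict_proba(upcoming)[0, 1]
print(f"Predicted delay risk: {risk:.0%}")
if risk > 0.5:
    print("Flag for planner review: the prediction supports, not replaces, judgment.")
```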
Governance and legal risks
The use of AI comes with risks, and it's important to identify and manage them. Intellectual property rights present complex challenges: who owns AI-generated outputs, whether those outputs are too similar to designs someone else created previously, and whether the tool retains a copy of your own designs to train its model.
There are risks to the confidentiality of data, including personal data (for example, site CCTV footage). UK GDPR requires a lawful basis, transparency, data minimisation, appropriate Data Protection Impact Assessments (DPIAs) and robust security.
Workers may also have concerns about AI-driven surveillance on site, or about whether the technology could replace their roles.
As many professionals have found, AI outputs are not always accurate, and failing to supervise them properly could amount to negligent practice, particularly where health and safety are affected.
Finally, bias and discrimination risks must be avoided; these can arise from unrepresentative datasets or from automated decision-making.
Practical strategies to mitigate risk
To use AI effectively and manage these risks, consider the following steps:
- Establish clear governance: adopt an AI policy that defines roles, approvals and model management
- Manage data properly: map data flows, restrict personal data and run DPIAs for site monitoring
- Get assurances from suppliers on the authenticity of outputs and the permissions attached to intellectual property rights
- Draft contracts carefully: define the scope, performance measures, explainability, data ownership and audit rights
- Adjust liability caps and exclusions to reflect AI use cases
- Keep a ‘human-in-the-loop’ when using AI, not just for decisions relating to design and safety
- Validate and test AI outputs before relying on them
- Keep records of decisions, particularly when departing from outputs generated by AI (a minimal sketch of such a record follows this list)
- Train your teams on how to use AI appropriately and identify its limitations
- Be transparent with workers, clients and stakeholders about the tools you use
- Remember, “AI says so” is never a defence.
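One way to make the validation, record-keeping and human-in-the-loop points operational is to log every AI recommendation alongside the reviewer's decision and rationale. The sketch below is an assumed, minimal structure for such a record, not a prescribed standard; the field names and example entry are purely illustrative.

```python
# Illustrative sketch of a human-in-the-loop decision record.
# The structure and field names are assumptions for this example, not a prescribed standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    tool: str                  # which AI tool produced the output
    output_summary: str        # what the tool recommended
    reviewer: str              # the person exercising professional judgment
    accepted: bool             # whether the output was relied upon
    rationale: str             # why it was accepted or departed from
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

decision_log: list[AIDecisionRecord] = []

# Example: the reviewer departs from the AI output and records why.
decision_log.append(AIDecisionRecord(
    tool="rebar take-off assistant",
    output_summary="Estimated 3.4 t of reinforcement for slab S-02",
    reviewer="J. Smith (engineer)",
    accepted=False,
    rationale="Manual check against drawings gave 3.9 t; laps not captured by the model.",
))

for record in decision_log:
    print(record)
```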
Next steps
AI can deliver real gains in productivity and quality if deployed effectively, but it must be properly supervised.
Treat it like any powerful tool on site: specify it clearly, supervise it properly, document its performance and don’t let it drive the excavator unattended.