
Use second-layer tools for AI safety
Enterprises that struggle to adopt AI can now turn to second-layer AI tools that help them implement the technology safely.
As businesses accelerate artificial intelligence (AI) adoption, they often face technical obstacles such as poor data quality, data silos and integration issues with legacy systems. Ironically, though, many of these challenges are increasingly being addressed through AI tools themselves – from automated data cleaning and scalable cloud platforms to tools that monitor and maintain the performance of AI models.
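To make the idea concrete, here is a minimal sketch (in Python, using pandas) of the kind of automated data-quality pass a second-layer tool might run before data reaches a primary model. The column names and cleaning rules are illustrative assumptions, not drawn from any specific product mentioned in this article.

```python
# Minimal sketch of an automated data-cleaning step of the kind a
# second-layer AI tool might apply before training or inference.
# Column names ("customer_id", "amount", "signup_date") are hypothetical.
import pandas as pd

def clean_training_data(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic quality fixes: deduplicate, drop empty rows,
    coerce types, and flag obvious outliers for review."""
    cleaned = df.drop_duplicates(subset=["customer_id"])
    cleaned = cleaned.dropna(subset=["amount"])
    cleaned["amount"] = pd.to_numeric(cleaned["amount"], errors="coerce")
    cleaned["signup_date"] = pd.to_datetime(cleaned["signup_date"], errors="coerce")

    # Flag (rather than silently drop) values far outside the typical range,
    # so a reviewer or a downstream governance tool can inspect them.
    q1, q3 = cleaned["amount"].quantile([0.25, 0.75])
    iqr = q3 - q1
    cleaned["amount_outlier"] = (cleaned["amount"] < q1 - 1.5 * iqr) | (
        cleaned["amount"] > q3 + 1.5 * iqr
    )
    return cleaned
```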
Sometimes called second-layer AI, these tools can play a crucial role in making AI more accessible and safer, incorporating explainability and governance features that aid compliance with evolving AI regulation. By strategically applying this second layer of AI support tools, companies can better manage the complexity of AI adoption and speed up deployment of the primary AI tools that will enhance business performance.
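Monitoring is another concrete example of what this second layer can do. The hedged sketch below uses a standard two-sample Kolmogorov-Smirnov test (via SciPy) to flag when a deployed model's input data has drifted away from its training data – one simple way such tools keep models under continuous review. The feature values and threshold are illustrative assumptions, not a description of any particular vendor's method.

```python
# Minimal sketch of a drift check a second-layer monitoring tool might run
# to detect when a deployed model's inputs no longer resemble its training
# data. Threshold and example data are illustrative assumptions.
import numpy as np
from scipy import stats

def feature_drift(train_values: np.ndarray,
                  live_values: np.ndarray,
                  p_threshold: float = 0.01) -> dict:
    """Compare a feature's live distribution against its training
    distribution with a two-sample Kolmogorov-Smirnov test."""
    statistic, p_value = stats.ks_2samp(train_values, live_values)
    return {
        "ks_statistic": float(statistic),
        "p_value": float(p_value),
        # A low p-value means the two distributions likely differ.
        "drift_detected": p_value < p_threshold,
    }

# Hypothetical example: transaction amounts seen in training vs. production.
rng = np.random.default_rng(0)
train = rng.normal(loc=100.0, scale=20.0, size=5_000)
live = rng.normal(loc=130.0, scale=25.0, size=5_000)  # shifted distribution
print(feature_drift(train, live))  # drift_detected: True
```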
According to EY’s Responsible AI pulse survey, released in June 2025, seven in 10 organisations said they are already using or planning to use newer AI technologies – such as agentic AI, multimodal AI and synthetic data – within the next year. But many organisations still have reservations.
A survey by IBM, from February 2025, shows that these include concerns around poor data quality and a shortage of proprietary data to train customised generative AI (GenAI) models; a lack of in-house expertise; fears over data security; concern that expensive AI models trained, for example, to spot fraud or to manage customer relationships might lose relevance as a business scales; and difficulties integrating new AI agents and other AI tools with legacy systems.
By Sooyeon Kim