AI Initiatives Often Fail Due to Data Readiness Challenges, Hylaine Executive Warns
Companies are struggling with AI implementation due to fundamental data infrastructure and governance issues, requiring investment in data reliability engineering and cross-functional collaboration to achieve sustainable returns.

Many artificial intelligence projects are failing to deliver expected returns due to underlying data readiness problems, according to Ryan McElroy, Vice President of Technology at Hylaine. The consulting executive identified five critical barriers that commonly derail AI initiatives: data access limitations, siloed systems, poor data quality, inadequate governance, and organizational misalignment between technical and business teams.
McElroy emphasized that the most significant challenges are structural and organizational rather than purely technical. Data access issues frequently stem from legal or security restrictions, incompatible formats, or legacy systems that prevent usable data from reaching AI models. Siloed data remains particularly problematic as enterprises expand across multiple cloud platforms, while quality problems like inaccuracies and incomplete records can lead to AI hallucinations and biased outputs.
Tech leaders should prioritize building mature, AI-ready data infrastructure as their first step toward successful implementation. This includes investing in data engineering tools and talent while modernizing data architectures to handle the scale and velocity requirements of AI systems. Companies that maintain both data warehouses for structured data and data lakes for diverse data types have a significant advantage, McElroy noted.
Establishing data reliability engineering as a core capability is crucial for ensuring ongoing data quality, availability, and observability; these capabilities streamline testing and root-cause analysis when errors occur during data movement. Modern data integration tools such as Fivetran or Airbyte for ELT processes, or cloud-native ETL platforms like Azure Data Factory or Databricks, can accelerate data preparation once basic infrastructure is in place.
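To make the idea of data reliability checks concrete, here is a minimal illustrative sketch in Python of the kind of automated quality gate such a capability might run on a batch of records before it reaches an AI model. The check names, fields, and sample data are all hypothetical, not drawn from McElroy's remarks; production teams would typically use a dedicated validation framework rather than hand-rolled checks.

```python
# Hypothetical data quality gate: completeness and uniqueness checks
# run against a record batch before it flows to downstream AI systems.
from dataclasses import dataclass


@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""


def run_quality_checks(records, required_fields):
    """Run basic completeness and uniqueness checks on a record batch."""
    results = []

    # Completeness: every record carries every required field, non-empty.
    missing = [
        (i, f)
        for i, r in enumerate(records)
        for f in required_fields
        if not r.get(f)
    ]
    results.append(CheckResult(
        "completeness", not missing,
        f"{len(missing)} missing values" if missing else "ok",
    ))

    # Uniqueness: no duplicate primary keys sneak into the batch.
    ids = [r.get("id") for r in records]
    dupes = len(ids) - len(set(ids))
    results.append(CheckResult(
        "uniqueness", dupes == 0,
        f"{dupes} duplicate ids" if dupes else "ok",
    ))
    return results


# Sample batch with one empty field and one duplicated id.
batch = [
    {"id": 1, "name": "Acme", "region": "US"},
    {"id": 2, "name": "", "region": "EU"},
    {"id": 2, "name": "Globex", "region": "EU"},
]
for check in run_quality_checks(batch, ["id", "name", "region"]):
    print(f"{check.name}: {'PASS' if check.passed else 'FAIL'} ({check.detail})")
```

Failing checks like these would feed the root-cause analysis the article describes: the `detail` field pinpoints what broke, so engineers can trace the error back through the data movement pipeline.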
Governance frameworks must be defined early so that AI systems access compliant, trustworthy data from the start. In regulated industries like insurance and healthcare, success depends as much on governance as on innovation, McElroy explained. Examples from American Express and AstraZeneca demonstrate how robust data architecture enables AI systems to learn continuously while maintaining strict compliance boundaries.
Building organizational trust in AI requires transparency, explainability, and collaboration between IT and business teams. The most successful projects are typically led by a trio of champions: an executive sponsor, business process owner, and technical lead who ensure alignment across strategy, outcomes, and execution. McElroy recommended starting with pro-AI user groups to reduce risk and gather clean feedback before broader implementation.
Despite pressure for rapid AI adoption, McElroy cautioned against chasing short-term wins without strong data foundations. Data reliability engineering provides the necessary strategies for ensuring data quality and availability, while proper governance frameworks allow AI to scale safely. Technical safeguards like tokenizing real data and automating PII exposure alerts can support continuous compliance without slowing development.
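The two safeguards mentioned above can be sketched in a few lines of Python. This is an illustrative toy, not a description of any tooling McElroy cited: it stands in for tokenization with a deterministic hash and for PII detection with simple regexes, whereas real deployments would use a vault-backed tokenization service and a dedicated PII scanner.

```python
# Illustrative sketch of two compliance safeguards: replacing sensitive
# values with tokens, and flagging fields that contain apparent PII.
import hashlib
import re

# Hypothetical shared secret; a real system would manage this in a vault.
SECRET = "rotate-me"


def tokenize(value: str) -> str:
    """Replace a sensitive value with a deterministic, opaque token."""
    digest = hashlib.sha256((SECRET + value).encode()).hexdigest()[:12]
    return f"tok_{digest}"


# Toy patterns standing in for a real PII-detection service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def scan_for_pii(text: str) -> list[str]:
    """Return the kinds of PII found in free text, for alerting."""
    return [kind for kind, pat in PII_PATTERNS.items() if pat.search(text)]


record = {"customer": "jane@example.com", "note": "meeting moved to Tuesday"}

# Tokenize any field that trips the scanner; alert on what was found.
safe = {k: tokenize(v) if scan_for_pii(v) else v for k, v in record.items()}
alerts = {k: scan_for_pii(v) for k, v in record.items() if scan_for_pii(v)}
print(safe)
print(alerts)
```

The point of the design is the one the article makes: because tokenization and scanning run automatically in the pipeline, development proceeds on sanitized data without a manual compliance review blocking each change.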
The human element remains critical for long-term success; many projects falter due to skills gaps in modern cloud infrastructure and data engineering. Companies can close these gaps through training, hiring, or contracting external experts, and hybrid teams that pair internal staff with outside specialists have proven particularly effective. McElroy pointed to an MIT study showing that repeatable, scalable adoption, not one-off successes, drives sustained ROI from AI investments.