
A C-suite executive at a major regional conglomerate recently said: "We have spent 18 months and significant budget building AI proof-of-concepts. We have nothing in production. Our board is losing confidence, and honestly, so am I." This conversation, with variations, happens in boardrooms across the GCC every week. It does not have to.
In 2018, Gartner predicted that through 2022, 85% of AI and machine learning projects would deliver erroneous outcomes due to bias in data, algorithms, or the teams managing them (Gartner, "Lessons From Early AI Projects," 2018). Six years later, the evidence suggests that prediction was conservative. McKinsey's 2024 Global Survey on AI found that while AI adoption jumped to 72% of organizations, over 80% of respondents reported no tangible impact on enterprise-level EBIT from generative AI initiatives (McKinsey, "The State of AI in Early 2024").
The failure modes are consistent and predictable. They are rarely about the AI technology itself. They are almost always rooted in underlying organizational dysfunction, and the most common dysfunction concerns the data:
“Data is food for AI. In many industries, the data is not yet ready. Before you can build any AI system, you first need to organize and clean your data.”
— Andrew Ng, Founder, DeepLearning.AI and Landing AI
The following framework is designed to systematically address each failure mode. It operates across five stages, calibrated to an organization's current maturity level rather than forcing a one-size-fits-all approach.
No enterprise AI strategy succeeds without clean, accessible data. The first stage involves answering fundamental questions: How is data collected across the organization? Where does it live? Who is responsible for its quality? Can different departments access and trust the same numbers? The goal is a unified data foundation — a single source of truth that every AI application can draw from, rather than fragmented spreadsheets and disconnected systems across departments.
Emirates NBD, one of the largest banking groups in the Middle East, exemplifies this approach. Before deploying any AI models, the bank built a bankwide data lake as the foundation for all analytics and AI workloads — a multi-year initiative documented in McKinsey's case study, "How a UAE bank transformed to lead with AI and advanced analytics." This unglamorous work does not produce board-impressive demos, but it is the difference between an AI initiative that scales and one that fails at real-world data complexity.
AI use cases should be co-identified with business unit leaders, not chosen by IT alone. The structured question to ask: which workflows carry the highest combination of cost and time burden, and have usable data available? A simple 2x2 matrix plotting Business Impact against Technical Feasibility reveals the sweet spot: high-impact problems that are actually solvable with current data and infrastructure. These become the first production deployments.
King Faisal Specialist Hospital and Research Centre (KFSHRC) in Riyadh demonstrates disciplined prioritization. Their Centre for Healthcare Intelligence identified over 20 specific AI use cases, prioritizing radiology diagnostics — achieving a 25% improvement in diagnostic accuracy — and patient flow optimization, cutting bed wait times from 32 hours to just 6 (KFSHRC Centre for Healthcare Intelligence, 2024). The critical discipline: define success metrics before a single model is trained.
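To make the prioritization step concrete, here is a minimal Python sketch of the Impact-versus-Feasibility 2x2. The use cases, 1-to-5 scores, and quadrant labels below are purely illustrative assumptions, not data from any of the organizations discussed; one reasonable convention is shown, and real scoring would be done jointly with business unit leaders.

```python
# Minimal sketch of the Business Impact vs. Technical Feasibility 2x2.
# All use cases and scores are hypothetical illustrations.

def quadrant(impact, feasibility, threshold=3):
    """Place a use case (scored 1-5 on each axis) into a 2x2 quadrant."""
    if impact >= threshold and feasibility >= threshold:
        return "Deploy first"        # high impact, high feasibility: the sweet spot
    if impact >= threshold:
        return "Fix data first"      # valuable, but not yet solvable as-is
    if feasibility >= threshold:
        return "Quick win, low value"
    return "Deprioritize"

use_cases = {
    "invoice processing":  (4, 5),   # (impact, feasibility)
    "churn prediction":    (5, 2),
    "meeting summarizer":  (2, 5),
    "demand forecasting":  (5, 4),
}

# Rank candidates by combined score before reviewing quadrants.
for name, (impact, feasibility) in sorted(
        use_cases.items(),
        key=lambda kv: kv[1][0] * kv[1][1],
        reverse=True):
    print(f"{name:20s} -> {quadrant(impact, feasibility)}")
```

The value of even this trivial formalization is that it forces the scoring conversation to happen explicitly, before any model is trained, which is exactly the discipline the KFSHRC example illustrates.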
Enterprise AI should not be built as one massive system where everything depends on everything else. The right approach: build each AI capability as an independent module that can be upgraded, replaced, or scaled without disrupting the rest. For common tasks (document translation, image recognition), buy best-in-class solutions off the shelf. For capabilities that differentiate your business from competitors, invest in custom development. This "build vs. buy" decision at the capability level prevents both over-spending and vendor lock-in.
Cleveland Clinic Abu Dhabi demonstrates this composable approach effectively. Rather than building a monolithic AI platform, they integrate specialized third-party AI tools — Transpara for breast cancer screening, ARTIS Icono for stroke imaging — alongside custom clinical decision support systems, all connected through standardized clinical data interfaces. Each component can be upgraded independently as the technology evolves.
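One way to see what "composable" means in practice is to put each AI capability behind a narrow interface, so that a bought component and a built component are interchangeable from the caller's point of view. The sketch below assumes Python; the class and method names are hypothetical stand-ins, not a real vendor API or any system used by the organizations named above.

```python
# Sketch: each AI capability sits behind a narrow interface, so an
# off-the-shelf vendor model and a custom in-house model can be
# swapped without touching the rest of the system.
# All names here are illustrative, not real products or APIs.

from typing import Protocol

class TranslationCapability(Protocol):
    def translate(self, text: str, target_lang: str) -> str: ...

class VendorTranslator:
    """Bought off the shelf for a commodity task."""
    def translate(self, text, target_lang):
        return f"[vendor:{target_lang}] {text}"   # stand-in for a vendor API call

class CustomTranslator:
    """Built in-house for a differentiating, domain-specific task."""
    def translate(self, text, target_lang):
        return f"[custom:{target_lang}] {text}"   # stand-in for a fine-tuned model

def localize_document(doc: str, engine: TranslationCapability) -> str:
    # Callers depend only on the interface; replacing or upgrading
    # the underlying component requires no change here.
    return engine.translate(doc, "ar")
```

The business-level "build vs. buy" decision then reduces to choosing which class to instantiate, while the interface prevents vendor lock-in from spreading through the codebase.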
The most neglected aspect of enterprise AI is what happens after deployment. AI models degrade over time as business conditions change — customer behavior shifts, market dynamics evolve, new product lines launch. Without active monitoring, an AI system that was 95% accurate at launch can silently drop to 70% accuracy six months later, making increasingly poor decisions without triggering any obvious alarms. The operational discipline required — automated monitoring, periodic retraining, and performance validation — is what separates AI initiatives that deliver sustained ROI from expensive experiments that quietly stop working.
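The monitoring discipline described above can be sketched as a rolling accuracy check against the launch baseline. The class name, thresholds, and window size below are illustrative assumptions; a production system would add statistical drift tests, alerting, and an automated retraining pipeline.

```python
# Sketch of post-deployment monitoring: compare a rolling window of
# labeled outcomes against the launch baseline and flag degradation.
# Baseline, tolerance, and window size are illustrative choices.

from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline=0.95, tolerance=0.10, window=500):
        self.baseline = baseline            # accuracy measured at launch
        self.tolerance = tolerance          # allowed drop before flagging
        self.outcomes = deque(maxlen=window)  # rolling window of hits/misses

    def record(self, prediction, actual):
        """Call as ground-truth labels arrive in production."""
        self.outcomes.append(prediction == actual)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def needs_retraining(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.95, tolerance=0.10, window=200)
# In production, monitor.record(...) runs continuously and
# monitor.needs_retraining() gates an automated retraining job.
```

The point of the sketch is the silent-failure problem in the paragraph above: without an explicit check like this, the 95%-to-70% slide triggers no alarm at all.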
Lasting AI transformation requires organizational change, not just technology. An AI Center of Excellence (CoE) brings together technical talent with business domain experts from across the organization. The CoE enables departments to build with AI — providing shared tools, best practices, governance, and training — rather than being a bottleneck that every AI request must pass through. The goal: AI becomes a capability embedded in how every business unit operates, not a special project owned by IT.
Careem, the GCC-born ride-hailing company, provides a practical model. Their AI team of dozens of ML experts sets quarterly goals to measure each ML model's impact on specific business streams — a rigorous practice documented in McKinsey's interview with Selim Turki, Careem's Head of AI. This metrics-driven approach ensures AI investments are continuously validated against real business outcomes, not vanity metrics.
Need an AI strategy that actually reaches production? Bridges helps GCC enterprises move from pilot to deployment with our AI Strategy & Readiness service. Schedule a consultation →
AI & Strategy Practice
Covering technical and strategic shifts across the Middle East. Deep-diving into AI transformation, regional regulatory changes, and digital infrastructure developments impacting major enterprises in the GCC.


