Winning the AI Race on Trust: The Skills Needed for the Future of UK AI with Michael Samiotis
- Juan Allan
- Aug 25
Michael Samiotis on AI in the UK: moving from promise to production. Insights on growth, trust, and building a responsible, competitive data culture

The UK's competitive edge in the global AI race will not be won by building the largest models, but by being the most trusted, deploying responsible AI that solves real-world problems with auditable safety and clear ownership.
In today's tech landscape, this shift from experimentation to production is paramount.
To explore this critical transformation, we speak with Michael Samiotis, Co-founder and Chief Data & AI Officer at DigDATAble. Michael is on the front lines, helping organisations turn this hypothesis into reality by building accountable data cultures and deploying AI that genuinely enhances decision quality and growth. He argues that the future belongs to those who treat AI not as a side project, but as a fundamental operating-model change.
Interview with Michael Samiotis

How is AI changing the tech industry in the UK today?
AI is no longer a nice-to-have capability; it now sits at the heart of how UK organisations operate. In my work with boards and delivery teams, the shift is clear: we have moved from experiments to production. AI is intertwined with the core of how organisations work across private, public and regulated sectors, from product development and operations to finance, HR, audit, risk, customer experience and public services, rather than being confined to proof-of-concept work. It is also levelling the playing field, giving smaller organisations access to tools that only large enterprises could deploy a few years ago.
The pace in the UK is brisk, and leadership teams are moving to avoid falling behind. Most boards I work with now ask two questions in the same breath: “what value can we unlock in the next quarter, and how do we do it safely?” The real transformation lies in combining AI with governed data platforms, workflow automation and modern cloud ecosystems. When those parts line up, you turn scattered data into trusted products, shorten decision cycles, remove manual effort, and improve resilience.
This is not just a technology story; it is an operating-model change. The organisations that are pulling ahead are putting clear ownership, policy and controls in place, and raising practical literacy for leaders and teams. Over the next 12 to 18 months, the winners will be the ones who move from promising prototypes to production outcomes, with evidence of value and responsible practice side by side.
What are the main opportunities for growth in AI and tech in the UK?
In my view, the UK’s biggest near-term opportunity is to compete on trust. Clients are not buying models. They are buying dependable outcomes. The strongest demand I see is for evaluation, governance, auditability and controls that make AI safe at scale. If we package that into services, platforms and playbooks that others can adopt quickly, it becomes an export advantage as well as a domestic strength.
There is meaningful headroom in modernising sectors that already sit on rich data. Financial services can move quickly on risk, payments and personalisation. Healthcare and life sciences can use AI to support diagnostics and operational planning when data stewardship is robust. Energy, utilities and transport can optimise networks and maintenance. Creative industries can accelerate production while protecting rights. Public services can improve case handling and citizen interactions with clearer service levels and oversight.
The fastest gains over the next twelve months will come from reworking everyday processes in SMEs and the public sector. Document intelligence, service desk co-pilots, knowledge search, forecasting and scheduling can remove manual effort, shorten cycle times and keep an audit trail that regulators and customers accept.
Finally, there is a growth opportunity in renewing data platforms and skills. Many organisations still run on fragmented estates. Turning data into governed products, standardising metrics and wiring analytics into workflows is a build-once, reuse-often play that pays back quickly. I pair that with practical enablement for leaders and teams so adoption accelerates. In practice, my rule of thumb is to prove value on real work in ninety days, show it is safe and measurable, then scale what works.
What challenges do UK tech companies face when adopting AI?
The hard part is not the tooling; it is organisational readiness. Many firms still operate on fragmented data estates with unclear ownership, weak lineage and uneven quality. Without trusted, well-governed data, models underperform and confidence evaporates. Operating models are often unclear, so teams experiment in pockets while core processes and controls lag.
Governance is another pressure point. Boards want fairness, transparency and accountability, yet the decision rights, policies and audit trails needed to prove this are incomplete or unevenly applied. Questions about data protection, intellectual property and model risk management arrive faster than most organisations can answer them. Security teams worry about data leakage and shadow use, while legal and compliance teams lack the evidence they need to sign off.
Talent and change present a third barrier. There is a shortage of leaders who can translate technical promise into commercial value, and many organisations lack the practical literacy across finance, legal, operations and frontline teams to use AI well. Procurement and vendor sprawl add friction. Legacy systems and brittle integrations slow delivery. Costs are often misunderstood. The spend is not only compute; it also includes data movement, engineering, monitoring and the human work needed to keep outcomes reliable at scale, which is especially challenging for SMEs.
In short, the blockers are structural. Until ownership is clear, controls are embedded, and people are equipped to use AI in real workflows, adoption stalls. The companies that get through this are the ones that treat AI as an operating-model change, not a side project, and that build from strong data governance and culture. My rule of thumb is to assign owners, SLAs and evidence packs before any model goes near production.
How is the UK government supporting AI and tech innovation?
From where I sit, the intent is clear, and the direction is broadly positive. There is a national strategy, research activity is healthy, and public bodies are asking how to use AI to improve services and productivity. The conversation has moved on from principles to delivery, which is exactly what industry needs.
Where government support works best is in signalling priorities, convening partnerships between universities and industry, and seeding early adoption in the public sector. Guidance on responsible use gives boards the confidence to move, and targeted investment in skills and infrastructure helps organisations get started rather than stay on the sidelines.
The biggest gap is speed and consistency. Organisations want to know how to deploy AI safely in high-risk processes, what evidence regulators will accept, and how to buy responsibly without months of delay. SMEs in particular need simpler routes into public sector work, predictable procurement, and access to affordable compute and data so they can compete. Funding should be stable enough to build capability, not just run pilots.
I would prioritise three things. First, a clear, joined-up assurance path for AI in sensitive use cases, with standard evidence packs and template controls that regulators recognise. Second, procurement that rewards outcomes, with lighter processes for SMEs and faster drawdown when value is proven. Third, practical enablement at scale, including shared evaluation services, data-sharing playbooks, and hands-on training for leaders and delivery teams across regions.
Do that, and the UK will convert good intent into repeatable adoption, creating exportable capabilities as well as better public services.
Are UK companies ready to compete with the US and Asia in AI?
Yes, but not on raw scale. The UK will not outspend the US or match the volume of data available in parts of Asia, so we should choose our ground. Our competitive edge is trust, quality and domain depth. If UK firms focus on safe, auditable AI that solves specific problems in finance, health, energy, government and the creative industries, they can win work at home and export those capabilities.
The leaders I see pulling ahead are production-minded. They pair strong data governance with platform engineering, ship in weeks, not quarters, and measure outcomes in cycle time, error rates and customer impact. They treat compliance as a design constraint, not an afterthought, and they build literacy across executives and frontline teams so adoption sticks. This is a different game to chasing model size. It is about dependable delivery that regulators and boards can sign off.
There are gaps to close. Access to capital, large-scale compute and pooled data remain uneven, especially for SMEs. The answer is to lead with responsible AI, interoperability and domain expertise: prove value in ninety days on real workflows, then scale what works. That is a credible path to winning against bigger ecosystems.
What skills are most needed for the future of AI in the UK?
We need to widen the lens well beyond data science and coding. The organisations that move fastest combine solid technical craft with leadership, governance and change capability. Yes, we still need people who can build and run models, engineer data, and operate platforms reliably. But the real constraint I see in the UK market is the ability to set direction, assign ownership, manage risk and turn AI into dependable outcomes across real workflows.
I think about the skills in three layers. First, a universal foundation of data and AI literacy across the organisation. Everyone should understand what good data looks like, how an AI system reaches an outcome, and where the risks lie. That means leaders who can ask the right questions, managers who can read a dashboard and challenge a metric, and frontline teams who know when to trust automation and when to escalate.
Second, practitioner paths that blend technical and operating skills. Data engineers and analysts who can design for quality and lineage. AI specialists who can document purpose, inputs and limitations, and who can monitor performance in the wild. Product and delivery people who can link investment to value, write clear acceptance criteria, and measure outcomes with service levels and KPIs. Security, legal and compliance teams who are comfortable with model risk, data protection and audit trails, not just policy on paper.
Third, leadership capability that turns strategy into behaviour. That includes governance, decision rights, procurement that rewards outcomes, financial fluency, and the human skills that make change stick. Critical thinking, facilitation, negotiation, storytelling with data and ethical judgement are not nice to have. They are the difference between a pilot and a programme that scales.
How we build these skills matters. Classroom learning on its own will not shift the dial. The most effective approach is hands-on enablement tied to live use cases, with coaching for executives and managers, clear role definitions, and communities of practice that keep standards consistent. If the UK invests in this mix of literacy, craft and leadership, we will have an adaptive workforce that keeps pace with the technology and turns AI into a durable advantage. That requires practical collaboration: shared evaluation services, lawful data sharing within sectors, and procurement that rewards outcomes rather than paperwork. I also ask teams to publish a one-page evidence pack before any go-live, covering purpose, data, controls and operational metrics, so decisions are faster and safer.
So, are we ready to compete? Yes, if we compete on the right terms.