Overcoming AI Adoption Barriers in UK Manufacturing

Artificial intelligence is rapidly becoming the backbone of modern manufacturing in the UK, turning traditional factories into intelligent ecosystems. However, as some manufacturers pull ahead with AI, others are stuck in pilot mode, faced with a multitude of blockers, or unwilling to introduce AI altogether.

In early October, I attended the Scottish Manufacturing & Supply Chain Conference. As a panellist on the CeeD ‘AI in Manufacturing’ panel, representing technology partners and our views on AI, I was certainly in good company: 14 sessions across the two days were focused on AI topics. Clearly, manufacturers want to embrace AI, but face challenges when it comes to the actual execution.

In this material and hands-on industry, UK manufacturers need the support to finally move beyond ambition to execution and reach new levels of agility (and it’s not just about robotics). As global competition intensifies and customer expectations evolve, manufacturers must rethink how they operate, and AI offers the tools to do just that. The key is knowing just where to start.

How UK manufacturers are using AI today

From predictive maintenance and process automation to robotics and demand forecasting, the transformation is underway. Those who embrace it early are already seeing the rewards. Across the UK, manufacturers are increasingly leveraging AI to reduce waste, optimise energy use (as the cost of electricity is a major challenge), and streamline operations. In practice, this looks like:

  • AI agents (whether chatbots, autonomous agents or multi-agent systems)
  • Vision systems (e.g. laser tagging, quality control checks)
  • Image recognition (e.g. stock taking)
  • Scanning and 3D printing (e.g. printing spare parts)
  • Robotics (e.g. AGVs – automated guided vehicles – on the factory floor)

Success stories are already emerging: Marks & Spencer has used computer vision to reduce warehouse accidents by 80%, while machine learning is driving smarter inventory management across several large firms. But while the potential is vast, the journey is far from straightforward. Only a small number of manufacturers are currently using AI directly in their production processes.

What’s blocking AI in manufacturing?

AI isn’t a switch that we can just turn on, especially in an industry as ‘boots on the ground’ as manufacturing. Careful thought is needed on how to apply it properly throughout the organisation. There are countless stories where this depth of planning hasn’t happened, leading to failed AI projects. Common challenges with AI in manufacturing include:

  • Misaligned strategy (in other words, AI not tied to business goals and/or lacking executive sponsorship)
  • Fragmented and/or outdated data systems that hinder integration
  • Limited internal expertise (AI fluency), and
  • Weak governance structures (around data security, compliance and policies)

Regulatory complexity and data privacy concerns add further hesitation, and many manufacturers still lack awareness about AI’s practical applications and benefits. That’s a lot to consider. It’s not surprising that manufacturing is one of the slower industries to adopt AI. Overcoming these blockers will require coordinated efforts across government, industry, and technology providers.


Future challenges to AI adoption

To fully harness AI, manufacturers first and foremost need a skilled workforce and a robust digital infrastructure. This means secure cloud-based platforms for scalable data storage and processing, industrial IoT-enabled machinery (sensors and digital twins) to generate real-time operational data, and upskilling initiatives to empower staff with AI literacy and digital fluency.

In addition, it’s important to remember that AI adoption is 20% technology, 80% transformation: it’s primarily a case of empowering people, not just machines. A quote that stayed with me at the conference was ‘The best manufacturers are not just building products, they are building people’, and workforce skills must be treated as a strategic supply chain. To this end, Artificial Intelligence needs to be seen as ‘Assistive Intelligence’.

Therefore, the importance of change management cannot be overstated. Job roles are evolving, and the companies that succeed will be those whose workforce is able to adapt to change and willing to upskill. Without these foundations, even the most promising AI tools can fall short.


The role of technology partners

Many manufacturers are stuck in pilot-mode with AI. While the appetite is there, moving beyond ideas to real-world impact takes a leap. As proven by manufacturers like Rolls-Royce, the key to bridging that gap between ambition and execution lies in strategic technology partnerships.

Collaborations with technology partners are not just technical, they are transformational. By digitalising operations, although the initial investment is high, manufacturers gain the ability to anticipate issues, forecast demand, and optimise processes. This gives even small teams big leverage and, in some cases, helps them identify new markets and service offerings as a result.

Make UK research suggests that by 2035, as much as £150bn could be added to UK GDP by closing the digitalisation gap in manufacturing. There is no doubt that the future of manufacturing is intelligent, connected, and data-driven. Whether it is a large enterprise or an ambitious SME, for most manufacturers now is the time to review how their operations can be modernised, streamlined, and optimised with AI. Technology partners should be there to meet you where you are, learn from you, and help you break through the barriers unnecessarily holding many manufacturers back.

The Future of AI Adoption: Trends and Predictions for 2025-2030

I have been speaking about the challenges and potential of AI adoption for over a year now. The latest deck from my session at the South Coast User Group in April is available here. During those 12 months, a lot has changed in the world of AI…

Recently I spoke to Markus Erlandsson and Malin Martens on the CRM Rocks podcast about all things AI, adoption and the future trends for AI in general (you can find the episode here). I promised I would create a post with all the useful resources I have found over the last few months – you can find these at the end of the post.

But first, the future of AI…(based on research by the wonderful people of Gartner).

2025

Here we are in 2025, with at least 30% of GenAI projects being abandoned after proof of concept (POC) due to one or more of the following:

  • poor data quality
  • inadequate risk controls
  • escalating costs
  • unclear business value

We are experimenting, and with experimentation, a few things have to go in the bin because they aren’t working. And that’s OK! By the end of this year, 30% of enterprises will have implemented an AI-augmented development strategy, along with a testing strategy. Software has changed the world; AI is changing software.

This is also the year that AI legislation has started coming into effect, with the EU AI Act, which I wrote about previously, leading the way. Few seem interested in this yet, but in a year that will have changed massively as more of the staggered enforcement comes into play. Read up on it now, and make sure you get your AI inventory in place…
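
Getting an AI inventory in place can start very simply: a structured record of every AI system in use, who owns it, and how risky it is. A minimal sketch in Python – the record fields and risk tiers here are illustrative assumptions loosely inspired by the Act’s categories, not its legal definitions:

```python
from dataclasses import dataclass

# Hypothetical risk tiers, loosely inspired by the EU AI Act's categories.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str
    vendor: str
    purpose: str
    risk_tier: str
    owner: str  # accountable person or team

    def __post_init__(self):
        # Reject records with an unknown risk classification.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

inventory = [
    AISystemRecord("Invoice OCR", "Acme AI", "extract invoice fields", "limited", "Finance"),
    AISystemRecord("CV screening", "HireBot", "rank job applicants", "high", "HR"),
]

# Surface the entries that need the most compliance attention first.
flagged = [r.name for r in inventory if r.risk_tier == "high"]
```

Even a list this small makes the compliance conversation concrete: the flagged, higher-risk entries are the ones to review first.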

2026

Next year will be even more data-focused when it comes to AI. AI will drive personalised, adaptive user interfaces in 30% of new apps, up from 5% today. Over 100 million humans will engage robocolleagues, or synthetic virtual colleagues, at work (although I personally am sceptical of this one).

Gartner also predicts that 75% of businesses will use GenAI to create synthetic customer data. This is very exciting! By 2030, the majority of AI models are expected to use synthetic rather than real data. I am a big fan of synthetic data, so if you read about one thing, make it this!
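
To make the idea of synthetic customer data concrete, here is a toy sketch: sample records from distributions fitted to real data, so the data’s overall shape is preserved while no actual personal information is retained. All figures and segment names below are invented for illustration:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Pretend these summary statistics were measured on real customer data.
REAL_MEAN_SPEND, REAL_STDEV_SPEND = 250.0, 60.0
SEGMENTS = ["retail", "trade", "online"]

def synthetic_customers(n):
    """Generate fake customer records that mimic the real data's shape
    without containing any actual personal information."""
    return [
        {
            "id": f"SYN-{i:05d}",
            "segment": random.choice(SEGMENTS),
            "annual_spend": max(0.0, random.gauss(REAL_MEAN_SPEND, REAL_STDEV_SPEND)),
        }
        for i in range(n)
    ]

sample = synthetic_customers(5000)
# The synthetic sample's mean spend should land close to the "real" mean.
mean_spend = statistics.mean(c["annual_spend"] for c in sample)
```

Real synthetic-data tooling does far more (correlations between fields, differential privacy, rare-case generation), but the principle is the same: model the statistics, not the individuals.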

2027

The lawyers will be very busy in 2027. Gartner predicts that by then, 70% of new employee contracts will include clauses for licensing and fair use of their AI persona (AretiBot, anyone?). This will have huge implications on how we work. It will also affect ‘who’ we work with. Can your AI persona work at more than one organisation at once? What happens if the organisation is making money from your AI persona? What happens when you leave the organisation?

In other news, it is expected that more than 50% of GenAI models by then will be industry or business function specific (fewer than 1% are today). This shift will make them more precise and thus more useful.

It is also the year when nearly 15% of new applications will be generated by AI without a human in the loop. Is this a good thing? Responsible AI practices suggest otherwise. Remains to be seen…

2028

Things will get weird in 2028… Everything we are working on right now will by then be technical debt. Yep, AI tech moves fast! Gartner predicts that 50% of enterprises will stop using large AI models built from scratch due to cost, complexity, and technical debt in their deployments. So if you are not thinking long term (which in the world of AI is 3 years!), well, start.

What is even more exciting (and equally unsettling) is that machine customers will make 20% of human-readable digital storefronts (i.e. websites) obsolete. By 2030, 20% of revenue will come from machine customers. Imagine your printer runs out of ink: it orders itself more ink. Your smart fridge recognises you are low on milk: it orders you more milk. A human won’t be doing the buying – an agent or ‘machine’ will. This is fascinating. I am endlessly curious to see how marketers will try to ‘influence’ machines and algorithms in their purchasing decisions. Humans have complex decision-making patterns, machines less so… I have no doubt they will try to encourage your printer to buy the official ink though… 😀

Please also note that by 2028, 75% of enterprise software engineers will truthfully say they are using AI coding assistants. In early 2023, fewer than 10% admitted to this. I think it is 80% right now, but the stats will take a while to catch up!

2030

Five years from now, the world of AI will look VERY different. Synthetic data will be all the rage for AI models. More importantly, we humans will engage with software in a different way.

Historically, humans have been central to the use of software. We tell it what to do: we move a mouse, type on a keyboard, use our voice. But in the future, we will be there on a need-to-know basis, by exception only. Agentic AI and autonomous agents will be everywhere. Some predict there will even be organisations with more agents than employees. I absolutely believe this. Moreover, guardian agents will oversee AI agent actions. This will be much needed, as people will inevitably try to get agents to go rogue. Let’s be honest, sometimes they go rogue on their own!

I will sit down and contemplate how I want my fridge to decide what food to buy for me. I will also think about how to override that decision when I really do want some chocolate for dessert.

Enjoy the links!

Useful Links

AI Governance/Strategy

AI Tools

Copilot

Accessibility

Responsible AI

General

Top 10 Considerations for Successful AI Adoption

I have been researching artificial intelligence tools within the Microsoft stack for a while, and it will be of no surprise to anyone that a lot of my time has been spent focused on Copilots. There is a huge number of Copilots out there in all areas of the stack (see image below). There is even more AI capability within Azure AI Services, with 1,600+ AI models available. These provide greater control and customisation for those looking to do code-first development.

Copilots… everywhere

I researched extensively the challenges around choosing the right tools and explored how to adopt and govern them. These aspects are crucial for getting value out of AI. Last year, I spoke about AI adoption at conferences, user groups, and virtual sessions; you can find the deck here. Microsoft now also has great guidance on AI adoption, strategy, and planning, together with AI checklists, which you can find here.

This blog post provides my top 10 considerations for AI adoption aimed at getting value from introducing AI tools. Last year was about execution, this year is all about results…

1. With AI, start with WHY

Some organisations have suggested AI is their business strategy, which I have always found a strange statement. Organisations should have had a business strategy and a reason for existing before AI came along. AI is not a business strategy; it is a catalyst for executing a business strategy quicker (and better!). So the first question has to be ‘what problem are you trying to solve with AI?’ – i.e. what are the use case(s) that AI can handle? Why would AI be a good tool for these use case(s) as opposed to other options? Microsoft has a great AI decision tree for its products for those needing guidance.

2. Data, Data, Data

AI is all about data and lots of it. The quality of AI outputs will be based on the quality of the data the model is trained on. Do you have enough good data to train the models on? Are you a data-driven organisation and have you considered how this will change in future (e.g. synthetic data)? Do not underestimate the sometimes hidden cost and effort involved in getting the data ready for AI.
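
Assessing whether you “have enough good data” can start with very basic checks before any heavier profiling. A toy readiness audit, assuming records arrive as simple dictionaries (field names and sample values are invented for illustration):

```python
def audit_records(records, required_fields):
    """Report basic data-quality issues: missing values and duplicate IDs.
    A toy readiness check, not a substitute for proper data profiling."""
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    ids = [r.get("id") for r in records]
    duplicates = len(ids) - len(set(ids))
    return {
        "rows": len(records),
        "rows_with_missing_fields": missing,
        "duplicate_ids": duplicates,
    }

# Illustrative records with deliberate problems baked in.
rows = [
    {"id": 1, "machine": "CNC-01", "temp_c": 61.2},
    {"id": 2, "machine": "", "temp_c": 59.8},        # missing machine name
    {"id": 2, "machine": "CNC-03", "temp_c": None},  # duplicate id + missing reading
]
report = audit_records(rows, required_fields=("machine", "temp_c"))
```

If even a cheap check like this turns up a high proportion of gaps and duplicates, the hidden data-preparation cost mentioned above is likely to be substantial.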

3. Skills, and Decisions

Do the people who will use AI tools and solutions have the right skills? Do they have the knowledge to use them responsibly? Do those making decisions on AI tools understand enough about how they work and their limitations? Is it clear when it is better to build a custom solution versus buying one off the shelf? It is important to ensure upskilling happens at all levels so that the right decisions are made – decisions that will lead to value. According to Gartner, by 2028 over half of the large AI models built from scratch will be abandoned, driven by rising costs, complexity, and technical debt. AI technology is evolving so quickly that ‘technical debt’ conversations are not far away.

End users of the technology also face challenges: they need to develop new habits and mindsets to remember to turn to AI when in need, but old habits die hard. Every organisation will have a different maturity level, with most of us in the implementing stage.

4. Ethical Considerations

If the organisation you work for stated it intended to use an AI model to decide on employee promotions or salary increases, what would your reaction be? It’s important to consider any ethical implications of using AI models. If the data they are based on is biased, the models could be biased too. AI models are constantly learning and are inherently unpredictable; therefore, human review and accountability are key. Microsoft’s responsible AI principles cover a lot of the key areas for consideration: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

I would also add two more:

  • Scalability – is the solution capable of flexing with the organisation’s changing needs, or will it be short-lived by design?
  • Sustainability – AI models are power hungry, and consideration needs to be given to the vast infrastructure required to support them and to their environmental impact.

5. Integration with Existing Systems

AI models, whether custom-made or off-the-shelf, will interact with existing infrastructure and software. When evaluating options, it is important to consider how well they integrate with the existing ecosystem; selecting the right tool now is crucial for long-term compatibility.

6. Security and Privacy

AI models use data; this one goes without saying. If an AI model is free to use, ask what the model owner is doing with your data – it is important to know how your information is handled, and if it is free, you are the product. Security and privacy management is crucial to ensuring AI is implemented responsibly. Rogue AI, AI jailbreaks, and AI scams are on the rise. Will we soon see an AI security breach in the news? Microsoft has created two open-source packages aimed at privacy and security, SmartNoise and Counterfit, that are worth looking into. It’s also important to remember that anyone can bring their own AI to work, so guidance must be provided on what is and isn’t allowed in that respect as well.

7. Scalability and Flexibility

The use cases for AI can change over time, and AI usage can increase exponentially. It is important that the AI model or solution chosen can scale to new or changing use cases and is flexible enough to handle higher usage over time. If AI tool decisions are made in isolation – for example, by departments not communicating with each other – multiple tools addressing similar use cases might be introduced, only to be abandoned a few months later because they cannot scale or flex with the organisation’s needs. Having oversight of AI requirements across the organisation is crucial, and an AI strategy can mitigate this risk in most cases.

8. Cost and ROI

It is no secret that AI tools and solutions, and even the running of the models themselves, can be extremely costly, so it is important to consider the long-term financial investment. Think in years rather than months, as the time to realise value from an AI solution can vary. A clear investment plan needs to be in place: use cases need to be prioritised, approved, and socialised within the organisation, and there should be a clear approach to how the return on investment will be measured. ROI can be particularly difficult to calculate for AI solutions focused on personal productivity; conversely, solutions aimed at organisational enhancement commonly have clear metrics, and KPIs are usually defined from the start. Before implementation, it is important to have governance and monitoring in place to allow ROI to be measured. It is also worth considering an AI Centre of Excellence to link ROI back to business objectives.
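
Thinking in years rather than months can be made concrete with a simple payback calculation. A minimal sketch – all figures below are hypothetical, and real ROI models would also discount future cash flows and account for benefit ramp-up:

```python
def payback_months(upfront_cost, monthly_run_cost, monthly_benefit):
    """Months until cumulative benefit covers cumulative cost, or None if
    the benefit never outpaces the running cost. Illustrative only."""
    if monthly_benefit <= monthly_run_cost:
        return None  # the solution never pays for itself
    net_monthly = monthly_benefit - monthly_run_cost
    months = 0
    cumulative = -upfront_cost
    while cumulative < 0:
        cumulative += net_monthly
        months += 1
    return months

# Hypothetical numbers: £120k upfront, £5k/month to run, £15k/month saved.
months = payback_months(120_000, 5_000, 15_000)
```

With these invented figures the solution breaks even after a year; the useful part of the exercise is that it forces the monthly benefit to be estimated and owned before implementation, which is exactly where governance and monitoring come in.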

9. Change Management

Software that is not used by humans is of little value. We can introduce extremely complex, brilliant, and expensive AI solutions that solve no problem whatsoever; adoption will be low, and the benefits will be none. Introducing AI tools is like introducing any other new technology: some people will be categorically against it and will not touch it, others will be very excited about its capabilities, and many will be somewhere in between. It is important to guide the end users of AI solutions so they understand why AI is being introduced, what use case it is addressing, and what problem it is aimed at solving.

We then need to make it OK to fail and try again. AI is new and unpredictable: not everything will work the first time, not everything will be useful, and some of it will be just nonsense. Establishing a change management process from the beginning ensures adoption starts high and remains high as AI models, updates, and new features are introduced. AI champions or super users in an organisation can have a massive impact on this – identify them early and involve them as soon as possible!

10. Constant Feedback

Last but not least is the need for constant feedback, from both the human and the technical sides. End users of AI solutions need a process for providing feedback: what isn’t working, what could be done better, and what should be done next. Any AI strategy, governance, and policies introduced also need frequent review to check they are still relevant. AI is changing the way we work, and we can only adapt quickly enough with a feedback loop in place that identifies where AI is winning and where it is losing the battle.

Finally, some very useful links for those of you who stayed with it until the end! Check out:

And don’t forget to let me know how your AI adoption is going!