Article in short:  

  • Danske Bank treats generative AI as both a technology and an organisational change: move fast on experimentation, but build the guardrails needed for a highly regulated industry. 
  • To enable safe use, the bank created Danske GPT (2023) – an internal, sealed solution designed to keep sensitive information within trusted boundaries – and later expanded it into a broader internal AI assistant platform and agent-based solutions. 
  • To scale adoption (not just access), Danske Bank set up an Adoption Cluster: an enablement unit with five identical cross-functional squads that help business areas take AI use cases from idea to production. 
  • After just over a year, the approach has delivered measurable outcomes: 30+ AI use cases in production and five squads running with shared engineering standards, CI/CD pipelines, security measures and governance. 

Drawing on hands‑on experience, Marijus Masteika, Chief IT Software Architect, and Rasmus Ræbild Jespersen, CC at Danske Bank, explain how Danske Bank works with generative AI in practice. They share what makes AI adoption in banking challenging, why secure foundations and skills matter, and how Danske Bank has structured its approach to scale AI responsibly. 

AI brings opportunity – and new kinds of risk 

Banks operate in an environment where trust, availability and compliance are non‑negotiable. Systems run continuously, customer data is sensitive, and mistakes can have serious consequences. 

As Marijus puts it, “Financial institutions are known to die from heart attacks. You don’t get small warnings – something happens, and suddenly the bank is gone. For us, mistakes are very painful and could be fatal.” 

AI amplifies both opportunity and risk. It can dramatically speed up analysis, writing and decision support – but it can also hallucinate, leak sensitive data, be manipulated through prompt injection, or lead to breaches of regulations such as GDPR, the EU AI Act or DORA. 

“AI is like a nuclear power station,” says Marijus. “It can provide a lot of useful and cheap energy, but if it goes out of control, you have Chernobyl.” That is why AI at Danske Bank has always been treated as both a technological and organisational challenge. 

Building a foundation that makes AI safe to use 

Early on, Danske Bank focused on creating a secure environment where employees could experiment with generative AI without putting data or compliance at risk. In 2023, this led to the development of Danske GPT – an internal, sealed AI solution designed to ensure that sensitive information stays within trusted boundaries. 
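
For illustration only – this is not Danske Bank's actual implementation – the sketch below shows the general pattern behind a "sealed" internal assistant: prompts are sent only to a model endpoint hosted inside the organisation's own network, and a simple pre-flight check blocks obviously sensitive identifiers before anything leaves the user's hands. The endpoint URL, request format and redaction rules are all hypothetical placeholders.

```python
# Minimal illustrative sketch of a "sealed" internal assistant (hypothetical,
# not Danske Bank's implementation): prompts go only to an internally hosted
# model endpoint, and obviously sensitive identifiers are blocked up front.

import re
import requests

INTERNAL_ENDPOINT = "https://ai-gateway.internal.example/v1/chat"  # hypothetical internal URL

# Toy patterns standing in for real data-loss-prevention rules.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{10}\b"),       # e.g. account-number-like digit runs
    re.compile(r"\b\d{6}-\d{4}\b"),  # e.g. national-ID-like formats
]

def ask_internal_assistant(prompt: str, user_id: str) -> str:
    """Send a prompt to the internal assistant, refusing obviously sensitive input."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain sensitive identifiers; request blocked.")

    response = requests.post(
        INTERNAL_ENDPOINT,
        json={"prompt": prompt, "user": user_id},  # hypothetical request schema
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]  # hypothetical response field
```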

Over time, this foundation expanded into a broader internal AI assistant platform and later into agent‑based solutions. But while the technology evolved quickly, another insight became increasingly clear: access alone does not guarantee impact. 

“People ask how much efficiency the technology can bring,” Marijus says. “And the real answer is: it depends. Everything depends on skills and users.” 

From platforms to adoption

It quickly became clear that uneven adoption limits overall impact. Some employees achieved significant productivity gains, while others struggled to apply AI meaningfully in their work. 

“To get real impact, we need mass adoption,” Marijus explains. “Not a few experts, but tens of thousands of people who know how to use these tools meaningfully.” 

To address this, Danske Bank established an internal Adoption Cluster – not as a central AI factory, but as an enablement unit designed to help business areas adopt AI independently over time.  

An internal setup built for learning and scale  

The Adoption Cluster is organised into five identical cross‑functional squads operating across Lithuania and India. Each squad combines developers, a data scientist, an architect and a business analyst, creating teams that can take AI use cases from idea to production. 

The design focuses on repeatability and knowledge transfer. Business areas always own their products and systems, while learnings – both technical and organisational – are shared across squads. 

“The benefit of identical squads,” Rasmus explains, “is that learnings in one place can be reused elsewhere, whether it’s technical patterns or ways of working.” 

Meeting teams where they are 

Not all business units are equally mature when it comes to AI. Some already have strong technical capabilities, while others are just starting. Instead of a one‑size‑fits‑all model, the Adoption Cluster works through three engagement approaches, depending on the complexity and maturity of the domain. 

Some teams work independently using internal guides and playbooks. Others implement solutions themselves while consulting AI specialists when needed. For complex or high‑risk initiatives, adoption squads fully collaborate with business teams. Over time, the goal is always progression – from collaboration, to consultation, to self‑service. 

“We want teams to move over time and become self‑sufficient,” Rasmus says. 

Building ownership sprint by sprint 

One key learning has been that forming effective project teams takes time – and that this is part of the delivery, not overhead. Domain experts contribute business context, while adoption squads bring AI delivery expertise. Ownership shifts gradually, sprint by sprint, as domain capabilities grow. 

“The adoption team leads early on,” Rasmus explains, “but domain members are building alongside from the start.” This approach keeps formal handovers lightweight and reduces long‑term dependency. 

One year in: tangible results 

After just over a year, the approach has delivered measurable outcomes: 

  • 30+ AI use cases have been taken into production, including RAG‑based assistants (a brief sketch of the pattern follows this list) and complex workflows. 
  • Several initiatives were deliberately stopped at the prototype stage when a simpler solution emerged or the return on investment proved insufficient. 
  • Five cross‑functional squads now operate with shared engineering standards, CI/CD pipelines, security measures and governance frameworks. 
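
The first bullet above mentions RAG‑based assistants. As a minimal, self‑contained illustration of that pattern – not Danske Bank's implementation – the sketch below retrieves the internal documents most relevant to a question and assembles a grounded prompt for a language model. The example documents, the toy word‑overlap scoring and the prompt format are all placeholder assumptions.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve relevant
# internal documents for a question, then hand them to a language model as
# context. The retrieval here is a toy word-overlap score; a real assistant
# would use embeddings and an internally hosted model endpoint.

from collections import Counter

DOCUMENTS = [
    "Expense claims must be submitted within 30 days of purchase.",
    "Customer complaints are logged in the case-handling system.",
    "Travel bookings require prior approval from a line manager.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: shared word count between query and document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt a RAG assistant would send to the model."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# build_prompt("How long do I have to submit an expense claim?") would then be
# passed to an internally hosted model for answer generation.
```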

Not everything succeeded – and that was intentional. 

“We’ve had to kill some of our darlings,” Rasmus admits. “Some ideas didn’t have a strong business case, and in other cases the technology evolved while we were still working.” 

The experience has reinforced a few core lessons: secure foundations enable safe experimentation; flexible engagement models support adoption across diverse teams; and continuous learning matters more than speed alone. 

The insights shared in this article were originally presented at the Vilnius AI Summit 2026.