AI poses the greatest threats to financial, healthcare, and legal institutions

"Generative Artificial Intelligence is a very powerful tool. It offers many advantages and opportunities to organizations, primarily significantly improved employee productivity, work quality, and speed. However, despite the positive impacts, Artificial Intelligence also poses numerous threats. These are relevant to many businesses in various sectors already using AI, but some such as finance, healthcare, and the legal system, are particularly sensitive to any threats posed by AI," says Marijus Masteika, Danske Bank's Chief Software Architect.

He categorizes the risks posed by AI to businesses into four groups, the first being data security and confidentiality. "All financial institutions, including banks, have accumulated and manage a lot of strictly confidential, sensitive information. Any leakage of this information, even indirectly, can have a significant impact on financial markets and stock prices," says M. Masteika. Large data leaks, according to him, can occur during business information analysis. "Artificial Intelligence may unexpectedly reveal sensitive customer or business information, especially when unverified third-party automation tools and APIs are used."

The second group of threats posed by AI to organizations, as identified by Marijus Masteika, is potential damage to an organization's financial results and reputation. "Sometimes organizations and their employees may misuse AI tools and services. For example, they may rely too heavily on the AI system and make critically important decisions without understanding or properly assessing its limitations," says M. Masteika. According to him, AI models can hallucinate and therefore provide incorrect, misleading information. In both cases, this can trigger a wave of negative reactions: the company may lose customer trust, damage its reputation, and ultimately hurt its financial results.

Cybercriminals can exploit the vulnerabilities of the AI system

Danske Bank's Chief Software Architect identifies the third group as threats from cybercriminals who utilize AI and exploit its vulnerabilities.

"Cybercriminals can exploit the vulnerabilities of the AI system to gain unauthorized access to the company's network, leading to significant cybersecurity incidents. Perpetrators, not only external but potentially also company employees, can use AI tools to manipulate market trends, create and disseminate deep fakes, deceive employees, and thereby destabilize the financial market," shares Marijus Masteika.

Finally, according to him, severe consequences for organizations such as banks can arise if the AI system does not comply with legislation and regulations: "We, as a financial institution, are strictly regulated and must comply with various requirements and laws, not only of Lithuania but also of the European Union. We must also ensure that all national and EU laws related to the use of artificial intelligence are implemented in our organization. For example, failure to ensure that personal data is processed only with the individual's consent, or non-compliance with other requirements, could result in hefty fines and significant reputational damage."

The bank developed an internal AI tool for all employees to use

To avoid the threats arising from the use of AI, Danske Bank first of all does not allow the use of external AI tools within the organization. "We fully understand that blocking them significantly limits the organization and prevents us from taking advantage of the aforementioned AI benefits. Therefore, we have created an internal AI tool, DanskeGPT, which essentially replicates what the popular ChatGPT can do, but it operates in a closed system, and the data used in it does not leave the organization's boundaries. We also know, and always vet, the wide spectrum of language models the tool uses."

To ensure security, all Danske Bank employees must complete safe and responsible usage training before starting to use DanskeGPT. "All employees have the opportunity to use the tool, but they must first earn the right to do so by completing the training. Employees are not taught how to use the tool, since most of them already know that; instead, we explain the risks, what can and cannot be done with the tool, and what information must not be included in a request. For example, confidential information about clients cannot be provided. This means that all the information given to the tool must be anonymized," explains M. Masteika.

"We also can't let the robot make decisions. Artificial Intelligence is an assistant, helper, adviser to us, it provides information, but the employee makes the decision," shares Marijus.


A department dedicated to AI security and its development within the organization

Although all of the organization's employees can use the tool created by Danske Bank after training, and it can be used in all areas, according to M. Masteika it delivers the greatest benefit in software development: "IT colleagues are the most receptive to technologies and probably understand them best. They are very interested in innovations and, being skilled in using technologies, gain the most from the tool. Another area where colleagues widely use DanskeGPT is administrative tasks, such as preparing presentations and writing emails, texts, and translations. The tool indeed lays out the thoughts and ideas provided by the user beautifully and clearly. Creating this type of content is perhaps the simplest task for large language models."

Danske Bank also has the knowledge and resources to ensure that any AI tool used within the organization is safe and controlled. For this purpose, a separate department operates at Danske Bank, responsible for the vision and mission of safe AI usage and development; it understands the threats and decides which tools can be used safely. According to M. Masteika, there is simply no other choice if a company wants to increase productivity and thus maintain competitiveness, both as a business and as an employer. In this sense, the contribution of financial institutions to the development of AI can and should be its safe use, doing no harm and only adding value to the organization, its employees, customers, and the entire market.