Yet as AI becomes widespread in every field, our relationship with technology at work is unavoidable – and it raises more and more questions. Is AI already smarter than humans? What can it do independently, and where does it still need human input? Why, despite impressive progress, is the human-AI duo still considered the most effective work model today? And what’s still missing for AI to operate fully on its own?
Artificial intelligence IQ: Is AI already smarter than humans?
Marijus Masteika, Chief Software Architect at Danske Bank, responsible for developing and applying AI tools in practice, explains that AI intelligence isn’t measured like human intelligence – there’s no IQ score. However, AI models are constantly tested by various organisations, and the results are publicly available.
“Every new AI model is tested against a wide range of criteria – from language comprehension to solving complex tasks. There are many public leaderboards where models are ranked and compared, and also initiatives like ‘Humanity’s Last Exam,’ which directly compare AI intelligence to human expert knowledge.” – says Masteika.
He notes that AI is also evaluated on real exams – mathematics olympiads, university entrance exams, and medical licensing tests – where the most advanced models achieve excellent scores.
For example, at the 2025 International Mathematical Olympiad (IMO), Google and OpenAI systems delivered gold-medal-worthy results. Meanwhile, the DeepSeek model not only reached gold medal level at the IMO (solving 5 out of 6 problems) but also scored 118 out of 120 points at the Putnam 2024 competition – far surpassing the best human score of 90 points. These achievements show that AI can already solve extremely complex problems better than most people.
“You can’t compare AI to a single person – it’s more like an encyclopedia covering countless fields. From law and medicine to languages and programming, humans simply can’t match that breadth. But in areas requiring deep understanding and conceptual thinking, like mathematics or physics, human creativity remains unique.” – explains Masteika.
Why do we still hesitate to admit that we use AI at work?
Although few doubt the benefits and capabilities of AI today, some people still feel uneasy admitting they use it at work.
“There are cases where people even apologise for completing tasks with AI. It’s often seen as proof that we couldn’t do it ourselves. But to me, it’s like self-consciously admitting you typed a text on a computer instead of writing it by hand. Technology is advancing rapidly, so why not use it to work faster and better? In our organisation, we encourage everyone to use AI and offer training.” – says Masteika.
In many fields, AI has long surpassed humans – especially where speed, information volume, or technical accuracy are key. Competing with technology in these areas is simply pointless. For example, no matter how fast you type, AI can generate text dozens or even hundreds of times faster.
“If I had to choose between hiring a junior programmer or working with an AI assistant, I’d choose AI. Humans just can’t grasp and complete tasks that quickly. So, there’s no point in competing with AI or being ashamed to admit we use it – it’s much more effective to work together. If you’re an employee who’s mastered AI, your productivity multiplies.” – explains Masteika.
Competing with AI is pointless – but it's still too early to let it work alone
However, speed and a broad knowledge base don't mean independence. AI can handle many tasks, but it still makes mistakes, so its output needs to be critically evaluated. According to Masteika, AI's errors are often judged more harshly than human ones – not because they're more dangerous, but because it's unclear who should take responsibility for them.
“We often forget that humans make mistakes too, sometimes serious ones. The difference is that human errors seem more understandable and acceptable. For example, autonomous cars have been discussed for years. Imagine: in Lithuania, there are about a hundred fatal car accidents per year. If all cars were replaced by self-driving ones, that number might be cut in half. But that would still mean dozens of deaths caused by AI. And here lies the paradox: who would take responsibility for them? Responsibility is one of the key differences between humans and AI.” – says Masteika.
He points out that AI can offer solutions, but for now, it can’t take responsibility for the consequences – and that’s why the human role remains essential.
“Where consequences are irreversible, human intervention is still necessary. Responsibility is the boundary AI hasn’t crossed yet, and it’s one of the main reasons why the human-AI duo remains the most effective work model. Humans live in the real world and are accountable for their actions – AI cannot be. Even if technology works perfectly, the final decision and responsibility rest with people.” – explains Masteika.
According to Masteika, for now AI can operate autonomously only in areas where mistakes can be checked, corrected, or reversed. Where decisions have irreversible consequences, human involvement remains essential. This is also reflected in legislation such as the EU AI Act, which mandates human oversight in high-risk areas. For example, when AI is used in medical diagnostics, financial assessments, or employee selection, the final decision must be made and confirmed by a person – and the responsibility for it assumed by that person or the organisation they represent.
