I'm fascinated by computer science mainly because computer scientists are like people of a tribe who create things for people of other tribes. So, as in psychology, you still work with the human factor in some way, says Mária Bieliková.
Mária Bieliková works on artificial intelligence and machine learning at the Kempelen Institute for Intelligent Technologies, which she founded in 2020. This year, she won the Eset Science Award as an outstanding science personality in Slovakia. Martin Haraj interviewed her as part of the Women in Science series.
Professor Bieliková, you are a computer scientist, but you also wanted to be a psychologist, so what fascinates you about computer science?
I'm fascinated by computer science mainly because computer scientists are like people of a tribe who create things for people of other tribes. So, as in psychology, you still work with the human factor in some way.
If you want to be a really good computer scientist, you have to understand at least a little bit about the domain that you're operating in, so the field is rather multidisciplinary.
And now, in my older days, I am growing more and more fascinated by working with people and bridging gaps. And science is something that complements this very well.
What is the main goal of the institute that you founded in 2020?
It's this bridging of the gaps we were talking about.
We are trying to bring excellent science to companies by linking academia with business, and by academia we don't mean only the Kempelen Institute itself. You can think of the institute as a little boat: flexible, able to change direction quickly and adapt to changes.
It can approach both sides: on the one hand it educates innovative researchers, on the other it tries to spark entrepreneurs' curiosity, and then it serves both of these parties so that they can work and communicate better together.
You can't do that without actually having excellent science right here at home; you have to truly understand what you are trying to get across.
Artificial intelligence has moved to the absolute forefront of awareness around the world in recent months. The Nobel Prize in Physics has been awarded to two gentlemen working on artificial intelligence. Are we, as humanity, moving in a good direction in this field?
I guess it is hard to say whether the direction is good or bad. I think our whole world is built on paradoxes. I believe that there is quite a bit of artificial intelligence in our lives already, and we would probably have significantly different lives without this technology. It is up to us what we do with it. I am happy with the way things are going so far, although maybe they are going a little too fast.
We humans are not really prepared for it.
Most of my scientific career, I have been dealing with information processing, that is, methods of how to analyse information and how to filter it. The problem is that we not only have a lot of information, but it also causes confusion, because we don't know what's true, what's false, what's real, what's not real, what comes from which source. That's a serious problem.
Artificial intelligence is producing that confusion and helping us remove it at the same time, so it really is full of paradoxes. But the speed is totally unprecedented. It is said that we live in complex times because of the web, artificial intelligence and these technologies, which allow changes in our lives to happen at a speed we have not experienced before.
How can we find out whether the information reaching us is, for example, harmful?
That's hard to detect, it's also very hard for a human to detect, because there is no algorithm for that.
I don't even think an algorithm like this could exist.
But what we can do, and what we are doing at the Kempelen Institute as part of several international projects, is to detect signals that show how credible the information is: who created it, what its history is, whether some not quite appropriate argumentation techniques were used, for example. And at the end of the day it is still up to the person to make that judgement, to evaluate it.
Technologies can be very helpful in that. The basic operation that machine learning and deep neural networks can do well is classification. They can either sort things into known categories, or they can directly discover those clusters themselves, and when you find, for example, that there are patterns with certain characteristics, that can help you make a judgement.
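To make the idea concrete, here is a minimal sketch of that basic operation: a toy classifier that learns to separate "credible" from "suspicious" text snippets. This is not the Kempelen Institute's actual system; the example sentences, the labels and the scikit-learn setup are invented purely for illustration.

```python
# A minimal sketch (not the Kempelen Institute's pipeline): a toy text
# classifier that labels short snippets as "credible" vs "suspicious".
# The sentences and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The study was peer reviewed and the data are publicly available.",
    "The researchers describe their methodology and cite primary sources.",
    "SHOCKING!!! Doctors HATE this one secret trick, share before it's deleted!",
    "Anonymous insiders reveal the truth THEY don't want you to know.",
]
labels = ["credible", "credible", "suspicious", "suspicious"]

# Turn each text into a vector of word weights, then learn a decision boundary.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_text = "Unbelievable secret cure revealed, experts are furious!"
print(model.predict([new_text])[0])     # predicted class
print(model.predict_proba([new_text]))  # how confident the model is
```

Real credibility signals of the kind described above would of course draw on much richer features, such as the source's history and the argumentation techniques used, not just word statistics.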
Even among AI developers there are certainly those who have nefarious intentions. Can we defend ourselves against such people?
First of all, I would push back a little and defend the developers, because they just develop the technology. Yes, there are people who, both intentionally and unintentionally, cause problems. I will give you a beautiful example, which has been talked about very often: social networks.
They were not developed with any malicious intent. Artificial intelligence algorithms are built to optimise a function, and they can be very good at it, in many cases much better than humans. And humans are the ones who choose which function will be optimised.
If my optimisation function is "keep people on my platform, on my social network", you have a bunch of inputs, you see how the users behave, and the artificial intelligence or machine learning algorithms are very good at recognising the signals that actually represent our emotions. The structure is such that vulgar, emotionally charged content gets more reactions, yet the artificial intelligence does not understand this at all; it does not know it is wrong, it just processes it. Again, it is a person who can say, "this is not good, this is manipulating people", or who can say, "I am making a lot of money out of this, so as long as nobody says anything to me about it, I will keep doing it". So those intentions usually come later. I personally think people are inherently good, but we create situations that are not really good for us humans.
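As an illustration of that point about choosing the objective, here is a minimal sketch of a toy feed ranker. The posts, the signals and the "engagement model" are all invented; the only thing the sketch shows is that the algorithm mechanically ranks by whatever function a human tells it to optimise.

```python
# A toy sketch of the point above: a recommender simply ranks items by the
# objective it is given. The posts, scores and the engagement model are
# hypothetical, invented for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float    # hypothetical signal: how emotionally charged the post is
    informativeness: float  # hypothetical signal: how informative the post is

posts = [
    Post("Calm, well-sourced explainer on a new study", 0.1, 0.9),
    Post("Furious rant blaming 'them' for everything", 0.9, 0.2),
    Post("Mildly funny cat picture", 0.2, 0.1),
]

def predicted_time_on_platform(post: Post) -> float:
    # Toy engagement objective: outrage keeps people scrolling more than information does.
    return 0.8 * post.outrage_score + 0.2 * post.informativeness

# The choice of function is the human decision; the optimisation itself is mechanical.
feed = sorted(posts, key=predicted_time_on_platform, reverse=True)
for post in feed:
    print(f"{predicted_time_on_platform(post):.2f}  {post.text}")
```

Swap the objective, say, weight informativeness above outrage, and the same mechanical ranking produces a very different feed; the choice of function remains a human decision.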
You were part of the group that dealt with the trustworthiness of artificial intelligence within the European Union. What was the essence of the group's work?
The European Commission, or Europe in general, places a very high value on human beings. We are among the leaders in this area worldwide, and in 2018 the European Commission set up a high-level group focused solely on artificial intelligence. It selected around 50 experts, and we created guidelines on trustworthy AI and also a list of things we should look out for in AI systems so that they are trustworthy.
It is a very serious issue and it is not an easy one because the technology is just evolving.
In this group there were computer scientists, psychologists, lawyers, academics and professionals from the field, both men and women, because there was an effort to keep it balanced. And it was women who played an important role in this group, because they brought different perspectives. The result was two documents that are now among the most quoted and used, not only in the European Union member states but also in other countries. They are used and referenced when designing trustworthy artificial intelligence systems. I am very grateful that I could be part of it.