Machine learning and artificial intelligence cause a lot of worries, and machine ethics is supposed to solve the problems.
However, the solution does not lie in a machine ethics, but in the responsible actions of the people who stand behind AI and develop it.
What is artificial intelligence?
Artificial intelligence refers to systems that are self-learning. There is still some confusion about the term, because it is a broad concept. This vagueness comes down to two factors.
1. We associate it with (science fiction) films. But artificial intelligence is already at work in the computer or mobile phone you are using right now. Think also of systems in your car (such as ABS) or of how Google and Facebook work. That is all artificial intelligence in action.
2. We think of robots. But the robot is only the packaging or the form; artificial intelligence is the content: what a robot does and the decisions it makes.
Artificial intelligence explained
As a researcher and entrepreneur, Max Welling is an international authority in the field of artificial intelligence and machine learning. He is a professor at the University of Amsterdam and VP of Technologies at Qualcomm.
I interviewed him about artificial intelligence and machine learning at the NRC Live ‘Winning with AI’ event in April 2019 [link below]. In this interview he explains what artificial intelligence is, what forms there are, and what the pros and cons are.
Forms of artificial intelligence
As I wrote, the term is very broad. Broadly speaking, there are three forms:
1. Narrow artificial intelligence: specialized in one thing, like a chess computer. This is also called weak AI.
2. General artificial intelligence: good at several things. This is also called strong AI. An example is IBM’s Watson, which can apply multiple forms of thinking: problem-solving, abstracting, reflecting, and learning from previous experiences.
3. Superintelligence: according to Nick Bostrom, a comprehensive form in which artificial intelligence surpasses the whole of human intelligence, including creativity, wisdom, and social skills. I will write more about this later.
Now you may think that the first form does not amount to much, but indexing all the websites on the internet yourself would be an impossible task for any human. Fortunately, Google’s artificial intelligence does this very well. So you already use artificial intelligence daily, without noticing it.
The Unseen – Technology continues to evolve
Autonomous systems such as delivery robots, self-driving cars, or robotic colleagues in the factory require learning methods: they have to be able to handle surprises. Learning systems are fundamentally new. Traditionally, humans have built technical devices that were designed for specific purposes, functions, and properties. Learning systems, however, can change themselves.
Learning technology takes on a life of its own. The properties of an algorithm can change in unpredictable ways, because what is learned is just as unpredictable as the results of that learning.
This is the provocation of artificial intelligence: technology no longer stays the way people made it, but gains the ability, and the mandate, to evolve on its own.
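To make this concrete, here is a minimal sketch in Python of a system whose behavior is not fixed at design time. All names and numbers are hypothetical; the point is only that the deployed behavior emerges from observed data rather than from the program text.

```python
import random

# A minimal sketch of a system that changes its own behavior through learning.
# The engineer writes only the learning rule; which action the deployed system
# ends up preferring depends entirely on the rewards it happens to observe.
# All names and numbers here are hypothetical illustrations.

estimates = {"cautious": 0.0, "aggressive": 0.0}  # learned value per behavior
counts = {"cautious": 0, "aggressive": 0}

def choose_behavior(epsilon=0.1):
    """Mostly exploit the currently best-rated behavior, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

def learn(behavior, reward):
    """Incremental averaging: the agent rewrites its own preferences."""
    counts[behavior] += 1
    estimates[behavior] += (reward - estimates[behavior]) / counts[behavior]

# If the environment happens to reward "aggressive" slightly more, the system
# drifts toward it -- a property no engineer wrote into the program text.
for _ in range(1000):
    behavior = choose_behavior()
    reward = random.gauss(1.2 if behavior == "aggressive" else 1.0, 0.5)
    learn(behavior, reward)

print(estimates)  # a learned behavior profile, not a designed one
```

The engineer wrote the learning rule, but not the resulting preference; that is exactly the unpredictability described above.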
Machines decide – do humans lose control?
Is this a loss of control for humans? Do autonomous cars decide over life and death? All the familiar dilemma scenarios rest on this assumption.
But the assumption is wrong. The technology, here the on-board computer, does not decide on its own, out of some kind of computer ethics that it invented itself.
Otherwise one would have to sue the computer or the algorithm and take it to court when bad things happen. But the computer merely runs through a program, nothing else.
This program is man-made and decides according to human criteria. Programmers at automotive suppliers write these programs according to managerial and regulatory requirements, guided by ethical guidelines that have been democratically legitimized at the political level.
So if it comes to that, it is not the on-board computer that will be charged and possibly fined or imprisoned, but the people responsible for its functioning.
Even with self-driving cars, the decision over life and death remains with humans. However, it becomes less and less clear who, concretely, that human is.
This raises the question of ethical and legal responsibility for autonomous vehicles. Unlike today, when a concrete human driver harms other people, in a dilemma situation with an autonomous vehicle nobody will feel truly responsible.
After all, it is not at all clear who is responsible for death or for damage to health and property: the programmers, their ethics consultants, the supplier firm, the manufacturer, the managers, or the operators of the autonomous car? That must be clarified beyond doubt, and legally.
How an autonomous car reacts to problem situations, and who then comes to harm, depends on the ethical rules that are built into the software in the form of algorithms. The principles by which this happens must be transparent.
They must be examined ethically and debated publicly if we are to gain confidence that, in decisions over life and death, it is not some machine acting blindly but the careful human considerations behind it that make the difference.
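What such transparent, human-written rules can look like in software is sketched below. Everything here is a hypothetical illustration, not an actual automotive standard: the maneuver names, the risk threshold, and the priority ordering are assumptions chosen for the example.

```python
from dataclasses import dataclass

# A minimal sketch of "ethical rules built into software": explicit,
# human-written, auditable constraints rather than opaque learned behavior.
# All names, numbers, and rules are hypothetical illustrations.

@dataclass
class Maneuver:
    name: str
    risk_to_persons: float   # estimated probability of harming a person
    risk_to_property: float  # estimated probability of property damage

def permitted(m: Maneuver) -> bool:
    """Hard constraint, written and reviewed by humans, not learned."""
    return m.risk_to_persons <= 0.01  # hypothetical regulatory threshold

def choose(maneuvers: list[Maneuver]) -> Maneuver:
    """Among permitted maneuvers, prefer the one least harmful to persons,
    then to property. The ordering itself encodes an ethical priority that
    can be read, debated, and audited."""
    allowed = [m for m in maneuvers if permitted(m)] or maneuvers
    return min(allowed, key=lambda m: (m.risk_to_persons, m.risk_to_property))

options = [
    Maneuver("brake hard", 0.002, 0.30),
    Maneuver("swerve left", 0.008, 0.05),
]
print(choose(options).name)  # -> "brake hard": persons outrank property
```

The point is that the ethical priority, persons before property, lives in a few readable lines that can be audited and debated, not buried in learned weights.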
Learning, yes – but no matter what?
Autonomous systems are supposed to learn, and AI provides the technical means. But is learning automatically always a good thing? What if autonomous cars learn that they can move faster by driving aggressively and thereby generate better returns for their operators?
Or think of the learning algorithm that was deployed experimentally on a social network with the goal: maximize the response to your messages! Within a few hours it had turned into a right-wing radical.
Or, put differently: AI is supposed to learn in its respective environment. But that means that in a majority-Nazi society, an AI-controlled robot would become a Nazi robot.
AI lacks the human ability to distinguish between is and ought. It only learns what is, and what is does not always have to be good. This requires ethical attention and possibly also built-in, ethically legitimated guard rails.
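What such a built-in guard rail might look like is sketched below, again with hypothetical names: a learned model may score content however it likes, but a human-written constraint filters what the system is allowed to do, whatever the predicted engagement.

```python
# A minimal sketch of an "ethical guard rail" on top of a learned model:
# the learner scores candidates, but a human-written constraint decides
# what the system may do at all, regardless of predicted engagement.
# All names and the banned-topics list are hypothetical illustrations.

BANNED_TOPICS = {"incitement", "hate"}  # set by policy, not learned

def predicted_engagement(post: dict) -> float:
    """Stand-in for a learned model; provocative content often scores high."""
    return post["base_score"] * (2.0 if post["provocative"] else 1.0)

def select_post(candidates: list[dict]) -> dict | None:
    """Guard rail first, optimization second: filter, then maximize."""
    allowed = [p for p in candidates if not (set(p["topics"]) & BANNED_TOPICS)]
    if not allowed:
        return None  # better to show nothing than to violate the constraint
    return max(allowed, key=predicted_engagement)

posts = [
    {"id": 1, "topics": ["incitement"], "provocative": True,  "base_score": 0.9},
    {"id": 2, "topics": ["sports"],     "provocative": False, "base_score": 0.6},
]
print(select_post(posts)["id"])  # -> 2, despite post 1's higher raw score
```

The learner is free to learn "what is"; the guard rail encodes the "ought" that the learner cannot supply on its own.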
New machine ethics?
No, we do not need one. We already have ethics, often more than enough. We just do not use it well: we do not let it guide our actions, and we put commerce before ethics. We all know this; it is as old as the Bible.
Instead of a new machine ethics, we need digital maturity. We have to understand what is going on. We must not treat digitization as a natural event to which we simply have to adapt. No, digitization too is made by people, mostly in corporations and data companies.
They have values and interests that they program into their systems, and in doing so they impose those values on millions of people. To see through this, not simply accept it, and to demand ethically better digital products and services would be an expression of digital maturity. And we have to clarify competencies and responsibilities precisely.
There must be no ethical or legal gray areas. Ethics must be put into practice by programmers, system architects, managers, and supervisory and regulatory authorities.
Author Bio:
Solutiontales.com is a blogzine that aims to make your entrepreneurial journey less tiring by covering topics such as start-ups, technology, lifestyle, and money-making methods. Not only that: I also try to curate the most addictive quizzes from all over the web to get you hooked on knowledge while staying fully entertained.