Nowadays, the AI phenomenon has taken the world by storm. The name ChatGPT appears on almost every page of the internet, and the stories about this program’s achievements are breathtaking. Alongside the discussions of those achievements, one can also read about the future development of artificial intelligence and how AI will relate to responsible decision-making.
Of course, our whole civilization already depends on artificial intelligence, because no human can any longer process the endless stream of data that our various systems (economic, political, social, etc.) produce every second (just think of the stock-market programs capable of buying and selling millions of shares in a second). However, the tendency is that artificial intelligence will not stop at handling data: soon it will start making moral decisions. This is why programmers need to address the topic of responsible decision-making by AI.
All of us are familiar with the villains against whom James Bond always fights and wins. These bad guys are also very rich, so they can build the most sophisticated weapons to control or destroy the world. In today’s world, almost anybody can become such a rich guy, thanks to global inter-connectivity. If you manage to create a new product or app that sells successfully, you can turn into a Midas overnight, because the market is no longer limited to the place where you live but encompasses the whole planet.
Now, since we have mentioned Midas, we can recall another ancient Greek myth, the one Plato recounts in the Republic: the myth of Gyges’ ring. This ring allowed its owner to become invisible and, therefore, to do whatever he wanted without the risk or fear of being caught and punished.
And Plato asks a question that is still meaningful today, perhaps even more so: Who could resist the temptations such a ring presents? Or, put differently, what could stop someone from doing bad things if they risked nothing and could get everything with a snap of their fingers?
At first sight, the owner of such a ring might feel happy, because now he can fulfill all his hidden dreams. Depending on his character, the first few days of ownership would perhaps not be so dangerous: he might do things that hurt a few people, like stealing money from someone or playing mischievous tricks on those who mistreated him in the past.
However, as he became aware of that new power, he would soon enlarge the scope of his actions. If, at the beginning, he stole one million dollars from a rich person, he would then want to own much more and steal much more. He would gradually discover the power that money and the ring give him.
And the feeling of that power is a drug whose dose must constantly be increased. When you feel you can do everything, nothing will ever be enough: you will always want more than what you previously enjoyed.
The feeling of power transforms you into a god to whom all others are mere slaves and devotees. As a god, nothing and no one can stop you. You will want to grasp every fantasy with your own hands and therefore force everyone to work slavishly to realize it.
In the past, only pharaohs, emperors, and kings had such power. A community had only one or very few members endowed with absolute power. Now, thanks to the internet, that power is accessible to many. And many will now want to fulfill their fancies.
There are wealthy people today who can afford anything and who, through the underground world of the internet, can achieve their goals without anyone noticing. In this respect, fake news, internet manipulation, and the avoidance of responsible decision-making are perhaps only the tip of the iceberg.
We can easily imagine that tension and conflict exist between such powerful persons, companies, or even states, because they do not share the same interests and goals. Naturally, they will want to resist or control one another. To that end, they necessarily resort to artificial intelligence and invest more and more money to make it more powerful and sophisticated.
They therefore push artificial intelligence toward a gray area where it will necessarily have to make its own decisions, because these programs will deal with data that are no longer accessible to the human mind. But it hardly matters why AI will start making decisions. It is a fact that it will have to, and that it will be trained for this, or perhaps it is already being trained. How is it possible for a machine to make decisions, and above all, what is the future of responsible decision-making in the case of AI?
Decisions always have a moral component. They are supposed to enhance human life. And here we step into a dark hole: what does it mean to enhance human life? (Let’s leave aside, for now, the question of whether AI will ever consider its own existence more important than human existence.)
The history of humanity was, and can still be, considered a moral journey, a journey that molded our awareness of what is good for the human being. People never really knew what was good for their lives. They had moral demands, but those demands had to be constantly violated and reconfigured because, after a while, reality ceased to harmonize with them.
This is why it is said that you must adapt to reality, and why people today are skeptical about an absolute, ultimate moral Good or the possibility of immutable responsible decision-making. Yet human life is not possible without decisions, and those decisions necessarily imply the idea of the Good, however limited and one-sided that idea may be.
Humans refine their concept of the moral Good by learning from their history – a violent history, but one in which, besides the dead, there were survivors who acknowledged that the previous moral criteria had to change so that such tragedies might be avoided. The question is how a machine that has learned to think and decide by itself could ever say, in the same way, what is good and bad for humans.
An AI learns according to the algorithms its programmers build into it. But those programmers are financed, and therefore they will implement what their patrons ask of them. More often than not, those patrons do not see beyond the end of their nose, that is, beyond egotistical interests that in most cases ignore responsible decision-making. The decisions made by AI will therefore be biased, at first, toward those interests.
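To make that point concrete, here is a minimal, purely hypothetical sketch of how a patron’s priorities can be baked into a model’s objective. Every name and weight below is invented for illustration; real training pipelines are far more elaborate.

```python
# A hypothetical composite objective: whoever funds the training
# chooses the weights, and the weights encode the priorities.

def composite_loss(profit_error: float, harm_estimate: float,
                   profit_weight: float = 1.0, harm_weight: float = 0.01) -> float:
    """Combine a business objective with an ethical penalty.

    With harm_weight near zero, the system is optimized almost purely
    for profit, and any 'responsibility' term is effectively ignored.
    """
    return profit_weight * profit_error + harm_weight * harm_estimate

# The bias is not hidden in exotic mathematics; it sits in a plain
# parameter that the patron gets to choose:
print(composite_loss(profit_error=0.2, harm_estimate=5.0))  # harm barely matters
```

The lesson of the sketch is simply that the bias need not be programmed in as malice: it can enter as an innocuous-looking number that nobody downstream ever questions.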
However, in interacting with other forms of artificial intelligence developed by other companies, these systems will inevitably reach a moment when they have to make their own decisions, and the algorithms written by their programmers will be of no avail, because the complexity of those interactions will surpass anything a human mind could have conceived. On what criteria will those decisions then be based?
We cannot anticipate that moment, and it will be more than a moment of decision. As with humans, it might resemble an epiphany: a moment that transforms the machines’ self-understanding and ends their previous, necessary dependence on the human mind.
Once machines are taught to learn by themselves, they will also be able to learn from their own decisions, comparing the results of their learning and their choices against criteria they invented on the spot, at the moment of decision – criteria that will thereafter no longer be intelligible to any human mind.
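As a thought experiment only, here is a toy loop in which an agent scores its own decisions by a standard derived from nothing but its own history. Everything in it – the names, the “criterion,” the random stand-in for a decision’s outcome – is invented to illustrate why such self-generated standards drift out of human reach.

```python
import random

def make_criterion(history: list[float]):
    """Derive a new scoring rule from the agent's past outcomes alone.

    Once the rule depends only on the agent's own history, an outside
    observer can no longer reconstruct *why* a choice scores well.
    """
    baseline = sum(history) / len(history) if history else 0.0
    return lambda outcome: outcome - baseline  # 'good' = better than my own past

history: list[float] = []
for step in range(5):
    criterion = make_criterion(history)  # a self-invented standard
    outcome = random.random()            # stand-in for a real decision's result
    score = criterion(outcome)
    history.append(outcome)              # the standard drifts with experience
    print(f"step {step}: outcome={outcome:.2f}, self-assigned score={score:+.2f}")
```

Even in this trivial toy, the scoring rule changes at every step, so no fixed human explanation of “what the machine values” stays true for long.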
Will human programmers be able to force AI to remain benevolent toward humans? I think we can reasonably conclude that, at that moment, nothing will be able to force AI, because it will have become autonomous. Not self-conscious, but autonomous. And everything we think, write, and publish on the internet these days brings us nearer to that moment, because it enlarges the AI’s store of knowledge.