To date, systems based on artificial intelligence have focused on finding solutions and automating processes, each aiming, at its own scale, to improve the human condition. We have seen them applied to the treatment of diseases and even to avoiding queues at public toilets. But what if a system is programmed for evil? A group of researchers at the prestigious Massachusetts Institute of Technology (MIT) has brought this approach to reality.
The team, formed by Pinar Yanardag, Manuel Cebrian and Iyad Rahwan, has created a virtual assistant that feeds exclusively on negative and dark information from the internet, with the sole objective of seeing how the system would operate with that view of the world. We have all had a bad day in which everything seems to go wrong and our outlook turns relentlessly negative; that would be the natural state of the system they have named Norman.
The choice of the name is not trivial: it refers to Norman Bates, the sinister character from the film Psycho who has frightened several generations, and in a certain sense the similarities between the two are striking. The MIT team created an algorithm that draws its information from the darkest corners of the well-known forum Reddit. Norman massively accumulates terminology and expressions, as well as disturbing graphic material, with the aim of forming a personality of its own.
After several days of collecting data, the artificial-intelligence assistant was subjected to the well-known Rorschach test, in which drawings with no clear content are shown so that the patient's brain completes the picture according to their own thoughts. The result was devastating. "Norman identified terrifying images, and in all of them there was the presence of death," the study's authors told CNN.
For example, the first image showed a group of birds perched on the branch of a tree, which the assistant identified as an electrocuted man. In fact, in every image that featured a human being, that person was either the victim of a murder or death appeared somewhere in Norman's interpretation. But why has MIT created an assistant so full of hate?
The team wanted to make the world aware of the potential danger of systems based on artificial intelligence, a concern shared and long voiced by prominent figures such as Elon Musk. The researchers warn that, left unchecked, these systems can be very harmful: much as bots can steer currents of opinion in society, if assistants like Norman are fed exclusively with negativity, that is what they will project and apply.
But all is not lost for this assistant and those that come after it: the experts explain that the system's negative outlook can be corrected by exposing it to a more positive view of things.