Now AI bots can speak for you after you die. But is this ethical?

Deadbots use artificial intelligence and machine learning to simulate conversation as a specific individual after that person's death

Machine learning systems are increasingly making their way into our daily lives, challenging our moral and social values and the rules that govern them. These days, virtual assistants threaten the privacy of the home; news recommenders shape the way we understand the world; risk-prediction systems tip social workers off about which children to protect from abuse; while data-driven hiring tools rank your chances of landing a job. Machine learning ethics, however, remains blurry for many.

While searching for articles on this topic for the young engineers attending the Ethics and ICT course at UCLouvain, Belgium, I was particularly struck by the case of Joshua Barbeau, a 33-year-old man who used a website called Project December to create a conversational robot – a chatbot – that would simulate conversation with his deceased fiancée, Jessica.

Chatbots that mimic the dead

Known as a deadbot, this type of chatbot allowed Barbeau to exchange text messages with an artificial "Jessica". Despite the morally controversial nature of the case, I rarely found material that went beyond the mere factual aspect and analyzed it through an explicit normative lens: why would it be right or wrong, morally desirable or reprehensible, to develop a deadbot?

Before we tackle these questions, let's put things in context: Project December was created by game developer Jason Rohrer to enable people to customize chatbots with the personality they want to interact with, provided that they pay for it. The project was built on the API of GPT-3, a text-generating language model by the artificial intelligence research company OpenAI. Barbeau's case opened a dispute between Rohrer and OpenAI, because the company's guidelines explicitly forbid the use of GPT-3 for sexual, amorous, self-harm or bullying purposes.

Calling OpenAI's position hyper-moralistic and arguing that people like Barbeau were "consenting adults", Rohrer shut down the GPT-3 version of Project December.

While we may all have a hunch about whether it was right or wrong to develop a machine learning deadbot, spelling out its implications is hardly an easy task. This is why it is important to address the ethical questions the case raises, step by step.

Is Barbeau's consent enough to develop Jessica's deadbot?

Given that Jessica was a real (albeit dead) person, Barbeau's consent to create a deadbot that mimics her seems insufficient. Even when people die, they are not mere things with which others may do as they please. This is why our societies consider it wrong to desecrate or to be disrespectful to the memory of the dead. In other words, we have certain moral obligations towards the dead, insofar as death does not necessarily imply that people cease to exist in a morally relevant way.

Likewise, the debate is open as to whether we should protect the dead's fundamental rights (for example, privacy and personal data). Developing a deadbot that replicates someone's personality requires great amounts of personal information, such as social network data (see what Microsoft or Eternime propose), which has been shown to reveal highly sensitive traits.

If we agree that it is unethical to use people’s data without their consent while they are alive, why would it be ethical to do so after their death? In this sense, when developing a deadbot, it seems reasonable to seek the consent of the person whose personality it mirrors – in this case, Jessica.

When the imitated person gives the green light

Thus, the second question is: would Jessica's consent be enough to consider her deadbot's creation ethical? What if it was degrading to her memory?

The limits of consent are, in fact, a controversial issue. Take as an example the "Rotenburg Cannibal", who was sentenced to life imprisonment despite the fact that his victim had agreed to be eaten. In this regard, it has been argued that it is unethical to consent to things that can be harmful to ourselves, be it physically (to sell one's own vital organs) or abstractly (to alienate one's own rights).

In what specific terms something might be harmful to the dead is a particularly complex issue that I will not analyze in full. It is worth noting, however, that even if the dead cannot be harmed or offended in the same way as the living, this does not mean that they are invulnerable to bad actions, nor that such actions are moral. The dead can suffer damage to their honor, reputation or dignity (for example, posthumous defamation campaigns), and disrespect toward the dead also harms their next of kin. Moreover, behaving badly toward the dead leads us to a society that is more unjust and less respectful of people's dignity in general.

Finally, given the flexibility and unpredictability of machine learning systems, there is a risk that the consent given by the imitated person (while alive) amounts to little more than a blank check on the system's potential paths.

With all this in mind, it seems reasonable to conclude that if the deadbot's development or use fails to correspond to what the imitated person agreed to, their consent should be considered invalid. Moreover, if it clearly and deliberately infringes on their dignity, even their consent should not be enough to consider it ethical.

Who is responsible?

The third question is what kinds of human behavior AI systems should aspire to imitate (regardless of whether this is possible).

This has been a long-standing concern in the field of artificial intelligence, and it is closely linked to the dispute between Rohrer and OpenAI. Should we develop artificial systems capable of, say, caring for others or making political decisions? There seems to be something in these skills that makes humans different from other animals and from machines. Hence, it is important to note that instrumentalizing AI for techno-solutionist ends, such as replacing loved ones, may devalue what distinguishes us as human beings.

The fourth ethical question is who bears responsibility for the outcomes of a deadbot – especially in the case of harmful effects.

Imagine that the Jessica deadbot autonomously learned to perform in a way that demeaned her memory or irreversibly damaged Barbeau's mental health. Who would take responsibility? AI experts answer this slippery question with two main approaches: first, responsibility falls upon those involved in the design and development of the system, as long as they do so according to their particular interests and worldviews; second, machine learning systems are context-dependent, so the moral responsibility for their outputs should be distributed among all the agents interacting with them.

I place myself closer to the first position. In this case, since there was an explicit co-creation of the deadbot involving OpenAI, Jason Rohrer and Joshua Barbeau, I consider it reasonable to analyze the level of responsibility of each party.

First, it would be difficult to hold OpenAI responsible after they explicitly prohibited the use of their system for sexual, amorous, self-harm or bullying purposes.

It seems reasonable to attribute a significant level of moral responsibility to Rohrer because: (a) he explicitly designed the system that made it possible to create deadbots; (b) he did so without anticipating measures to avoid potentially negative outcomes; (c) he was aware that the system failed to comply with OpenAI's guidelines; and (d) he profited from it.

And since Barbeau customized the deadbot drawing on particular features of Jessica, it seems legitimate to hold him co-responsible in the event that it degraded her memory.

Ethical, under certain conditions

So, going back to our first, general question about whether it is ethical to develop a machine learning deadbot, we could give an affirmative answer, on the condition that:

  • both the person imitated and the person who customizes and interacts with the deadbot have given their free consent to as detailed a description as possible of the design, development and uses of the system;

  • developments and uses that do not comply with what the imitated person consented to, or that go against their dignity, are forbidden;

  • the people involved in its development and those who profit from it take responsibility for its potential negative outcomes – both retrospectively, to account for events that have happened, and prospectively, to actively prevent them from happening in the future.

This case illustrates why the ethics of machine learning matters. It also shows why it is essential to open a public debate that can better inform citizens and help us develop policy measures to make AI systems more open, socially fair, and compliant with fundamental rights.

(Author: Sara Suárez-Gonzalo, Postdoctoral Researcher, UOC – Universitat Oberta de Catalunya)

Disclosure statement: Sara Suárez-Gonzalo, Postdoctoral Researcher in the CNSC-IN3 research group (Universitat Oberta de Catalunya), wrote this article during a research stay at the Chaire Hoover d'éthique économique et sociale (UCLouvain).

This article is republished from The Conversation under a Creative Commons license. Read the original article.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)