Sunday, February 16, 2025

Thoughts on a WSJ article: "Is It OK to Be Mean to a Chatbot?"


https://www.wsj.com/tech/ai/artificial-intelligence-chatbot-manners-65a4edf9?st=cyEfyN&reflink=desktopwebshare_permalink

The article follows my comments.

I think a Ba'al Mussar would say it is for sure not OK, just as we are supposed to have Hakoras HaTov even to inanimate objects. However, a Brisker or a Mekubbal might say it is perfectly fine: the former because the AI is a cheftza and not a gavra; the latter because the kokhos elyonim that animate even inanimate objects, a la the mazal who stands over the grass and tells it to grow, do not inhere in AI.

Technology
Artificial Intelligence
Is It OK to Be Mean to a Chatbot?

Yes, it isn’t human, and it may be useful to let your negative feelings out. But it may have unintended negative effects.

Artificial intelligence can be incredibly annoying. Like when you realize the customer-service chatbot has you in a reply loop. Or when your voice assistant keeps giving you irrelevant answers to your question. Or when the automated phone system has none of the choices you need and no way to speak to a human.

Sometimes when dealing with technology, the temptation to unleash anger is understandable. But as such encounters become more common with artificial intelligence, what does our emotional response accomplish? Does it cost more in civility than it benefits us in catharsis?

We wondered what WSJ readers think of this emerging dilemma, as part of our ongoing series on the ethics of AI. So we asked:

Is it OK to address a chatbot or virtual assistant in a manner that is harsh or abusive, even though it doesn’t actually have feelings? Does that change if the AI can feign emotions? Could bad behavior toward chatbots encourage us to behave worse toward real people?

Here is some of what they told us.

A question of civility

There is no excuse for bad behavior. If we claim to be civilized then surely we must act so, regardless of provocation or fear of oversight. One consequence of being harsh or abusive in a virtual setting is that it inevitably leads to similar behavior in the physical world, and that is where the greater damage lies.

  • Kaleem Ahmad, Ankara, Turkey

Feel free to blow your top

Is this a real question? These notions are preposterous. Of course it is OK. In fact, this is another potential therapeutic use of AI. We all feel the need to let loose to relieve stress. Having a reactive robot take it in place of a spouse or a child is exactly the kind of life-enriching tool a machine is supposed to be. That’s like asking, would it be OK to send a robot into a nuclear reactor to retrieve a contaminant if it had a name like Billy? Of course it’s OK. Better than sending the real Billy!

  • Leon Serfaty, Westport, Conn.

Monkey See

Since people are ultimately behind all these systems and my interactions with AI are training the algorithms, abusive behavior should be avoided.

  • Nicola Pohl, Bloomington, Ind.

Beware of future AIs

The naive assumption here is that AI doesn’t have feelings because it doesn’t work like us. This assumption will lead to issues down the line when AI systems are even more advanced. Do we have the right to abuse people who don’t have conventional feelings because of a medical or mental-health condition? No. What’s buried in the subtext is an assumption that machines can be abused because they are not human. But as a species, we’ve given rights to animals and even the environment. So when does it make sense to do so for machines?

Now I would argue, if biological aliens landed on this planet tomorrow in peace, then we would offer them some kind of rights. But what if they were mechanical? We still should. Why do we hesitate with our own creation?

  • Jeff D. Schloemer, Menlo Park, Calif.

Warning signs

I would worry about people who abuse chatbots in much the same way that I would worry about people who abuse small animals. I don’t think that such behavior would encourage bad behavior so much as it would indicate something perhaps not quite right about the person’s inner state.

Also, I have always addressed chatbots as I would address real people because I find the interaction more natural. When using speech and natural language I find that using the same sort of language across the board is easier and more consistent. Since chatbots are mostly trained on natural human interactions it is likely that they will perform better on human-to-computer interactions that are similar.

  • Nathaniel Polish, New York

Venting could help

People are very abusive to other people. Much of the time this is due to a lack of understanding, frustrations or misdirected anger. So yes, it is OK to talk to a chatbot that way…provided that the chatbot doesn’t pick up on this and become abusive as well. 

This is a place where a chatbot can become a solution to this societal problem. Train the chatbot in the ways of mental-health counseling so it can deal with the abuse in a constructive way and help the person learn that abusive behavior is neither productive nor appropriate.

  • Jay Weyermann, Aurora, Colo.

Good behavior is its own reward

As we’ve seen on social media, it’s very easy for two people who don’t share the same beliefs and aren’t physically in the same room to be insensitive or even hostile to each other. This seems to have carried over into offline interactions, especially in politics. Personally, I assume that since a chatbot’s code was written by a person, it might have some inclination toward rewarding good behavior by the user or punishing bad behavior. So I use favorable and positive dialogue in my own chatbot prompts and responses. I figure, why take the risk?

  • Sam Garcia, Laguna Beach, Calif.

Design flaws a factor

Chatbots are so badly designed, and so frequently employed to let companies off the hook for communication with people, that they increase frustration and so probably merit more than their share of the behavior that they create.

Would the answers change if AI feigned having emotions? It would increase people’s sense that they are being lied to or “played,” and in that way increase the ire of folks responding to them when they discovered it. AI’s fake emotions would just come across as sarcasm or worse.

  • Cheryl Miller, Pahrump, Nev.

A chance to practice patience

It is not OK to abuse chatbots or virtual assistants. Even if we are able to draw a line and keep our abuse in the virtual world, every indulgence in abuse weakens our ability to grow in love and empathy.

I strongly believe we should find every opportunity to practice patience and, it seems to me, AI is a perfect training ground. I don’t believe kindness, patience, morality or even basic politeness come naturally to any of us. They are practices that are developed over a lifetime, but that development requires significant effort and attention.

  • Mike Yagley, Brighton, Mich.

Could humans behind the AI be hurt?

A harsh or abusive response to a chatbot or virtual assistant isn’t going to affect the chatbot, until or unless the bot is programmed to read and react to emotional responses from humans. A person with transient pent-up anger and frustration may feel better being able to vent freely to a machine, and thus to some extent obviate the need to vent anger onto other people.

But should we consider the feelings of the various humans likely to be reviewing the chatbot’s communications? “That’s the dumbest idea I ever heard, you’re just a stupid bot” would usually not affect such human reviewers, assuming that their ordinary boundaries and judgment functions are intact. However, we all know personalities who tend to readily react to others in a harsh or abusive manner (social misfits, angry, miserable, cynical Grinches). 

  • Carol Healey, San Francisco

Character matters

Bad behavior toward inanimate and animate objects should be frowned upon. Our character is defined by the way we treat others regardless of whether they have feelings or can comprehend.

  • Sabrina Mahboubi, Los Angeles

Cursing is constructive

As somebody who used to work on AI, I would not be in the least hurt by insults against my product. Software engineers are accustomed to scathing comments, and we try to use them constructively. Only people can actually be hurt by cursing. That said, a computer can be programmed to respond to abuse just as an insulted person might.

  • Daniel Brand, Kailua, Hawaii

Why practice abuse?

Bad behavior toward chatbots could encourage us to behave worse toward real people. Any action that is used becomes “well practiced” and then may come out more frequently.

  • Robin Hurley, Highlands Ranch, Colo.

What do you want reciprocated?

I am a generally polite person. We have Google Assistant in our home, and I always thank her (our choice of gender identification) for her help. She is always appreciative!

  • Louis Verardo, Centerport, N.Y.

Demetria Gallegos is an editor for The Wall Street Journal in New York. Email her at demetria.gallegos@wsj.com.

7 comments:

  1. I think the question is a little more complicated.

    The error on the other side of the Middah haBeinonis is humanizing AI. We cannot forget that AI is just software -- and more importantly, that a human soul isn't.

    So, there is a spiritual danger to anthropomorphizing an AI when talking to it, and "please" and "thank you" may be part of it.
    Something Moshe didn't have to worry about when showing hakaras hatov to the Nile or Egyptian sand. Nor do we, when showing our challah respect.

    -----
    Off topic, but to talk tachlis for a minute: GPTs aren't thinking; they are predicting plausible ways for a conversation to continue. I am sure that of all the text in their training sets, the polite conversations were more likely to continue in productive directions than the impolite ones.

    So, I would think that to maximize results, you need either to avoid being conversational altogether (maybe try to imitate a textbook) or to write naturally and conversationally, but politely. (A minimal sketch contrasting the two styles follows the replies to this comment.)

    Replies
    1. (We have a general cultural problem with taking things too functionally. Like the way a political figure whom many in our community consider functionally useful -- for our community in his country and for Israel -- is recast as a good person and a friend, rather than just accepting that they deem him useful and his election productive. The way things are today, we are prone to confuse what someone or something does for us with what or who they are.)

    2. So you are saying that it may be חנופה (chanufah, flattery) to be polite to a non-entity...

    3. Actually, I thought I was saying that politeness to something that seems so human but isn't could end up working as an exercise in evaluating humanness by what they do for me, rather than remembering the inherent preciousness of each Tzelem Elokim.

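    To make the prompting point above concrete, here is a minimal sketch contrasting the two styles that comment suggests: textbook-imitation versus conversational-but-polite. It assumes the openai Python client (v1 or later), an OPENAI_API_KEY in the environment, and an illustrative model name; it only shows how one might phrase the same request both ways, and whether politeness actually improves the answers is the commenter's conjecture, not something the code proves.

        # Two ways to ask for the same thing, per the comment above:
        # imitate a textbook, or be conversational but polite.
        # Assumptions: the `openai` Python package (v1+) is installed,
        # OPENAI_API_KEY is set, and the model name is illustrative.
        from openai import OpenAI

        client = OpenAI()

        textbook_prompt = (
            "The quadratic formula: statement, derivation by completing the "
            "square, and one worked example."
        )
        polite_prompt = (
            "Could you please walk me through the quadratic formula and show "
            "one worked example? Thank you!"
        )

        # Send each phrasing as a single user message and compare the replies.
        for prompt in (textbook_prompt, polite_prompt):
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative
                messages=[{"role": "user", "content": prompt}],
            )
            print(reply.choices[0].message.content)
            print("---")
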
  2. I appreciate your analysis vis-a-vis machshava, but the Journal's people all seemed to miss an important practical point: many of these systems can, and do, use anything you input as training data for future outputs. So your trash-talking to ChatGPT can result in ChatGPT trash-talking to other human users later.

    The Malbim talks about the actions of the Olam HaKatan causing the Olam HaGadol to resonate ... but in this case no metaphysics need be invoked.

    Replies
    1. I don't believe you have the metzi'us correct. GPT -- and for that matter, every LLM I am aware of -- is trained once; it does not store prompts. In fact, the only reason ChatGPT can continue a conversation is that each time you hit send, the whole conversation from the beginning is sent to the AI along with your latest prompt (first prompt, its reply, second prompt, second reply, and so on, up to the new prompt). A minimal sketch of this client-side loop follows this thread.

    2. At that particular moment, it is stateless. But the company may later decide to do a new round of training using your inputs. This is explicit in their privacy policy: "As noted above, we may use Content you provide us to improve our Services, for example to train the models that power ChatGPT." https://openai.com/policies/privacy-policy/

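    To make the mechanics described in the first reply above concrete, here is a minimal sketch of a chat loop as seen from the client side, assuming the openai Python client (v1 or later), an OPENAI_API_KEY in the environment, and an illustrative model name. It only illustrates the point that the model call is stateless and the client resends the whole transcript each turn; it is not the vendor's actual implementation, and whether the provider later reuses transcripts for training is the separate policy question raised in the last reply.

        # The model call is stateless: the only conversational "memory" is this
        # client-side list, and the whole list is resent on every turn.
        # Assumptions: the `openai` package (v1+), OPENAI_API_KEY set,
        # and an illustrative model name.
        from openai import OpenAI

        client = OpenAI()
        history = []  # first prompt, its reply, second prompt, second reply, ...

        def send(user_text: str) -> str:
            history.append({"role": "user", "content": user_text})
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative
                messages=history,     # the entire conversation so far, plus the new prompt
            )
            answer = response.choices[0].message.content
            history.append({"role": "assistant", "content": answer})
            return answer

        print(send("Could you please summarize the argument for being polite to chatbots?"))
        print(send("Thanks. And the argument against?"))  # only coherent because `history` was resent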