At Thanksgiving, You Are the Problem

November 23, 2019

  • Media depict conservatives as the problem at Thanksgiving and offer an online chat experience to help you prepare for battle
  • The chatbot is helpful for anyone learning conversational techniques, but it betrays naked political bias toward the left
  • Beware the differences between humans and robots
  • Question of the Day: can you break the robot’s rules and still have a good conversation?

Drunk Uncle

A favorite holiday trope of the mainstream media, when they’re in a good mood, is to portray anyone right of center as Drunk Uncle from SNL.

It doesn’t matter that the political instigator at your Thanksgiving dinner is more likely to be someone with stains on his shirt who looks and sounds like Bernie Sanders.

But if you’re on the coast or otherwise surrounded by leftist enforcers, any hint of a contrarian opinion will put a target on your back. They’ll side with Drunk Uncle, as long as he’s a leftist.

But being cast as the problem is no reason to resort to avoidance strategies. Avoidance is for unemployed Pinterest addicts, not you.

I recommend you lean into conflict.

Here’s how you can practice:

Former psychiatrist Karin Tamerius presents us with a chatbot that mimics your Angry Uncle. If you’re conservative, the uncle argues from a liberal perspective, and vice versa.

Her stated purpose is to prepare you for healthy conversations at Thanksgiving:

Given the challenge, it’s tempting to avoid political discussions in mixed company altogether. Why risk provoking your angry uncle when you can chat about pumpkin pie instead? The answer is that when we choose avoidance over engagement, we are sacrificing a critical opportunity and responsibility to facilitate social and political change.

I could probably say it better myself, but not that much better.

I tried the chatbot, and it worked; it’s an effective role-playing tool.

I chose combative answers and got real-time feedback. I learned that such answers undercut the chance for constructive dialogue. I was encouraged instead to choose responses that demonstrate curiosity or engage my interlocutor with a personal anecdote. Those approaches fare better than parroting slogans or talking points.

For a moment I was all on board.

Then I decided to enter the conversation as a liberal.

First, though, let’s talk about what it was like as a conservative:

True to form, the chatbot opened by proclaiming, “We need Medicare for All. Health care is a human right.”

In the game, the correct response is the least combative one: “O.K., can you tell me more about that?” In the round where I chose the correct answer at every step, I found myself shunted into agreement with the liberal robot that there’s a dire need for healthcare reform in the US.

That may be a defensible stance, but it was the only option I had if I wanted to be “correct.”
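
To make the mechanic concrete, here is a rough sketch, in Python, of how a scripted bot like this could be wired up. The real bot’s internals aren’t published, so every name and branch below is my own invention; it is only a minimal stand-in for the pattern described above, where each turn has exactly one rewarded choice that funnels you toward a pre-written conclusion.

    from dataclasses import dataclass

    @dataclass
    class Option:
        text: str
        style: str        # "curious", "combative", or "slogan"
        next_node: str    # id of the node this choice leads to

    @dataclass
    class DialogueNode:
        node_id: str
        bot_line: str
        options: list[Option]

    # One hypothetical turn, modeled on the exchange described above.
    OPENER = DialogueNode(
        node_id="open",
        bot_line="We need Medicare for All. Health care is a human right!",
        options=[
            Option("That's socialism. It'll bankrupt us.", "combative", "blowup"),
            Option("O.K., can you tell me more about that?", "curious", "agreement"),
            Option("Taxation is theft!", "slogan", "blowup"),
        ],
    )

    def play_turn(node: DialogueNode, choice: int) -> str:
        picked = node.options[choice]
        # Feedback mirrors the game's rule: curiosity is the one rewarded
        # style; everything else is flagged as escalation.
        if picked.style == "curious":
            return f"Good choice. (next: {picked.next_node})"
        return f"That escalates the conflict. (next: {picked.next_node})"

    print(play_turn(OPENER, 1))  # the lone "correct" path

Notice that nothing in the code decides what counts as a good answer. The author does, ahead of time.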

As a liberal user, things got weirder. I was greeted with, “Trump has been great for America! Just look at the economy, it’s booming.”

The correct answer is to ask the conservative uncle, “How are you doing financially?”

By continuing to answer “correctly,” I soon learned that the conservative uncle isn’t doing well at all and hasn’t benefited from the Trump tax cuts. In fact, nobody has. The liberal user and the conservative robot end up agreeing that the economy is in dire straits under Trump.

This is, of course, contrary to the hard data.

In my last post, I asked: how can we talk back to robots?

In this scenario, the only way to answer the robot is to choose from a fixed set of multiple-choice options. Future interactions may be far more sophisticated, yet just as susceptible to bias in the algorithm.
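
If you doubt that a simple script can carry bias, consider how little machinery it takes. The sketch below is again hypothetical and self-contained: the user’s only channel is an index into a list of authored options, and someone decided in advance which index “wins.”

    # The user's only input is an index into options someone else wrote.
    CHOICES = [
        "Can you tell me more about that?",  # authored as "correct"
        "You're wrong. Look at the data.",   # authored as an escalation
        "That's just a talking point.",      # authored as an escalation
    ]

    # The verdicts are data, written by the author, not inferred by the code.
    AUTHORED_VERDICTS = {0: "rewarded", 1: "penalized", 2: "penalized"}

    def respond(choice: int) -> str:
        if AUTHORED_VERDICTS[choice] == "rewarded":
            return "Your uncle softens, and the script moves toward agreement."
        return "Your uncle storms off. Try a less combative reply."

    for i, text in enumerate(CHOICES):
        print(f"{i}: {text} -> {respond(i)}")

Swap the verdict table and the bot’s politics flip with it. The code is neutral; the data never is.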

Regardless of which ideology gets programmed in, what is perhaps even more concerning is that, in this exercise, the psychiatrist has reduced the humanoid to a caricature, just as people reduce their real, live human opponents to caricatures.

The robot responds well to certain stimuli and badly to others. Sure, this may mimic predictable, documented patterns of human behavior, but human behavior is only sometimes predictable. I often say blunt and opinionated things to other humans. Sometimes they react in anger, and sometimes they engage in a fun and exciting debate.

The premise, smuggled into the exercise under cover of humor and a novel cartoon chatbot, is that all humans can be reduced to machines.

Beware that premise, because you are not a machine.

Question of the day:
Can you ask the “wrong” questions and still have a good conversation with your real-life angry uncle?

Tell us your story in the comment section below.


Talk Back Writers

Posted by Will Sherman