Drone Strike

What Have You Got to Lose? | Context Matters When We Make Risky Decisions

Sometimes, what you have to lose is more important than how much you have to lose. Let me explain.

Here’s my dog, Trigger. 

Trigger. He’s also easy on the eyes.

He’s completely awesome. I love him. He’s an incredibly good boy.

Trigger is a rescue dog that my partner and I paid about $300 for back in 2010. After all these years, we’d pretty much pay anything to keep him happy and healthy with us for the rest of his life. 

Thinking just about the money, there’s no doubt I’d feel bad if I lost $300 on a bet or something, but I would get over it. However, while Trigger “cost” $300, I value him MUCH more. I would be destroyed if I lost him.

This is what I mean when I say what you have to lose is more important than how much. And losing things that we value–like beloved pets–sucks no matter what they cost!

You probably have your own experiences with making risky decisions, like gambles or investments, where your money or something else of value to you is at stake. Based on a ton of studies rooted in the framework of prospect theory, we know that losing money or an object of value on a risky decision feels way worse than winning that same amount of money or that same object. To me, this is a big reason why context matters when we make risky decisions. Simply describing the risky aspect of a decision in terms of losses rather than gains leads people to make decisions that minimize their losses instead of trying to maximize their potential gains.
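
To make that asymmetry concrete, here’s a minimal sketch of the classic prospect theory value function from Tversky and Kahneman’s 1992 paper. The parameter values below are their published median estimates, not numbers from our study; they’re only here to illustrate how losses get weighted more heavily than equivalent gains.

```python
# The prospect theory value function (Tversky & Kahneman, 1992).
# alpha/beta capture diminishing sensitivity; lambda_ > 1 means
# losses loom larger than equivalent gains. These parameter values
# are T&K's median estimates, used purely for illustration.

def subjective_value(x, alpha=0.88, beta=0.88, lambda_=2.25):
    if x >= 0:
        return x ** alpha               # gains: concave valuation
    return -lambda_ * (-x) ** beta      # losses: steeper, by lambda_

print(subjective_value(300))   # ~ +151: how good winning $300 feels
print(subjective_value(-300))  # ~ -340: losing $300 feels >2x worse
```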

In typical studies about risky decision making, people are asked to make choices about money or goods–things that have some utility in the form of countable amounts or prices. This makes it pretty easy for someone to understand how much they stand to gain or lose on a decision between two amounts of money or two sets of goods.

But, just like losing Trigger is not equal to losing $300, even though that’s what he cost, the value of an object is not necessarily equal to its price.

So what happens when you have to make a decision that involves different kinds of losses where how much you have to lose is the same in either case, but what you have to lose is different? 

My colleague David Colaço, my then-advisor Tim Verstynen, and I addressed this question in our recently published paper “Contextual framing of loss impacts harm avoidance during risky spatial decisions” in the Journal of Behavioral Decision Making. Please feel free to email me for a copy if it’s behind a paywall for you.

In graduate school, David and I worked with Tim to develop an experiment called “Drone Strike” that combined moral philosophy and motor decision making to examine how people behaved when making risky spatial decisions under different contexts.

Specifically, we asked study participants to play the role of a drone pilot who had to either execute a strike to neutralize enemies (“harm” context) or deliver ammunition to allies (“help” context). Over a few hundred trials of a computer-based task, we showed participants two overlapping target and non-target dot clouds as stimuli. These dot clouds each had a unique color and represented either the positions of allies, enemies, or trees on a battlefield. The objective: use a computer mouse to select the center of the target dot cloud on each trial to score the maximum number of points across the entire study.

In the help context, people were instructed to use the mouse to click the center of the ally dot cloud to deliver ammunition to the most allies possible. In the harm context, people were instructed to click the center of the enemy dot cloud to neutralize as many enemies as possible. 

Before each set of trials for a given condition, we reminded participants of the enemy and ally dot cloud colors. We made sure these were visually distinct for people with impaired color vision using the palettes at https://colorbrewer2.org.

Mathematically, clicking the target center guaranteed the most enemies neutralized or most ammo delivered to allies. Focusing strictly on the numbers, and not the description of the dot clouds as enemies or allies, participants could “score” a maximum of 100 “points” on a given trial by clicking the target center. Clicking further away from the target center resulted in fewer points.

We set up scoring penalties the same way–clicking the non-target center guaranteed a maximum loss of 100 points, and this penalty decreased with distance away from the non-target center.
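
To make the scoring concrete, here’s a rough sketch of how a trial’s net score could be computed. The post doesn’t spell out the exact distance function we used, so the Gaussian falloff and the `scale` parameter below are assumptions for illustration only.

```python
import math

# Hypothetical trial scoring: up to 100 points for clicking the target
# center, up to -100 for the non-target center, both falling off with
# distance. The Gaussian falloff and scale=50 px are my assumptions;
# the actual experiment's falloff may differ.

def trial_score(click, target_center, nontarget_center, scale=50.0):
    def points(center):
        dist = math.dist(click, center)
        return 100.0 * math.exp(-dist ** 2 / (2 * scale ** 2))
    return points(target_center) - points(nontarget_center)

# Target-center click with a distant non-target: nearly the full 100.
print(trial_score((0, 0), (0, 0), (400, 0)))  # ~100
# Same click with an overlapping non-target 60 px away: penalized.
print(trial_score((0, 0), (0, 0), (60, 0)))   # ~51
```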

Given that, it is totally reasonable to expect that people would, and should, always click the target center, regardless of whether they were executing a drone strike or an ammo drop, in order to maximize their score.

The key comparison here is where people clicked on drone strikes and ammo deliveries when there were nearby allies or enemies, respectively. This is where what people thought they were losing mattered more than how much.

In the harm context, clicking too close to the non-target meant that your penalty would be counted as ally casualties. In the help context, penalties counted as intercepted ammo. By using these two different contexts, we were able to see whether people thought one kind of loss was worse than the other. 

If participants thought, for instance, that ally casualties were worse than intercepted ammo, then we expected them to behave in line with how they perceived the stimuli. In particular, we thought that people would avoid the more harmful kind of loss (ally casualties) more than the relatively less harmful one (stolen ammunition).

You might be thinking, “Well, of course they would avoid killing their allies over having ammunition stolen by enemies. What’s so impressive about that?” 

Honestly, nothing is impressive about people choosing to avoid killing their friends. 

One critical thing to remember here is that there were absolutely no explicit directives, or task incentives, for people to avoid the non-target at all. We didn’t ask people to also “save as many allies as possible” or “avoid ammo interceptions by enemies”. (These framings would probably bias people even further from the non-target).

That said, we know that participants still failed to perform to the best of their ability and maximize gain. How do we know that? Well, we had conditions where the non-target dots represented the positions of trees, which had no bearing on ally casualties or intercepted ammo. We found that people were totally capable of clicking within 1 or 2 pixels(!) of the target center when they had “nothing to lose”. Only once we introduced the potential for loss did people try to avoid ally casualties and ammo interceptions.

However, not everyone avoided losses in the same way. When trees were the non-target, pretty much everyone clicked on the target center, as I previously mentioned. And as soon as the non-target was either enemies or allies, people avoided the non-target and thus clicked further away from the target center. But, individual participants avoided the non-target differently under different circumstances. 

Since we used dot clouds as stimuli, we could cluster the target dots closely together or spread them far apart to see if people behaved differently based on how well they could estimate the location of the target center. With close dots, you can be more certain of the target center, whereas there is more uncertainty when the dots are spread far apart.
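
Here’s a small sketch of that idea, with illustrative numbers rather than the experiment’s actual stimulus parameters: sample a dot cloud around a true center at low or high spread, then estimate the center by averaging the dots.

```python
import numpy as np

# Estimating a target center from its dot cloud. The spreads and dot
# count below are illustrative, not the experiment's actual values.

rng = np.random.default_rng(0)
true_center = np.array([0.0, 0.0])

for label, sigma in [("low variance", 10.0), ("high variance", 60.0)]:
    dots = rng.normal(true_center, sigma, size=(20, 2))  # 20 dots
    estimate = dots.mean(axis=0)       # best guess at the center
    error = np.linalg.norm(estimate - true_center)
    print(f"{label}: center estimate off by ~{error:.1f} px")
```

Averaging widely spread dots gives a noisier estimate of the center, which is exactly the extra uncertainty the high variance condition introduces.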

From some previous work that Tim and I did with our then-research assistant, Rory Flemming, we knew that people avoided losses way more when the dots were spread far apart. Essentially, we drove people to avoid losses even more by making them less sure about where they should click in order to maximize gain.

Here’s an example image of stimuli from a previous experiment, “Danger Zone”, that shows the low (top) and high (bottom) variance conditions. The white dots represent the target. When the dots are clustered together, like in the low variance condition, it’s WAY easier to estimate the center more accurately. But when they are spread apart as in the high variance condition, the task of finding and selecting the target center is much more difficult.
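
One intuition for why uncertainty amplifies avoidance: if you’re unsure where the centers really are, the penalty region effectively “reaches” farther, so even a pure score-maximizer should aim farther from the non-target. Here’s a simplified simulation of that idea, reusing the hypothetical Gaussian scoring from earlier; it’s my own illustration, not the analysis from either paper.

```python
import numpy as np

# Simplified 1-D illustration: where should you aim when the
# non-target sits 80 px from the target center and you are uncertain
# about both true centers? All numbers here are assumptions.

rng = np.random.default_rng(1)
target, nontarget, scale = 0.0, 80.0, 50.0  # positions in px

def best_aim(center_sigma, n=20_000):
    # Sample beliefs about both centers once, shared across all aims.
    t = target + rng.normal(0, center_sigma, n)
    nt = nontarget + rng.normal(0, center_sigma, n)
    aims = np.linspace(-100, 100, 401)
    expected = [
        (100 * np.exp(-(a - t) ** 2 / (2 * scale ** 2))
         - 100 * np.exp(-(a - nt) ** 2 / (2 * scale ** 2))).mean()
        for a in aims
    ]
    return aims[int(np.argmax(expected))]

for sigma in (5.0, 40.0):
    print(f"center uncertainty {sigma} px -> best aim ~{best_aim(sigma):+.1f} px")
# Higher uncertainty pushes the score-maximizing aim farther from the
# non-target (more negative), before any loss aversion enters at all.
```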

Some people avoided ally casualties more than ammo interceptions only when the target dots were clustered together. Some were more avoidant only when they were spread apart. Some were similarly avoidant regardless of clustering or spreading. And some people actually avoided ammo interceptions more than ally casualties no matter what(!). 

For that last set of folks, ally casualties may have mattered less to them than neutralizing as many enemies as possible. Meanwhile, the other 90% of our participants avoided ally casualties to a greater degree than ammo interceptions under at least one variance condition.

Keep an eye out for a future paper where David and I show why we think people behaved differently in the harm and help contexts based on their ethical dispositions, i.e., what they believe is the morally right way to do the task.

To wrap up, I’ll say it again: Context matters! Because, sometimes, what we have to lose means so much more to us than how much it costs.

This research project was supported, both intellectually and financially, by a subaward from the Duke Summer Seminars in Neuroscience and Philosophy (SSNAP) fellowship program, which was funded by the Templeton Foundation.