What Have You Got to Lose? | Context Matters When We Make Risky Decisions

Sometimes, what you have to lose is more important than how much you have to lose. Let me explain.

Here’s my dog, Trigger. 

Trigger. He’s also easy on the eyes.


He’s completely awesome. I love him. He’s an incredibly good boy.

Trigger is a rescue dog that my partner and I paid about $300 for back in 2010. After all these years, we’d pretty much pay anything to keep him happy and healthy with us for the rest of his life. 

Thinking just about the money, there’s no doubt I’d feel bad if I lost $300 on a bet or something, but I would get over it. However, while Trigger “cost” $300, I value him MUCH more. I would be destroyed if I lost him.

This is what I mean when I say what you have to lose is more important than how much. And losing things that we value–like beloved pets–sucks no matter what they cost!

You probably have your personal experiences with making risky decisions, like gambles or investments, where your money or something else of value to you is at stake. Based on a ton of studies rooted in the framework of prospect theory, we know that losing money or an object of value on a risky decision feels way worse than winning that same amount of money or that same object. To me, this is a big reason why context matters when we make risky decisions. Simply describing the risky aspect of a decision in terms of losses rather than gains leads to people making decisions in order to minimize their losses instead of attempting to maximize potential gains.
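Prospect theory captures that asymmetry with a value function that is steeper for losses than for gains. Here’s a minimal sketch in Python, using the standard parameter estimates from Tversky and Kahneman’s 1992 paper (these numbers are the textbook defaults, not anything specific to the studies discussed here):

```python
import numpy as np

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: outcomes are valued relative to a
    reference point, and losses loom larger than gains (lam > 1).
    Parameter values are the commonly cited Tversky & Kahneman (1992)
    estimates, used here purely for illustration."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

# Losing $300 "hurts" about 2.25 times as much as winning $300 feels good:
# abs(prospect_value(-300)) / prospect_value(300) equals lam when alpha == beta.
```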

In typical studies about risky decision making, people are asked to make choices about money or goods–things that have some utility in the form of countable amounts or prices. This makes it pretty clear for someone to understand how much they stand to gain or lose on a decision between two amounts of money or two sets of goods. 

But, just like losing Trigger is not equal to losing $300, even though that’s what he cost, the value of an object is not necessarily equal to its price.

So what happens when you have to make a decision that involves different kinds of losses where how much you have to lose is the same in either case, but what you have to lose is different? 

My colleague David Colaço, my then-advisor Tim Verstynen, and I addressed this question in our recently published paper “Contextual framing of loss impacts harm avoidance during risky spatial decisions” in the Journal of Behavioral Decision Making. Please feel free to email me for a copy if it’s behind a paywall for you.

In graduate school, together with Tim, David and I developed an experiment called “Drone Strike” that combined moral philosophy and motor decision making to examine how people behave when making risky spatial decisions under different contexts. 

Specifically, we asked study participants to play the role of a drone pilot who had to either execute a strike to neutralize enemies (“harm” context) or deliver ammunition to allies (“help” context). Over a few hundred trials of a computer-based task, we showed participants two overlapping target and non-target dot clouds as stimuli. These dot clouds each had a unique color and represented either the positions of allies, enemies, or trees on a battlefield. The objective: use a computer mouse to select the center of the target dot cloud on each trial to score the maximum number of points across the entire study.

In the help context, people were instructed to use the mouse to click the center of the ally dot cloud to deliver ammunition to the most allies possible. In the harm context, people were instructed to click the center of the enemy dot cloud to neutralize as many enemies as possible. 

Before each set of trials for a given condition, we reminded participants of the enemy and ally dot cloud colors. We used https://colorbrewer2.org to make sure the colors were visually distinct, even for people with impaired color vision.


Mathematically, clicking the target center guaranteed the most enemies neutralized or most ammo delivered to allies. Focusing strictly on the numbers, and not the description of the dot clouds as enemies or allies, participants could “score” a maximum of 100 “points” on a given trial by clicking the target center. Clicking further away from the target center resulted in fewer points.

We set up scoring penalties the same way–clicking the non-target center guaranteed a maximum loss of 100 points, and this penalty decreased with distance away from the non-target center.
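To make the setup concrete, here’s a toy version of that scoring rule in Python. The Gaussian falloff (and its width) is my illustrative assumption, not the exact function used in the published task:

```python
import numpy as np

def trial_score(click, target_center, nontarget_center,
                max_points=100.0, falloff=30.0):
    """Toy scoring rule: gain peaks at +100 points at the target center and
    the penalty peaks at -100 points at the non-target center, with both
    decaying over distance. The Gaussian shape and the falloff width are
    illustrative assumptions, not the published task's exact rule."""
    click = np.asarray(click, dtype=float)
    d_target = np.linalg.norm(click - np.asarray(target_center, dtype=float))
    d_nontarget = np.linalg.norm(click - np.asarray(nontarget_center, dtype=float))
    gain = max_points * np.exp(-d_target ** 2 / (2 * falloff ** 2))
    loss = max_points * np.exp(-d_nontarget ** 2 / (2 * falloff ** 2))
    return gain - loss

# Clicking exactly on the target center, with the non-target far away,
# yields (nearly) the full 100 points; drifting toward the non-target
# center drags the score negative.
```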

Given that, it is totally reasonable to expect that people would, and should, always click the target center, whether executing a drone strike or an ammo drop, in order to maximize their score.

The key comparison here is where people selected on drone strikes and ammo deliveries when there were nearby allies or enemies, respectively. This is where what people thought they were losing mattered more than how much.

In the harm context, clicking too close to the non-target meant that your penalty would be counted as ally casualties. In the help context, penalties counted as intercepted ammo. By using these two different contexts, we were able to see whether people thought one kind of loss was worse than the other. 

If participants thought, for instance, that ally casualties were worse than intercepted ammo, then we expected them to behave in line with how they perceived the stimuli. In particular, we thought that people would avoid the more harmful kind of loss–ally casualties–more than a relatively less harmful loss–stolen ammunition.

You might be thinking, “Well, of course they would avoid killing their allies over having ammunition stolen by enemies. What’s so impressive about that?” 

Honestly, nothing is impressive about people choosing to avoid killing their friends. 

One critical thing to remember here is that there were absolutely no explicit directives, or task incentives, for people to avoid the non-target at all. We didn’t ask people to also “save as many allies as possible” or “avoid ammo interceptions by enemies”. (These framings would probably bias people even further from the non-target).

That said, we know that participants still failed to perform to the best of their ability and maximize gain. How do we know that? Well, we had conditions where the non-target dots represented the positions of trees, which had no bearing on ally casualties or intercepted ammo. We found that people were totally capable of clicking within 1 or 2 pixels(!) of the target center when they had “nothing to lose”. Only once we introduced the potential for loss did people try to avoid ally casualties and ammo interceptions.

However, not everyone avoided losses in the same way. When trees were the non-target, pretty much everyone clicked on the target center, as I previously mentioned. And as soon as the non-target was either enemies or allies, people avoided the non-target and thus clicked further away from the target center. But, individual participants avoided the non-target differently under different circumstances. 

Since we used dot clouds as stimuli, we could cluster the target dots closely together or spread far apart to see if people behaved differently based on how well they could estimate the location of the target center. With close dots, you can be more certain of the target center, whereas there is more uncertainty about the target center when the dots are spread far apart.
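Here’s a quick sketch of how dot cloud stimuli like these can be generated. The dot counts and spread values are illustrative, not the study’s exact parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dot_cloud(center, spread, n_dots=20):
    """Sample dot positions around a hidden center; `spread` is the
    standard deviation of the cloud. Dot count and spreads below are
    illustrative stand-ins, not the study's actual values."""
    return rng.normal(loc=center, scale=spread, size=(n_dots, 2))

low_var = make_dot_cloud(center=(400, 300), spread=10)   # tight cluster
high_var = make_dot_cloud(center=(400, 300), spread=60)  # wide spread

# The sample mean of the dots is your best estimate of the hidden center,
# and its standard error grows with the spread: high-variance clouds leave
# you much less certain about where the true center actually is.
```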

In some previous work that Tim and I did with our then-research assistant, Rory Flemming, we found that people avoided losses way more when the dots were spread far apart. Essentially, we drove people to avoid losses even more by making them less sure about where they should click in order to maximize gain.


Here’s an example image of stimuli from a previous experiment, “Danger Zone”, that shows the low (top) and high (bottom) variance conditions. The white dots represent the target. When the dots are clustered together, like in the low variance condition, it’s WAY easier to estimate the center more accurately. But when they are spread apart as in the high variance condition, the task of finding and selecting the target center is much more difficult.

Some people avoided ally casualties more than ammo interceptions only when the target dots were clustered together. Some were more avoidant only when they were spread apart. Some were similarly avoidant regardless of clustering or spreading. And some people actually avoided ammo interceptions more than ally casualties no matter what(!). 

For that last set of folks, ally casualties may have mattered less to them than neutralizing as many enemies as possible. Meanwhile, the other 90% of our participants avoided ally casualties to a greater degree than ammo interceptions under at least one variance condition.

Keep an eye out for a future paper where David and I show why we think people behaved differently in the harm and help contexts based on their ethical dispositions, i.e., what they believe is the morally right way to do the task.

To wrap up, I’ll say it again: Context matters! Because, sometimes, what we have to lose means so much more to us than how much it costs.


This research project was supported, both intellectually and financially, by a subaward from the Duke Summer Seminars in Neuroscience and Philosophy (SSNAP) fellowship program which was funded by the Templeton Foundation.

Highway from the Danger Zone: A Coffee Drinker's Dilemma

Have you ever been in a busy coffee shop that didn’t take customer names, but instead the baristas called out the orders once they were made? When you’re one of just a few customers, or if your drink is very distinct from others, then there isn’t much of a problem figuring out which one is yours.

Now, consider a situation you might have experienced before, where several different drink orders in similar cups were made at the same time. In a rush to acquire your constitutionally required caffeine dosage, how do you quickly determine which drink is yours? What if you mistakenly pick up the wrong order? If your drink doesn’t stand out, and you’re more uncertain about which one is yours, do you go for a specific cup more quickly or slowly than you otherwise would? How much of a tragedy would it be for you, or another customer, if you didn’t get what you expected?

"All I need is a little bit of coffee and a whole lot of Jesus." - Voltaire

This cafe conundrum is a prime example of an everyday decision for a lot of people who want to grab their brew and get back to work. More specifically, it’s a kind of risky spatial decision. The spatial part of the decision is straightforward: Where is the cup that I should grab? But what makes this decision a risky one is that as more similar orders are ready at the same time, your chances of grabbing the correct cup decrease, i.e., the risk of picking the wrong cup increases. Uncertainty is also an important factor here, because not only does risk increase when more cups are present, but you’ll also be less sure of which one belongs to you when you have to make a choice from a larger number of cups. So, what exactly happens when a person tries to make a risky spatial decision under increasing amounts of uncertainty? This is the sort of question I addressed with a former labmate, Rory Flemming (Twitter: @DontRoryAboutIt), and my PhD advisor, Tim Verstynen (Twitter: @tdverstynen), in a recently published study, “Sensory uncertainty impacts avoidance during spatial decisions”, now in Experimental Brain Research (Jarbo, Flemming, & Verstynen, 2017).

How we decided to address this question was strongly inspired by some earlier theories of sensorimotor integration, namely the maximum expected gain framework. Julia Trommershäuser developed and put forth this theory through a series of studies exploring how people were able to combine estimates of external spatial information (e.g., the visual target stimuli you see) and internal motor variability (e.g., how good your aim is) to execute goal-directed movements (Trommershäuser, 2009; Trommershäuser, Maloney, & Landy, 2003, 2008). In her experiments, she asked people to make rapid reaches to a touchscreen monitor in order to touch a green target circle that was overlapped by a red non-target circle (see figure below). The dots in the image below represent the points where a participant touched over multiple trials of the task.

In Trommershäuser et al. (2008). Adapted from Meyer et al. (1988)


On some trials, reaches that landed in the red non-target circle were penalized and, with this experimental manipulation, Trommershäuser and her colleagues observed that while people tried to hit the green target circle, they also biased their reaches away from the penalty regions of the red circle that overlapped the target. The findings served as a basis for using a mathematical model proposed in the maximum expected gain framework to explain how people account for their estimates of the visual stimulus as well as their aiming ability to make optimal movement decisions. In the last 15 years, other researchers like Heather Neyedli (Neyedli & LeBlanc, 2017; Neyedli & Welsh, 2013) and Megan O’Brien and Alaa Ahmed (O’Brien & Ahmed, 2015, 2016) have done work to build off the maximum expected gain model to get a better understanding of how contextual information about penalties can influence how, and even if, people make optimal decisions about their visually guided actions.
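The core logic of the maximum expected gain model is easy to sketch: given your own motor noise, find the aim point that maximizes expected payoff. Here’s a toy Monte Carlo version, with made-up geometry and payoffs standing in for a Trommershäuser-style target/penalty layout (none of these numbers come from the published studies):

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_gain(aim_x, motor_sd, target_x=0.0, penalty_x=-32.0,
                  radius=32.0, gain=100, loss=-500, n_sim=20000):
    """Monte Carlo estimate of the expected gain of aiming at `aim_x`
    along the axis joining the target and penalty circles. The geometry
    and payoff values are illustrative, not the published parameters."""
    endpoints = rng.normal(aim_x, motor_sd, size=n_sim)  # simulated motor noise
    value = np.where(np.abs(endpoints - target_x) < radius, gain, 0)
    value = value + np.where(np.abs(endpoints - penalty_x) < radius, loss, 0)
    return value.mean()

# Sweep candidate aim points and pick the one with the highest expected gain.
aims = np.linspace(-10.0, 31.0, 42)
best_aim = aims[np.argmax([expected_gain(a, motor_sd=10.0) for a in aims])]
# The gain-maximizing aim point sits to the right of the target center,
# i.e., shifted away from the overlapping penalty circle: the model trades
# a slightly lower hit rate for far fewer costly penalty hits.
```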

Continuing with this line of work, we tweaked our visual stimuli to explore some untested predictions of the maximum expected gain model. In particular, we asked whether increasing uncertainty in the location of a spatial target increased how much people biased their movements to avoid potential losses. Basically, we wanted to see how people’s behavior changed on a risky decision when we made it harder for them to be sure of the best choice. This could be very important for understanding how we can improve people’s choices on high stakes decisions when the best option is really unclear.

In our recently published study (Jarbo, Flemming, & Verstynen, 2017), we used an experimental task called Danger Zone where 20 healthy adult participants used a computer mouse to select locations within a visual target. The target was represented by a cloud of white dots that briefly appeared for 300 milliseconds in a random location on a computer screen, then disappeared. A non-target, the Danger Zone, represented by a cloud of red dots, also appeared with the target so that the centers of the two dot clouds were exactly 50 pixels apart on every trial. Participants had an unlimited amount of time to estimate and click the location of the center of the target dots in order to score the maximum amount of points on a trial. The target dots were clustered closer together on some trials (low variance) and spread farther apart on others (high variance). We also had some penalty trials where points were lost based on how close a selection was to the center of the Danger Zone.

Low (left) and high (right) variance conditions. Adapted from Jarbo, Flemming, & Verstynen, 2017


Importantly, these trial conditions allowed us to experimentally control the amount of uncertainty (i.e., variance) in estimates of the target’s center, which hadn’t been manipulated in previous work since circles were used instead of dot clouds. By increasing the spread of dots in the high variance conditions, a person's guess as to the exact location of the center of the target dot cloud is more uncertain than in the low variance conditions. Similar to past studies, we controlled the expected gain (i.e., points awarded) for selections on trials with or without penalty conditions. Using these experimental manipulations together, we were able to use our Danger Zone task to test whether people changed their selection behavior when they thought their potential for gain or loss on a trial was more or less risky, or uncertain.

By recording both where and how quickly our participants made their selections on each trial, we got a sense for how much they tried to maximize their score and, critically, how much they tried to minimize losses. As predicted by the maximum expected gain model, people picked a LOT further away from the Danger Zone in penalty conditions. Additionally, in a previously untested hypothesis, we found that people also avoided the Danger Zone in high variance conditions when the target center was harder to locate. In fact, people selected farthest away from the Danger Zone and took the longest time before starting to move (i.e., had the slowest reaction times) on trials in high variance conditions with penalty. Overall, our results strongly suggested that when we are less sure about the expected outcome of a decision, we slow down and bias our actions in order to avoid a potential loss. Next time you think about making a quick trip to a crowded cafe and want to grab the right order and save some time, you ought to steer clear of that Danger Zone!!!

Now that we have some evidence that people go to greater lengths to avoid losses during spatial decisions as the perceived risk increases with uncertainty, we can take this work a bit further. For instance, we can start to ask questions about how people change their decision-making behavior when the kind of loss changes while the degrees of risk and uncertainty stay the same. In my current dissertation work with my colleague David Colaço, in History and Philosophy of Science at the University of Pittsburgh, we are trying to figure out how the contextual framing of loss on a risky spatial decision impacts how much people change their behavior to avoid different types of losses. In particular, we asked people to do a task just like Danger Zone, but told them that the targets and non-targets represented the positions of either enemies, allies, or trees. Their goals were to execute a drone strike on enemies or deliver ammunition to allies, which allows us to explore whether how much someone thinks they are doing a potentially harmful action changes the way they make risky spatial decisions. With this project, we hope to bridge moral philosophy and cognitive psychology to gain some insight as to how moral reasoning and judgments about a perceived moral dilemma, like causing ally casualties on a drone strike, impact our decision-making behaviors.


References

Jarbo, K., Flemming, R., & Verstynen, T. D. (2017). Sensory uncertainty impacts avoidance during spatial decisions. Experimental Brain Research, 1-9. https://doi.org/10.1007/s00221-017-5145-7

Neyedli, H. F., & LeBlanc, K. A. (2017). The Role of Consistent Context in Rapid Movement Planning: Suboptimal Endpoint Adjustment to Changing Rewards. Journal of Motor Behavior, 1–11. https://doi.org/10.1080/00222895.2016.1271296

Neyedli, H. F., & Welsh, T. N. (2013). Optimal weighting of costs and probabilities in a risky motor decision-making task requires experience. Journal of Experimental Psychology. Human Perception and Performance, 39(3), 638–645. https://doi.org/10.1037/a0030518

O’Brien, M. K., & Ahmed, A. A. (2015). Threat affects risk preferences in movement decision making. Frontiers in Behavioral Neuroscience, 9, 150. https://doi.org/10.3389/fnbeh.2015.00150

O’Brien, M. K., & Ahmed, A. A. (2016). Rationality in Human Movement. Exercise and Sport Sciences Reviews, 44(1), 20–28. https://doi.org/10.1249/JES.0000000000000066

Trommershäuser, J. (2009). Biases and optimality of sensory-motor and cognitive decisions. Progress in Brain Research, 174, 267–278. https://doi.org/10.1016/S0079-6123(09)01321-1

Trommershäuser, J., Maloney, L. T., & Landy, M. S. (2003). Statistical decision theory and the selection of rapid, goal-directed movements. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 20(7), 1419–1433. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/12868646

Trommershäuser, J., Maloney, L. T., & Landy, M. S. (2008). Decision making, movement planning and statistical decision theory. Trends in Cognitive Sciences, 12(8), 291–297. https://doi.org/10.1016/j.tics.2008.04.010

Why I love brain imaging.

One of the reasons why I love brain imaging—while imperfect in some ways—is that it allows us to literally see something inside ourselves that we might not ever see otherwise. It astounds me that scholars and practitioners of the physical, biological, and psychological sciences have learned to focus the energies of electromagnetism to gain (in)valuable (in)sight into the inner workings of the human brain.

 

Scientific advancements have come a long way in about half a century, and in the next half century, our understanding of the brain’s structural and functional connectivity will grow richer and deeper. Hopefully, we’ll find the answers to questions that have eluded humanity for millennia. But even more, I hope that we encounter mysteries of the brain that we haven’t even thought to question.

 

Beneath the skin, encased in bone, lies the most sophisticated organic machine that humans have ever known. It monitors and controls our bodily functions. It allows for us to perceive the world in a way that is more than the mere sum of our sensory abilities. Seemingly without much conscious effort on our part, our brains govern our interactions with the world by making sense of the wealth of stimuli that surround us from moment to moment.

 

Somewhere beneath the skin, encased in bone, lies the most sophisticated organic machine that is the architecture of our behavior, thoughts, and beliefs. The architecture of us. Your happiness and heartaches are built from flashes of electrochemical energy, racing along axons, blazing synapse to synapse, which we attempt to measure and then visualize—often as a brain on fire.

 

I love brain imaging because it gives me a vision of light and beauty that lies in every one of us that we know is there, but otherwise may not ever see. 

Dinnerwear? Dinner! Where? A novel brain network for making decisions

With nearly 100 pairs of sneakers staring me in the face every morning, I have a pretty difficult time choosing the ideal centerpiece for the day's trappings. Guys with more conservative wardrobes (think Barack Obama, Steve Jobs, and Mark Zuckerberg) have said they wear the same or very similar outfits every single day to preserve their cognitive effort for more important decisions than what to wear to work. In essence, it's much easier to select an outfit when you have fewer options to consider. And honestly, those guys just may have been on to something. 

For those of us who don't run countries or Fortune 500 companies, consider the more common dilemma of picking one restaurant to eat at from dozens of appetizing choices. Then, think of how mentally exhausting it is to select a meal when you're handed a menu as thick as a phone book—if you dinosaurs remember those things. At the end of the day, we usually have to put on clothes and eat, so it makes sense that we can learn to effectively and efficiently make decisions that require us to focus our attention on the aspects of decisions that we value more in spite of distractions that we value less. 

What these examples also suggest is that we can dynamically combine various pieces of information, including details about our available options and the different value that each option has. When more options are brought to our attention, the consequences of our decisions can become increasingly uncertain. However, choices with more rewarding outcomes direct our attention to features of those choices that will increase the likelihood of successful decisions in the future–like the place in my closet where I put my favorite shoes, or the section of a menu where you picked an entree you loved the last time you visited a restaurant. So, whether we are constructing the consummate ensemble of clothing or cuisine, we learn how to refine our decisions by incorporating information about more rewarding choices amid distracting options with less desirable results.

Ultimately, the ability to assimilate different types of attentional and reward information can partly influence the choices that we learn to make. This leads to one of many questions that interests us in the Cognitive Axon (CoAx) Lab: when we learn to make choices, how do the structural connections in our brain support the integration of many kinds of information that are processed in different regions? 

Previous research on nonhuman primate (e.g., monkey) neuroanatomy, which is quite similar to that of humans, has shown that connections from different areas of the cortical surface of the brain do overlap and interdigitate in the same areas within the striatum. These deep forebrain regions serve as the primary inputs to the basal ganglia. The striatal nuclei sit near the center of the brain, toward the front, and are linked to many cognitive functions, including reward, decision making, motor control, and language, to name a few.

One line of research has shown that orbitofrontal areas (associated with evaluating the success and failure of decisions), located on the underside of the brain just above the eyes, send connections to the same striatal regions as prefrontal areas (associated with making decisions), which are located along the outer sides of the brain almost directly above the orbitofrontal cortex in the frontal lobe. This pattern of connectivity has been proposed as one way that the brain can integrate information to assess the outcomes of decisions.

Separate studies observed connections from parietal areas, near the rear of the brain, ending in the same striatal areas as connections from prefrontal cortex. Given the involvement of parietal cortex in spatial attention, it is thought to play a strong role in directing attention to parts of the environment that are relevant to performing a task. Through overlapping connectivity, parietal cortex may help focus attention on important aspects of the choices that are being tracked by prefrontal cortex, thereby facilitating decision making.

Yet, despite all this work on overlapping pairwise connectivity, the particular three-way convergence of prefrontal, orbitofrontal and posterior parietal projections had not been previously demonstrated in humans or nonhuman primates. That is, until now.

In work recently published in the Journal of Neuroscience, "Converging structural and functional connectivity of orbitofrontal, dorsolateral prefrontal, and posterior parietal cortex in the human striatum", Dr. Timothy Verstynen (CoAx Lab PI) and I identified a novel network of brain connections that may integrate reward and attention information during decision making. Using an advanced MRI technique called diffusion spectrum imaging, we were able to visualize the underlying white matter pathways of 60 neurologically healthy adult participants. With this method we observed, for the first time in the human brain, long range white matter connections from disparate areas of prefrontal, orbitofrontal, and parietal cortex that converged in the same regions of the striatum. 

But do these structural connections mean that these areas are really communicating with each other? To answer this, we used resting state fMRI data, where we looked at our participants' brain activity while they lay silently in the scanner without any other external stimuli being presented to them. By analyzing resting state fMRI data, we can get a sense for which brain regions are functionally connected. More specifically, we can infer which areas may be talking to each other based on whether they show similar levels of activity at the same time when nothing necessarily interesting is going on in the outside world. 
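In its simplest form, this analysis boils down to correlating time series. Here’s a toy sketch with simulated signals standing in for real BOLD data (the regions, signal model, and noise levels are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def functional_connectivity(ts_a, ts_b):
    """Resting-state functional connectivity, in its simplest form, is just
    the Pearson correlation between two regions' activity time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

# Toy example: two "regions" that share a common fluctuation plus noise
# (stand-ins for, say, a striatal convergence zone and prefrontal cortex).
shared = rng.normal(size=300)                    # shared slow fluctuation
region_a = shared + 0.5 * rng.normal(size=300)   # region tracking the signal
region_b = shared + 0.5 * rng.normal(size=300)   # another region tracking it
region_c = rng.normal(size=300)                  # unrelated region

# region_a and region_b are strongly correlated ("functionally connected"),
# while region_c shows little correlation with either.
```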

Indeed, we found that the striatal regions–dubbed “convergence zones”–that showed the tripartite overlap of structural connections were also functionally connected to the trio of cortical areas that we investigated. Interestingly, the convergence zones were located in areas of the striatum that have also been implicated in reinforcement learning.

Together, our findings highlight a plausible network of brain connections that may integrate reward signals with visuospatial attention information to influence learning during decision making. This could have some potentially important implications. From a clinical standpoint, we may be able to gain a deeper understanding of how visuospatial reinforcement learning may break down due to damage to this network, like the gradual depletion of striatal neurons that occurs in Parkinson's disease. More generally, having demonstrated this particular pattern of structural and functional connectivity, further probing of this network may yield greater insight into the dynamic, integrative mechanisms that underlie reward-based learning when visual distractions interfere with action decisions. 

So next time you find yourself pondering the perfect outfit or picking an ideal meal, give your brain some credit for learning how to make your everyday decisions a little bit easier, and hopefully better, every time.