Robert Cialdini discusses seven major influence buckets across his two books, Influence and Pre-suasion, including:
- Social Proof.
- Personal Consistency & Commitment.
The concept of anchoring, brought up nicely in this video (https://youtu.be/uqXVAo7dVRU?t=136), does not seem to fall under any of the buckets listed above.
Does anchoring fall under any of the above buckets or is it its own separate bucket?
Persuasion Technique #1: Always Tell a Story
Humans love stories. They are in our blood.
Back in hunter-gatherer times, we transmitted information through stories, whether legends, myths, or rumors; as a result, we became wired for them.
Stories helped increase cooperation within tribes, as researchers at University College London recently showed, which in turn helped information spread as new tools and technologies developed. This allowed us to become far more technologically advanced and better organized than any other species.
Besides the anthropological explanation, there’s even a physiological reason why we love stories, one that makes us crave them.
Whenever we hear a story, we secrete a hormone called oxytocin, which is synthesized in the hypothalamus and helps humans bond with each other. As Paul J. Zak, Ph.D., the man who first found the way this little peptide works, explains:
I now consider oxytocin the neurologic substrate for the Golden Rule: If you treat me well, in most cases my brain will synthesize oxytocin and this will motivate me to treat you well in return.
When your brain secretes oxytocin, you feel empathy and kindness and carry a cooperative attitude towards others. If you want to persuade people, you must build stories into your messaging. Instead of telling your audience the benefits of your offer, tell a story that shows those benefits. Stories are the packaging in which you transmit your ideas.
Jonah Berger, a professor at the Wharton School of the University of Pennsylvania and author of Contagious: Why Things Catch On, uses a similar analogy, comparing stories to a Trojan horse:
[…] just like the epic tale of the Trojan Horse, stories are vessels that carry things such as morals and lessons. Information travels under the guise of what seems like idle chatter. So we need to build our own Trojan horses, embedding our products and ideas in stories that people want to tell. […] We need to make our message so integral to the narrative that people can’t tell the story without it.
In the simplest terms, a story has three parts:
- The introduction: In this part, you give a background on the character’s life, beliefs, ideas, and original situation.
- The climax: In here, you add a challenge to your character(s), whether that’s a life-changing situation, a belief shift, or whatever it may be that causes the original condition to change.
- The conclusion: Finally, you close the loop, telling how the character(s) faced the challenge, how they changed their life, beliefs, or ideas, and the new situation they are in.
There are many storytelling (or narrative) theories, many of them incredibly complex. All of them share the same three-part pattern, however they structure it.
Your stories don’t need to be lengthy or complicated. You can use any of these three elements in a sentence, paragraph, or entire article.
Let’s take the following example:
I used to be a marketer, but it never felt quite right, so I decided to become a writer, and after a lot of practice, I started getting paid for my writing.
The first clause is the introduction, the middle one is the climax, and the final one is the conclusion. Is this a good story? Nope. Is it a story? Indeed it is.
If you haven’t noticed, that story is a synthesized version of my article on becoming a freelance writer. I could have easily explained that in a paragraph as well, but whatever the length, the story stays the same:
- I was a marketer.
- I decided I was going to be a writer.
- I worked my ass off, and I became one.
Let’s take a look at how you can start embedding stories (your Trojan horses) into your company.
Exercise #1: Create Your Story
Ripple – The Big Effects of Small Behavior Changes
Jez Groom has been practicing behavioral science for over ten years, working with some of the biggest organizations around the world. In 2016 he founded Cowry Consulting, the leading behavioral economics consultancy, and is currently a visiting fellow of Behavioural Science at City University.
April Vellacott has been studying the field of human behavior for nearly a decade, and holds degrees in Psychology and Behavior Change. As a behavioral consultant at Cowry Consulting, she helps global clients apply behavioral science in their organizations.
Together, Jez and April are the authors of the new book, Ripple: The big effects of small behaviour changes in business, and they join the show today to share how small behavior changes can have wide-reaching effects. Listen as they give real-life examples of how nudge theory has had massive impacts on outcomes, why friction is sometimes a good thing, and how behavioral principles can benefit any business.
If you enjoy the show, please drop by iTunes and leave a review while you are feeling the love! Reviews help others discover this podcast and I greatly appreciate them!
On Today’s Episode We’ll Learn:
- How small changes can have massive impacts on the world.
- Why friction is sometimes an important part of the process.
- What anchoring is and why it’s important.
- How design plays a role in nudge theory.
- Why there is a balance between too much effort and too little effort in the outcome.
- How behavioral principles can benefit any business.
Key Resources for Jez Groom and April Vellacott:
- If you like The Brainfluence Podcast…
- Never miss an episode by subscribing via iTunes, Stitcher or by RSS
- Help improve the show by leaving a Rating & Review in iTunes (Here’s How)
- Join the discussion for this episode in the comments section below
Full Episode Transcript:
Welcome to Brainfluence, where author and international keynote speaker Roger Dooley has weekly conversations with thought leaders and world-class experts. Every episode shows you how to improve your business with advice based on science or data.

Roger’s new book, Friction, is published by McGraw-Hill and is now available at Amazon, Barnes & Noble, and bookstores everywhere. Dr. Robert Cialdini described the book as “blinding insight,” and Nobel winner Dr. Richard Thaler said, “Reading Friction will arm any manager with a mental can of WD-40.”

To learn more, go to RogerDooley.com/Friction, or just visit the bookseller of your choice. Now, here’s Roger.

Roger Dooley: Welcome to Brainfluence, I’m Roger Dooley. We’ve got not one but two guests today, both experts in behavior change for businesses. Jez Groom has been practicing behavioral science for more than 10 years and was the co-founder of Engine Decisions at Ogilvy Change, where he held the title of Chief Choice Architect. Today, he’s the founder of Cowry Consulting and an honorary research fellow in the Department of Psychology at City University. April Vellacott is a behavioral consultant at Cowry and helps clients around the world apply behavioral science. Together, they are the authors of the new book, Ripple: The Big Effects of Small Behaviour Changes in Business. Welcome to the show, April and Jez.
April Vellacott: Thanks for having us.
Roger Dooley: Yeah, so April, you studied psychology as an undergrad and earned a master’s in behavior change. But Jez, you studied biochemistry. How did you end up in so many cool behavioral science roles?
Jez Groom: Yeah. That’s a good question, a question I get asked quite a lot. My background was in advertising, so I worked in media and marketing, and then my path crossed with Rory Sutherland, who I know is one of your previous podcast interviewees. The pair of us came together to create the behavioral science practice at Ogilvy, which started out by looking at how to make the advertising process more effective. But I quickly got more interested in things that were non-advertising and got really, really enthused.
And that was kind of when I broke away and thought, actually, there are a lot more problems that can be solved with behavioral science than simply communication ones. So yeah, I created Cowry. I mean, the other reason was I really did want to go back to school and get my psychology degree. I think it was a missed opportunity. But unfortunately, at the time I had three small children, and my wife thought maybe that wasn’t a good idea.
I went straight into business and essentially practically applied it. I’ve come full circle, as you mentioned earlier. I’m now also aligned with the faculty at City, but I don’t have a formal degree.
Roger Dooley: Right. No, it’s great. And I think that any kind of scientific or engineering background can be pretty useful in this field because it gets you focused on research and evidence as opposed to opinions, which is really, I think, what applying behavioral science is all about. It’s really all about the proof, the evidence, the numbers and so on.
Yeah, and one other thing, Jez. People have called you Yoda, and I guess you can explain that. But more importantly, how do you use that when you’re presenting?
Jez Groom: Yeah, I mean, it comes down to, I suppose, the pratfall effect. Adam Ferrier, he’s a consumer psychologist in Australia, founder of a really interesting business called Thinkerbell. He came and did a conference, one of the early Nudgestock conferences that we did; we flew him over from Australia. And he talked about this phenomenon called the pratfall effect: the world isn’t perfect, and our brains are very, very good at spotting perfection, but what we really want is authenticity, maybe things that are imperfect, and those things are more motivating. So people prefer rough cookies, or people who maybe aren’t so smooth. Yeah, so I used to introduce myself as Yoda because I’m quite short. So I’m 5. I got to the age of 13 and didn’t really grow in height. I’ve got three teenage boys, and they are all taller than me, and I’m so pleased. So I think the combination of behavioral science skills to essentially nudge people, combined with a diminutive height, yeah, I earned the nickname of Yoda. I always did caveat it and say, “I didn’t think, and I don’t think, my ears are as hairy as Yoda’s.” For sure.
Roger Dooley: Just give it a few years Jez, it’ll happen. Trust me. It’s great because people think they have to be perfect and polished. And what you’re saying is, sometimes it’s okay to show a little bit of vulnerability, although preferably not in that exact thing that you’re supposed to be the expert in, but rather show it in some other way.
Jez Groom: Yeah. Yeah, a pratfall in a core competence is maybe not the best.
April Vellacott: It’s funny that you mention that actually, Roger, because I was talking to my sister on the phone earlier, from isolation of course, because we’re on lockdown in the UK. She’s applying for jobs right now from lockdown, which is a bit tricky; she’s doing everything over Skype and Zoom. And she was telling me that her presentation for this job was too perfect. So I told her that she should try and craft some of this pratfall effect into her interview. But we were struggling to think of something she could do over Zoom that wouldn’t undermine her ability to actually do the job.
Roger Dooley: I think maybe a cat walking in front of the camera for a second might do that, if she happens to have a cat.
April Vellacott: That’s a good idea.
Roger Dooley: Although, she should not do what one small, sort of low-level politician did here in the States, in California I think, where he was in a council meeting of some kind and his cat came on screen, and he physically picked the cat up and chucked it into the corner. You heard a yowl as the cat hit, and now he’s under pressure to resign for animal cruelty. So you’ve got to do that just right. Sorry. I really enjoyed Ripple. It was a fun, easy read. The subtitle reminds me of The small BIG by Cialdini, Martin and Goldstein. But I think the key thing there, one of the most important things about behavioral science, is that small, often inexpensive interventions can have an outsized impact. And there are still a lot of people in business who just don’t get how effective behavioral science can be. Jez, I’m curious, you took an unusual approach to convincing the management team at Ogilvy of the importance and effectiveness of behavioral science. So why don’t you tell the rabbit story?
Jez Groom: Yeah, so I think one of the interesting things is that I don’t think people believe in behavioral science until they experience it. I think some of the early work done by Ariely, although it was very playful, really dramatized some of these biases and shortcuts we have in our minds. And yeah, when we were at Ogilvy, we did four experiments to launch the practice, and I’ll talk about one of them, which was kind of my favorite. Just imagine a scenario: you’ve got, I don’t know, 30 relatively to very highly paid advertising executives who are very self-conscious. These guys and girls are cool, they’re smart, they’re in advertising, and they don’t want to look stupid. And we did an experiment to try and get them to behave like a rabbit, to bounce around like a bunny.
So what we did was we had a control group, and essentially they were told to watch a video. It was like a children’s video, something my children have seen, and there was a song in it. I’m not going to try and sing it too well, but essentially it says, “See the little bunny sleeping on the floor.” And then it says, “Wake up little bunnies.” And you have to essentially jump up and then bounce around like a bunny, which is great fun for a three-year-old, less so for a 33-year-old. In the control group, we just said, “Look, watch the video and read these instructions,” which said to play along with the actions. Unbeknownst to them, we videoed them. And of course they just sat around the room and were like, yeah, I’m not going to do that. And you could see them talking to each other, saying, “Yeah, we’re not going to do this, are we?” Because they didn’t know that they were being watched.
And then we had another treatment group of about 20 people, where myself and also the chief executive and the group planning director were in on it. So essentially I had some strong messengers, and the other participants followed my instructions. I said, “If everybody could lay down on the floor,” which they did, which again is quite odd in a business environment. We all pretended to be asleep. And then when we got to “wake up little bunnies,” we all jumped up, as in myself, the planning director and the CEO, and we started jumping around. And everybody followed; unbeknownst to them, we had secret cameras all around the room. And the great thing about it was, being in an ad agency, we managed to cut together a film, which you can see at ripple-book.com, essentially bringing these experiments to life.
And we showed this film in front of all their colleagues. So there were these 27 people who had been made to look a little bit silly and change their behavior because they were following the messengers and joining the in-group and herd. I think it’s when you do these experiments on site, in the environment and context in which people are working, that’s when you get believability. The amount of times we’ve gone into businesses to say behavioral science really works, and they go, “Yeah, I can see how it could work for that business. I can’t see how it’d work for my business, because we’re different. We’ve got a different proposition and different challenges.” Then we do a pilot study and prove it works, and they go, “God, this stuff really works.” And we go, “Yep. Yep, it does.” And from then on, we can move on. But yeah, it’s about making it a little bit fun at the start, and in certain business environments it’s relatively okay to do that. But yeah, we had a lot of fun at Ogilvy.
Roger Dooley: Yeah. Well, I think you’re starting with a situation where most people are willing to acknowledge that other people are irrational actors, that they can be influenced by these nudges and things, but they themselves are not, of course, which actually isn’t true. I think that’s the bias you have to overcome in any kind of business situation where you’re trying to sell the effectiveness of this. They say, “I would never do that. I wouldn’t be influenced by that.” And that tends to make them doubt the effectiveness of it.
Jez Groom: Yeah. I mean, I don’t know, April, if you’ve had any experiences. Certainly, I think you had an experience with the anchoring effect from academics, in your first lectures, I think.
April Vellacott: Yeah, for sure. I mean, you’re absolutely right, Roger, that once it’s been brought to life for you in a way that feels really relevant to you, you can’t help but believe it’s true. It really brings it off the pages of a book and into the real world. So I did my undergraduate degree in psychology at the University of St Andrews in Scotland, and this was one of the most memorable moments of my degree, actually. It seemed like any other seminar one morning, and our lecturer put up this picture of our matriculation cards on the whiteboard and said, “I want you all to reach into your pocket and just write down the last two numbers of your matriculation card.” So we all did that. We all had really random numbers: someone had 12, another had 27, someone else had 98. And then they put up this picture of a bottle of wine on the screen and asked us to write down how much we’d be willing to pay for that bottle of wine. We got quite excited at this point, because we were all students and we thought maybe we were in with a chance of winning an expensive bottle of wine.
Roger Dooley: That’s right. The price is right. You’re the winner.
April Vellacott: So we all wrote down the prices we were willing to pay, and we were probably quite stingy because we were students. At that time, I was paying about four pounds per bottle of wine, not joking about that. And what was really interesting was that the price we were willing to pay for this bottle of wine was influenced by this completely unrelated number we’d just written down, the last two digits of our matriculation number. And then they went on to explain this principle that we all know called anchoring, which, as you guys know, is where your judgment of the value of something can be affected by the first piece of information you’re exposed to. So I never really forgot that. And it’s those kinds of ways of bringing it to life for the businesses you work with that, as you say, really make them believe: if this applies to me, it must apply to colleagues and to our customers. And so there must be something interesting we can do here by applying behavioral science to make things better.
Roger Dooley: Well, sure, because I don’t think anybody would acknowledge that seeing a random two digit number would influence their estimate of the price of something. That’s a classic Ariely experiment.
Roger Dooley: But I think as a demo, just to show people that yes, even you, Mr. or Ms. Rational Actor, have these really bizarre influences on your behavior, it’s really powerful. I’m curious, I assume that you’ve done these kinds of things at clients or potential clients and such. Have you ever had one go awry on you, where somehow the demonstration didn’t work at all?
April Vellacott: I mean, we actually did that very experiment. We tried to do the anchoring experiment in real time for one of our clients, a couple of months ago I think, Jez and I, and it didn’t quite work out. So we just kind of glossed over it and moved on quite quickly.
Roger Dooley: Well, if you do it enough times, you’re going to get a random result that isn’t very good. And it could be, too, that these days a lot of people have read Predictably Irrational. They might be on guard for that kind of manipulation, or even decide, “I got a low number, so I’m going to give a really high estimate for the value of that product.” But one of the things I really liked about the book was that you emphasize a do-it-yourself approach very explicitly in every chapter. A lot of books written by consultants are sort of demonstrations of how smart the authors are and how much they can help your organization if you hire them. But here, and obviously businesses can benefit from hiring an organization like yours, but for those businesses who can’t afford outside consultants, or can’t afford them for smaller projects (even large businesses can’t bring in outsiders on every single thing they do), there is so much practical advice in there about how to go about doing it. I really commend you for that. It makes the book, I think, very useful for a business of any size, whether large or small.
Jez Groom: Yeah. I mean, it’s a really good point. One of the things that April and I wanted to do was to democratize behavioral science. And you mentioned Ariely; I think Ariely is a professor who has especially bridged that gap with the business world. He obviously sits on a few startup boards as a behavioral officer. But quite a lot of the work in broader academia seems to be written in a language that’s just impenetrable for anybody who isn’t sitting there doing a PhD to understand, engage with, and then use. And there are quite a few behavioral scientists in my network who get frustrated with that. Sometimes you say things like, “Yeah, there’s this amazing principle called the primacy effect: the first impression you have within a customer experience shapes the rest of the experience.” And you kind of go, “Okay, so that’s ‘first impressions count,’ then.” And you kind of go, “Yeah.”
So quite a lot of behavioral insights are common sense, or uncommon sense, but often dressed up in empirical language that makes them harder than they need to be. Where we landed was that people in business have got businesses to run. They don’t have time to essentially get a PhD in behavioral science, and they want to know: what’s the principle? How can it be used? How can I use it for my business? And if I can’t do it, can you help me do it? So we’ve spent a lot of time working with our clients to share and transfer our knowledge and skills to them, because we believe behavioral science can be practiced by everybody, if they’re given the right processes, tools and governance to do it properly. I don’t know, April, if you’ve got anything else to add.
April Vellacott: Yeah. It’s really nice of you to say that, Roger, first of all that it’s really easy to read, and also that it’s really accessible and going to help people do it themselves, because that’s exactly what we set out to do. For anyone who hasn’t got their hands on a copy of the book yet, hopefully they’ll see when they flick through it that we’ve also written it with behavioral science in mind, to make it really easy to digest and understand. We’ve tried to keep the language as free from jargon as possible, using really simple, clear English to improve people’s processing fluency and reduce the cognitive load you might get from reading a really dense book about behavioral science. We’ve chunked it into manageable chapters, and we’ve pulled out and made salient the most important bits of the book; key quotes are highlighted. So yeah, we’ve really tried to bake behavioral science into the book itself, so that we practice what we preach and make it as easy as possible for people to start doing it themselves, as you say.
Roger Dooley: Right. Well, that’s very meta, April, and it’s cool. And I agree, because you can see those little things you’ve done in there; you’re trying to minimize cognitive friction in one sense. And that gives me a transition into the effort heuristic, because in Friction, my book, I talk about effort and how it tends to discourage behaviors, which is bad if you want people to buy your stuff and you’re adding effort to the process, but good if you add effort to, say, consuming unhealthy foods. But sometimes people expect effort to be an important part of the process. So why don’t you explain that?
Jez Groom: Yeah. I mean, we have a kind of two-by-two matrix. I think Dilip Soman at Rotman put forward essentially nudges that are helpful and nudges that are harmful, and then nudges that are hard to do and nudges that are easy to do. So quite often we’re very, very careful in our design to ask: are we designing this to make it easy and helpful for the customer, to reduce that friction, as you say, so that we’re not taking away cognitive capacity for things which shouldn’t be hard to do? That applies across application forms, user journeys, emails, conversations, all that type of stuff.
But we also talk about positive friction. There are some times when you don’t want an extremely fluent process; you do want people to engage with System 2, in which case you’ve got to stop them and essentially build in some positive friction, as we like to call it. Often that’s in financial services. When you want people to really think about whether this is the right insurance product, of course you need to display the different ranges of insurance products in the right way so they understand them, but you might have a number of different stages they need to go through to see if it’s the right one for them, which takes more cognitive effort. And the flip side, which again you know and some of the listeners might know, is that people actually find those positive-friction experiences even more satisfying. So it’s one of those conundrums in behavioral science: effort is bad in some cases, but effort is good in others. And it’s about that nuance, understanding when and where to reduce friction and when to add in the positive friction.
Roger Dooley: Well, yeah, and you had a story in the book too, about some people, I think they were cleaning clothes, who expected to exert a certain amount of effort, and if there was too little effort, it didn’t seem like they were getting it done. It reminded me of an old story, maybe apocryphal, I’ve heard this actually might not be true, but it’s sort of common wisdom that when cake mixes were first introduced, they included everything: all you needed to do was add water and pop it in the oven, and you were done. And they weren’t selling well. But when they required the housewife, who was the principal consumer at the time, to add an egg, then it seemed more like baking, and they started selling. So people sometimes need to feel that, if they’re doing the process right, they’re putting some kind of effort into it.
Jez Groom: Yeah. I mean, similar on the cooking front: I worked on a famous brand of mayonnaise. One of the guys who was thinking about behavioral science came in with a really crazy idea, like a really crazy idea: “Why don’t we create a video which teaches people how to make mayonnaise?” And the client’s sitting there going, “Why would we do that? Then they won’t buy our product.” But what he knew, because he had tried to make mayonnaise (I’m not a great cook; April is far better than I am), is that making good mayonnaise is really quite hard. So the thought process was: if we demonstrate that it actually takes a lot of effort and a lot of skill to create good mayonnaise, then people might try it, get a bad result, and come out the other side going, “You know what? It was fun and interesting, but I’m just going to buy this brand of mayonnaise and save myself the effort from now on, because it’s a hell of a lot easier.”
So yeah, I think the counterintuitive nature of the human brain is kind of strange sometimes. We talk in the book about risky and brave ideas, and often it takes a certain type of client, a certain type of business, that has that kind of bravery and courage to try these things out. Sometimes they don’t work, but sometimes they do. In this case, they made the film, and the customer reaction was exactly that: people actually liked the mayonnaise more because they respected the way it was made.
Roger Dooley: Yeah. Or if like me, I would probably watch about the first 30 seconds of the video and say, “I can buy a jar of that stuff for $2. I’m done.”
Jez Groom: Yeah, exactly. Might not even get there. Yeah.
Roger Dooley: And so one thing that I’ve written about occasionally is the power of food samples. And oddly enough, April, you had a story about food samples in the book. Why don’t you tell your story?
April Vellacott: Yeah. It’s a story we included in the book because I wanted to bring to life how people often use things from behavioral science quite intuitively. Like Jez said with his example, people understand that first impressions matter, but they might not necessarily have the academic term to attach to that thing they know. So when I first moved to London after graduating, I got a job at a food market, one of the most famous markets in London, called Borough Market. It’s been there for, like, over a thousand years, I think there’s been a market on that site. I was working on this bakery stall, and the days were long and very cold as well. So I used to squirrel away a lot of the snacks out of boredom. But something else we’d do out of boredom was cut up the really, really rich, dense brownies and give out samples to customers.
And it was amazing the effect it had. Just by trying this tiny sample of our brownies, people would buy something from the stall, and more often than not, it wouldn’t actually be a brownie. The thing that, looking back, I realize I was using was the principle of reciprocity: I give you something, and then you feel like you want to return the favor in kind, because you hate feeling like you owe me something. So people would try this little sample brownie, and they would buy something else from the stall. So yeah, those are principles that market traders have been using for thousands and thousands of years. Even if they haven’t put a name to it, they’ve been using it unwittingly.
Roger Dooley: Right. And I guess it’s pretty clear that what you have there is reciprocity, not just, “Oh, hey, this brownie is really good. I’m going to buy some of those,” because in fact they were buying bread and other products that weren’t necessarily brownies. So yeah, it’s a very nice little demonstration. I’ve written about it. Here in Texas, and only in Texas of the 50 United States, we have a supermarket chain called HEB. They’re quite dominant in the state, and they are also the number-one-ranked supermarket in the country for customer preference, outranking even folks like Trader Joe’s and Costco, who have very loyal customer followings. One of the things they do is extensive sampling throughout the store. And I think it has multiple effects. It invokes that reciprocity effect. And occasionally you sample a product and say, “Hey, that actually is pretty good.” I’ve bought the exact product they were sampling because I said, wow, I want to have some of that when I get home.
And also it creates a sense of fun and excitement throughout the store. So really, it works on multiple levels, and their biggest local competitor simply doesn’t do that. I don’t know that I’ve ever been in that competitor’s store when they’ve had somebody handing out samples, and it just feels flat. And it’s not quite the end of that story: eventually HEB opened up a second store very close to their competitor. And now the competitor’s parking lot is mostly empty. People weren’t going there as much to begin with, but with the nearby HEB store, now it’s almost deserted. So they do a lot of things right, and one of those is sampling. Now, I was surprised to find that nudges even work with criminals. One of your more interesting examples: how did you find that nudges affected the behavior of criminals in a positive way? Or I don’t know if they’re ex-criminals, or once you’re a criminal, are you always a criminal?
Jez Groom: Yeah. That was a really interesting brief. And this is what I meant earlier on about working in advertising being fun, and actually behavioral science could apply to lots of problems which advertising agencies would never even get asked about. We’d done some work, and a member of the award panel, like the jury, had seen this work. He was in financial services, but he also had a number of different kinds of programs, and one of them was a rehabilitation program for people that are on probation or parole. So they’ve been in jail, it tends to be relatively low-level crime, and then they come out and they’re on various programs. It might be an alcohol rehabilitation program, drugs, sometimes community service. And then they have to check in with their probation officer, who’s called the case manager.
And the challenge, I suppose, was that if you missed an appointment, then it caused problems, it put up a signal. And most of the time people didn’t want to miss their appointment, but life gets in the way. Sometimes they have quite chaotic lifestyles. So they have to ring in to their probation officer. And the way that it worked was the probation officer is seeing like 30 different people across the week, so you couldn’t get hold of them on their mobile, but you could ring into a contact center. And if you rang the contact center, they could see your probation officer’s full diary. But because the people in the contact center felt like administrative, sort of back-office staff, quite a lot of the criminals would ring up and say, “Can I speak to my probation officer?” And they’d say, “I’m really sorry, she’s not available at the moment.”
And they’d say, “Okay, fine,” and just put the phone down. And then they’d get frustrated and maybe wouldn’t ring back, and then miss an appointment. And too many of these missed appointments means that they’ve got a very high probability of going back to jail. And it can be solved quite simply. So we did two things. The first thing was, we just changed the way that the people on the phones introduced themselves. So rather than saying, “Hello, can I help you?”, they introduced their title, and they’re actually probation consultants, that was their title, and said, “Hello, my name is Jez and I’m a probation consultant here. Can I start by taking your account number or your customer number?” And they’d give the customer number, they’d get all the details in front of them, go through some security checks, and they’d say, “And how can I help you?” And they’d say, “Well, I need to change the appointment.” And they’d say, “Well, I can do that for you right now on the call. I can see there’s availability next Thursday at three. I’ll put you in there.” And they’d say, “That’s absolutely fantastic, great.”
So a simple thing of adding authority at the beginning of the call, in the introduction, meant that the people on parole were just far more likely to trust, be compliant, and go along with the flow of what the authority said, and we measured it. So we had pre and post. We didn’t have a split group on this experiment, we had a pre and post. And we also changed some things on the letters that they got. They got these reminder letters, which were just really badly designed. They were written like letters that could have been written on a typewriter in 1892, and weren’t really salient for the time of the appointment or the numbers that you had to call to change it.
So people would often not even read these letters. The combination of letters that were hard to read and ringing through to a back office meant that it was just a system that had some inherent negative bias within it. So we made these changes, we refreshed the letters, we changed the conversations, and we saw over a 100% increase, it was like a 103% increase, in the first-contact fix rate. So whereas before they were getting maybe half of people saying, “Yeah, that’s fine. Sign me up,” they got another half again of people saying, “Yeah, this is a great thing to do,” which obviously saves a lot of time, adds efficiency, and makes a better customer experience all around. And I mean, I think it’s fair to say that nudge theory isn’t a silver bullet and it’s not magic. So for sure, not everybody is going to be influenced by these kinds of small changes.
And especially when you’re dealing with hardcore criminals, because there were some people on parole that had been in and out of jail for 30 or 40 years, and they’re a bit more resistant to that sort of interaction. But that said, it was a great result. And yeah, it was really, really interesting. And in further work that we’ve done, clients in the business world often say to us, “Well, we’ve got a very, very challenging environment. We often have people ringing up about complaints and saying that our products aren’t great.” And we’re like, “Yeah, it’s okay, we’ve worked with criminals in the center of England. I think we’re okay.”
Roger Dooley: Yeah. That’s great. I think you mentioned a key point there, and that is salience. In any kind of written communication, whether it’s a paper letter or an email, people do not read, and it’s something I get reminded of constantly. These days, a lot of people are spending time on social media, and you will see somebody post something and start getting replies, and it’s clear that many of those replies never got past the first sentence or two, or they missed the key point in the fifth sentence that would have made their comment meaningless. People just don’t read stuff. And I’m sure you’ve sent out emails and realized from the replies that nobody read them, or didn’t read them through. So using some of these visual cues to eliminate anything that isn’t necessary, but also to make prominent those things that are important, and to use language that’s easily processed as opposed to more difficult to process, all those things can make such a big difference in communication.
April Vellacott: Yeah. I wonder whether you could argue, Roger, that that’s becoming more important than ever, with everyone, like you say, consuming really snackable content on social media. If our attention spans are getting shorter, as people might hypothesize, I wonder whether things like salience are going to be even more important in the future.
Roger Dooley: I think so, until we can get our information overload under control. I just saw a study from Adobe, and this is a self-report study, so I guess I would take the numbers with a grain of salt, but the people who responded were spending on average more than three hours a day on work email, and more than two hours a day on personal email. It sounds a little high to me, unless some of it is multitasking, like doing business emails while in a business meeting or something. But regardless, this is people’s perception that they’re spending five or more hours a day in email. And so you can imagine that they’re trying to get through it as rapidly as possible. They’re trying to get the essence of whatever the communication is and figure out if they have to reply, if they can delete it, or if they can just file it, and they aren’t going to be reading every word.
Jez Groom: No, no. It is interesting on that point. I mean, April and I have just been working with a financial services client. I think financial services and utilities are definitely the worst sectors for information overload. They come with a lot of compliance, which I think actually creates a negative effect. Only last week, we were looking at an email where the objective was to get a customer to confirm their details. So they’d signed up to a financial product, and we just needed them to confirm their details before we could activate this particular product. And we looked at this very, very simple email, and there were 14 friction points in it. And it’s staggering: the subject header is wrong, or isn’t motivating enough for you to open it. The purpose of the email isn’t very clear from the start.
And essentially it feels like the financial services company is doing you a favor by you buying a product, which is completely the wrong way around. They then come up with a five-step process, which sounds really complicated, in an Excel-spreadsheet sort of format, and it’s really unclear what you’ve got to do. And then it gets signed off by a guy who’s selling you the product, and you’re sort of like, “Yeah, I’m not so sure, it’s a salesperson.” And then the reply address was “new business” at this company, and you’re sort of like, “Yeah, I’m not so sure these things are right.” And our key frustration, I think, at Cowry, is that there are quite a lot of behavioral scientists that are very, very good at the science, but frankly, bloody awful at the design.
So, designing what the email intervention should look like to change the behavior, such that the customer gets what they want to do in as easy a way as possible. We have a team of five designers at Cowry. They’re all psychologists, but with a real strong passion for graphic design, and they bring these academic principles to life. So April and I are pretty good at the words and the conversations, but these guys and girls bring the visual side: should it sit on the left or on the right, what color should it be, how big should it be, what order should these things be in, is there an icon, what font is it, is it bold? All these sorts of things really, really do matter when designing an intervention to change behavior. I mean, so many letters and emails get written as if they were written on a typewriter in 1819.
And you’re kind of like, “Hey, I’ve got this crazy idea. Why don’t we put a picture in this one which shows the person doing the thing that we want them to do, which is maybe, I don’t know, typing ‘register’ on a website, and they look quite happy about it.” And essentially that’s going to make people go, “Oh, right, that’s what this email is about. It’s about registering and feeling happy about signing up. I think I might do that.” And it seems so obvious to us, but I think quite a lot of people, certainly in the business world, just haven’t been exposed to a lot of these behavioral science principles. And that’s all we wanted Ripple to do: to say, this is a principle, and once you’ve been taught it, you experiment with it, and if it works, then great, you can use it throughout your life and your career, across all these different channels.
Roger Dooley: Yeah. Sometimes it’s just common sense. I think Jez, people have occasionally come to me saying, “Okay, we need to implement some kind of a nudge strategy here to increase our sales.” And you look at their website or the email, whatever it is and it’s just so badly designed you don’t need a nudge strategy, you need people to be able to find the buy button. It’s not rocket science. But anyway, we could go on forever here, I think. But let me remind our listeners that today we are speaking with Jez Groom and April Vellacott, behavior change experts and authors of the new book, Ripple: The Big Effects of Small Behaviour Changes in Business. Jez and April, where can people find you and your ideas?
April Vellacott: So we actually created a website to go along with the book Ripple, because there were so many case studies and stories, like the ones we’ve touched on today, that had videos and other material to go with them. So we’ve got this companion hub online, which you can find at www.ripple-book.com, and it’s full of videos, articles, and extra content, so you can delve deeper into the stories behind each of the chapters and start doing it yourself. And from there, you can also connect with us, and you can buy a copy of the book, which is available now. I think it’s available in the U.S. on Amazon, and I think there’s an audiobook coming soon as well. I think that’s going to drop any day.
Roger Dooley: Great. Well, by the time this airs, I’m sure all of those things will be available. And we will link to that website to the book at Amazon and any other resources we spoke about on the show notes pages at RogerDooley.com/podcast. And we’ll have a text version of our conversation there too. Jez and April, thanks for being on the show. It’s been fun.
Jez Groom: Thanks so much, Roger. It’s been a lot of fun.
April Vellacott: It’s been a pleasure. Thanks for having us.
Thank you for tuning into this episode of Brainfluence. To find more episodes like this one, and to access all of Roger’s online writing and resources, the best starting point is RogerDooley.com.
And remember, Roger’s new book, Friction, is now available at Amazon, Barnes and Noble, and booksellers everywhere. Bestselling author Dan Pink calls it “an important read,” and Wharton professor Jonah Berger said, “You’ll understand Friction’s power and how to harness it.”
Schneier on Security
The “availability heuristic” is very broad, and goes a long way toward explaining how people deal with risk and trade-offs. Basically, the availability heuristic means that people “assess the frequency of a class or the probability of an event by the ease with which instances or occurrences can be brought to mind.” 28 In other words, in any decision-making process, easily remembered (available) data are given greater weight than hard-to-remember data.
In general, the availability heuristic is a good mental shortcut. All things being equal, common events are easier to remember than uncommon ones. So it makes sense to use availability to estimate frequency and probability. But like all heuristics, there are areas where the heuristic breaks down and leads to biases. There are reasons other than occurrence that make some things more available. Events that have taken place recently are more available than others. Events that are more emotional are more available than others. Events that are more vivid are more available than others. And so on.
There’s nothing new about the availability heuristic and its effects on security. I wrote about it in Beyond Fear , 29 although not by that name. Sociology professor Barry Glassner devoted most of a book to explaining how it affects our risk perception. 30 Every book on the psychology of decision making discusses it.
In one simple experiment, 31 subjects were asked this question:
- In a typical sample of text in the English language, is it more likely that a word starts with the letter K or that K is its third letter (not counting words with less than three letters)?
Nearly 70% of people said that there were more words that started with K, even though there are nearly twice as many words with K in the third position as there are words that start with K. But since words that start with K are easier to generate in one’s mind, people overestimate their relative frequency.
In another, more real-world, experiment, 32 subjects were divided into two groups. One group was asked to spend a period of time imagining its college football team doing well during the upcoming season, and the other group was asked to imagine its college football team doing poorly. Then, both groups were asked questions about the team’s actual prospects. Of the subjects who had imagined the team doing well, 63% predicted an excellent season. Of the subjects who had imagined the team doing poorly, only 40% did so.
The same researcher performed another experiment before the 1976 presidential election. Subjects asked to imagine Carter winning were more likely to predict that he would win, and subjects asked to imagine Ford winning were more likely to believe he would win. This kind of experiment has also been replicated several times, and uniformly demonstrates that considering a particular outcome in one’s imagination makes it appear more likely later.
The vividness of memories is another aspect of the availability heuristic that has been studied. People’s decisions are more affected by vivid information than by pallid, abstract, or statistical information.
Here’s just one of many experiments that demonstrates this. 33 In the first part of the experiment, subjects read about a court case involving drunk driving. The defendant had run a stop sign while driving home from a party and collided with a garbage truck. No blood alcohol test had been done, and there was only circumstantial evidence to go on. The defendant was arguing that he was not drunk.
After reading a description of the case and the defendant, subjects were divided into two groups and given eighteen individual pieces of evidence to read: nine written by the prosecution about why the defendant was guilty, and nine written by the defense about why the defendant was innocent. Subjects in the first group were given prosecution evidence written in a pallid style and defense evidence written in a vivid style, while subjects in the second group were given the reverse.
For example, here is a pallid and vivid version of the same piece of prosecution evidence:
- On his way out the door, Sanders [the defendant] staggered against a serving table, knocking a bowl to the floor.
- On his way out the door, Sanders staggered against a serving table, knocking a bowl of guacamole dip to the floor and splattering guacamole on the white shag carpet.
And here’s a pallid and vivid pair for the defense:
- The owner of the garbage truck admitted under cross-examination that his garbage truck is difficult to see at night because it is grey in color.
- The owner of the garbage truck admitted under cross-examination that his garbage truck is difficult to see at night because it is grey in color. The owner said his trucks are grey “because it hides the dirt,” and he said, “What do you want, I should paint them pink?”
After all of this, the subjects were asked about the defendant’s drunkenness level, his guilt, and what verdict the jury should reach.
The results were interesting. The vivid vs. pallid arguments had no significant effect on the subjects’ judgments immediately after reading them, but when they were asked again about the case 48 hours later–they were asked to make their judgments as though they “were deciding the case now for the first time”–they were more swayed by the vivid arguments. Subjects who read vivid defense arguments and pallid prosecution arguments were much more likely to judge the defendant innocent, and subjects who read vivid prosecution arguments and pallid defense arguments were much more likely to judge him guilty.
The moral here is that people will be persuaded more by a vivid, personal story than they will by bland statistics and facts, possibly solely due to the fact that they remember vivid arguments better.
Another experiment 34 divided subjects into two groups, who then read about a fictional disease called “Hyposcenia-B.” Subjects in the first group read about a disease with concrete and easy-to-imagine symptoms: muscle aches, low energy level, and frequent headaches. Subjects in the second group read about a disease with abstract and difficult-to-imagine symptoms: a vague sense of disorientation, a malfunctioning nervous system, and an inflamed liver.
Then each group was divided in half again. Half of each half was the control group: they simply read one of the two descriptions and were asked how likely they were to contract the disease in the future. The other half of each half was the experimental group: they read one of the two descriptions “with an eye toward imagining a three-week period during which they contracted and experienced the symptoms of the disease,” and then wrote a detailed description of how they thought they would feel during those three weeks. And then they were asked whether they thought they would contract the disease.
The idea here was to test whether the ease or difficulty of imagining something affected the availability heuristic. The results showed that the control groups–who read either the easy-to-imagine or the difficult-to-imagine symptoms–showed no difference. But those who were asked to imagine the easy-to-imagine symptoms thought they were more likely to contract the disease than the control group, and those who were asked to imagine the difficult-to-imagine symptoms thought they were less likely to contract the disease than the control group. The researchers concluded that imagining an outcome alone is not enough to make it appear more likely; it has to be something easy to imagine. And, in fact, an outcome that is difficult to imagine may actually appear to be less likely.
Additionally, a memory might be particularly vivid precisely because it’s extreme, and therefore unlikely to occur. In one experiment, 35 researchers asked some commuters on a train platform to remember and describe “the worst time you missed your train” and other commuters to remember and describe “any time you missed your train.” The incidents described by both groups were equally awful, demonstrating that the most extreme example of a class of things tends to come to mind when thinking about the class.
More generally, this kind of thing is related to something called “probability neglect”: the tendency of people to ignore probabilities in instances where there is a high emotional content. 36 Security risks certainly fall into this category, and our current obsession with terrorism risks at the expense of more common risks is an example.
The availability heuristic also explains hindsight bias. Events that have actually occurred are, almost by definition, easier to imagine than events that have not, so people retroactively overestimate the probability of those events. Think of “Monday morning quarterbacking,” exemplified both in sports and in national policy. “He should have seen that coming” becomes easy for someone to believe.
The best way I’ve seen this all described is by Scott Plous:
In very general terms: (1) the more available an event is, the more frequent or probable it will seem; (2) the more vivid a piece of information is, the more easily recalled and convincing it will be; and (3) the more salient something is, the more likely it will be to appear causal. 37
Here’s one experiment that demonstrates this bias with respect to salience. 38 Groups of six observers watched a two-man conversation from different vantage points: either seated behind one of the men talking or sitting on the sidelines between the two men talking. Subjects facing one or the other conversants tended to rate that person as more influential in the conversation: setting the tone, determining what kind of information was exchanged, and causing the other person to respond as he did. Subjects on the sidelines tended to rate both conversants as equally influential.
As I said at the beginning of this section, most of the time the availability heuristic is a good mental shortcut. But in modern society, we get a lot of sensory input from the media. That screws up availability, vividness, and salience, and means that heuristics that are based on our senses start to fail. When people were living in primitive tribes, if the idea of getting eaten by a saber-toothed tiger was more available than the idea of getting trampled by a mammoth, it was reasonable to believe that–for the people in the particular place they happened to be living–it was more likely they’d get eaten by a saber-toothed tiger than get trampled by a mammoth. But now that we get our information from television, newspapers, and the Internet, that’s not necessarily the case. What we read about, what becomes vivid to us, might be something rare and spectacular. It might be something fictional: a movie or a television show. It might be a marketing message, either commercial or political. And remember, visual media are more vivid than print media. The availability heuristic is less reliable, because the vivid memories we’re drawing upon aren’t relevant to our real situation. And even worse, people tend not to remember where they heard something—they just remember the content. So even if, at the time they’re exposed to a message, they don’t find the source credible, eventually their memory of the source of the information degrades and they’re just left with the message itself.
We in the security industry are used to the effects of the availability heuristic. It contributes to the “risk du jour” mentality we so often see in people. It explains why people tend to overestimate rare risks and underestimate common ones. 39 It explains why we spend so much effort defending against what the bad guys did last time, and ignore what new things they could do next time. It explains why we’re worried about risks that are in the news at the expense of risks that are not, or rare risks that come with personal and emotional stories at the expense of risks that are so common they are only presented in the form of statistics.
It explains most of the entries in Table 1.
“Representativeness” is a heuristic by which we assume the probability that an example belongs to a particular class is based on how well that example represents the class. On the face of it, this seems like a reasonable heuristic. But it can lead to erroneous results if you’re not careful.
The concept is a bit tricky, but here’s an experiment that makes this bias crystal clear. 40 Subjects were given the following description of a woman named Linda:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Then the subjects were given a list of eight statements describing her present employment and activities. Most were decoys (“Linda is an elementary school teacher,” “Linda is a psychiatric social worker,” and so on), but two were critical: number 6 (“Linda is a bank teller”) and number 8 (“Linda is a bank teller and is active in the feminist movement”). Half of the subjects were asked to rank the eight outcomes by the similarity of Linda to the typical person described by the statement, while the others were asked to rank the eight outcomes by probability.
Of the first group of subjects, 85% responded that Linda resembled a stereotypical feminist bank teller more than a bank teller. This makes sense. But of the second group of subjects, 89% thought Linda was more likely to be a feminist bank teller than a bank teller. Mathematically, of course, this is ridiculous. It is impossible for the second alternative to be more likely than the first; the second is a subset of the first.
As the researchers explain: “As the amount of detail in a scenario increases, its probability can only decrease steadily, but its representativeness and hence its apparent likelihood may increase. The reliance on representativeness, we believe, is a primary reason for the unwarranted appeal of detailed scenarios and the illusory sense of insight that such constructions often provide.” 41
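The conjunction rule behind this takes only a couple of lines of arithmetic to check. The probabilities below are invented purely for illustration; the point is that the more detailed scenario can never be more probable than either of its components:

```python
# Conjunction rule: P(A and B) can never exceed P(A).
# Both probabilities below are made-up illustration values.
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.80   # P(active feminist | bank teller), even if high

# Probability of the detailed scenario (bank teller AND feminist):
p_feminist_teller = p_teller * p_feminist_given_teller

print(f"{p_feminist_teller:.2f}")    # 0.04 -- necessarily <= 0.05
assert p_feminist_teller <= p_teller
```

Adding the feminist detail makes the description more representative of Linda, but multiplying by a probability of at most 1 can only shrink the scenario's likelihood.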
Doesn’t this sound like how so many people resonate with movie-plot threats–overly specific threat scenarios–at the expense of broader risks?
In another experiment, 42 two groups of subjects were shown short personality descriptions of several people. The descriptions were designed to be stereotypical for either engineers or lawyers. Here’s a sample description of a stereotypical engineer:
Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.
Then, the subjects were asked to give the probability that each description belonged to an engineer rather than a lawyer. One group of subjects (Condition A) was told that the descriptions came from a population of 70 engineers and 30 lawyers.
The second group of subjects (Condition B) was told that the descriptions came from a population of 30 engineers and 70 lawyers.
Statistically, the probability that a particular description belongs to an engineer rather than a lawyer should be much higher under Condition A than Condition B. However, subjects judged the assignments to be the same in either case. They were basing their judgments solely on the stereotypical personality characteristics of engineers and lawyers, and ignoring the relative probabilities of the two categories.
Interestingly, when subjects were not given any personality description at all and simply asked for the probability that a random individual was an engineer, they answered correctly: 70% under Condition A and 30% under Condition B. But when they were given a neutral personality description, one that didn’t trigger either stereotype, they assigned the description to an engineer 50% of the time under both Conditions A and B.
And here’s a third experiment. Subjects (college students) were given a survey which included these two questions: “How happy are you with your life in general?” and “How many dates did you have last month?” When asked in this order, there was no correlation between the answers. But when asked in the reverse order–when the survey reminded the subjects of how good (or bad) their love life was before asking them about their life in general–there was a 66% correlation. 43
Representativeness also explains the base rate fallacy, where people forget that if a particular characteristic is extremely rare, even an accurate test for that characteristic will show false alarms far more often than it will correctly identify the characteristic. Security people run into this heuristic whenever someone tries to sell such things as face scanning, profiling, or data mining as effective ways to find terrorists.
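The arithmetic behind the base rate fallacy is easy to run for yourself. A sketch with illustrative numbers (a trait occurring in 1 in 100,000 people, screened by a test that is 99% accurate in both directions):

```python
# Base rate fallacy: for a rare trait, even an accurate test
# yields mostly false alarms. All numbers are illustrative.
population  = 1_000_000
base_rate   = 1 / 100_000   # ~10 people in a million have the trait
sensitivity = 0.99          # P(test positive | has trait)
specificity = 0.99          # P(test negative | lacks trait)

true_pos  = population * base_rate * sensitivity              # ~9.9
false_pos = population * (1 - base_rate) * (1 - specificity)  # ~9999.9

# Probability a flagged person actually has the trait:
precision = true_pos / (true_pos + false_pos)
print(f"{precision:.2%}")  # about 0.1% -- almost every alarm is false
```

Even at 99% accuracy, false positives from the vast trait-free majority swamp the handful of true positives, which is exactly why screening whole populations for something as rare as terrorism fails.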
And lastly, representativeness explains the “law of small numbers,” where people assume that long-term probabilities also hold in the short run. This is, of course, not true: if the results of three successive coin flips are tails, the odds of heads on the fourth flip are not more than 50%. The coin is not “due” to flip heads. Yet experiments have demonstrated this fallacy in sports betting again and again. 44
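The coin-flip claim above can be checked with a quick simulation (a sketch; the sample size and seed are arbitrary):

```python
# Gambler's fallacy check: after three tails in a row,
# the next flip is still 50/50 heads.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

heads_after_ttt = trials = 0
for i in range(3, len(flips)):
    # Look for runs of three tails, then record the next flip.
    if not flips[i - 3] and not flips[i - 2] and not flips[i - 1]:
        trials += 1
        heads_after_ttt += flips[i]

print(heads_after_ttt / trials)  # hovers around 0.5: the coin is never "due"
```

Each flip is independent, so conditioning on any previous run leaves the probability untouched.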
Humans have all sorts of pathologies involving costs, and this isn’t the place to discuss them all. But there are a few specific heuristics I want to summarize, because if we can’t evaluate costs right–either monetary costs or more abstract costs–we’re not going to make good security trade-offs.
Mental accounting is the process by which people categorize different costs. 45 People don’t simply think of costs as costs; it’s much more complicated than that.
Here are the illogical results of two experiments. 46
In the first, subjects were asked to answer one of these two questions:
- Trade-off 1: Imagine that you have decided to see a play where the admission is $10 per ticket. As you enter the theater you discover that you have lost a $10 bill. Would you still pay $10 for a ticket to the play?
- Trade-off 2: Imagine that you have decided to see a play where the admission is $10 per ticket. As you enter the theater you discover that you have lost the ticket. The seat is not marked and the ticket cannot be recovered. Would you pay $10 for another ticket?
The results of the trade-off are exactly the same. In either case, you can either see the play and have $20 less in your pocket, or not see the play and have $10 less in your pocket. But people don’t see these trade-offs as the same. Faced with Trade-off 1, 88% of subjects said they would buy the ticket anyway. But faced with Trade-off 2, only 46% said they would buy a second ticket. The researchers concluded that there is some sort of mental accounting going on, and the two different $10 expenses are coming out of different mental accounts.
The second experiment was similar. Subjects were asked:
- Imagine that you are about to purchase a jacket for $125, and a calculator for $15. The calculator salesman informs you that the calculator you wish to buy is on sale for $10 at the other branch of the store, located 20 minutes’ drive away. Would you make the trip to the other store?
- Imagine that you are about to purchase a jacket for $15, and a calculator for $125. The calculator salesman informs you that the calculator you wish to buy is on sale for $120 at the other branch of the store, located 20 minutes’ drive away. Would you make the trip to the other store?
Ignore your amazement at the idea of spending $125 on a calculator; it’s an old experiment. These two questions are basically the same: would you drive 20 minutes to save $5? But while 68% of subjects would make the drive to save $5 off the $15 calculator, only 29% would make the drive to save $5 off the $125 calculator.
There’s a lot more to mental accounting. 47 In one experiment, 48 subjects were asked to imagine themselves lying on the beach on a hot day and how good a cold bottle of their favorite beer would feel. They were to imagine that a friend with them was going up to make a phone call–this was in 1985, before cell phones–and offered to buy them that favorite brand of beer if they gave the friend the money. What was the most the subject was willing to pay for the beer?
Subjects were divided into two groups. In the first group, the friend offered to buy the beer from a fancy resort hotel. In the second group, the friend offered to buy the beer from a run-down grocery store. From a purely economic viewpoint, that should make no difference. The value of one’s favorite brand of beer on a hot summer’s day has nothing to do with where it was purchased from. (In economic terms, the consumption experience is the same.) But people were willing to pay $2.65 on average for the beer from a fancy resort, but only $1.50 on average from the run-down grocery store.
The experimenters concluded that people have reference prices in their heads, and that these prices depend on circumstance. And because the reference price was different in the different scenarios, people were willing to pay different amounts. This leads to sub-optimal results. As Thaler writes, “The thirsty beer-drinker who would pay $4 for a beer from a resort but only $2 from a grocery store will miss out on some pleasant drinking when faced with a grocery store charging $2.50.”
Researchers have documented all sorts of mental accounting heuristics. Small costs are often not “booked,” so people more easily spend money on things like a morning coffee. This is why advertisers often describe large annual costs as “only a few dollars a day.” People segregate frivolous money from serious money, so it’s easier for them to spend the $100 they won in a football pool than a $100 tax refund. And people have different mental budgets. In one experiment that illustrates this, 49 two groups of subjects were asked if they were willing to buy tickets to a play. The first group was told to imagine that they had spent $50 earlier in the week on tickets to a basketball game, while the second group was told to imagine that they had received a $50 parking ticket earlier in the week. Those who had spent $50 on the basketball game (out of the same mental budget) were significantly less likely to buy the play tickets than those who spent $50 paying a parking ticket (out of a different mental budget).
One interesting mental accounting effect can be seen at race tracks. 50 Bettors tend to shift their bets away from favorites and towards long shots at the end of the day. This has been explained by the fact that the average bettor is behind by the end of the day–pari-mutuel betting means that the average bet is a loss–and a long shot can put a bettor ahead for the day. There’s a “day’s bets” mental account, and bettors don’t want to close it in the red.
The effect of mental accounting on security trade-offs isn’t clear, but I’m certain we have a mental account for “safety” or “security,” and that money spent from that account feels different than money spent from another account. I’ll even wager we have a similar mental accounting model for non-fungible costs such as risk: risks from one account don’t compare easily with risks from another. That is, we are willing to accept considerable risks in our leisure account–skydiving, knife juggling, whatever–when we wouldn’t even consider them if they were charged against a different account.
“Time discounting” is the term used to describe the human tendency to discount future costs and benefits. It makes economic sense: a cost paid in a year is not the same as a cost paid today, because that money could be invested and earn interest during the year. Similarly, a benefit accrued in a year is worth less than a benefit accrued today.
Way back in 1937, economist Paul Samuelson proposed a discounted-utility model to explain all this. Basically, something is worth more today than it is in the future. It’s worth more to you to have a house today than to get it in ten years, because you’ll have ten more years’ enjoyment of the house. Money is worth more today than it is years from now; that’s why a bank is willing to pay you to store it with them.
The discounted utility model assumes that things are discounted according to some rate. There’s a mathematical formula for calculating which is worth more–$100 today or $120 in twelve months–based on interest rates. Today, for example, the discount rate is 6.25%, meaning that $100 today is worth the same as $106.25 in twelve months. But of course, people are much more complicated than that.
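That comparison is easy to compute. The sketch below assumes simple annual compounding at the text’s 6.25% rate; the function names are mine:

```python
def future_value(amount_today, annual_rate, years=1):
    """Future amount equivalent to having `amount_today` now,
    under annual compounding at `annual_rate`."""
    return amount_today * (1 + annual_rate) ** years

def present_value(amount_future, annual_rate, years=1):
    """Today's equivalent of an amount received `years` from now."""
    return amount_future / (1 + annual_rate) ** years

print(future_value(100, 0.0625))   # 106.25 -- $100 today matches $106.25 in a year
print(present_value(120, 0.0625))  # ~112.94 -- so $120 in a year beats $100 today
```

At a 6.25% discount rate, the rational choice in the $100-today-versus-$120-in-a-year example is to wait; the point of the surrounding experiments is that people don’t behave this way.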
There is, for example, a magnitude effect: smaller amounts are discounted more than larger ones. In one experiment, 51 subjects were asked to choose between an amount of money today or a greater amount in a year. The results would make any banker shake his head in wonder. People didn’t care whether they received $15 today or $60 in twelve months. At the same time, they were indifferent to receiving $250 today or $350 in twelve months, and $3,000 today or $4,000 in twelve months. If you do the math, that implies a discount rate of 139%, 34%, and 29%–all held simultaneously by subjects, depending on the initial dollar amount.
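The quoted rates can be reproduced if we assume continuous compounding, i.e. r = ln(future/present); that model is my assumption, since the text doesn’t say how the rates were derived:

```python
import math

def implied_rate(present, future, years=1.0):
    """Continuously compounded discount rate at which `present` today and
    `future` in `years` are equally attractive: future = present * e**(r*years)."""
    return math.log(future / present) / years

for today, later in [(15, 60), (250, 350), (3000, 4000)]:
    print(f"${today} now ~ ${later} in a year: r = {implied_rate(today, later):.0%}")
# The three pairs give roughly 139%, 34%, and 29%, matching the text.
```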
This holds true for losses as well, 52 although gains are discounted more than losses. In other words, someone might be indifferent to $250 today or $350 in twelve months, but would much prefer a $250 penalty today to a $350 penalty in twelve months. Notice how time discounting interacts with prospect theory here.
Also, preferences between different delayed rewards can flip, depending on the time between the decision and the two rewards. Someone might prefer $100 today to $110 tomorrow, but also prefer $110 in 31 days to $100 in 30 days.
Framing effects show up in time discounting, too. You can frame something either as an acceleration or a delay from a base reference point, and that makes a big difference. In one experiment, 53 subjects who expected to receive a VCR in twelve months would pay an average of $54 to receive it immediately, but subjects who expected to receive the VCR immediately demanded an average $126 discount to delay receipt for a year. This holds true for losses as well: people demand more to expedite payments than they would pay to delay them. 54
Reading through the literature, it sometimes seems that discounted utility theory is full of nuances, complications, and contradictions. Time discounting is more pronounced in young people, people who are in emotional states–fear is certainly an example of this–and people who are distracted. But clearly there is some mental discounting going on; it’s just not anywhere near linear, and not easily formularized.
Heuristics that Affect Decisions
And finally, there are biases and heuristics that affect trade-offs. Like many other heuristics we’ve discussed, they’re general, and not specific to security. But they’re still important.
First, some more framing effects.
Most of us have anecdotes about what psychologists call the “context effect”: preferences among a set of options depend on what other options are in the set. This has been confirmed in all sorts of experiments–remember the experiment about what people were willing to pay for a cold beer on a hot beach–and most of us have anecdotal confirmation of this heuristic.
For example, people have a tendency to choose options that dominate other options, or compromise options that lie between other options. If you want your boss to approve your $1M security budget, you’ll have a much better chance of getting that approval if you give him a choice among three security plans–with budgets of $500K, $1M, and $2M, respectively–than you will if you give him a choice among three plans with budgets of $250K, $500K, and $1M.
The rule of thumb makes sense: avoid extremes. It fails, however, when there’s an intelligence on the other end, manipulating the set of choices so that a particular one doesn’t seem extreme.
“Choice bracketing” is another common heuristic. In other words: choose a variety. Basically, people tend to choose a more diverse set of goods when the decision is bracketed more broadly than they do when it is bracketed more narrowly. For example, 55 in one experiment students were asked to choose among one of six different snacks that they would receive at the beginning of the next three weekly classes. One group had to choose the three weekly snacks in advance, while the other group chose at the beginning of each class session. Of the group that chose in advance, 64% chose a different snack each week, but only 9% of the group that chose each week did the same.
The narrow interpretation of this experiment is that we overestimate the value of variety. Looking ahead three weeks, a variety of snacks seems like a good idea, but when we get to the actual time to enjoy those snacks, we choose the snack we like. But there’s a broader interpretation as well, one borne out by similar experiments and directly applicable to risk taking: when faced with repeated risk decisions, evaluating them as a group makes them feel less risky than evaluating them one at a time. Back to finance, someone who rejects a particular gamble as being too risky might accept multiple identical gambles.
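The bundled-gamble effect can be made concrete with a classic hypothetical (my numbers, not from the text): a 50/50 gamble that wins $200 or loses $100. Many people reject it played once, yet the chance of ending up behind shrinks rapidly as identical gambles are aggregated:

```python
from math import comb

def p_net_loss(n, win=200, lose=100):
    """Probability of finishing with a net loss after n independent
    50/50 gambles that pay +win or -lose."""
    # Net outcome with k wins: win*k - lose*(n-k) < 0  <=>  k < lose*n/(win+lose)
    cutoff = lose * n / (win + lose)
    return sum(comb(n, k) for k in range(n + 1) if k < cutoff) / 2 ** n

for n in (1, 10, 100):
    print(f"{n:3d} plays: P(net loss) = {p_net_loss(n):.4f}")
# 1 play: 0.5; 10 plays: ~0.17; 100 plays: well under 1%
```

Evaluated one at a time, each gamble looks like a coin flip on losing money; evaluated as a bracket of 100, a net loss is a rarity.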
Again, the results of a trade-off depend on the context of the trade-off.
It gets even weirder. Psychologists have identified an “anchoring effect,” whereby decisions are affected by random information cognitively nearby. In one experiment, 56 subjects were shown the spin of a wheel whose numbers ranged from 0 to 100, and asked to guess whether the number of African nations in the UN was greater or less than that randomly generated number. Then, they were asked to guess the exact number of African nations in the UN.
Even though the spin of the wheel was random, and the subjects knew it, their final guess was strongly influenced by it. That is, subjects who happened to spin a higher random number guessed higher than subjects with a lower random number.
Psychologists have theorized that the subjects anchored on the number in front of them, mentally adjusting it for what they thought was true. Of course, because this was just a guess, many people didn’t adjust sufficiently. As strange as it might seem, other experiments have confirmed this effect.
And if you’re not completely despairing yet, here’s another experiment that will push you over the edge. 57 In it, subjects were asked one of these two questions:
- Question 1: Should divorce in this country be easier to obtain, more difficult to obtain, or stay as it is now?
- Question 2: Should divorce in this country be easier to obtain, stay as it is now, or be more difficult to obtain?
In response to the first question, 23% of the subjects chose easier divorce laws, 36% chose more difficult divorce laws, and 41% said that the status quo was fine. In response to the second question, 26% chose easier divorce laws, 46% chose more difficult divorce laws, and 29% chose the status quo. Yes, the order in which the alternatives are listed affects the results.
There are lots of results along these lines, including the order of candidates on a ballot.
Another heuristic that affects security trade-offs is the “confirmation bias.” People are more likely to notice evidence that supports a previously held position than evidence that discredits it. Even worse, people who support position A sometimes mistakenly believe that anti-A evidence actually supports that position. There are a lot of experiments that confirm this basic bias and explore its complexities.
If there’s one moral here, it’s that individual preferences are not based on predefined models that can be cleanly represented in the sort of indifference curves you read about in microeconomics textbooks. Instead, they are poorly defined, highly malleable, and strongly dependent on the context in which they are elicited. Heuristics and biases matter. A lot.
This all relates to security because it demonstrates that we are not adept at making rational security trade-offs, especially in the context of a lot of ancillary information designed to persuade us one way or another.
Making Sense of the Perception of Security
We started out by teasing apart the security trade-off, and listing five areas where perception can diverge from reality:
- The severity of the risk.
- The probability of the risk.
- The magnitude of the costs.
- How effective the countermeasure is at mitigating the risk.
- The trade-off itself.
Sometimes in all the areas, and all the time in area 4, we can explain this divergence as a consequence of not having enough information. But sometimes we have all the information and still make bad security trade-offs. My aim was to give you a glimpse of the complicated brain systems that make these trade-offs, and how they can go wrong.
Of course, we can make bad trade-offs in anything: predicting what snack we’d prefer next week or not being willing to pay enough for a beer on a hot day. But security trade-offs are particularly vulnerable to these biases because they are so critical to our survival. Long before our evolutionary ancestors had the brain capacity to consider future snack preferences or a fair price for a cold beer, they were dodging predators and forging social ties with others of their species. Our brain heuristics for dealing with security are old and well-worn, and our amygdalas are even older.
What’s new from an evolutionary perspective is large-scale human society, and the new security trade-offs that come with it. In the past I have singled out technology and the media as two aspects of modern society that make it particularly difficult to make good security trade-offs–technology by hiding detailed complexity so that we don’t have the right information about risks, and the media by producing such available, vivid, and salient sensory input–but the issue is really broader than that. The neocortex, the part of our brain that has to make security trade-offs, is, in the words of Daniel Gilbert, “still in beta testing.”
I have just started exploring the relevant literature in behavioral economics, the psychology of decision making, the psychology of risk, and neuroscience. Undoubtedly there is a lot of research out there for me still to discover, and more fascinatingly counterintuitive experiments that illuminate our brain heuristics and biases. But already I understand much more clearly why we get security trade-offs so wrong so often.
When I started reading about the psychology of security, I quickly realized that this research can be used both for good and for evil. The good way to use this research is to figure out how humans’ feelings of security can better match the reality of security. In other words, how do we get people to recognize that they need to question their default behavior? Giving them more information seems not to be the answer; we’re already drowning in information, and these heuristics are not based on a lack of information. Perhaps by understanding how our brains process risk, and the heuristics and biases we use to think about security, we can learn how to override our natural tendencies and make better security trade-offs. Perhaps we can learn how not to be taken in by security theater, and how to convince others not to be taken in by the same.
The evil way is to focus on the feeling of security at the expense of the reality. In his book Influence , 58 Robert Cialdini makes the point that people can’t analyze every decision fully; it’s just not possible, and people need heuristics to get through life. Cialdini discusses how to take advantage of that, and an unscrupulous person, corporation, or government can similarly take advantage of the heuristics and biases we have about risk and security. Concepts of prospect theory, framing, availability, representativeness, affect, and others are key issues in marketing and politics. They’re applied generally, but in today’s world they’re more and more applied to security. Someone could use this research to simply make people feel more secure, rather than to actually make them more secure.
After all my reading and writing, I believe my good way of using the research is unrealistic, and the evil way is unacceptable. But I also see a third way: integrating the feeling and reality of security.
The feeling and reality of security are different, but they’re closely related. We make the best security trade-offs–and by that I mean trade-offs that give us genuine security for a reasonable cost–when our feeling of security matches the reality of security. It’s when the two are out of alignment that we get security wrong.
In the past, I’ve criticized palliative security measures that only make people feel more secure as “security theater.” But used correctly, they can be a way of raising our feeling of security to more closely match the reality of security. One example is the tamper-proof packaging that started to appear on over-the-counter drugs in the 1980s, after a few highly publicized random poisonings. As a countermeasure, it didn’t make much sense. It’s easy to poison many foods and over-the-counter medicines right through the seal–with a syringe, for example–or to open and reseal the package well enough that an unwary consumer won’t detect it. But the tamper-resistant packaging brought people’s perceptions of the risk more in line with the actual risk: minimal. And for that reason the change was worth it.
Of course, security theater has a cost, just like real security. It can cost money, time, capabilities, freedoms, and so on, and most of the time the costs far outweigh the benefits. And security theater is no substitute for real security. Furthermore, too much security theater will raise people’s feeling of security to a level greater than the reality, which is also bad. But used in conjunction with real security, a bit of well-placed security theater might be exactly what we need to both be and feel more secure.
1 Bruce Schneier, Beyond Fear: Thinking Sensibly About Security in an Uncertain World , Springer-Verlag, 2003.
2 David Ropeik and George Gray, Risk: A Practical Guide for Deciding What’s Really Safe and What’s Really Dangerous in the World Around You , Houghton Mifflin, 2002.
3 Barry Glassner, The Culture of Fear: Why Americans are Afraid of the Wrong Things , Basic Books, 1999.
4 Paul Slovic, The Perception of Risk , Earthscan Publications Ltd, 2000.
5 Daniel Gilbert, “If only gay sex caused global warming,” Los Angeles Times , July 2, 2006.
6 Jeffrey Kluger, “How Americans Are Living Dangerously,” Time , 26 Nov 2006.
7 Steven Johnson, Mind Wide Open: Your Brain and the Neuroscience of Everyday Life , Scribner, 2004.
8 Daniel Gilbert, “If only gay sex caused global warming,” Los Angeles Times , July 2, 2006.
9 Donald A. Norman, “Being Analog,” http://www.jnd.org/dn.mss/being_analog.html. Originally published as Chapter 7 of The Invisible Computer , MIT Press, 1998.
10 Daniel Kahneman, “A Perspective on Judgment and Choice,” American Psychologist , 2003, 58:9, 697–720.
11 Gerd Gigerenzer, Peter M. Todd, et al., Simple Heuristics That Make Us Smart , Oxford University Press, 1999.
12 Daniel Kahneman and Amos Tversky, “Prospect Theory: An Analysis of Decision Under Risk,” Econometrica , 1979, 47:263–291.
13 Amos Tversky and Daniel Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science , 1981, 211: 453–458.
14 Amos Tversky and Daniel Kahneman, “Evidential Impact of Base Rates,” in Daniel Kahneman, Paul Slovic, and Amos Tversky (eds.), Judgment Under Uncertainty: Heuristics and Biases , Cambridge University Press, 1982, pp. 153–160.
15 Daniel J. Kahneman, Jack L. Knetsch, and R.H. Thaler, “Experimental Tests of the Endowment Effect and the Coase Theorem,” Journal of Political Economy , 1990, 98: 1325–1348.
16 Jack L. Knetsch, “Preferences and Nonreversibility of Indifference Curves,” Journal of Economic Behavior and Organization , 1992, 17: 131–139.
17 Amos Tversky and Daniel Kahneman, “Advances in Prospect Theory: Cumulative Representation of Uncertainty,” Journal of Risk and Uncertainty , 1992, 5: 297–323.
18 John Adams, “Cars, Cholera, and Cows: The Management of Risk and Uncertainty,” CATO Institute Policy Analysis #335, 1999.
19 David L. Rosenhan and Samuel Messick, “Affect and Expectation,” Journal of Personality and Social Psychology , 1966, 3: 38–44.
20 Neil D. Weinstein, “Unrealistic Optimism about Future Life Events,” Journal of Personality and Social Psychology , 1980, 39: 806–820.
21 D. Kahneman, I. Ritov, and D. Schkade, “Economic preferences or attitude expressions? An analysis of dollar responses to public issues,” Journal of Risk and Uncertainty , 1999, 19:220–242.
22 P. Winkielman, R.B. Zajonc, and N. Schwarz, “Subliminal Affective Priming Resists Attributional Interventions,” Cognition and Emotion , 1997, 11:4, 433–465.
23 Daniel Gilbert, “If only gay sex caused global warming,” Los Angeles Times , July 2, 2006.
24 Robyn S. Wilson and Joseph L. Arvai, “When Less is More: How Affect Influences Preferences When Comparing Low-risk and High-risk Options,” Journal of Risk Research , 2006, 9:2, 165–178.
25 J. Cohen, The Privileged Ape: Cultural Capital in the Making of Man , Parthenon Publishing Group, 1989.
26 Paul Slovic, The Perception of Risk , Earthscan Publications Ltd, 2000.
27 John Allen Paulos, Innumeracy: Mathematical Illiteracy and Its Consequences , Farrar, Straus, and Giroux, 1988.
28 Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science , 1974, 185:1124–1130.
29 Bruce Schneier, Beyond Fear: Thinking Sensibly About Security in an Uncertain World , Springer-Verlag, 2003.
30 Barry Glassner, The Culture of Fear: Why Americans are Afraid of the Wrong Things , Basic Books, 1999.
31 Amos Tversky and Daniel Kahneman, “Availability: A Heuristic for Judging Frequency,” Cognitive Psychology , 1973, 5:207–232.
32 John S. Carroll, “The Effect of Imagining an Event on Expectations for the Event: An Interpretation in Terms of the Availability Heuristic,” Journal of Experimental Social Psychology , 1978, 14:88–96.
33 Robert M. Reyes, William C. Thompson, and Gordon H. Bower, “Judgmental Biases Resulting from Differing Availabilities of Arguments,” Journal of Personality and Social Psychology , 1980, 39:2–12.
34 Steven J. Sherman, Robert B. Cialdini, Donna F. Schwartzman, and Kim D. Reynolds, “Imagining Can Heighten or Lower the Perceived Likelihood of Contracting a Disease: The Mediating Effect of Ease of Imagery,” Personality and Social Psychology Bulletin , 1985, 11:118–127.
35 C. K. Morewedge, D.T. Gilbert, and T.D. Wilson, “The Least Likely of Times: How Memory for Past Events Biases the Prediction of Future Events,” Psychological Science , 2005, 16:626–630.
36 Cass R. Sunstein, “Terrorism and Probability Neglect,” Journal of Risk and Uncertainty, 2003, 26:121-136.
37 Scott Plous, The Psychology of Judgment and Decision Making , McGraw-Hill, 1993.
38 S.E. Taylor and S.T. Fiske, “Point of View and Perceptions of Causality,” Journal of Personality and Social Psychology , 1975, 32: 439–445.
39 Paul Slovic, Baruch Fischhoff, and Sarah Lichtenstein, “Rating the Risks,” Environment , 1979, 2: 14–20, 36–39.
40 Amos Tversky and Daniel Kahneman, “Extensional vs Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review , 1983, 90. 293–315.
41 Amos Tversky and Daniel Kahneman, “Judgments of and by Representativeness,” in Daniel Kahneman, Paul Slovic, and Amos Tversky (eds.), Judgment Under Uncertainty: Heuristics and Biases , Cambridge University Press, 1982.
42 Daniel Kahneman and Amos Tversky, “On the Psychology of Prediction,” Psychological Review , 1973, 80: 237–251.
43 Daniel Kahneman and S. Frederick, “Representativeness Revisited: Attribute Substitution in Intuitive Judgement,” in T. Gilovich, D. Griffin, and D. Kahneman (eds.), Heuristics and Biases , Cambridge University Press 2002, pp. 49–81.
44 Thomas Gilovich, Robert Vallone, and Amos Tversky, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Cognitive Psychology , 1985, 17: 295–314.
45 Richard H. Thaler, “Toward a Positive Theory of Consumer Choice,” Journal of Economic Behavior and Organization , 1980, 1:39–60.
46 Amos Tversky and Daniel Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science , 1981, 211: 453–458.
47 Richard Thaler, “Mental Accounting Matters,” in Colin F. Camerer, George Loewenstein, and Matthew Rabin, eds., Advances in Behavioral Economics , Princeton University Press, 2004.
48 Richard Thaler, “Mental Accounting and Consumer Choice,” Marketing Science , 1985, 4:199–214.
49 Chip Heath and Jack B. Soll, “Mental Accounting and Consumer Decisions,” Journal of Consumer Research , 1996, 23:40–52.
50 Muhtar Ali, “Probability and Utility Estimates for Racetrack Bettors,” Journal of Political Economy , 1977, 85:803–815.
51 Richard Thaler, “Some Empirical Evidence on Dynamic Inconsistency,” Economics Letters , 1981, 8: 201–207.
52 George Loewenstein and Drazen Prelec, “Anomalies in Intertemporal Choice: Evidence and Interpretation,” Quarterly Journal of Economics , 1992, 573–597.
53 George Loewenstein, “Anticipation and the Valuation of Delayed Consumption,” Economic Journal , 1987, 97: 666–684.
54 Uri Benzion, Amnon Rapoport, and Joseph Yagil, “Discount Rates Inferred from Decisions: An Experimental Study,” Management Science , 1989, 35:270–284.
55 Itamar Simonson, “The Effect of Purchase Quantity and Timing on Variety-Seeking Behavior,” Journal of Marketing Research , 1990, 27:150–162.
56 Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science , 1974, 185: 1124–1131.
57 Howard Schuman and Stanley Presser, Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context , Academic Press, 1981.
58 Robert B. Cialdini, Influence: The Psychology of Persuasion , HarperCollins, 1998.
The Elaboration Likelihood Model of Persuasion
This chapter outlines the two basic routes to persuasion. One route is based on the thoughtful consideration of arguments central to the issue, whereas the other is based on the affective associations or simple inferences tied to peripheral cues in the persuasion context. This chapter discusses a wide variety of variables that proved instrumental in affecting the elaboration likelihood, and thus the route to persuasion. One of the basic postulates of the Elaboration Likelihood Model—that variables may affect persuasion by increasing or decreasing scrutiny of message arguments—has been highly useful in accounting for the effects of a seemingly diverse list of variables. The reviewers of the attitude change literature have been disappointed with the many conflicting effects observed, even for ostensibly simple variables. The Elaboration Likelihood Model (ELM) attempts to place these many conflicting results and theories under one conceptual umbrella by specifying the major processes underlying persuasion and indicating the way many of the traditionally studied variables and theories relate to these basic processes. The ELM may prove useful in providing a guiding set of postulates from which to interpret previous work and in suggesting new hypotheses to be explored in future research.
Section I: Background on Behavioral Economics and Cognitive Psychology
Let’s start with a hypothetical example. Imagine you’re buying a new car. It’s going to cost $20,000. While at the dealership, you learn that adding a top-of-the-line radio will add $150 to the total cost. You do a quick online search to see if this is a good deal, and learn that 15 minutes across town, another dealer is charging only $15 for the same radio upgrade, and the car price is the same. Will you drive across town to save $135? An economist would predict that you would weigh the cost in time and gas against the benefit of saving $135, and make the trip. In reality, a human will say, “What’s $135 more when you’re spending over $20,000?” and not bother with the effort of going across town.
Now take the same person who, three months later, is buying a $150 smart speaker for their home. They learn that another store 60 minutes away is selling the same speaker on a flash sale for $15. Will the person go buy it? An economist would say that the time and gas would result in zero net savings, so the person would not make the trip. But chances are, this human will go the extra distance to save 90% and get such a good deal (even though the cost in time and gas means they aren’t actually saving anything).
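In both hypotheticals the absolute saving is the same; what flips the decision is the percentage. A quick calculation (prices as in the scenarios above) makes the asymmetry explicit:

```python
# Same $135 saved in both scenarios; wildly different percentages.
scenarios = [
    ("car + radio, here vs. across town", 20_150, 20_015),
    ("smart speaker, regular vs. flash sale", 150, 15),
]
for name, price_a, price_b in scenarios:
    saving = price_a - price_b
    print(f"{name}: save ${saving}, i.e. {saving / price_a:.1%} of the price")
# Car: $135 is about 0.7% of the purchase. Speaker: $135 is 90%.
```

A rational actor compares the $135 against the trip in both cases; a human compares 0.7% against 90%.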
Another great example comes from Dilip Soman, a professor at the University of Toronto and the author of one of our favorite Behavioral Economics books, The Last Mile. In it, he shares, “Something that was salient to me a few years ago here in Canada, our Canadian government introduced a welfare scheme called the Canada Learning Bond. Without diving into details, it was essentially $500 for eligible low income families, with the goal of educating your kids. When the program was being put into place, I vividly recall an economist saying, ‘Who would not accept the Canada Learning Bond?’ The take up rate should be like 100 percent. It turns out take up was only 16 percent for the first few years. The challenge wasn’t that people didn’t want the money, it was a great product, but you needed a bank account, and a particular kind of bank account to accept the money. These low income people, for whom it was designed, didn’t have the time to go to the bank or didn’t want to go to the bank. So the solution wasn’t in promoting the bond or increasing the amount of dollars, it was actually making it easy for people at the last mile to sign up. That in essence is what we mean by the last mile—the fact that people inside of the organization have a particularly hard time relating to what’s going to happen at the last mile.” This is exactly why behavioral economics is so important. If we can truly understand people, we can then design solutions in partnership with them that will truly work.
Here are a few other examples of solutions that are built on behavioral economics that defy the economic idea of the “rational actor”, but are incredibly effective:
- Painted flies in male urinals to increase accuracy
- The Save More Tomorrow retirement savings plans
- The redesigned military airplane cockpits that differentiate levers by touch and sight
- Debt repayment plans that start with the easiest debt to pay off (not the most expensive)
- Changing the default selection for organ donation
Daniel Kahneman, author of Thinking, Fast and Slow and a pioneer in this field of study, said “It seems that traditional economics and behavioral economics are describing two different species.”
This is, in fact, behavioral economics at work. And, according to Dan Ariely, humans make irrational decisions like this all the time. In fact, he argues in his best-selling book that we are predictably irrational. Understanding our irrationality, and the irrationality of others, is a vital ingredient for solving last-mile barriers to social impact.
Beyond saving money, behavioral economics affects the way people make any decision, from whom they love to how they spend their time, where they work, and the choices they make about their health. It also helps explain why we have corruption in government, why it is so hard to pull oneself out of poverty, why leaders are not acting with enough urgency to address the climate crisis, and why we have rising inequalities.
Sections II and III will help you think more about how to use the powerful principles of behavioral economics in social impact programs and social entrepreneurial initiatives. But first, let’s take a moment to better understand the foundations of behavioral economics.
Is New York City Dead Forever?
1) I've now spent half my life – nearly 27 years – in New York City.
I lived on the Upper West Side for six months right after college, from September 1989 until March 1990, when I was helping Wendy Kopp start Teach for America. I then returned to Boston and lived, worked, and studied there until I moved to NYC for good in May 1994. Susan had finished law school a year earlier, we'd gotten married in October, and she'd started a job as a lawyer in midtown, so I followed her there the day after I graduated from business school.
I remember being filled with trepidation – the city felt so strange, big, and dangerous (and, having spent the previous 16 years in New England, I of course hated the Yankees!). But I soon grew to love the city and now consider myself a New Yorker. I hope to never leave.
But I'm worried about my hometown right now. The pandemic hit New York harder than anywhere on earth earlier this year. Thankfully, it appears to be behind us now – the latest statistics on cases, positive test rates, hospitalizations, and deaths (I track them daily here) are now among the best in the country.
But we're nowhere near back to normal. The business districts like Midtown and Wall Street are still ghost towns – and some of these may never come back (for example, see this New York Times article: Retail Chains Abandon Manhattan). And every day, I read stories and hear about friends – wealthy educated people who make up the bulk of the tax base – who left and have decided not to move back, permanently.
I think New York will bounce back, as it always has. But not everyone agrees. To wit, a guy I met years ago, James Altucher, recently posted a provocative essay entitled: "NYC IS DEAD FOREVER. HERE'S WHY." You can read it on Facebook here.
I sent it to a bunch of friends and family, asking for feedback, and got many responses that I posted here.
Rigging the mind … with an app
Can an app really change human behavior? Naturally, the answer is yes.
As proof we need look no further than the plethora of examples Nir Eyal presented in Hooked: How to Build Habit-Forming Products. From social media platforms to “free” games like Candy Crush and Farmville, apps have the power to shape (and even reshape) our lives. In Eyal’s words: “To build a habit-forming product, makers need to understand which user emotions may be tied to internal triggers and know how to leverage external triggers to drive the user to action.”
The real question is: Can an app change human behavior for the good? After all, it’s one thing to “hook” someone with an app that delivers endorphins the way gambling or junk food does (neither of which Eyal argues for). It’s another thing altogether to hook someone with an app aimed at changes we want but struggle desperately to implement.
To answer that question, here’s a sneak peek at Ariely and Ferguson’s current prototype and how they’re using the principles mentioned above.
Just remember: Each of these triggers are hardwired into the human mind. That means your own changes — personal, professional, and technological — should lean on them too.
Making good change easier
It’s true: as humans, we’re terrible at change. But that doesn't mean the fight is in vain.
Instead, the implications of behavioral economics — alongside the broader sciences of human decision making we’ve touched on — should push us in two directions.
First, on the personal front, change works from the outside in. If you want to lose weight, buy a smaller plate. We set ourselves up for success or failure not because of internal factors like willpower, motivation, and drive, but because of external factors. Lasting change isn’t as much about moral fortitude as it is about arranging our environment — the world we interact with — to either trigger or inhibit our behaviors.
Second, on the professional front, products and services, apps and tools must all likewise adhere to the very same lessons. This applies to design and UX as much as it applies to marketing and management.
Whatever change you’re trying to create — whatever product you’re trying to hook your audience — begin with how humans actually make decisions:
1. Default Bias: How can you make the opt-in process automatic? What can you “pre-populate” during onboarding or rollout?
2. Friction Costs: What can you remove? In the words of Nir Eyal, innovation is nothing more than understanding a “series of tasks from intention to outcome” and then “removing steps.”
3. Anchoring: What do users, whether customers or employees, see first? How can you leverage that first impression at a meeting, in an email, or within an app to frame the rest of the process?
4. Pre-Commitment: Are you building on small, voluntary commitments? Small yeses early on lead directly to big yeses later, especially as change gets tougher.
5. Present Bias: How can you drag future results into present reality? What “hell” will your change save people from? What “heaven” will it deliver them unto?
6. Social Proof: Who do your users look to when making their decisions? How can you encourage those influencers, or even just fellow humans, to share their own commitments and actions?

Unlocking human change is hard, but it’s not mysterious. Just be sure you’re using all that power… for the good.
Aaron Orendorff is the founder of iconiContent and a regular contributor at Entrepreneur, Lifehacker, Fast Company, Business Insider and more. Connect with him about content marketing (and bunnies) on Facebook or Twitter.
Mind and brain portals launch on Wikipedia
Wikipedia now has both a mind and brain portal and a psychology portal which promise not only to keep you up-to-date with the latest encyclopaedic happenings, but also to broadcast news and messages for the psychology and neuroscience community.
The mind and brain portal seems to have been kicked-off by Italian philosopher Francesco Franco (username Lacatosias) while the psychology portal was the brain-child of Zeligf.
Both have been launched in the last few weeks and like everything on Wikipedia, the quality improves as more people pitch in.
So if you’ve never thought of contributing to the world’s best and most dynamic online encyclopaedia, now’s your chance.
Link to Wikipedia Mind and Brain Portal.
Link to Wikipedia Psychology Portal.
Can I split test the price?
Technically, you can, but A/B testing your price is dangerous territory. A number of companies (Dell, Amazon, and others) have been caught showing different prices for the same product to different visitors, and got into trouble for doing just that.
A better and safer approach is to test the price across objects. Don’t test the same product at $19 versus $39. Instead, test two different products that do essentially the same thing but carry different price tags.
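If you do run a test across two comparable products, the key mechanical detail is that each visitor must always land in the same bucket, so no one ever sees both offers. Here is a minimal sketch of deterministic, hash-based visitor bucketing; the product names and prices are illustrative, not from any particular tool:

```python
import hashlib

def assign_variant(visitor_id: str, variants=("product_a", "product_b")) -> str:
    """Deterministically bucket a visitor into one variant.

    Hashing the visitor ID keeps each visitor in the same bucket
    across sessions, so nobody is ever shown both offers.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Two comparable products, each with its own price tag (illustrative values).
PRICES = {"product_a": 19, "product_b": 39}

variant = assign_variant("visitor-12345")
price = PRICES[variant]
```

Because the assignment is a pure function of the visitor ID, you can reproduce any visitor's bucket later when analyzing conversions, without storing the assignment separately.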
Before deciding on your pricing strategy, it’s worthwhile to read Cindy Alvarez’s article where she makes the point that price is not the only cost to consider. When customers consider “what something costs,” they’re actually measuring three main drivers: money (cost), time (how long will it take to learn?), and mental energy (how much do I have to think about this?). Take into account the profile of your buyer.