The McCarron Lectures*

Lecture 10: Behavioural Economics and Rhetoric

Gary McCarron, Simon Fraser University


Gary McCarron is Associate Professor in the School of Communication at Simon Fraser University. Email: gmccarro@sfu.ca 

* Please see Editorial: Syntheses, Reflections, and Conjectures in Scholarly and Research Communication: SRC1+1

* These lectures are provided with the understanding that when used in class you consider fair use, providing a payment of $10 per student. These funds will be forwarded to the author. Please contact Marilyn Bittman, Managing Editor, SRC, for further payment information: bittmanme@shaw.ca


Abstract

Although behavioural economists are not rhetoricians, and rhetoricians are not behavioural economists, they are both interested in persuasion, even as they come at it from different points of view. Lecture 10 argues that behavioural economics examines our choice-making practices and considers how a range of influences works in concert with conventional economic interests to shape the procedures by which we come to decisions. These influences use rhetoric to nudge people to adopt particular beliefs, engage in specific behaviour, and endorse ideas believed to be in the public interest.

Résumé

Les économistes comportementaux ne sont pas rhétoriciens, et les rhétoriciens ne sont pas économistes comportementaux, mais ils s’intéressent tous les deux à la persuasion, même si leurs points de vue diffèrent. Le cours 10 soutient que l’économie comportementale examine notre manière de faire des choix et il considère comment un éventail d’influences, de concert avec des intérêts économiques conventionnels, façonne les procédures par lesquelles on prend des décisions. Ces influences utilisent la rhétorique afin d’inciter les gens à adopter des croyances particulières, adopter des comportements spécifiques, et appuyer des idées censées être dans l’intérêt public.

Keywords / Mots clés
Rhetoric; Behavioural economics; Persuasion; Nudging / Rhétorique; Économie comportementale; Persuasion; Incitation douce


Introduction: Getting to the nudge 

This lecture is about behavioural economics as it might be applied to the field of rhetoric studies—or, perhaps, how rhetoric might be seen as relevant to the field of behavioural economics. Frankly, it does not really matter which way I put it, because in either case, I really only want to draw attention to certain connections between the two fields without concerning myself too much about causality. Chicken-or-egg arguments can sometimes be interesting and important, but I do not believe that they really apply here. Although behavioural economists are not rhetoricians, and rhetoricians are not behavioural economists, they are both interested in persuasion, even as they come at it from different points of view. For a time in the seventies, people were writing books on the similarities between quantum physics and Eastern mysticism, so I am merely following a similar line of argument in bringing into alignment two fields that otherwise might be seen as rather different. I may be going out on a limb in seeking to make connections between the two fields, so I will leave it to you to judge how much it all makes sense. 

The point of this lecture is to show that much of the work that has been done in the past two decades that goes by the name of “behavioural economics” has obvious resonances with work done in rhetoric, and that many of the same issues pursued in rhetoric are relevant factors in the domain of behavioural economics. The two fields share a mutual concern with persuasion and behaviour, but they do so with different inflections, different terminology, and different underlying premises. If we take rhetoric in its simplest formulation to be concerned with persuasion, then its main ambition is to discover the various things people do when trying to be persuasive: the use of certain verbal expressions, the display of particular objects, the marshalling of ethos, the search for the appropriate mode of appeal, and so on. If we take behavioural economics also to be concerned with persuasion, we note that it focuses on a range of mechanisms that resonate with classical rhetoricians, but we will also see rather quickly that behavioural economists add a few things to the list that rhetoricians have tended to treat with less interest, such as incentives and heuristics. These concepts are not unknown to rhetorical scholars, of course, but they are not ordinarily the primary focus of rhetoricians either. By contrast, these are among the key notions that form the bedrock of behavioural economic theory. 

In a good deal of behavioural economics—and I will get around to explaining that phrase shortly—the main concern is with trying to explain how people make decisions; more significantly, behavioural economists are keen to understand why people arrive at a certain decision. Why did you buy that smartphone and not a different model? Aristotle might point to ethos, but behavioural economists might suggest a more substantial role for intrinsic motivation. How did you come to choose a particular investment portfolio? Kenneth Burke might say that consubstantiality played a part in your decision, whereas behavioural economics might point to inequity aversion. Of course, these sorts of decisions, such as purchasing a smartphone, are rather common, and classical economic theory has good and reasonable answers to such questions, too, usually in the form of something called utility maximization. But behavioural economists go an additional step and point out that although people generally make good economic choices, there are times, and perhaps those times are more frequent than we care to admit, when people make decisions that are contrary to their best interests. Or, to make that statement less judgemental, we all frequently make decisions that are at odds with the way that traditional economists believe we should make decisions, and behavioural economists are fascinated by this divergence from the orthodoxy of economic thinking. If you choose your clothes to suit your romantic partner and not because they are the best value for your money, is that rational behaviour as classical economics understands it? Traditional economists say you should always choose the best deal and avoid making irrational choices, but maybe knowing what is rational is not always that easy. When Richard Thaler won the Nobel Prize in economics in 2017, several headlines proclaimed his victory had less to do with economic science and more to do with showing the world that human reasoning is deeply flawed. While classical economic models predict what we should do, Thaler and his fellow behavioural economists are more interested in what we actually do.

Indeed, the concept of should is very important in the argument I will develop over the course of this lecture. Should is a modal condition and falls into the same category as words mentioned in Lecture 9 on Stephen Toulmin, words such as probably, possibly, could, might, and so on. But do not worry, I am not going to embark on a lesson in grammar by explaining modality in relation to these so-called auxiliary verbs. I only want to indicate that when anyone claims that traditional economists say that we should make decisions in a certain way—and that way, as you may have already figured out, is through the application of self-interested reason—it does not mean that if you fail to follow traditional economic thinking you have broken the law or committed a sin. It is just that conventional economic thought is based on the idea that our economic systems work most efficiently when people follow particular kinds of decision-making practices that are neither random nor illogical, and that too much deviation from those strictures could spell problems for the economy in the long run. Traditional economic theory privileges rationality as the central principle that determines intelligent decision-making.

Economists have long claimed that people are rational utility maximizers; that is, we seek to maximize things that are useful to us rather than do things (such as make decisions) that are not going to help us achieve our goals in the most effective fashion. If my trip downtown would be faster taking street A rather than street B, then I should take street A. If coffee is less expensive at one cafe than at another, then I should get my coffee at the cafe where it is cheaper. Speed and price—or, in economic discourse, saving time and saving money—are logical principles in economic theory. Hence, when people do not act as agents motivated by the principle of rational utility maximization, it puzzles traditional economists. When given a chance to save money, in other words, why would anyone possibly choose not to? If you are rational and your aim is to maximize utility, then you should always choose to get the best deal.
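Because rational utility maximization is, at bottom, just a decision rule, it can help to see it stated mechanically. The sketch below is mine, not something drawn from the economics literature discussed here; it treats each option as a bundle of money and time costs, converts them to a single utility score, and picks the best. The cafes, the prices, and the dollar value of a minute are all invented for illustration.

# A minimal sketch of rational utility maximization.
# All names and numbers below are hypothetical.

options = {
    "cafe_A": {"price": 3.00, "minutes_away": 5},
    "cafe_B": {"price": 4.50, "minutes_away": 15},
}

def utility(option, value_of_a_minute=0.25):
    # Spending less money and less time both mean higher utility,
    # so total cost enters with a negative sign.
    return -(option["price"] + value_of_a_minute * option["minutes_away"])

best = max(options, key=lambda name: utility(options[name]))
print(best)  # cafe_A: the rational maximizer always takes the better deal

The sketch makes the classical assumption visible: everything that matters about a choice is presumed to fit inside the utility function, and whatever scores highest is, by definition, the rational pick.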

Now, you are probably already saying to yourself that the cheaper coffee might not be worth the savings, and the faster route downtown might also be the less scenic. People do not make decisions based only on the set of conditions that economic theory values, and economists are, of course, aware of that fact. Yet the essential principles of traditional economic theory include the notion that people should, the majority of the time, make decisions that save them money, save them time, and save them aggravation—that is, even though we can and should take into account non-monetary considerations, we can say, as a general statement of fact, that for a variety of reasons, acting rationally means trying to spend less money and trying to save time. In other words, although they are aware that people make their decisions by taking into account a whole range of considerations, traditional economic theorists privilege a certain group of considerations—such as saving money—as primary. According to conventional economic thinking, to be reasonable is to act in rather particular ways.

But I have already pointed out that people are not always reasonable in the way that traditional economic theory predicts. The main point at which we might begin, then, is to ask what sorts of conditions need to prevail for people to act in so-called irrational ways. Consider, for instance, the argument that people will usually pick the option that gives them the best reward for their efforts. Naturally, this is not always the case, for people are frequently short-sighted and often ignore long-term consequences in their quest for more immediate gratifications. So, short-term thinking versus long-term thinking is one of the areas where people might not appear to be entirely rational in their deliberations. Sometimes we know we are making a mistake but choose the short-term bliss rather than the long-range benefit; in such cases, it might not be that we are merely irrational but that we are willing to forgo the best result in favour of the immediate result for reasons that are not easily reducible to economic considerations. For this reason, we must be careful to avoid assuming that economic factors are the only things that figure into our day-to-day calculations. For example, because you are incredibly thirsty, you might be willing to pay more for a bottle of water than you would if you were able to wait until you reached a store where water was cheaper. Your immediate need to quench your thirst can override the economic benefits of waiting, and very few people (including traditional economists) would describe this as irrational under those circumstances. Thus, short-term satisfactions can trump long-term benefits for the simple fact that not every decision we make is governed by naked economic principles.

However, there are times when our reasons for short-term gratification are not motivated by a biological urge such as thirst or hunger. Sometimes we just seem to reason poorly—according to traditional economic theory, at least—because we allow our emotions to get the better of us. And at other times we simply do not think at all—that is, we decide to “go with our gut instinct” and ignore relevant facts, pertinent information, and expert advice. We are impulsive, intuitive, impatient, rash—we are, in other words, human, and it is our human nature that appears to encourage us to take shortcuts, reason poorly, and pick the tastiest rather than the most nutritious snack. Very often, we actually do know better, but we act as if we just do not care.

You recall that Aristotle complained that people do not always act logically, and he said that our failure to be logical at all times was one of the reasons why rhetoric was actually a useful skill to develop. He noted that people sometimes need to be persuaded of things that they are unable to decide by rational means, such as moral choices. Persuasion has a social role to play, then, because people are not always able to rely on strictly logical reasoning. Therefore, it might be better for us to accept that people are sometimes irrational in their choices and to avoid expecting perfectly logical reasoning all of the time. If we adjust our expectations, the world seems far less puzzling than if we expect completely logical decision-making from the people around us.

To adjust our expectations in this fashion means accepting that so-called irrational ways of thinking are a fundamental part of human nature, and that classical economic theory is, to a certain extent, out of touch with how people generally reason and come to decisions. We had some sense of this argument in Lecture 9 about Stephen Toulmin. Though trained as a logician, Toulmin argued that people do not engage in argumentative communication according to the narrowly logical way in which philosophers have ordinarily understood the notion of an argument. Toulmin sought to demonstrate that there are multiple kinds of validity, and that actual behaviour, especially as it relates to changes in attitudes, opinions, and dispositions, is an essential part of how we should understand the sphere of everyday arguments. Behavioural economists suggest that we need to put aside our reliance on conventional economic theories and allow some additional factors drawn from the non-economic sphere to have a place at the table. As Kate Douglas (2015) has recently suggested:

Drill down, and it’s not difficult to see where mainstream “neoclassical” economics has gone wrong. Since the 19th century, economies have essentially been described with mathematical formulae. This elevated economics above most social sciences and allowed forecasting. But it comes at the price of ignoring the complexities of human beings and their interactions — the things that actually make economic systems tick. (p. 2) 

According to Douglas, economics needs to accept that other disciplines—biology and psychology, for instance—might actually have important things to contribute to our understanding of economic life. Economic reasoning, she argues, should not just be about the numbers.

Let me offer a quick example here to help clarify the distinction I have been assuming between traditional economics and behavioural economics. The main thing you will have gleaned so far is that behavioural economists are drawn to the fact that people are not always rational actors. Here is a simple illustration of how people sometimes reason things out in ways that are not entirely in keeping with the ordinary rules of logic that traditional economists believe reasonable people should follow.

In a study first reported in 2012, researchers showed that people who create something with their own hands tend to value it as highly as—and often higher than—a similar product made by experts. There is a tendency, in other words, to place a disproportionately high monetary value on something that we have ourselves created, despite the fact that the thing we have made frequently lacks the sophistication and polish that comes with a professionally manufactured item made by experts. In addition, the more time and effort we expend on creating something, the higher our evaluation of that object is likely to be. Things we make by our own hands are like mirrors of our ego.

This phenomenon is known as the Ikea effect, in reference to the fact that Ikea succeeds in part because its customers must assemble the items Ikea sells, and in offloading the task of assembly to the purchaser, Ikea shifts the pride of handicraft to the consumer. The Ikea effect does not mean that people are blind to the defects of their productions, nor that they are somehow unable to see how an expertly crafted object might be superior to theirs in various ways, but that the expenditure of labour renders people liable to the potentially irrational view that their object, the thing they created with their own two hands, is somehow superior to the professionally created object. 

Naturally, there are psychological matters at play here that are easily teased out. Taking the time to build something is a personal investment, and for a range of reasons the cognitive bias embedded in the Ikea effect understandably kicks in. Time is valuable no matter how you reason things out, and time spent building your own table inherently makes that table of particular value to you. Orthodox economists, however, have trouble with this argument. It is certainly true that time is money—or, at least, time has value—but there are limits to how much we should value our time when there are other principles at play. According to conventional economic reasoning, people should appreciate the higher-quality item over the lower-quality item, and by valuing the time they spent building a low-quality object, they are blind to the fact that that time could have been invested more profitably elsewhere. Hence, orthodox economists believe that the Ikea effect describes an irrational understanding of the world. After all, considerations of the quality of the product, along with due regard for the value of the time invested in building something, should lead any reasonable person to understand that their amateur construction is inferior, not superior, to professionally built objects. You can still take pride in having done it yourself, but you should not be so foolish as to imagine that your creation is as good as or superior to a professional piece of work. 

The Ikea effect is a minor irritant in the grand scheme of things, of course, but it helps us understand that people are moved by complex and diverse motives, as Kenneth Burke argued. It also illustrates that how we come to value things does not depend strictly on the reasons that economists believe are universal because they meet the economic definition of rationality. In other words, people will assign value to things by highlighting factors that, according to the principles of neoclassical economic theory, are not exactly rational. Back to Aristotle’s complaint again: maybe it would be preferable that people always reason logically, but human nature simply does not work this way. Our motives are not always entirely apparent to us, and yet still we act on them as though our behaviour is entirely under the steady hand of our free will. And this notion of free will is an important consideration to keep in mind. 

One of the things that makes us rational utility maximizers, according to traditional economics, is that we should always make ourselves aware of all of the available information in order to make our choices according to a full account of a situation’s pros and cons. In traditional economic thought, in other words, we should take steps to ensure that our decision is influenced by a comprehensive knowledge of as many of the relevant factors as is reasonably possible. In this view, we are said to take the requisite steps to keep randomness at bay, and to ensure that our decisions are grounded in a complete accounting of the pertinent information. We do not want to be influenced by information, behaviour, or processes that are not directly germane to our decision. 

This model of decision-making suggests that the action of free will is paramount in how we come to decisions. Behavioural economists, by contrast, subscribe to the view that free will is not as readily exercised as we tend to think. We imagine that we are operating as rational utility maximizers, collecting all relevant information and then making the final choice according to our free will. According to behavioural economics, however, this is just another fantasy created by classical economic theory. Consider the following illustration of the way people make decisions by drawing on information about which they are not fully aware. 

Robert Cialdini (2016) describes the technique of a salesperson he once worked with during a research semester as the two of them went door to door peddling home fire-alarm equipment. Cialdini, who was studying the sales techniques used by this home fire-alarm company, noted that this particular salesperson, who was widely regarded by his colleagues as the company’s most successful seller, used the same technique on virtually all his home visits. After being allowed inside to pitch his product, he would speak with the residents (usually a couple, Cialdini notes) about the need for better security measures against the danger of fire and then ask them to fill out a 10-minute questionnaire to see how much they did or did not know about the actual dangers of home fires. Once they had begun filling in the questionnaire, the salesperson would mutter aloud that he had forgotten something in his car that he really needed. He would ask if they would continue with the test while he let himself out to his car and back into the house. Sometimes they gave him a door key; other times they left the door unlocked while he went to and from his vehicle. Interestingly, Cialdini concluded that this little trick explained why he had the best sales record of anyone in the company. The question is: why?

When Cialdini (2016) asked him why he did the same thing at each sales meeting, his answer was simple: “Who do you let walk in and out of your house on their own? Only somebody you trust, right? I want to be associated with trust in those families’ minds” (p. 7). In other words, the salesperson had discovered a very simple technique to establish himself as trustworthy or, in Aristotle’s terminology, to achieve a form of ethos that spoke directly to his trustworthiness. Time and time again, this sense that he was someone the homeowners could trust helped him to complete the sale. Moreover, it is especially interesting that the residents were mostly unaware of what they were doing in granting the salesperson permission to come and go freely as they worked on the so-called questionnaire. And whether it is a legitimate questionnaire or just a marketing ploy—and I suspect the latter—is irrelevant. Its chief function is to distract them from the salesperson’s antics and thereby reduce their vigilance. The result is that they tend to be more open to his suggestion that he will simply go to his car and let himself back in, and they tend to be less concerned about giving a stranger such freedom. His ploy helped him to manufacture the conditions in which trust is given. And because we tend to be more willing to buy things from people we trust—and because people we trust might be considered trustworthy owing to Burke’s principle of consubstantiality—this salesperson was more successful than his colleagues in signing up new customers. It is not entirely logical that the clients should respond to his tactic this way, but it is at least understandable in the larger context of human relations.

What is logical, then, is not the same as what is psycho-logical, and it is at this point that behavioural economists enter the scene with their research and theories on how people come to decide on particular courses of action—and how those decisions are arrived at—by the subtle pressures that can be applied if one understands the power of irrational impulses. The most famous word associated with this idea as it operates in behavioural economics is nudge, the notion that people can be persuaded to pursue certain actions because they have been given ever so slight a push in a particular direction. The basic idea behind nudging is that by the use of subtle tactics such as sequence, defaults, and heuristics, people can be influenced to do things one way rather than another. Most people nudge their friends and families all the time, of course, but in the hands of behavioural economists, nudging has become a well-known strategy in everything from door-to-door sales to government policy. 

For instance, we know that arranging food in a cafeteria has become a bit of a science focused on the way goods and services are deployed in physical settings, the study of “servicescapes.” Studies have shown that people are more likely to choose the item that is at eye level than the item they have to stoop down to reach. Taking this into account, school cafeterias now frequently arrange their food displays so that the healthier foods are at eye level and the unhealthier desserts are placed in separate locations to increase the energy needed to get to the sweets. Simply rearranging the layout of a school cafeteria can produce as much as a 25 percent increase in children choosing the healthier foods.

This is wonderful if you are a nutritionist or parent, but some critics worry that by nudging people to make certain choices at the expense of alternatives, the people doing the nudging are using their choices and their preferences as guidelines for the rest of us. I would tend to doubt that many people would really complain that cafeterias arrange their offerings in order to nudge children to pick the healthier items, but it is worth asking the obvious follow-up questions: What if the same principles were applied in other venues and with less altruistic motives in mind? What if cafeterias displayed their food so as to encourage customers to pick the more expensive products rather than the healthier ones? The theory of nudging has become controversial in ways that harken back to Plato’s arguments about the purposes of rhetoric. In other words, there are those who worry that nudging is potentially immoral in that it influences people—and often without their awareness—to go one way rather than another; to make this choice rather than that choice; to think they have come to a wise decision when they have been subtly influenced to arrive at a preferred conclusion. Governments nudge us all the time, of course, with anti-smoking campaigns and warnings about the dangers of driving under the influence, and most people have accepted that this sort of messaging—this sort of nudging—is mainly beneficial. Public health messages encouraging vaccinations, emergency preparedness ads, forest fire prevention drives, energy conservation promotions, and a host of other public service campaigns are all forms of governmental nudging intended to shape public behaviour for specific ends. But it does not take long to imagine scenarios where governments might devise campaigns that favour certain political parties, certain corporations, or even certain ethical choices conducive to the interests of groups whose donations keep a particular government in power.

The establishment of the Behavioural Insights Team (BIT) in England means that career opportunities in the field of nudging are growing. Set up in 2010 by the British government with a relatively small number of researchers, the BIT is now a partially private enterprise whose primary focus is, of course, nudging citizens to be more socially responsible. There are now offices in other parts of the world, including Canada, where the first office opened in Toronto in 2019. A number of BIT campaigns have helped nudge (or persuade) people to pay various fines by reminding them of their upcoming due date by text message. The BIT has also boosted the rate at which delinquent parties paid outstanding tax debts by directing them online to the exact form they needed to complete rather than just linking to the web page on which the form was posted. It was also able to reduce the rate of medical prescription errors simply by redesigning specific medical forms. The BIT has helped greatly increase the number of people who have agreed to be organ donors by asking a slightly different version of the traditional question. Let me take the case of organ donation as an illustration of how nudging can work.

In the past, when Britons went online to renew their car-tax licence, they would be asked if they would be willing to be an organ donor. The BIT thought there was room for improvement and tested out a number of different questions to find just the right wording. The question that was most successful was: “If you needed an organ transplant, would you have one? If so, please help others.” The BIT claims that this wording now adds roughly 100,000 people a year to the organ donor register. A simple rephrasing nudged people to be more likely to agree to be organ donors. Researchers from a wide swath of the social science world are watching with caution and interest to see just what the BIT can accomplish.

Shaping public policy through messaging is not new, of course, as we saw when talking about people such as Edward Bernays, Harold Lasswell, and Carl Hovland. What is new is that the processes by which the messages are developed and tested have become even more sophisticated. The BIT applies the latest theories from psychology and behavioural economics to figure out how best to sway people in a preferred direction. 

This has been a longer than usual introduction, so let me now spell out how I want to proceed for the rest of this lecture. Put plainly, I am going to focus on behavioural economics as it examines our choice-making practices and consider how a range of influences works in concert with conventional economic interests to shape the procedures by which we come to decisions. This will entail a brief consideration of the techniques by which populations are encouraged, persuaded, and nudged to adopt particular beliefs, engage in specific behaviour, and endorse ideas believed to be in the public interest. Some of the issues I will raise take us back to similar problems as they were first expressed in Plato and Aristotle, but neither of these eminent Greek thinkers will be making an appearance in this lecture. This is because rather little work has been done so far in bringing rhetoric and behavioural economics into contact, despite what I believe are manifest connections in regard to things such as motive and appeal. A good deal of the work in behavioural economics is focused explicitly on understanding the mechanisms by which people come to decisions and then seeking ways to influence that process and encourage people (by nudges, in particular) to make choices of one kind rather than another. As I have already suggested, one purpose might be to get people to make healthy food choices, and various institutions have developed techniques to quietly direct people to make better dietary decisions. But once the techniques are available to everyone, we have the same problem Gorgias confesses can be found in the art of rhetoric: there is nothing to prevent people from nudging the general population in all sorts of directions to suit the interests of those in charge.1 

Traditional economics versus behavioural economics 

The main point about behavioural economics is that it is concerned with actual behaviour and not with the abstract theories of conventional economic theory. It focuses, in other words, on a number of factors other than simple economic calculation that influence how we act and behave, things such as our personal psychology, our upbringing, the range of cultural forces we are exposed to, the social context of the moment, and so on. We make both economic and non-economic decisions for a wide variety of reasons, and behavioural economists consider it unwise to reduce our decision-making reasons to a single concept: rational utility maximization.

Traditional economic theory says people seek to maximize their utility: get the best deal, spend the least amount of time and money, gain advantages over other people, and so on. In traditional economics, people are driven by rational self-interest and are modelled, in effect, as mathematical calculators. This does not mean that economists believe we are perfect in our calculations, but rational calculation is the default assumption. Nobel Prize-winning economist Richard Thaler (2015) explains it this way:

The core premise of economic theory is that people choose by optimizing. Of all the goods and services a family could buy, the family chooses the best one that it can afford. Furthermore, the beliefs upon which Econs make choices are assumed to be unbiased. That is, we choose on the basis of what economists call “rational expectations.” (p. 5) 

You can probably figure out while reading this passage that by the word Econ, Thaler means the fictional individual that economists have created in place of actual people. In other words, an Econ is a member of the species Homo economicus; in behavioural economics, by contrast, we are good old-fashioned Homo sapiens.
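Thaler’s phrase “choose by optimizing” also has a compact textbook form, and it is worth displaying because everything behavioural economics contests is hidden inside it. In standard consumer theory, an Econ facing prices p and budget m is assumed to pick the affordable bundle of goods x that maximizes a utility function u:

\[ \max_{x} \; u(x) \quad \text{subject to} \quad p \cdot x \le m \]

The tidy notation quietly assumes that the chooser knows their own u, surveys every affordable x, and lets nothing but prices and preferences enter the decision; the rest of this lecture is, in effect, a catalogue of ways actual people violate those assumptions.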

Other writers have made the argument that Homo economicus is an old-fashioned idea that needs updating. The philosopher Mary Midgley (2018) suggests that economics needs to accept that its model is out of date and needs to be revitalized by taking account of other ways of thinking. By accepting that other points of view are relevant to the discipline of economics, she says, economics can be reformulated for the current age.

Many scientists are actually beginning to suspect that something of this kind [embracing other points of view] is needed for their own topics. The point has been felt particularly strongly of late since historians and other specialists failed to predict the end of the Cold War. More recently, too, it has become striking in economics, where the accepted orthodoxies dramatically failed to predict the financial disasters of 2008 and have shown no signs of developing to fit the facts of the times. So, since the effects of bad economics affect the whole population, biologists have offered to help by suggesting new methods, not fixed, like the former ones, to a familiar individualistic concept of Economic Man, but shifting, as other evolutionary thinking does, along with the changing units of human society. (pp. 18–19)

To put this argument more succinctly, many commentators, including economists themselves, are questioning the wisdom of holding onto a model of economic rationality when studies of actual human behaviour reveal that we are not always rational actors. We are liable to be influenced by non-economic factors and to make choices that, at first look, seem to fly in the face of reasonable decision-making practices. We are persuaded by considerations that are not entirely rational, and while this has been a striking insight for behavioural economists, it is something of a truism for rhetoricians, who have been making the same argument for centuries. So, let me explain in a more focused way what sorts of things behavioural economics has to say, and then we will be in a stronger position to see how it bears on rhetoric—and how rhetoric bears on behavioural economics.

Incentives 

One of the primary considerations that underlies economic theory of the classical sort is that people are motivated by specific incentives. Not surprisingly, traditional economics regards our primary motivation or incentive to be of a mainly economic variety, and this normally translates into money. Of course, as I mentioned at the outset of the lecture, even traditional economists are aware that people are not motivated solely by money in every instance. However, their theories about Homo economicus make it entirely illogical for people to turn down monetary gain. That is, all things being equal, people will not accept a lower wage, smaller salary, or reduced payment when a higher one is available.

There are two sorts of incentives economists usually address: extrinsic and intrinsic. Extrinsic incentives are rewards that come from outside of us: money, gifts, and goods. However, there are also non-monetary extrinsic incentives, such as social approval or social success (and perhaps physical threats, too). Getting good grades is a reward and, therefore, a form of extrinsic incentive.

Intrinsic incentives, then, refer to attitudes we might entertain, such as professional pride, duty or loyalty, and the pleasure of doing something we take satisfaction in, like running a marathon. Sometimes we wonder why people work for low remuneration, and the answer frequently is that they enjoy it or find it socially desirable. While money is essential to life in a capitalist society, it is not the only incentive that motivates us.

Behavioural economists have noted that there are times when extrinsic and intrinsic incentives come into conflict, and when this happens one will often crowd out the other. In a simple experiment, for example, two groups of university students were given the same puzzle to solve. One group was paid, and the other was not paid. Interestingly, the group that was paid tended to do more poorly than the unpaid group. Post-experiment interviews with the subjects led the researchers to conclude that the amount the paid group received was insufficient to motivate them, so they felt little inclination to complete the puzzle with any enthusiasm or interest. The unpaid group, however, was motivated by the intrinsic reward provided by the intellectual challenge of the puzzle; therefore, they worked harder to solve the puzzle for the pleasure of getting the right answer. Behavioural economists describe this as a situation in which the extrinsic incentive, payment, crowded out the intrinsic motivation. Or, to put that another way, payment demotivated them to work on the puzzle. The researchers surmised that if the subjects had been paid more money, they might have been motivated rather than demotivated. Regardless, the chief finding for behavioural economists was that people will act contrary to classical economic principles inasmuch as, in particular contexts and situations, receiving no payment can motivate better than receiving a very small payment.

There are other ways in which this crowding out has been shown to occur. A well-known study by economists Uri Gneezy and Aldo Rustichini (2000) looked at the behaviour of parents whose children were enrolled in a nursery school after the school decided to impose a penalty for picking children up late. Teachers had grown frustrated with so many parents arriving late and being forced to offer child-minding services beyond the agreed-upon pick-up time. The theory was that by imposing a fine for lateness, parents would start turning up on time to collect their children. 

You can probably figure out what happened. More parents started arriving late. The reason? Although the fine was intended to be a deterrent, many parents interpreted it as a fee. In other words, if you paid this extra fee, the teachers would stay late to look after your child and you could arrive late without feeling guilty. Indeed, one reason most parents did normally turn up on time was that they would have felt guilty about being late and inconveniencing the teachers; a monetary penalty was never a part of their thinking. Now that this intrinsic incentive had been removed by the extrinsic incentive of the fine, some of the previously good parents became bad parents. Their intrinsic motivation had been crowded out by the extrinsic incentive.

Behavioural economists have argued that paying people, fining people, or using money in general to persuade them to adopt particular attitudes or act in preferred ways can sometimes produce apparently paradoxical results. For instance, it is widely argued that paying people to donate blood ordinarily backfires since the reason people donate blood is to be altruistic, not to earn extra cash. The extrinsic motivation ultimately demotivates them, and they opt to not donate blood because it makes them look greedy.2

This discussion of these kinds of experiments leads naturally into the subject of charitable donations, a thorny and complex issue that behavioural economists have been studying for decades. Ordinarily, one might presume, giving to charity is a consequence of intrinsic incentives. At least, one might believe this is the case if one has a particularly high opinion of people. But the fact is, giving to charity has long been a contested area for social psychologists trying to pin down precisely what motivates people to be generous. And as it turns out, things are complicated by the fact that when people are donating out of a genuine wish to be helpful, they become less generous if news of their generosity is made public. That is, the extrinsic reward of public acclaim can work against the intrinsic incentive of the good feeling that comes from just knowing that you have helped while remaining anonymous.

But it is even more complicated than that, it seems. In a series of experiments that involved making a donor’s name public, it was found that getting some financial reward could motivate people to be more generous—which explains why governments give tax incentives as a form of external reward for charitable contributions. People are also motivated by what behavioural economists refer to as image motivation, which just means that we enjoy being admired, and if giving to charity makes us more admired, then we will give to charity. In other words, there are conflicting results from these studies, some of which suggest that monetary rewards backfire and others that suggest that they work to promote higher levels of philanthropy. One idea some researchers have floated is that it may be partly connected to the ease with which information about charity donations can be made public. In other words, if it requires work for me to make my name publicly known, then I will shy away from broadcasting the fact that I am a generous donor (or having others broadcast it for me) because it will appear that I am explicitly craving attention. But if my name is made public as a matter of course, then I may be motivated more by the positive image this will create. If my name will be automatically published whenever I donate to an arts organization, such as a theatre, I will donate in part because I know my name will be made public, and the idea of image motivation will come into play. And because I can say I did not seek to get my name published, I can avoid the criticism of having overtly sought attention. For these reasons, it has been suggested that in this world of social media, increased donations may result from publicly releasing the names of donors.

Finally, incentives can be discussed in classical economic terms but in light of research from behavioural economics. For instance, it is perfectly true that people are motivated to work when they are paid well, but in conventional economic theory, it is sometimes said that it is inefficient for companies to raise their wages too high and thereby put profitability at risk. This sort of reasoning is probably most commonly used in relation to minimum-wage discussions. Employers oppose raising the minimum wage because they say it will cut into their profits, and a less profitable company is not good—for its employees or its owners. Proponents of raising the minimum wage argue that it will have the opposite effect, and they base their arguments on recent studies conducted by behavioural economists. What are those arguments? 

Behavioural economists have developed what they call efficiency wage theory. Now, no theory about wages is uncontroversial, and I am aware that efficiency wage theory has its opponents. But let me sketch in some of the details in order to show what it involves without getting bogged down right away in the debates it has initiated.

An efficiency wage is one that minimizes a company’s labour cost per unit of output rather than the raw wage bill, which is why paying more than the going rate can make economic sense. The theory is based on studies going back as far as Henry Ford’s automotive plant showing that increasing wages can lead to an increase in productivity. This is actually explained in standard economics on the principle that higher wages make workers value their job more highly and, therefore, work harder. Classical economics also says that when there is a downturn in the economy, better-paid workers will be able to afford food, shelter, and healthcare and, as such, will be more physically able to work and help the company maintain its profitability through difficult economic times. Also, paying your workers better can deter strikes.
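For readers who want the formal version, there is a standard way of stating this that predates behavioural economics (it is often called the Solow condition). If effort e rises with the wage w, the firm’s real concern is the cost of labour per unit of effort, w / e(w), and the wage that minimizes it satisfies

\[ \frac{d}{dw}\!\left(\frac{w}{e(w)}\right) = 0 \quad \Longrightarrow \quad \frac{w^{*}\, e'(w^{*})}{e(w^{*})} = 1 \]

that is, the firm keeps raising the wage until a one percent raise buys exactly one percent more effort. Up to that point, a higher wage is the cheaper wage; beyond it, further raises cost more than the extra effort they purchase.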

Behavioural economists accept all of these reasons but then add a few others to efficiency wage theory. They point specifically to intrinsic incentives as being important to the theory. Of course, workers will labour for the extrinsic reward of money in the form of better wages, but they will also work harder for intrinsic incentives such as loyalty and trust. Studies show that when your boss treats you with respect, you will return that treatment in kind and show respect, too, usually by being a better worker. Some behavioural economists have described this relationship as a “gift exchange” in that your boss is giving you something and you are reciprocating in kind. It has also been found that if your boss pays you well and treats you with respect, you will require less monitoring while on the job, effectively reducing the need for supervisors. Furthermore, people who are well paid spread the word; that is, they mention to their friends that they have a good job with a respectful boss, and this actually saves the company money when it needs to expand by providing a pool of willing employees ready to go to work. In other words, it turns out that employees who are well paid do some of the work that the company might otherwise have to pay for, such as networking. The well-paid employee also helps the company avoid hiring people who are not truly motivated to do the job. Bringing non-monetary incentives into the analysis can have important implications for labour and business.

Does efficiency wage theory answer the question about a living wage (that is, a wage that allows everyone to meet local costs of basic living)? It all depends on whom you ask, but it is certainly a theory that forces us to deal with the fact that people are motivated by a range of incentives that are not always strictly monetary. We are persuaded to do things—to adopt attitudes and perform behaviours—by a host of considerations that fall outside of traditional economic models of human nature.

Sociality in behavioural economics 

Behavioural economics also challenges the long-held assumption of neoclassical economists that people reason about their decisions most efficiently when they take only their own interests into account. This probably does not make sense at first look, but the underlying argument is pretty standard in economics. If everyone looks out only for their own self-interest—and if everyone is genuinely doing this and not cheating—we will have a self-correcting market where the determination of values will be a result of disinterested, objective, unbiased activity. If I try to sell you something and set my price according to my interests, I may have settled on a price that is so high that you are unwilling to make the purchase. So, you will try to get a lower price. I may decide to sell or not sell, or I may enter into negotiations with you over a different price. Once we agree on a price, so the theory goes, that price will become my new acceptable amount, and it will be the standard price I charge everyone. The market, in other words, works according to an invisible hand that sets prices that are fair to the wider community of purchasers and ensures that sellers make a sufficient profit. Hence, when economists talk about everyone pursuing their own self-interest, they are really saying that markets work best when they are left alone. 

Needless to say, behavioural economists have a hard time with this line of thinking since everyone knows that we do not really make our decisions according to our self-interests only. We often take into account other people and their interests, especially those people we are linked to by affection, friendship, or family. Evolutionary biologists explain that we are predisposed to aid our kin. In addition, it is sometimes suggested that behavioural economics focuses on the notion that most people, by and large, are opposed to inequality, and that we tend to take measures to even out playing fields when we have the opportunity. So, in opposition to classical economics that focuses on individualistic behaviour motivated primarily by self-interest, behavioural economists focus on our group interactions, our sense of belonging and kinship, and our inherent dislike of inequality (which, by the way, is said to be an intrinsic disposition we share with other primates). Because we prefer that things be relatively equal, all things considered, behavioural economists describe our preference for equality as inequity aversion.

Just as incentives come in two types, so inequity aversion comes in two broad types. The first is what is known as advantageous inequity aversion. Imagine a wealthy person walking down a city street and encountering a person begging for change. The wealthy person is distressed to see the man begging for change and thinks to himself that it would be a far better society if our standard of living was more equal—that is, he wishes for greater equity. He feels this way from a position of privilege, and this is why it is called advantageous inequity aversion. Now, think about the man on the street who is asking strangers for spare change. This person is far more motivated by a desire for equity since his situation of inequity is his everyday lived experience, and it is hard to ignore what you experience daily. The wealthy man can, of course, feel justifiably upset about the inequality in society, but he can also go home to his comfortable life. The poor man may not have this luxury, and he is said to experience a preference known as disadvantageous inequity aversion. What this means is that the beggar does not want to be worse off than other people around him; that is, his position is one of disadvantage, and so he is concerned that equity be used to address disadvantages.
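These two aversions have a well-known formal expression. In the model proposed by economists Ernst Fehr and Klaus Schmidt in 1999, a person i who ends up with payoff x_i while another person j ends up with x_j derives utility

\[ U_i(x) = x_i \;-\; \alpha_i \,\max(x_j - x_i,\, 0) \;-\; \beta_i \,\max(x_i - x_j,\, 0) \]

where the α term is the sting of disadvantageous inequity (the beggar’s position: the other person has more) and the β term is the milder discomfort of advantageous inequity (the wealthy man’s position: I have more). The model assumes that β is no larger than α, capturing the observation that we mind being behind far more than we mind being ahead.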

Behavioural economists have spent a good deal of time on the theory that people generally exhibit inequity aversion, and one of the ways they have studied the phenomenon is through the Ultimatum Game, a famous game in economic theory based on a variety of scenarios that all operate on the same premise. The game involves setting up a situation where inequity is produced at the outset to see how the players will respond. In its first version, the players were drawn from a pool of students at Cornell University and a pool of students at the University of British Columbia. Here is the most common version.

Two people are selected to play the game. The players are picked at random and assigned to one of two positions, either the proposer or the responder. In its original format, the game was played for real money to make it as close to the real world as possible. I am unsure if researchers have continued with that approach, since it would get rather expensive. Still, something of value to the two players must be used to make the game work. Let us stick with the monetary version, as it is the most common. 

To start the game, the researcher gives $100 to the person designated as the proposer with the instruction to share the money with the responder. Hence, the proposer is faced immediately with a question: how much should they share? No amount is specified by the researcher, so the proposer can offer as much or as little as they wish. What is the catch, you ask? Well, if the responder refuses the offer, then all the money is returned to the researcher, and neither the proposer nor the responder gets a penny. Hence the proposer must determine an amount that the responder is willing to accept.

Now, the weird thing is that classical economic theory, which says everyone should simply pursue their self-interest, predicts that the proposer should be able to offer almost any amount and satisfy both game participants. The proposer should be able to give $1 and keep $99. Why? Because at the start of the game the responder has absolutely nothing, and since $1 is better than no dollars, the responder should be willing to accept that amount and allow the proposer to keep the other $99. One dollar is better than nothing—literally. Furthermore, neither the proposer nor the responder has worked for or earned the money in any way, so it is literally free money. The fact that the proposer has been given the money to split and the responder has been given the task of responding to the proposer’s offer does not mean anything in the grand scheme of things—it just happened to work out that way; both were chosen for their roles by random selection.

If the argument favouring a $1/$99 split does not seem to match your intuition, imagine a slightly different case. Suppose someone walks up to you on the street and says, “I am giving money away today, and I have a dollar for you. Do you want it?” Putting all weirdness aside, you should say yes, since $1 is better than no dollar. In other words, the classical economic logic of individualistic self-interest seems well suited to a situation in which someone just decides to offer you $1. A dollar cannot buy much, but it buys more than no money. By that logic, it would seem that when the proposer in the Ultimatum Game offers you $1, you should take it.

But in the course of the actual game, it turns out that most responders refuse to accept $1. In fact, after running the experiment now for many years, researchers have found that the average amount the proposer needs to offer turns out to be close to half the total amount, around $50. Indeed, in some versions of the game, responders have been known to reject amounts of 40 percent ($40) as unfair, with the result that nobody gets any money. Why do people reason this way in the game when they might not respond this way when approached by a stranger offering a small amount of money? Well, the typical explanation has to do with inequity aversion.

When offered a small amount, the responder reacts as though they are being treated unfairly, and so the preference for inequity aversion takes over. It is not just that they are getting some money that influences their thinking; it is that someone else—who could just as easily have been them—is getting more. In fact, a study of brain scans of responders using functional MRI indicated that the part of the brain that was activated when they thought they were being treated unfairly was the same neural area that is activated when people are exposed to a disgusting smell. Thus, behavioural economists sometimes refer to an experience of social disgust to explain why we react so negatively when we perceive ourselves to be on the wrong end of an inequitable distribution of money or some other resource. It is fascinating that even though the responder and the proposer were chosen at random—and even though both participants know that this is the case—it still “feels” unfair to the responder that they are being asked to accept less than half the money when, had the coin toss gone the other way, they would have been the proposer. It does not really help the situation to assure the responder that life is unfair, that the randomness of the game’s coin toss put them on the responding side of the game. It just is not right, and so the responder demands an amount they believe is fairer. Now, had they won $20 in a lottery while their best friend won $1,000, they probably would not mind. After all, in the case of the lottery there was no third party, no researcher determining an amount, making a decision, exercising judgement, and employing free will. But once those components are added into the mix, you cannot easily persuade the responder that $1 is better than no dollar. The responder knows that, but just does not care about logic in that moment. They act the way most of us would probably act, but in a way, nonetheless, that puzzles neoclassical economists.
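Because the Ultimatum Game is a small, well-defined protocol, it can be sketched in a few lines of code. The sketch below is mine, not taken from any of the studies mentioned; it simply contrasts a textbook Econ, who accepts any positive offer, with a responder whose acceptance rule follows the Fehr-Schmidt formula given earlier. The α value of 2.5 is an invented illustration, not an estimate from the experimental literature.

# Hypothetical sketch of responder behaviour in a $100 Ultimatum Game.
POT = 100

def econ_accepts(offer):
    # The classical responder: any positive amount beats nothing.
    return offer > 0

def inequity_averse_accepts(offer, alpha=2.5):
    # Fehr-Schmidt-style responder: the money received is discounted
    # by the pain of receiving less than the proposer keeps.
    proposer_keeps = POT - offer
    utility = offer - alpha * max(proposer_keeps - offer, 0)
    return utility > 0  # rejecting leaves both players with zero

for offer in (1, 25, 40, 50):
    print(offer, econ_accepts(offer), inequity_averse_accepts(offer))
# The Econ takes even $1; this responder rejects $1, $25, and even $40,
# accepting only once the split gets close to half.

Nothing in the sketch explains where α comes from, of course; that is precisely the psychology the game was designed to expose.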

Social reference points 

Because the Ultimatum Game entails a constellation of psychological concerns, such as intention and fairness, let us consider further the matter of the sociality of human decision-making. It makes sense when thinking about decision-making to consider the simple fact of social norms. People often copy what other people are doing, especially if they regard those people as belonging to a reference group to which they belong themselves, or to which they aspire to belong. We see such copying or imitation extensively in terms of social customs and traditions, and even in religious cults, though cults are obviously an extreme example of how seeking to conform comes with its own set of dangers. Still, we often compare our behaviour to the behaviour of others to ensure that we do not look like outliers. Behavioural economists refer to the behaviour of the people we tend to copy as our social reference points. This is rather obvious and hardly unfamiliar. It does, however, have some interesting rhetorical potential just waiting to be exploited.

In the current age, many people are constantly taking note of their social reference points; consider the vast audiences following the latest social media influencers. Examining the role of media influencers is fascinating, but it is not what I want to focus on, though it provides a powerful example of the way social reference points can be constructed. Instead, I want to point to the way that we are persuaded to attend to our social reference points by mentioning an interesting social psychology experiment that has been used as the basis for changes to various government policies. Because we are influenced by the behaviour of our reference group, governments can shape our behaviour—nudge us, in the technical jargon—by identifying our social reference points and then telling us what those people are up to, what they are doing, and how they are behaving.3 Done properly, this can be a huge force for bringing about a desired behaviour. The study I am thinking of—and there is more than one—was conducted in the United Kingdom by the Behavioural Insights Team (BIT). To determine the effectiveness of nudging, people who were delinquent in paying their tax bills received one of two letters. One letter basically said, “You’re late, so pay up.” The other letter said much the same thing except that it also contained some additional information telling the recipient that their behaviour was not in keeping with the norm because most other people had paid their taxes on time. Believe it or not, the people receiving this second letter paid their outstanding bill more quickly than people in the first group.

Exploiting the persuasive power of our social reference points is now well known, though the extent of the rhetorical power of techniques based on this fact is sometimes surprising. Robert Cialdini’s (2016) study of how to get homeowners to be more energy conscious shows this power quite plainly. Cialdini followed the approach outlined in the above study of British taxpayers but with a more elaborate design. He had his team send one of four possible letters to randomly selected people. Three of the letters gave good reasons why people should try to conserve energy: benefits to the environment, social responsibility, monetary savings, and so on. But the fourth letter simply stated that other people in the neighbourhood were doing their best to save energy and be more conscious of their energy usage, and that perhaps the recipient should do the same. This was called the social-proof message. The results were dramatic. Cialdini (2016) writes:

At the end of the month, we recorded how much energy was used and learned that the social-proof-based message had generated 3.5 times as much energy savings as any of the other messages. The size of the difference surprised almost everyone associated with the study — me, for one, but also my fellow researchers, and even a sample of other home owners. The home owners, in fact, expected that the social-proof message would be least effective. (p. 163) 

What is particularly telling is that Cialdini (2016) goes on to write, “when I report on this research to utility company officials, they frequently don’t trust it because of an entrenched belief that the strongest motivator of human action is economic self-interest” (p. 163). Breaking free of conventional economic thinking can be difficult.

Other forms of behaviour fall into the broad category of sociality, such as herding behaviour and identity, but I am not going to go over those in detail as they more or less repeat some of the things we have already covered. The key thing to take away is that people are not simply isolated, individualistic utility maximizers; it turns out that we care quite a bit about how others see us, and we use those perceptions to guide our decision-making. We are persuaded by other people not owing to some defect in our natures but because none of us is the perfect econ mentioned earlier; few of us, to quote Cialdini (2016) again, accept entirely the “entrenched belief that the strongest motivator of human action is economic self-interest” (p. 163). We respond to many social influences and tend to make comparisons to others, particularly when the issue of inequity is involved.

Heuristics and fast thinking 

Classical economic thought says we make decisions deliberately and according to complex mathematical decision-making rules. And we operate in this fashion since to do otherwise would be irrational, and no one is willingly irrational when it comes to important decisions. Occasionally markets fail, of course, but traditional economists say this happens because consumers made poor decisions based on inadequate information, or because state intervention prevented the market from operating properly—something, in other words, that interfered with the rational activity of utility maximization. The basic assumption made in traditional economic theory, then, is that choice is always a good thing and is essential for market activity (and for keeping prices down). Behavioural economists do not deny the primary claim that choice is good; however, they point out that people are not always able to deal with the sheer volume of choice currently available. Consumers are unable to look at every possibility because of two things: information overload and choice overload. Choosing effectively is difficult when you have so much information to sort through and so many choices to consider.

It turns out, in fact, that too many choices can produce the paradoxical effect of reducing rather than increasing sales. A well-known study by Sheena Iyengar and Mark Lepper (2000) showed that when given more rather than less choice, people were actually less likely to complete a purchase. Iyengar and Lepper set up two displays in a grocery store, one containing 24 varieties of jam, the other containing only six. Shoppers naturally spent more time at the display with 24 varieties, but that time did not translate into sales as often as it did at the display with six varieties. Overwhelmed by too many choices, customers chose not to choose at all. The same principle has been shown to apply in universities, too. When students were given a list of 30 essay topics, they did not perform as well as when they were asked to choose from a list of six topics. Those given the shorter list were reported to have written longer and better essays.

Now, I should point out that the literature on choice overload is a bit contentious, and I have tried to restrict my observations to studies and examples that seem to have been replicated successfully and are, therefore, regarded as reliable. I should also point out that making decisions quickly—or making decisions on more limited information—is quite common, since people do not have the time to go through every piece of available information in every single situation. The key study for much of this work, by Kahneman and Tversky (1974), focused on the ideas of cognitive bias and heuristics and appeared in the journal Science. Their claims were based on the results of various social psychology experiments and were presented in terms of the particular biases or heuristics that people apply in their everyday reasoning. In other words, Kahneman and Tversky were interested in understanding how people arrive at decisions by adhering to heuristics that might not be part of classical economic theory. Once again, we are persuaded by things that are not always explicitly rational, and in making our choices, we sometimes seem to fly in the face of good, solid reasoning. What Kahneman and Tversky found was that people rely on shortcuts—or heuristics—in order to deal with information and choice overload. They identified three specific heuristics: availability, representativeness, and anchoring.

Availability heuristic 

The availability heuristic says that people are more likely to be persuaded according to what is immediately available to them. The most obvious way this happens is when we choose the thing that is right there in front of us. So, if you are trying to choose from a menu, you might pick the first item because it is right there in front of you, or you might choose the first item that you recognize if the other choices are more exotic or unfamiliar. This makes perfect sense, but the availability heuristic has some additional features, for it also involves other psychological mechanisms such as primacy and recency effects. This means that when you leave a meeting, the chances are good that you will remember what was said at the beginning of the meeting (primacy effect) and what was said at the end (recency effect) and forget a lot of what went on in the middle. This is the way our minds work, the way our cognitive biases operate.

There are good reasons why we operate this way cognitively, not the least of which is that it saves us time. But it can be problematic. Too many people are influenced by the availability heuristic in choosing their passwords and thus make themselves more vulnerable to hackers. More important, perhaps, is that the availability heuristic plays a role in our tendency toward inertia, that is, our reluctance to be persuaded to make a change even when that change might be beneficial. People are generally reluctant to change services such as internet or cellphone providers because they are comfortable with what they know. For this reason, the government in the U.K. has taken to making it easier to switch providers by changing regulations and regularly publishing price comparison lists. The British government wants to foster more competition and to encourage citizens to push industries to be more responsive to consumer needs.

Daniel Kahneman’s (2011) book Thinking, Fast and Slow contains a good deal about heuristics, including long discussions of the availability heuristic. I will cite an example from that text to show how this particular heuristic connects to a range of social issues and practices. Kahneman points out that if you hear about something frequently, you may erroneously make assumptions about that thing. If the media broadcast stories of particular kinds of criminal behaviour, for instance, you may easily be able to cite examples of this specific type of crime, but you might also overestimate its actual frequency. Frequency is thus a heuristic, or cognitive bias, though it is not the only thing that encourages us to make poor judgements and assumptions.

Examples such as these show how the availability heuristic can predispose us to judgements or conclusions that are problematic insofar as they are not factually accurate. We can be nudged, or persuaded, by these cognitive biases without being entirely aware of how our judgements are being influenced.

Representativeness heuristic 

This heuristic suggests that we often jump to conclusions by assuming similarities between things and situations in order to help us with a quick decision. The most famous experiment illustrating this heuristic is called the Linda problem. Daniel Kahneman has described this as “the best-known and most controversial” of the experiments he and Amos Tversky ever conducted. In its original formulation it referred to the League of Women Voters, which, though a prominent organization when the experiment was first devised, is hardly known today. So, the language of the experiment has been updated for a contemporary readership. Its elegance lies in its simplicity. This is how it goes.

Linda is in her thirties. She is clever, single, and outspoken. She is concerned about social justice and discrimination and has been an anti-nuclear protestor. Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

It turns out that most people select option 2: that Linda is a bank teller who is active in the feminist movement. But this is not the more probable conclusion, and the mistake most people make in selecting this option is an example of what is called a conjunction fallacy. That is, people have judged a conjunction of two events (bank teller and feminist) as more probable than one of those events (bank teller) in a direct comparison. The category of bank tellers is larger than the category of bank tellers who are also feminists. But most people, it seems, ignore the laws of probability because they have in their minds a prototype to which Linda appears to correspond (see Figure 1).

Figure 1

Owing to this perceived correspondence or conjunction, many people are prone to respond incorrectly to the question. In case you need further convincing that option 2 is not the more probable of the two, consider the following question. Which is more probable?

  1. Mark has hair.
  2. Mark has blond hair.

It is more probable that Mark has hair, since the category of people who have hair contains the category of people who have blond hair; the larger category must be at least as probable as any category it contains. This is the same structure as the Linda problem. It is fascinating how our prototypes can misdirect us.
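The rule behind both examples can be stated in one line of probability theory (the notation is mine, but the inequality is standard):

  P(A \cap B) = P(A)\,P(B \mid A) \le P(A), \qquad \text{since } P(B \mid A) \le 1

Let A be “Linda is a bank teller” (or “Mark has hair”) and B be “Linda is active in the feminist movement” (or “Mark’s hair is blond”), and option 1 must be at least as probable as option 2 in both puzzles.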

Anchoring and adjustment heuristic 

This heuristic occurs when we anchor our decisions to a reference point and then adjust our choices relative to that anchor. If you think that apples should sell for no more than $0.98/pound, then that price might become your anchor point, and you will adjust your apple-buying behaviour around it, judging apples as priced low, priced appropriately, or priced high. Although this heuristic can keep you from spending more than you think appropriate for apples, it can also lead to errors in judgement. In one study, schoolchildren were asked to estimate the product of 1x2x3x4x5x6x7x8. Another group of the same age was asked to estimate the product of 8x7x6x5x4x3x2x1. The second group gave a higher estimate despite the fact that the answer is the same in either case (multiplication being commutative). The explanation social psychologists offer is that the second group anchored their answer to the number eight, which is higher than the number one, the anchor for the first group. This may not seem logical at first glance, but the point is that people do not always reason according to strictly logical principles; anchoring is one of the heuristics we employ to make quick determinations and decisions. It is remarkable, however, how profoundly influenced we can be by anchoring.
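For the record, the two sequences have the same product:

  1 \times 2 \times 3 \times 4 \times 5 \times 6 \times 7 \times 8 = 8! = 40{,}320

In Kahneman and Tversky’s (1974) report, the median estimates were roughly 512 for the ascending sequence and 2,250 for the descending one: anchoring on the first few numbers determined not only which group guessed higher but also how badly both groups undershot the true value.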

Kahneman cites a simple experiment he conducted while teaching at the University of Oregon. He had a device like the wheel in Wheel of Fortune that he would spin in front of the class with the instruction that students write down the number the wheel stopped on. The wheel, however, was rigged to stop on one of two numbers, either 10 or 65. The students were then asked the following question: “What is your best guess of the percentage of African nations in the UN?” The students who had just seen the number 10 appear on the wheel guessed, on average, 25 percent; the students who saw the number 65 guessed 45 percent. As Kahneman says, the number they had just been exposed to should make absolutely no difference to their estimate, especially considering that the number, so the students believed, had been arrived at completely randomly (though in fact it had been pre-chosen). The anchoring effect, however, is robust in its application and remarkable for its influence. As Kahneman (2011) says,

[Anchoring] occurs when people consider a particular value for an unknown quantity before estimating that quantity. What happens is one of the most reliable and robust results of experimental psychology: the estimates stay close to the number people considered. (p. 119)

I personally find anchoring and adjustment one of the stranger cognitive biases to which people are prone, considering that there is often absolutely no relationship between the number people are initially exposed to and the number they are subsequently asked to estimate. In some cases, of course, the connection is not entirely irrelevant. For instance, if I ask for your best estimate of the age at which someone you know only by reputation died—say, Kenneth Burke’s grandson, the singer/songwriter Harry Chapin—your estimate will depend on how I anchor the question. If I ask whether Chapin was more than 70 years old when he died, you will estimate a higher age than if I ask whether he was older than 35.

The anchoring heuristic is sometimes similar to the availability heuristic. This is because the priming effect of being told a number, for instance, can influence you to think of that number first simply because it is available. Our reference points, in other words, can sometimes come from what we already know—from the status quo—and this has led some behavioural economists to identify a so-called status quo cognitive bias, or familiarity bias. This refers to our tendency to resist change, to be reluctant to bend to influence or persuasion because we are comfortable doing things the way we have always done them.

In some of the behavioural economic literature, the status quo bias is also discussed in connection with the phenomenon of loss aversion. This means that people are reluctant to give up something they already have, a concept you may recall from Robert Cialdini. The behavioural economist Richard Thaler (2015) reports on an experiment designed to test the status quo bias and the loss aversion bias. In this test, lottery tickets were given to some students, and $3 in cash to an equal number of students. The distribution of the tickets and the cash was entirely random. If your ticket was drawn, you had your choice of $50 in cash or a $70 voucher redeemable at a local bookstore. The students were then set to a task to distract them from thinking too hard about the lottery or the cash (though why anyone would be inclined to dwell on $3 is hard to imagine) before they were given a choice: if you had a ticket, you could exchange it for $3; if you had $3, you could use it to buy a lottery ticket. Thaler (2015) then explains the premise of the experiment and what actually happened:

Notice that both groups are being asked the same question: “Would you rather have the lottery ticket or three dollars?” According to economic theory, it should not make any difference whether the subjects had originally received the money or the lottery ticket. If they value the ticket at more than $3, they should end up with one; if they value the ticket at less than $3, they should end up with the money. The results rejected the prediction. Of those who began with a lottery ticket, 82% decided to keep it, whereas of those who started out with the money, only 38% wanted to buy the ticket. This means that people are more likely to keep what they start with than to trade it, even when the initial allocations were done at random. (pp. 148–149)

You might note that this experiment has certain limitations, and thus I should add that Thaler and colleagues have refined it and run it many additional times, trying to smooth out the difficulties and make it as true to actual market activity as possible. In its later versions, it yields the same basic finding: people, as Thaler (2015) puts it, “have a tendency to stick with what they have, at least in part because of loss aversion” (p. 154). Perhaps most significantly, loss aversion and the status quo bias can have important consequences in domains other than markets. To quote Thaler (2015) once again:

Think of people who lose their jobs because a plant or a mine closes down, and in order to find work, they would have to both take up another line of work and give up their friends, family, and home to which they have become attached. Helping people get back to work can often be met with inertia. (p. 154) 

In other words, the status quo bias can have important policy and political consequences, consequences that can appear as inertia, as Thaler (2015) suggests, but also as indifference or apathy. 
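The logic of Thaler’s ticket experiment can be made concrete with a little arithmetic. What follows is a minimal sketch, not Thaler’s own model: it assumes the standard prospect-theory value function, Tversky and Kahneman’s 1992 parameter estimates (alpha = 0.88, loss-aversion coefficient lam = 2.25), and, purely for illustration, that the ticket’s subjective worth equals its $3 price.

  # A sketch of how loss aversion produces a preference for the status quo.
  # The piecewise value function is the standard prospect-theory form;
  # alpha = 0.88 and lam = 2.25 are Tversky and Kahneman's 1992 estimates.

  def value(x, alpha=0.88, lam=2.25):
      """Prospect-theory value function: losses loom larger than gains."""
      return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

  TICKET_WORTH = 3.0  # assumed subjective value of a ticket, for illustration

  # Ticket holder weighing a trade: gain $3, lose the ticket.
  owner_trades = value(3.0) + value(-TICKET_WORTH)

  # Cash holder weighing a purchase: gain the ticket, lose $3.
  cash_buys = value(TICKET_WORTH) + value(-3.0)

  print(owner_trades, cash_buys)  # both negative: each side keeps what it has

Because giving something up is coded as a loss and weighted more heavily than the matching gain, the trade looks unattractive from either starting point: the status quo bias in miniature.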

Other behavioural paradoxes 

Behavioural economists have also explored other forms of human decision-making to see how consistent or how logical people are, only to discover time and again that we are frequently influenced by cognitive biases and heuristic-based thinking patterns that are not in keeping with classical economic theories. In traditional economic thought, people can be divided into risk-takers and risk-avoiders, sometimes simply referred to as those who are risk-loving and those who are risk-averse. Classical economic theory then adds expected utility theory, which claims that people, being rational utility maximizers, will make use of all available information, assessed according to mathematical reasoning, in order to choose the options that provide the highest level of happiness and satisfaction. But guess what? People do not always reason this way.
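Stated in symbols (the notation is mine, but the formula is the standard one), expected utility theory values an option offering outcomes x_i with probabilities p_i as

  EU = \sum_i p_i \, u(x_i)

and says that a rational agent simply picks the option with the highest EU. The experiments that follow show how reliably people depart from this rule.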

In one study, participants were given a choice between a one-week trip to England and a three-week trip to England, France, and Italy. They simply had to choose between the two options, but there was a bit of a catch. The first option was 100 percent guaranteed; the second came with only a 50 percent chance of winning. In other words, the chance of winning the one-week trip to England was twice as good as the chance of winning the three-week trip to England and two continental European countries. What did people do? It turns out that 78 percent chose the sure bet and opted for the one-week trip to England. Only 22 percent of the participants gambled on the option of three weeks of European travel. So far, so good: people are greatly influenced by certainty, and preferring a sure thing is something classical theory can accommodate.

Kahneman and Tversky then ran the same game with a different set of odds. This time the chance of winning the three-week European vacation was only five percent, while the chance of winning the one-week trip to England was 10 percent. Now 67 percent of the participants chose the option with only a five percent chance of winning; that is, they chose the long-shot European trip. Take note that the ratio of the odds in both cases is identical: two to one. Still, participants switched away from the more probable prize, because it turns out we weight probabilities differently depending on how we understand the odds. Once again, we tend to reason in ways that, according to the rational principles by which traditional economics is governed, are seen as illogical, irrational, and unreasonable. As I said at the beginning, we are not always consistent, though we are always perfectly human.
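To see exactly why this reversal is inconsistent with expected utility theory, write out the two comparisons (the derivation is mine, but the point is the standard one behind what is usually called the common-ratio effect):

  0.50\,u(\text{three weeks}) > 1.00\,u(\text{one week}) \iff u(\text{three weeks})/u(\text{one week}) > 2

  0.05\,u(\text{three weeks}) > 0.10\,u(\text{one week}) \iff u(\text{three weeks})/u(\text{one week}) > 2

The two conditions are identical, so an expected utility maximizer should choose the same way in both versions. The observed flip is what prospect theory later explained by letting people weight probabilities nonlinearly, over-weighting certainty in particular.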

What does this have to do with rhetoric? 

There are many further examples of reasoning that flies in the face of traditional economic theory, including sunk costs, choice architecture, and default assumptions. But I am going to bring this lecture to a close by taking us back to the beginning, so to speak, by reiterating the relationship between rhetoric and behavioural economics.

First, it is clear that behavioural economics, like rhetoric, is concerned in various ways with the phenomenon of persuasion. Whereas rhetoricians might study the different techniques by which people are actually persuaded, such as Aristotle’s ideas about ethos, logos, and pathos, behavioural economists are interested in the reasoning processes that go on behind the scenes as we respond to these techniques in coming to a decision. Moreover, behavioural economists are frequently curious about the ways that our responses are not easily predictable from within the discipline of classic economic theory, a point that relates to Toulmin’s claim that you cannot understand all forms of human communication from inside the field of logic. In order to understand different problems of logic, he says, you have to get outside of logic. In order to understand the apparently irrational things people do in coming to a decision, it is sometimes important to get outside of the discipline that proclaims itself the dominant field for making sense of such things: economics. 

Moreover, behavioural economics studies the decision-making process from a wide range of academic perspectives, several of which I have barely touched on and a few of which I have had to leave out entirely. For instance, behavioural economics also draws on research from anthropology, evolutionary psychology, and, most recently, neurophysiology and brain mapping. Rhetoric may not rely a great deal on these particular fields, but it certainly does take account of findings in other disciplines in order to understand the psychology and history of persuasive tactics and strategies. Rhetoric is a transdisciplinary field of research, and this is certainly true of behavioural economics as well. In both cases, there is a strong interest in creating alliances across disciplinary boundaries to form as complete a picture of the human subject as possible.

One final thought. Traditional economic theory says rather little about literary or artistic modes of expression, probably because such expression is not directly implicated in the sort of rational enterprise that constitutes the image of the rational, utility-maximizing human subject. But we know that people are deeply impressed and influenced by things such as art and narratives, especially narratives that contain bold and graphic imagery and allusions. So here is one final example of an experiment from social psychology, one that shows a more direct link to communication studies than some of the others I have presented. Researchers at Stanford University presented a randomly selected group of online readers with information concerning a rise in crime rates over the past three years. In one scenario, the researchers depicted the rise in crime with the metaphor of a ravaging beast rampaging through the city and wreaking havoc. In the second scenario, the researchers provided the same information—same statistics, same details about the rise in crime rates—but they changed the word beast to the word virus. Crime was like a virus, or an illness, that was infecting the body politic. Later, the subjects were asked to indicate their preferred solution to the problem of crime. Those who had read about the rampaging beast recommended catch-and-cage solutions: catch the criminals and lock them away, a tough law-and-order mentality. Those who had read about the virus recommended solutions that focused on the removal of unhealthy conditions; they wanted to see the conditions that can breed crime, such as poverty and unemployment, eliminated. This takes us back to Burke’s pentad, of course, but it also shows how people reason things out—and come to particular decisions—according to how information is presented rather than just what information is presented. Reasoning, in other words, is far more complicated than the rational utility-maximizing theorists would have us believe.

Notes 

  1. Gorgias explains in Gorgias (Plato, 1997) that a boxer could apply his pugilistic skills to advancing his personal interest by beating up people in order to get his own way. The problem is not with the art of boxing, which could be very useful, Gorgias says, but with the wickedness of the man who uses this skill in an unethical way. This remains a problem today.
  2. People can be paid for blood donations in some places, such as the United States, but blood donation in Canada is still strictly on a volunteer basis, though Canada buys blood products from countries where the donors were probably paid. There are also private clinics in Saskatchewan and New Brunswick that will pay for donations.
  3. Streaming music services often do this by alerting you to the music your friends are currently listening to.

References 

Cialdini, Robert. (2016). Pre-suasion: A revolutionary way to influence and persuade. New York, NY: Simon & Schuster.

Douglas, Kate. (2015). Slime-mould economics. New Scientist, 227(3031), 38–41. doi:10.1016/S0262-4079(15)30837-X

Gneezy, Uri, & Rustichini, Aldo. (2000). A fine is a price. Journal of Legal Studies, 29(1), 1–17. 

Iyengar, Sheena, & Lepper, Mark R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79(6), 995–1006.

Kahneman, Daniel. (2011). Thinking, fast and slow. Toronto, ON: Anchor Books.

Kahneman, Daniel, & Tversky, Amos. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

Midgley, Mary. (2018). What is philosophy for? London, UK, & New York, NY: Bloomsbury Academic.

Plato. (1997). Gorgias. (D.J. Zeyl, Trans.). In J.M. Cooper (Ed.), Plato: Complete works (pp. 791–869). Indianapolis, IN, & Cambridge, UK: Hackett Publishing Company.

Thaler, Richard. (2015). Misbehaving: The making of behavioral economics. New York, NY, & London, UK: W.W. Norton & Company.


McCarron, Gary. 2021. Lecture 10: Behavioural Economics and Rhetoric. Scholarly and Research Communication, 12(1), 21 pp. doi:10.22230/src.2021v12n1a381

© 2021 Gary McCarron. CC BY-NC-ND