Make Arguing Great Again
Why We Should Argue
For good ideas and true innovation, you need human interaction, conflict, argument, debate – Margaret Heffernan
I love argument, I love debate. I don’t expect anyone just to sit there and agree with me, that’s not their job – Margaret Thatcher
Arguing is hard. We’ve all felt the unpleasantness that intractable disagreements bring into our lives. We like to see arguments as contests featuring a winner and a loser. And because we are often emotionally invested in our point of view, arguments easily escalate into heated ones. Experienced this way, outcomes are bleak. If we “win” the argument, we could lose the support of a colleague, confidant, or whoever we are arguing with. If we “lose” the argument, we lose whatever our argument set out to achieve.
However, arguing – discussing, debating, and drawing from evidence – is likely the best way to grow in our knowledge of the world. Through the process of conjecture and refutation, aided by the progressively sophisticated tools we use to probe reality, we keep ourselves close to the truth. America’s Founders long recognized the value of argument, or “free speech”, in this regard. Steven Pinker, a prominent public intellectual, goes even further. He opines that “free speech” – talking about issues openly and collectively, i.e., arguing – is the only way that humans, throughout our evolutionary history, have acquired reliable knowledge about the world.
Given, then, that arguing brings with it a host of benefits, how can we harness them to reach collectively agreed-upon truths without incurring argument’s pernicious effects? This question has occupied thinkers for millennia. In the past several decades, cognitive, social, and behavioral psychologists have contributed by researching what happens to people when they argue. By adopting their insights, and the insights of others, we can formulate strategies to steer our arguments toward consensus-seeking and away from the one-upmanship games into which they otherwise tend to degenerate.
Before continuing, it is important to note what this post does not intend to cover. It does not address instances when external circumstances, like someone’s vocation, moneyed interests, etc., stop them from openly considering particular sources of evidence. The prolific author Upton Sinclair Jr. phrases one aspect of this well: “It is difficult to get a man to understand something when his salary depends on his not understanding it”. Someone like that may agree with you in private, but they cannot and will not acknowledge their agreement openly, so don’t expect them to.
To give an example, some Christian teaching institutions in the United States in the early 20th century were pressured to profess conformity to previously held fundamentalist beliefs, like the special creation of humanity (as opposed to our speciation from a common ancestor), or risk funding cuts from supporters and gate-keepers; dissenting faculty risked losing their jobs. Faculty members and administrators therefore had to pretend to hold beliefs they didn’t actually hold – given their respect for scientific evidence – to safeguard their funding and their positions.
In the above scenario, other tools, like persuasion and negotiation, enter the picture. They are ways of getting the other party to see tangential benefits of adopting your ideas, sometimes regardless of their standalone merits. As such, persuasion and negotiation are useful in cases where neither party can agree on a common evidential base from which to derive truths, as in the case above (i.e., fundamentalist interpretive commitments to religious texts versus the deliverances of modern science). The persuader must, therefore, reframe the issue by appealing to other factors, e.g., financial sustainability, the value of ecumenism, or respect for scholarly rigor, to bolster his or her case. This post, however, does not intend to cover these techniques directly because they aren’t necessarily aligned with consensus-seeking that’s calibrated toward truth.
By contrast, this post intends to remain upstream and focus only on arguments where the arguer genuinely believes that his or her position has superior truth value and is interested in reaching a consensus on it. Some of what will be discussed overlaps with persuasion and negotiation, but those techniques come into play later in an argument, after the intractable disagreements are uncovered, and they are not this post’s focus.
One last note: this post does not intend to offer guidance for arguments in relational settings, e.g., arguments which are caused by one party feeling unloved or under-appreciated, and which usually manifest as verbal jousts over trivial matters, passive aggression, and so on. We know how emotionally debilitating those can be for everyone involved. But they are not the subject of discussion here. Again, there may be some overlap, but relational-type arguments will not be the focus of this post.
Why Arguments Often Miss the Mark
I have spent the best years of my life giving people the lighter pleasures, helping them have a good time, and all I get is abuse, the existence of a hunted man – Al Capone
The best argument against democracy is a five-minute conversation with the average voter – Winston Churchill
Arguments only confirm people in their own opinions – Booth Tarkington
To understand why, we must understand how arguing fits into our evolutionary history. Genetic and material evidence from human and proto-human remains converges with independent lines of evidence within the neo-Darwinian evolutionary paradigm to show the valuable function sociality had in helping our species flourish (see the work of Edward Wilson, Robin Dunbar, Jonathan Haidt, Dan Sperber, Richard Joyce, and Martin Nowak). Getting our social groups to consider our perspectives and adhering to the groups’ social norms were crucial skills for any member of our species who hoped to survive in groups and flourish.
Arguing, like many other human traits, evolved to help us secure the resources we needed. Moving opinion was far more important than truth-seeking. Our brains have therefore learned to erect nigh impenetrable walls of emotion – the psychological experience of our value judgments – around our beliefs, all in the service of remaining convinced that our arguments are right and of arguing for them more persistently, rather than seeking the truth dispassionately. This, in turn, explains (and exacerbates) the host of cognitive biases, heuristic quirks, and logical fallacies we fall prey to when arguing. There are so many of these that whole books have been written about them. It would serve the reader best to consult those sources for more details (like Daniel Kahneman’s Thinking, Fast and Slow and Madsen Pirie’s The Use and Abuse of Logic), but some are worth mentioning here.
When people who hold some belief are confronted with disconfirming evidence for it, the part of their brain which handles logical arguments, the dorsolateral prefrontal cortex, doesn’t light up nearly as much as the parts of their brain which handle emotion and moral judgment: the orbitofrontal, anterior cingulate, and posterior cingulate cortices. In other words, when we receive disconfirming evidence, we don’t immediately consider whether our views might be wrong. Rather, we feel emotional discomfort (i.e., we experience cognitive dissonance). We then seek to release that discomfort by rationalizing away the evidence or resorting to ad hominem dismissals of the source (among other things). Once we’ve removed the discomfort and reinforced our opinions, we are rewarded with a flood of adrenaline and dopamine which makes us feel great.
The backfire effect documents the above phenomenon. Unfortunately, a study on opinions about U.S. gun control indicates that higher intelligence predicts the participants’ likelihood of rationalizing away evidence that falsified their point of view. It seems, therefore, that intelligence doesn’t help. Rather, the first step to countering the backfire effect is to inform people about it, hence this paragraph.
The Dunning-Kruger effect, or illusory superiority, is also predicted by the evolutionary function of arguing. It is hard to know how much you don’t know about something if you don’t know what the milestones are for genuine expertise in that field. Indeed, our baseline confidence in our knowledge of a field tends to be inversely related to our actual expertise in it. Chronic uncertainty would have been disastrous to the survival prospects of our ancestors, but substantial cultural scaffolding since then has enabled us to normalize uncertainty in, for example, our scientific institutions, which has in turn allowed us to make incredible forward strides.
This feeling of certainty and conviction, which often arises independently of rational deliberation, has, for the reasons stated above, evolutionary and cultural value. Remaining convinced about shared beliefs served our hunter-gatherer ancestors’ survival prospects well.
This explains why certainty and conviction feature in many of our early ethical projects, i.e., our ancient religious and legal texts (in the case of the New Testament, see for example James 1:8 and Revelation 3:16). Our evolutionary background explains many other things about us, like why our insistence on being right is like a physical addiction from which we derive pleasure; why we prefer simple solutions to complex problems; why we succumb to the confirmation bias; why we fall for confidence games (see Maria Konnikova’s The Confidence Game); and why our valuable cognitive heuristics often misfire.
On that last point, make no mistake, our cognitive heuristics are the systems that have kept us alive. They evolved to help us quickly process massive amounts of information and solve complex problems with minimal effort. Without them, we would be as inflexible as non-self-learning computers and would have long become extinct. But, like any of our cognitive abilities, when they are applied in contexts alien to their original function, they can lead us astray.
For example, our heuristically enabled ability to detect causation misfires when we do not consider regression to the mean. The psychologist Daniel Kahneman offers the case of flight instructors who recall that when they praise their cadets after exceptional flying sessions, their cadets perform badly afterward. And when they berate their cadets for terrible flying sessions, their performance improves the next time. So, they conclude that it is better to berate cadets rather than praise them to improve their overall performance.
Kahneman notes that the instructors’ conclusions fail to account for regression to the mean: both terrible and exceptional performances are statistically improbable. So, after either one of them occurs, the cadet’s next flight performance will likely regress to the mean. In other words, there is no causal connection between berating cadets and improving their flight performance. The instructors’ causal detection systems have misfired.
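Kahneman’s point can be checked with a quick simulation. The sketch below is purely illustrative – the skill level, noise scale, and thresholds are hypothetical numbers I have chosen, not Kahneman’s. Each session’s score is fixed skill plus random luck, and no praise or berating occurs at all; yet the session following an extreme one still drifts back toward the average:

```python
import random
from statistics import fmean

random.seed(42)

# Every cadet has the same underlying skill; a session's score is
# skill plus independent noise, so extreme scores are mostly luck.
SKILL = 70.0

def fly():
    """One flying session: skill plus random noise (hypothetical 0-100 scale)."""
    return SKILL + random.gauss(0, 10)

best_followups, worst_followups = [], []
for _ in range(100_000):
    first, second = fly(), fly()
    if first > 85:            # "exceptional" session (would earn praise)
        best_followups.append(second)
    elif first < 55:          # "terrible" session (would earn berating)
        worst_followups.append(second)

# Follow-ups after both kinds of extreme session cluster near the mean,
# even though no feedback was given: pure regression to the mean.
print(round(fmean(best_followups)), round(fmean(worst_followups)))
```

Because the second session is statistically independent of the first, both follow-up averages land near 70: the “decline” after praise and the “improvement” after berating appear without any feedback at all.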
We also fall for the availability heuristic: we let the ease with which examples come to mind unduly influence our judgment of which issues most warrant our attention or support. Since the turn of the millennium, American media have bombarded Americans with the horrors of terrorism. The U.S. has since spent over 1.3 trillion USD fighting terrorism. However, if we count the total number of people killed directly by terrorist acts in the last 40-odd years, about 14 000, that amounts to roughly 90 million USD spent for each person killed. As the economics professor Abby Hall notes, if you are American or European, you are more likely to be killed by a lightning strike, a bathtub filled with water, a toddler wielding a firearm, or a police officer than by a terrorist attack.
By contrast, over 35 000 people die in car accidents in the U.S. every year on average, and more still die prematurely from preventable causes. However, because this information doesn’t make the headlines or rouse the emotions in quite the same way, we don’t consider how shifting resources and attention from fighting terrorism to improving transport and health policies (regardless of whether such a shift is politically feasible) could save many more lives.
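The back-of-the-envelope arithmetic behind these figures can be made explicit. This is a minimal check using the post’s own rounded estimates, not precise accounting:

```python
# All figures are the post's rounded estimates, not official statistics.
counterterrorism_spend = 1.3e12   # USD spent fighting terrorism
terrorism_deaths = 14_000         # direct deaths from terrorism over ~40 years
annual_car_deaths = 35_000        # average U.S. road deaths per year

# Spending per terrorism death: ~93 million USD, i.e., "roughly 90 million".
spend_per_death = counterterrorism_spend / terrorism_deaths
print(f"{spend_per_death / 1e6:.0f} million USD per terrorism death")

# Car accidents kill more people in five months than terrorism
# has in roughly four decades.
print(annual_car_deaths * 5 / 12 > terrorism_deaths)
```

The comparison makes the availability distortion concrete: the rarer, more vivid risk absorbs vastly more money per life lost than the common, unglamorous one.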
Our eagerness to be certain exacerbates heuristic misfiring and can cause us to rely on fallacious reasoning to hide our lack of knowledge or cogency (sometimes to ourselves as much as to others). In philosophy, these are classed under formal and informal fallacies. Informal fallacies include ad hominem type fallacies. They attempt to invalidate a point of view by attacking the character or competence of the person who holds the view, rather than the view itself.
Some attacks are more direct than others. The attacker could insinuate ignoble intentions behind someone else’s point of view, or demean the other person’s perspective before a debate so that an audience finds them unpleasant. By asking irrelevant and hard-to-answer questions, the attacker can also skew the argument in their favor by suggesting to listeners that the other person lacks competence. Other fallacies, which you can look up in your own time, include the straw man, special pleading, and slippery slope fallacies.
Our propensity to distort reality for the sake of feeling like we’re right rather than seek the truth dispassionately might on the surface suggest that arguments governed by truth-seeking methods, therefore, hold little to no weight. However, this cannot be true. While research shows that some of us are genetically prone to seeking certainty and closure at the cost of everything else, the overall human desire for certainty lies on a continuum, and other studies show that reasonable people do change their minds when presented with strong arguments, especially if they were motivated to assess those arguments for themselves (the studies can be found here and here).
With that optimistic takeaway in mind, let us consider how we can argue better.
How We Can Argue Better
If he who employs coercion against me could mold me to his purposes by argument, no doubt he would. He pretends to punish me because his argument is strong, but he really punishes me because his argument is weak – William Godwin
The difficult part in an argument is not to defend one’s opinion but rather to know it – Andre Maurois
The only thing that permits human beings to collaborate with one another in a truly open-ended way is their willingness to have their beliefs modified by new facts. Only openness to evidence and argument will secure a common world for us – Sam Harris
Before you engage in an argument or discussion, find out if the topic is worth arguing about. If the argument is about something trivial or won’t change anyone’s mind, then don’t bother.
Once you have decided that something is worth arguing about, make sure your position is the correct one. This might seem obvious, but we oftentimes haven’t examined our position carefully enough to know whether we can argue for it well. Research the topic, consider different perspectives, test your ideas against others. If you realize that your argument isn’t as strong as you originally thought, don’t use it and hope that your opponents won’t catch on. This isn’t high school debating. Drop bad arguments.
Also, know what kind of arguments you are making and how strongly conclusive they are. Are they deductive, inductive, abductive, arguments from analogy, or reductio ad absurdum? Are your arguments both valid and sound? Are your sources credible? Have you considered sample sizes? Are the samples representative? Are there credible replications? Have you accounted for ambiguities in the language of the studies consulted? Make sure your argument isn’t fallacious.
Look up the list of formal and informal fallacies and work through them for yourself. I’ll list some here. They include the causation/correlation divide, illicit process, false choice, generalizations, faulty/abusive analogies, red herrings, circular arguments, concealed questions, literalisms, begging the question, absurd scenarios, and the two-wrongs argument. I also recommend working through the whole list of cognitive biases at least once.
If, after extensive research, your argument checks out, consider whether your intended interlocutor is emotionally ready (and willing) to hear your argument. If they are not, consider dropping it for the time being. I don’t have to tell you how quickly well-intentioned arguments can go awry when one party is not prepared for it.
When preparing to present your argument, have a clear structure. Make it clear at the outset what you are arguing for and why. Divide your argument into bite-sized elements. Use bullet points, and try to follow a premise, supporting facts, and conclusion sequence for each point you make. Anticipate your opponents by raising common objections and resolving them. Only criticise ideas, never people.
When it is your opponent’s turn to speak, listen hard. Don’t interrupt them. Make sure your non-verbals are under control: don’t smirk, roll your eyes, or show any kind of impatience or contempt. Try to understand their argument. When they are finished, don’t assume you now fully understand their argument and its implications. Put their argument into your own words and ask if you got it right. You may also want to use the Socratic Method to further interrogate them. Criticise the strongest form of the argument they are advancing. If their argument is good, acknowledge it, but also look for a win-win, some way to show how your ideas may be a better fit, given their assumptions and core values, than their own. If you encounter information you didn’t know about, say that you’ll check it out and get back to them.
If their argument is very good, admit that your argument is weaker and treat it as a learning opportunity. We can’t know everything. Sometimes our best efforts fall short. Don’t continue to argue after you’ve lost. Don’t be a sore loser either. Don’t say “we’ll see who’s right in the end” or “I’ll get you next time”. If things get heated, you may have to apologize.
If their argument is bad, explain why. Don’t say: “that’s begging the question!” or “that’s the gambler’s fallacy!” Very few people are familiar with logical fallacies or cognitive biases. Furthermore, using sophisticated phrases to criticise people only rouses negative emotion and makes them more entrenched in their position. Just explain in plain words why they are mistaken. Stick to the facts. Don’t make character attacks.
If your opponent is making character attacks on you, be sure to address them, especially if there is an audience. Point out that your opponent is trying to make you look evil, incompetent, or ignorant, and that these attacks are irrelevant to the truth of your argument. Don’t lose your cool. If this person has a history of resorting to character attacks, including indirect ones, you might want to consider bringing it up pre-emptively at the beginning of the debate before that person has a chance to speak so that the audience is prepared for it and can factor it out of their judgments when considering each side.
If there is no audience, ask your opponent clarifying questions about the intent behind their attacks and clearly state that you’d like the conversation to get back on track. If they refuse to stop, you may have to walk away.
Don’t be a sore winner. Remember that the goal of arguing is to progress in knowledge and reach consensus about it. It’s not about winning. If they have good intentions, show that you understand where they are coming from. Say that you recognize and appreciate their efforts even if you cannot fully agree with their conclusions. Don’t force them to acknowledge their loss. Nobody likes being wrong, and most people would prefer to change their minds on their own terms (remember that every healthy person has an ego; without egos, we would have long become extinct). So, empower the other person to come to the same conclusion as you. Help them save face by saying that you would have come to the same conclusions were it not for some extra piece of information or experience you had. Offer them action steps in line with their goals, and work with them on a joint project.
Sometimes, your opponents might tacitly acknowledge that you’ve brought up some good points, but you sense that they might not be ready to change their minds then and there. In cases like these, taking a cue from Inception, leave them with the ideas and concepts that follow from your argument. In other words, plant seeds. They may soon come to value the merits of your point of view.
Can We Really Change Minds?
No old road leads to new destinations! Change begins when one realizes that it is unwise to pour a new wine into an old wineskin. If you change your mind, you have to change your actions too! ― Israelmore Ayivor
Arguing with reason is hard. Our evolutionary heritage has endowed us with impressive cognitive abilities. However, because they’ve been calibrated toward survival in social groups, we’ve evolved to be more rationalizing than rational, to desire winning more than truth-seeking. And yet, over two millennia ago, Aristotle already held that reason, or logos, is the defining telos of humanity. Reason (and here I fully acknowledge that I may be departing from Aristotle’s meaning of the word) has since contributed in no small way to the agricultural, industrial, and technological world we live in today.
So, let’s not give reason, and its ability to change minds, short shrift.
If you like this post and want to learn more, check out Jonathan Herring’s How to Argue Powerfully, Persuasively, Positively; Robert Mayer’s How to Win any Argument Without Raising your Voice; and Sia Mohajer’s I’m Right, You’re Wrong.