Wednesday, September 16, 2020

The Pillars of Argument: Covering Your ARS

So far, I’ve talked about clarifying the language and structure of arguments, as a first step in evaluating them. But how do we take the next step, and actually consider whether they’re good or bad arguments? And what even makes an argument a good one?

The short answer is that a good argument has premises that adequately support its conclusion. How? By meeting three criteria: they need to be acceptable, relevant, and sufficient. The resulting abbreviation, ARS, sounds a little too much like the British word “arse” to some people, so sometimes you’ll see it shuffled to spell RAS, or ARG (by changing “sufficient” to “sufficient Grounds”). But I say damn the torpedoes. Let’s keep ARS, and remember this: when you’re making a good argument, you’re covering your ARS.

Are the Premises Acceptable?

Truth and Plausibility


The most complex of the ARS criteria is acceptability, because there are several ways to judge whether premises are acceptable. The first requirement is that acceptable premises are true, or at least plausible--that is, they can be provisionally accepted for the purposes of the argument. If they aren’t true or plausible, then they can’t support the conclusion, and thus aren’t acceptable. Here’s an example:


P1: No mammal lays eggs.

P2: Platypuses lay eggs.

___________________________

C: Therefore, platypuses are not mammals. 


That would be a good argument, except that the first premise is false: some mammals (platypuses and echidnas) do lay eggs. Obviously, a premise can’t be acceptable if it’s false.
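If it helps to see what’s going on, here’s a toy sketch in Python (my own illustration, not anything the original argument depends on) that models the two categories as sets. The premise “No mammal lays eggs” can only be acceptable if the sets don’t overlap--and they do:

    # A toy model of the platypus argument. The animals listed are just
    # illustrative examples.
    mammals = {"dog", "whale", "platypus", "echidna"}
    egg_layers = {"hen", "trout", "platypus", "echidna"}

    # P1: "No mammal lays eggs" holds only if the two sets share no members.
    p1 = mammals.isdisjoint(egg_layers)
    print(p1)  # False: platypuses and echidnas appear in both sets, so the
               # premise fails, however tidy the argument's form.

The code isn’t the point, of course. The point is that a false premise can’t support a conclusion, no matter how neatly the logic connects them.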


But then, how do we know which premises are true and which are false? That’s the big question, and there are no easy answers. Some philosophical skeptics claim there’s little or nothing we can know for sure. We might say we know something is true because we saw it with our own eyes, but we’ve seen how error-prone perception and memory can be. People in prehistory could have argued that the earth is flat because they could see with their own eyes that it was flat. But the earth isn’t flat. Perception can be limited and deceiving, so seeing something with your own eyes doesn’t mean you see it accurately.


Another problem that extreme skeptics point to is the infinite regress of premises, sometimes called the skeptical regress. We might say a premise is true if it’s supported by other true premises, but then, how do we know those premises are true? If we say that they’re supported, in turn, by other true premises, then where does the chain of premises end? Consider the following dialogue:


Jill: “Some mammals lay eggs, because platypuses are mammals, and they lay eggs.”


Lil: “How do you know platypuses lay eggs? Have you ever seen it happen?”


Jill: “No, but I’ve read it in books written by other people who have seen it. It’s well-documented that platypuses lay eggs.”


Lil: “Then how do you know the people writing the books aren’t lying?”


Jill: “Because I don’t know of any reason why they would lie.”


Lil: “How do you know there aren’t reasons you haven’t thought of? What if platypuses are a hoax?”


Jill: “......”


This line of questioning could go on forever, if Lil refuses to accept any of Jill’s premises. And to some extent, Lil does have a point. Everything Jill knows about platypuses is secondhand. And that’s true for all of us--if not about platypuses, then about other things. Most of what we think we know is secondhand. We didn’t see it ourselves, so we’re taking people’s word for it. I didn’t see dinosaurs walk the earth, or watch the Battle of Waterloo. I’ve never seen Greenland with my own eyes, and I’ve never seen a platypus in real life. But does that make it unreasonable for me to believe in all these things? No, because it’s much more plausible that they exist than that they’re elaborate, centuries-long hoaxes. The same argument about plausibility could be made to someone who says that perception is fallible. Yes, perception is fallible, but it’s not THAT fallible. Yes, it’s conceivable that we all live in the Matrix, and all our experiences have just been images piped into our brains, but it’s not at all likely.


So, while the extreme skeptics are right that there’s little we can know with absolute certainty, it’s a mistake to think we need absolute certainty to accept premises as true. Unless we’re talking about things that are true by definition (“No bachelor is married”) or logical necessity (“Platypuses either exist or don’t exist”), when we say something is true, that just means it’s extremely likely to be true.


So how do we know what’s extremely likely to be true? There are a few ways. If we see a thing happen with our own eyes, and it’s not wildly at odds with our normal experience, that’s a reason to think it’s probably true. If I see someone walking a dog down the street, there’s no reason to think my eyes deceived me. On the other hand, if I see someone walking a miniature horse down the street, I might want to look twice. It wouldn’t hurt to check to see if other people saw it, too, or if someone had taken a picture of the horse. The more we try to verify our direct perceptions, the more likely they are to be true. That still doesn’t make them certain, because some illusions are very persistent, and look the same to most people. The earth looks flat to everyone walking on it, even though it isn’t. Still, lots of things we experience firsthand are perfectly plausible, and there’s no particular reason to doubt them. I could speculate that I’m lying in bed dreaming right now, and not really sitting here typing, but what’s most likely is that I’m sitting here typing.

Testimony


But what about all those things we don’t see with our own eyes? What about when we need to rely on the testimony of others? After all, that’s something we have to do every day. If I watch the news, check the weather forecast, read a history book, or hear that platypuses lay eggs, I’m relying on testimony. Luckily, there are several ways of judging the plausibility of testimony. First, it’s more plausible if it fits with our ordinary experience: if the weather report says a snowstorm is bearing down on Denver in January, that’s plausible. If it says a hurricane is coming, that isn’t plausible. A person’s claims are also more plausible if they don’t conflict with other claims they’re making. If a coworker tells you they were out sick yesterday, that seems pretty plausible…unless you hear them telling someone else they went skiing. Testimony is also more plausible if the person giving it has always been trustworthy in the past, and doesn’t have conflicts of interest that might tempt them to bend the truth. If your coworker who lied about being sick calls in sick again, you have good reason to be suspicious. If a car salesman says you won’t get a better deal anywhere, you have reason to take that with a grain of salt, because he has an interest in selling you that car. That doesn’t necessarily mean he’s lying, of course. It just means his claims need extra scrutiny.


What about the testimony of experts, like doctors, scientists, and economists? Let’s stick a pin in that question, and come back to it in a later post about arguments from authority.

Other Unacceptables: Unclarity, Contradiction, and Presumption


Premises are also unacceptable if they’re too vague or ambiguous for us to judge whether they’re true, or even what they mean. I could make the claim that free will exists, and base it on a premise that says something like, “The will exists in four-dimensional quantum potentiality beyond time and space”. But what does that even mean? Nothing, because I just made it up. Such pseudoprofundity is common and impresses many people, but it’s often more or less meaningless, and can’t be taken as an acceptable premise. Premises are also unacceptable if they conflict with each other, or if they presume the truth of the conclusion they’re supposed to be supporting, as persuasive definitions and other circular arguments do.

Premises About Values


Oftentimes premises make claims about values, not facts. If I argue that bloodsports like dogfighting are morally wrong, and support it with the premise, “It is wrong to inflict suffering on animals for entertainment”, that premise isn’t an empirical fact. You can’t measure wrongness in a laboratory. Does that mean it’s unacceptable or meaningless, or that dogfighting is morally right? Of course not. There are very good reasons for thinking dogfighting is ethically atrocious, but those reasons can’t all be grounded in empirical fact. That makes judging premises (and whole arguments) about values and ethics unusually difficult. But it’s also extremely important. Most people will find the premise above acceptable--perhaps even true--even though it’s not an empirical fact. It’s a reasonable premise that we can start to build an argument on. We can’t prove it in a lab, but we can’t write it off as meaningless or unimportant, either. It would be nice to have more certainty than that, but sometimes we have to do the best we can.


Monday, September 14, 2020

The Structure of Arguments

In my last few posts, I talked about clarifying the language of arguments. Now I want to look past the words and sentences, to examine the underlying scaffolding or structure of arguments. Arguments can take many forms and combine in different ways. For example, a conclusion for one argument can become a premise for another argument.

Consider the following argument. 


“There’s a lot of snow in the mountains this winter. That means there will be a lot of snowmelt in the spring. So the rivers will be full in the spring and summer, and full rivers make for good kayaking. It’s going to be a great year for kayaking!”


One way to look at the structure of this argument is to make an argument map, which looks like the one on the right, with the conclusion at the top (this is arbitrary--some argument maps show the conclusion at the bottom). The map shows that this is what’s called a chain argument. Each premise supports a conclusion, which also functions as a premise in a further conclusion. 


Other arguments have a different structure. For example:


“I think Joe is short on money. He sold his kayak, and he’s been working a second job.”


Here, the two premises don’t connect in a series, as in a chain argument. Each supports the conclusion separately, so we can map it as a convergent argument.


Sometimes, two or more premises have to work together to support a conclusion, because they can’t do it by themselves. We see this with the following syllogism:


P. All cats are predators.

P. Fluffy is a cat.

_________________

C. Fluffy is a predator.


Here the two premises are linked, because neither can support the conclusion alone. The first can’t support it because it says nothing about Fluffy, and the second can’t because it says nothing about predators. This kind of argument can be mapped by linking the premises together, as in the map below.
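For readers who enjoy seeing this made fully formal, here’s a minimal sketch in the Lean proof language (my own formalization, not something from the post). The proof term needs both premises; delete either hypothesis and it no longer checks:

    -- The Fluffy syllogism, formalized in Lean 4. p1 alone says nothing about
    -- fluffy; p2 alone says nothing about predators. Linked, they suffice.
    variable (Animal : Type) (Cat Predator : Animal → Prop) (fluffy : Animal)

    example (p1 : ∀ a, Cat a → Predator a) (p2 : Cat fluffy) : Predator fluffy :=
      p1 fluffy p2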


Oftentimes, when you find unstated premises, they combine with stated premises to form linked premises. In the argument above about Joe, there are actually two unstated premises: 1. If Joe is working a second job, he probably needs the money. 2. If Joe sold his kayak, he probably needs the money. So, we actually have two sets of linked premises, which then converge to support the conclusion that Joe is short on money. This means our argument map is now a combination of convergent and linked premises:

Another thing argument maps can show is objections. For example, Joe’s other friend could object to the premise that “Joe sold his kayak” by saying he didn’t actually sell it, and then support his objection with a premise like “He left it at Lisa’s house.” Objections can contradict a premise, or they can contradict a conclusion directly. For example, someone could object directly to the conclusion by pointing out that Joe just bought a new computer, which suggests that he has money. But then you can object to objections, perhaps by pointing out that the computer was a gift. An objection to an objection is called a rebuttal. All this can turn into a complex, branching tree of arguments, sub-arguments, objections and rebuttals, as in the argument map below, with objections shown in red, and rebuttals shown in orange.
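If it helps to think of this in code rather than a picture, here’s a rough sketch (my own construction--the maps in this post are diagrams, not programs) of the Joe example as a tree of claims, supporting premises, and objections:

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        text: str
        supports: list[Claim] = field(default_factory=list)    # premises
        objections: list[Claim] = field(default_factory=list)  # attacks

    # A rebuttal is just an objection to an objection, one level deeper.
    joe_map = Claim(
        "Joe is short on money",
        supports=[
            Claim("Joe sold his kayak",
                  objections=[Claim("He didn't actually sell it",
                                    supports=[Claim("He left it at Lisa's house")])]),
            Claim("Joe has been working a second job"),
        ],
        objections=[
            Claim("Joe just bought a new computer",
                  objections=[Claim("The computer was a gift")]),
        ],
    )

Notice that the structure is recursive: every claim can have its own supports and objections, which is exactly what lets argument maps grow into the branching trees described above.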

So what’s the point of all this mapping? While there’s no need to do all this with every argument you see, mapping arguments is useful for at least two reasons. First, it makes the underlying structure of the argument clear, which helps in evaluating it. Second, it helps to keep track of all the premises, arguments, and objections, and to see how they fit together. Think about a long debate thread on a social media site. Even if all the contributors are using crystal-clear language and stating arguments (not just naked claims), it can be hard to keep track of what points have been made, and what reasons and objections have been made to them. An argument map can show at a glance how the argument unfolded. Like a map of a city, it gives an overview of the terrain being covered.


One thing to keep in mind is that a map like this could represent two people debating a question, or it could represent a single person debating with himself. And debating with yourself is a good thing to do (as long as you don’t do it out loud in public) because as we’ve discussed in previous posts, it’s as important to question your own reasoning as the reasoning of others. It’s important to think of objections to our own ideas, because confirmation bias makes it easier to think of supporting evidence than contradictory evidence. If the argument map above does show a single person’s reasoning, then he’s putting some real thought into whether his friend Joe is short on money. He’s not just jumping to that conclusion.


In fact, “Joe is short on money” doesn’t necessarily have to be considered a firm conclusion. It could also be seen as a contention, a hypothesis, or an issue under debate. If you’re putting forward a firm conclusion in order to convince someone to accept it, you’re making a persuasive argument. But if you’re trying to decide what conclusion is best supported by the premises and objections at hand, then you’re using logical arguments as a means of reasoning. While making arguments for a position is a good skill to have, reasoning is a higher goal. That’s an important thing to remember. Critical thinking isn’t about defending arguments--it’s about evaluating them. It’s not about casting around to find support for your pre-existing conclusions. It’s about deciding what conclusions really are supported.

Thursday, September 10, 2020

Humpty Dumpty and the Meaning of Words

Humpty Dumpty talking to Alice in Through The Looking Glass

One way to avoid fuzzy reasoning and miscommunication is to make sure we have the definitions of words nailed down. First, that ensures that we’re using them consistently ourselves. Second, when we argue with other people, it reminds us to check whether we’re using words in different ways than they are. That’s more common than people realize, and it causes trouble. Many heated quarrels happen unnecessarily because the people arguing don’t realize they’re using the same word in different ways.

An example I’ve seen many times recently is the word “racist”. Imagine that two people named Maria and Mike are talking. Maria says, “I wish you wouldn’t wear that sombrero on Cinco de Mayo. I think that’s racist.” Then Mike says, “How dare you call me racist! I think all human beings are equal.” Maria may see a distinction between calling an action racist and saying that a person is a racist. She may think a non-racist person can do racist things without meaning to, or having any animosity toward the other race. So she’s not saying Mike is a racist person. Mike, meanwhile, hears the word “racist” and assumes it means “a person who dislikes other races or thinks they are inferior”. That’s a pretty grave accusation, so it’s not surprising that Mike is offended. But he’s misunderstanding what Maria was trying to say. Now, I’m not about to weigh in on whether Mike’s or Maria’s definition is correct. My point is that they could avoid a lot of hurt feelings by realizing they’re defining the same word in different ways.

Some people will scoff at you if you ask them how they’re using a word, because for some reason they think words have one single definition that’s set in stone forever. Often they will pull out a dictionary to prove that a word should mean a particular thing (this is called the appeal to definition fallacy, or, more tongue-in-cheek, “argumentum ad dictionarium”). But that’s not how language or dictionaries work. Languages evolve, and that’s why dictionaries have to be updated. Words shift meaning and take on new meanings all the time, and all attempts to stop that process have failed. Samuel Johnson, who wrote one of the most influential dictionaries of all time, wrote in the preface that, “academies have been instituted, to guard the avenues of their languages, to retain fugitives, and repulse intruders; but their vigilance and activity have hitherto been vain; sounds are too volatile and subtile for legal restraints; to enchain syllables, and to lash the wind, are equally the undertakings of pride.” Of course, we spell the word “subtle” differently these days, and don’t use so many commas. And Johnson wouldn’t be surprised, because he knew that language evolves.

The definitions in dictionaries are just one type of definition, called a lexical definition, which simply reports how a word is being used at a particular point in history. In Lewis Carroll’s book Through the Looking Glass, Alice meets Humpty Dumpty, and they have the following conversation:
"When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less."

"The question is," said Alice, "whether you can make words mean so many different things."

"The question is," said Humpty Dumpty, "which is to be master—that's all."
Was Humpty right? If dictionaries just report how people use words, can I just use a word to mean anything I want it to? No, because the whole point of words is communication, so words have to be used in a way that people will understand. You can’t say “cat” and expect them to know you use that word to mean “walrus”. And you can’t define a word in a way that’s clearly at odds with reality. If a dictionary defined a walrus as a kind of cat, that would be a bad definition. Dictionaries can say how a word is used, but they can’t necessarily say how it should be used. At one time, people thought mushrooms were plants (they’re actually more closely related to animals), so many dictionaries probably defined them as plants. But that doesn’t mean they are.

This stuff about definitions might sound trivial, but many controversies in the culture wars turn on how words should be defined. Conservatives may say that marriage is defined as a union of one man and one woman. They believe this definition refers to an objective fact in the world, like the fact that walruses aren’t cats. But liberals don’t think the definition of marriage is fixed in this way, because they believe marriage is an evolving social construct like, for example, a language. I won’t make a judgment here, but one thing that’s clear is that you can’t resolve the issue by grabbing a dictionary. If a dictionary says a walrus is a cat, all that proves is that you have a crappy dictionary.

Another way definitions get swept up in culture wars is when people try to define words in particular ways that match their worldview. The most obvious example is the word “abortion”. Many pro-lifers want to define abortion as “the murder of an unborn baby”, while pro-choice people want to define it as “the ending of an unwanted pregnancy”. In both cases, these are persuasive definitions, used to get the upper hand in controversies by defining words in certain ways. But persuasive definitions are fallacious as arguments, because they’re circular--the definition assumes what needs to be proven. Whether abortion is murder or the justified termination of a pregnancy is precisely what’s controversial. To prove one side or the other requires arguments that don’t presuppose the conclusion. To say, “Abortion is murder, because the definition of abortion is the murder of an unborn child” is to argue in a circle. And so is saying “Abortion is not murder because abortion is just the termination of an unwanted pregnancy.”

If we want definitions to help clarify arguments, persuasive definitions are useless. They’re tools of rhetoric, not reason. A lexical definition from a dictionary may be useful, but it isn’t, well...definitive. The kind of definition we really need (and I promise this is the last kind I’ll mention) is a precising definition. As the name suggests, that’s a definition given to clarify exactly how we’re using a particular word. The definitions I’ve given on this blog for the words “argument” and “critical” are precising definitions, because they’re intended to clarify the exact sense in which I’m using them. That’s necessary because almost all words can mean more than one thing. But they can’t mean whatever we want them to, whatever Humpty Dumpty says.

Tuesday, September 8, 2020

Clarity and Cuttlefish Ink

In the last post, I talked about what an argument is in terms of reasoning, and how people often don't even make real arguments in debates. And even when they do make them, they can be hard to spot, because they're usually embedded in a tangle of metaphors, examples, asides, emotional appeals, and other rhetorical flourishes. That’s a mixed blessing. On the one hand, it makes real-life arguments far less clear than my examples in the last post. It makes the logical structure harder to see, and therefore harder to evaluate. Sometimes that’s done on purpose. On the other hand, language that’s nothing but bare-bones argument would be cold and uninviting, like a chair without upholstery. Humans aren’t going to get rid of the upholstery of language, and we wouldn’t want to--at least not completely--but we need to be able to see under it, to determine if it has a strong underlying structure. Or if it doesn’t.

Clarifying Language

Cuttlefish Ink: Wordiness, Jargon, Euphemisms, and Dysphemisms

So how do we go about clarifying arguments? First we clarify the language, and then we clarify the structure. I'll talk about structure later. For now, let's look at language, which can be unclear in many ways. It can be overly wordy and redundant, as in sentences like: “It is requested that all visitors proceed to the closest adjacent exit”. It may be full of unnecessary jargon. I once read an academic paper about “canine ludic behavior”, which in normal English means “dogs playing”. Like many kinds of language, jargon can be used for good and for ill. Sometimes it’s acceptable and even necessary. In this blog, for example, I’ve thrown out ten-dollar words like “heuristic” and “enthymeme” and talked about the difference between epistemic and instrumental rationality. None of these words are everyday English, but they’re necessary for clarifying basic concepts in critical thinking. In other cases, jargon makes language less clear. Sometimes that’s accidental, but other times it’s done to hide meaning, paper over unpleasant facts, exclude outsiders, or conceal weak reasoning behind big words and long sentences. As George Orwell once said, “The great enemy of clear language is insincerity. When there is a gap between one's real and one's declared aims, one turns as it were instinctively to long words and exhausted idioms, like a cuttlefish spurting out ink.”

Other sources of cuttlefish ink include euphemisms and their opposites, dysphemisms. Like jargon, euphemisms can be acceptable in certain circumstances. If your friend’s beloved aunt just died, you may want to lessen the blow by saying she “passed away”. That’s not deceptive; it’s just compassionate. But other times, euphemisms serve to obscure the truth. This can be merely irritating, as when an apartment is advertised as “garden level” when it’s really in the basement, but it can become truly sinister when it conceals ugly truths. When a general tells Congress there was “collateral damage in the civilian arena”, it’s because he doesn’t want to say that innocent civilians were killed and maimed, even though that’s what happened. In the pre-Civil War South, slavery was called the “peculiar institution”, to make it seem less horrible than it was. These euphemisms are not harmless.

While euphemism tries to put ugly things in a nicer light, dysphemism can be dishonest by putting reasonable things in an ugly light. For example, the Civil War was called the “War of Northern Aggression” in the South to make it seem less justified than it was. As a southerner myself, I know that southern people can still be a little too tricky with their words. I once worked with an aristocratic southern woman who came to work and said she had been “over-served”. She was actually just hungover. Bless her heart.

Vagueness

Two other enemies of clear language are vagueness and ambiguity. The distinction between the two is subtle, but important. Vagueness is simply a lack of precision or specificity. Here again, it’s not always a vice, and it can even be desirable in some cases. The framers of the US Constitution were deliberately vague about certain phrases, such as “cruel and unusual punishment” and “high crimes and misdemeanors”, to make it flexible enough to handle future challenges they knew they couldn’t foresee. On a more mundane level, if I say to a friend, “Sorry I didn’t return your email sooner, I was running errands”, there’s probably no need to specify the exact errands. In fact, my friend would think it was weird if I did. But if a parent asks a teenager where he’s been with the car for the last 12 hours, “I was running errands” probably isn’t going to cut it. It’s clearly an evasive answer in that context, and context is crucial.

Language is often vague because many concepts and categories can be relative, or have fuzzy boundaries. If I say, “Look at that huge spider!”, the word “huge” is relative. A huge spider is far smaller than a tiny horse. If we want our language to be clear, then, we need to use the right amount of specificity for the context. If you tell someone you have a really fat cat, they probably don’t need to know his exact weight. But if a vet is giving your cat a prescription, you might need to confess that he weighs twenty pounds.

Fuzzy boundaries can be tricky, too. I have a receding hairline, but it’s arguable whether I could be described as “bald”, because there’s not a clear line between “bald” and “not-bald”. A freshman philosophy student might argue that because “baldness” is a concept with fuzzy boundaries, it’s meaningless. But it isn’t. Yul Brynner was clearly bald. This is a harmless example, but people can use similar arguments in sinister ways. For example, the attorneys for the police in the Rodney King trial argued that acceptable use of force varies according to how violently a person is resisting, so there’s no easily defined line between acceptable and excessive force. That is reasonable, but then they concluded that nothing can clearly be called excessive force. And that’s no more reasonable than saying Yul Brynner wasn’t bald. This kind of argument is a fallacy (the drawing the line fallacy), but it helped convince a jury that officers caught beating a man on video weren’t guilty. Once again, bad thinking has bad consequences. 

Ambiguity

I once had a boss with a gift for mangling words, and he occasionally told me that “We should avoid ambiwiggity”. And he was right, assuming he meant “ambiguity”. But what exactly is ambiguity? It just means that a word or sentence could be interpreted in multiple ways. If you tell me, “There’s a bat on the sidewalk”, until you say more than that, I won’t know whether you mean it’s a small flying mammal or a club for hitting baseballs. “Bat” is an ambiguous word. Whole sentences can be ambiguous, too, as in the following newspaper headlines:

Kids Make Delicious Snacks

Lawyers Give Poor Free Legal Advice

Queen Mary Having Bottom Scraped

Ambiguity can be hilarious, but it can also cause big problems. Like vagueness, it can lead to fallacious thinking. For example, somebody might tell me: “You say people need to get better at making arguments, but I think people argue too much these days. So I think you’re wrong.” If you’ve read this far, you can probably see what’s wrong with this argument: it confuses two different meanings of the word “argument”. As we’ve seen, an argument in the sense of “a set of statements offered in support of a conclusion” isn’t the same as an argument in the sense of “a verbal quarrel”. When the meaning of a word shifts in the middle of an argument, it’s called the fallacy of equivocation.

Making Good Arguments: Insults and Arguments

In an earlier post, I compared online debates to basketball games where your opponent pushes you, won’t count your baskets, and says he made shots he clearly missed. What’s worse, the audience--the other people watching the debate--may think he’s winning, too. If he calls you a “libtard” or a “rethuglican”, people who agree with him may think he just made some sort of point. Of course, he hasn’t, because personal insults like that don’t actually give a reason for believing his claims, or rejecting yours. They’re a kind of fallacy called an ad hominem attack, and they’re the equivalent of punching someone to make a shot in a basketball game. They shouldn’t count, but people often think they do, because they don’t understand the rules of good arguments.

So what are those rules? To start answering that question, I need to say exactly what I mean by the word “argument”. In reasoning, an argument doesn’t mean a verbal quarrel. Instead, it’s an attempt to convince someone of a claim by offering reasons for accepting that claim. An argument, then, has at least two parts: a premise or premises, which are the reasons given, and a conclusion, which is the claim those reasons support.


So, if I say, “Biff’s a thief. I saw him taking money from the register.”, then I’ve made an argument, because I’ve made a claim and offered reasons supporting it. If we put the argument in what’s called standard form, we have a premise and a conclusion:


Premise: I saw Biff taking money from the register.

_________________________________________


Conclusion: Biff is a thief.


But if I just say, “Biff’s a thief!”, then I haven’t made an argument, because I haven’t given any reasons. I’ve just offered a naked claim with no visible means of support. The difference is important, because reasoning is about making sure your conclusions are solidly based on good reasons. 


Of course, most arguments in real life aren’t in standard form. The conclusion may be given before the premise, as in “Biff’s a thief, I saw him taking money from the register”. But the premises and conclusions are usually recognizable, because it will usually be clear that some statements are given in support of others. And oftentimes you can spot premises and conclusions by looking for indicator words. Premises often include words or phrases like “because” or “since”, while conclusions usually include indicator words like “so”, “thus”, “therefore”, and so on. 


However, words that can be indicator words aren’t always indicator words. If I say, “Since Jed is a dog, he probably doesn’t know algebra”, the word “since” indicates a premise in an argument. But if I say, “Jed has been howling since midnight”, it doesn’t. Words are tricky--that’s one of the things that makes good reasoning hard. Another is that not all arguments have indicator words. The one above about Biff doesn’t, for example. We just have to infer that “I saw him taking money from the register” is a premise, and “Biff is a thief” is the conclusion. 
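Just to make that trickiness concrete, here’s a deliberately naive Python sketch (my own toy, not a real language-processing tool) that flags indicator words--and, as the Jed example predicts, misfires on the non-indicator use of “since”:

    PREMISE_INDICATORS = {"because", "since", "for"}
    CONCLUSION_INDICATORS = {"so", "thus", "therefore", "hence"}

    def flag_indicators(sentence):
        # Crude tokenization: lowercase, then strip basic punctuation.
        words = [w.strip(".,!?") for w in sentence.lower().split()]
        return {"premise": [w for w in words if w in PREMISE_INDICATORS],
                "conclusion": [w for w in words if w in CONCLUSION_INDICATORS]}

    print(flag_indicators("Since Jed is a dog, he probably doesn't know algebra"))
    # {'premise': ['since'], 'conclusion': []} -- correctly flagged
    print(flag_indicators("Jed has been howling since midnight"))
    # {'premise': ['since'], 'conclusion': []} -- a false positive: here
    # "since" marks time, not a premise

No word list can settle what is and isn’t a premise; you have to read for the logical relationship between the statements.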


Another issue is that many arguments are enthymemes, which is just a fancy word meaning they have unstated premises. In the argument about Biff’s thievery, there’s actually an unstated premise:


Premise: I saw Biff taking money from the register. 

(Unstated) Premise: Anyone who takes money from the register is a thief.

_________________________________________________________

Conclusion: Biff is a thief.


We don’t need to put every argument in standard form, but knowing the concept is useful, because it makes the underlying structure and assumptions of arguments clear, and that makes them easier to evaluate. For example, when you put unstated premises in words it makes the shaky ones easier to spot. The premise “Anyone who takes money from the register is a thief” is pretty shaky, because it’s easy to think of situations where it isn’t true. A store manager, for example, would be authorized to take money from a register, perhaps to put it in a safe or another register. If Biff’s a store manager, then seeing him take money from the register doesn’t support the conclusion that he’s a thief. 
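Along the same lines, here’s one last small sketch (again my own illustration, not a standard technique): once an argument is written out as explicit data, the unstated premise becomes something you can point at and challenge.

    biff_argument = {
        "premises": [
            ("stated", "I saw Biff taking money from the register."),
            ("unstated", "Anyone who takes money from the register is a thief."),
        ],
        "conclusion": "Biff is a thief.",
    }

    # With the hidden premise spelled out, you can ask of each premise in
    # turn: is this actually acceptable?
    for status, text in biff_argument["premises"]:
        print(f"({status}) {text}")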


Once you understand the definition of arguments, and start looking for them in real-life debates, you’ll notice something strange: people don’t make many actual arguments. A lot of argumentative discourse is full of claims made without any reason given to believe them. Sometimes that’s OK, if most people accept a claim already. If I say, “Barack Obama was in office for eight years”, there’s no need to give reasons backing that up, because everyone accepts it as true. But most debates--especially heated ones--are full of claims that need to be backed up with reasons...and aren’t.


If you look at the average overheated social media debate, you’ll see insults (You liberals are a bunch of idiots!), jokes (If Trump came out for oxygen, Democrats would stop breathing), ridicule (Nobody with any sense believes that), demonization (Republicans love it when homeless people die in the gutter), cheerleading for your side (I'm a conservative and proud of it!), and old-fashioned, meaningless verbal abuse (Screw you!). You’ll see a great many exclamation marks, and when things get really hot, you might even get ALL CAPS!!! 


Actual arguments, where someone makes a claim and then backs it up with reasons, are the exceptions to the rule.  Even when people do offer arguments, lots of them are bad arguments--they’re fallacies, whose premises don’t support their conclusions. 


The only way to improve this situation is for more people to learn the difference between sound arguments and the kind of noise that fills many debates. And it’s not just the debaters who need to know the difference, but the audience as well. Many debates, especially in the social media age, aren’t just about arguing against an opponent--they’re also about convincing the people watching. Whether you’re trying to convince an individual or a group of people, making good arguments is only effective to the extent that they can recognize them as good arguments. If they don’t, there’s not much point in trying to reason with them. You can’t reason with someone who doesn’t know what reasoning is.


Now, does that mean there’s no point in reasoning well? Certainly not. First, we need to be able to reason for ourselves, even if we’re not trying to convince anyone of anything. Second, it’s possible to teach people to reason better. If too many people don’t know how to reason well, the solution is to try to promote better reasoning, not give up on reason entirely. 


Of course, promoting reason is an uphill battle. As it stands now, far too many people think insults, ridicule, and fallacies are just as compelling as good arguments. In fact, since those things are more likely to be entertaining and memorable, people may even find them more impressive. To go back to our basketball metaphor, they think you can win a game with trash talk and personal fouls instead of actual baskets. If we want to make the game more worth playing, we have to define what counts as a real point. And the game is worth playing, because at its best, debate can help us flesh out issues and get closer to truth. Talking to other people can give us valuable insights and perspective. We can’t learn much by butting heads, but we can learn a lot by putting our heads together. But we have to be reasonable, and to do that, we need to look at how arguments work, so we can tell good ones from bad ones.


Monday, September 7, 2020

Herd Instincts

We need a few trusted naysayers in our lives, critics who are willing to puncture our protective bubble of self-justifications and yank us back to reality if we veer too far off. This is especially important for people in positions of power. --Carol Tavris
Flock of flamingos looking in the same direction

When a grocer named Sylvan Goldman first invented the shopping cart in 1937, he found that people were embarrassed to use them. Not to be deterred, he hired several models to push them around the store and pretend to be shopping. Before long everybody was using them. Sylvan Goldman was a man who understood human nature.

He had tapped into a phenomenon known as social proof. People decide what to do by looking to see what everybody else is doing. If someone is sprawled on a sidewalk in a city, people will walk around them until one person stops and checks to see if they’re OK. Then others start stopping, too. In some cases, social proof is perfectly rational. If you’re walking to a baseball game in an unfamiliar city, and you don’t know how to find the ballpark, you can probably get there by following the crowd. If you’re a caveman and see your friends looking terrified and climbing trees, it might be a good idea to climb one, too. Back then, nonconformists got eaten by cave bears.

Perhaps it’s not surprising, then, that we look to others--especially others in our own tribe--for cues about what to do, and even how to think. The problem, of course, is that the crowd isn’t always right...even when it’s our crowd. The psychologist Solomon Asch demonstrated the power of conformity in a series of experiments in the 1950s. He simply showed a set of lines to a group of subjects, and asked them to say which lines were the same length. The answer was obvious--it was easy to see which two lines matched. Or it should have been easy. But all the test subjects except one were actors who would give the same wrong answer on some of the tests. The real subjects in the experiment were bewildered. They looked around at the other people nervously, squinted at the lines, and then, on more than a third of the trials, went along with the crowd. Social conformity made them see--or at least claim to see--what wasn’t there.

Whereas Asch showed how individuals can surrender their critical faculties to a group, sometimes entire groups start to think alike. This is called groupthink. Groupthink was first described by the social psychologist Irving Janis, who wanted to know what causes intelligent policymakers to do stupid things like the disastrous Bay of Pigs invasion of Cuba. He found that groupthink happens when groups of people are too set on agreeing with each other. They “go along to get along”. This can happen when there’s a leader who expects people to agree with him, or when there’s a culture of agreement, where dissent is frowned upon. Over time, the group develops a skewed view of reality, which sets them up for bad decisions and ugly surprises.

While most of us crave group cohesiveness and mutual validation, too much of it can be a bad thing. As the saying goes, “If everybody’s thinking alike, then somebody isn’t thinking.” That’s why it’s important to speak up if you think everybody’s thinking too much alike and settling into comfortable groupthink. Just don’t expect it to be easy. Devil’s advocates play an important role, but they’re rarely popular.

The strange thing about groupthink and related processes is that, while they lead to intellectual uniformity within groups, they can also lead to polarization between groups. Where I live on the Colorado Front Range, there’s a steep political gradient between conservative Colorado Springs, to the south, and ultra-liberal Boulder, to the north. In a recent experiment, researchers put people from each of these towns together to discuss political issues. People from Boulder talked to other people from Boulder, and people from Colorado Springs talked to other people from Colorado Springs. Can you guess what happened? In each group, opinions grew more homogeneous, and more extreme. The Boulder group grew more homogeneously liberal and moved further left, while the Colorado Springs group did the same thing in the other direction. Conformity at one level led to growing division at a higher level.

This experiment may help explain what’s happening across the United States right now. More and more, people surround themselves with others who think like they do. In fact, they commonly move to other places for that very reason. They also associate with like-minded people online, and go to websites that confirm their pre-existing beliefs. To make things worse, search engines learn to show people exactly the kinds of sites they already agree with. All this leads to filter bubbles, where people are insulated from diverse points of view. While this effect isn’t as strong as some have claimed--many people online do see multiple points of view--there’s certainly been a trend in recent years for the left and the right to grow more insular and uniform internally as they grow apart externally. This feeds each side’s tendency to see the other as more homogeneous and extreme than they really are--though in this case they’re partly right: both sides really have become more homogeneous and extreme! That’s why each side needs more critical thinkers, with the intellectual virtues I discussed in previous posts. Both sides need more of their members to be brave, independent, and intellectually modest enough to say, “Wait a minute. What makes us so sure we’re right?”

Sunday, September 6, 2020

Us and Them: How We See Others

Other Individuals

Just as our view of ourselves can be distorted, so can our view of others--but in different ways. As I've discussed with the fundamental attribution error, we tend to see other people’s behavior as the result of personality traits more than circumstance. This can lead us to see them in negative terms, as when we see a single case of bad driving as proof that someone is always a bad driver. But it can also cause us to see people in positive terms. If you watch someone give a talk about something they know a lot about, it’s easy to think they’re an all-around brilliant person, even though they wouldn’t sound nearly that smart talking about other things. This is the halo effect, which causes us to see people who are impressive in one way as being impressive in every way. If they’re an expert in physics, we may think they’re qualified to talk about economics, too. Or--as advertisers have known for decades--if they’re good at sports, people will trust their advice about what shoes or cars they should buy.

The flip side of the halo effect is the horn effect, which makes us think people with one unsavory quality must be bad in every way. If we find out that a dishonest acquaintance volunteers at the animal shelter, we assume they must do it for nefarious reasons. They can’t really care about animals, can they? But maybe they do. People are complicated, but we tend to see them as much more one-dimensional than we see ourselves. We sort them into boxes labeled “good” and “bad” when the truth is that most are somewhere in between, or even very good in some ways and very bad in others. The less we know people, the more we think of them in terms of one-dimensional caricatures. If Joe Bob from back home has some unsavory opinions, we may remember that he’s an OK guy in a lot of other ways. But if Joe Schmoe we’ve never met before expresses the same opinions, he must be a nasty piece of work all around.

Other Groups

Our view of others is more nuanced and charitable if we see them as “one of us” instead of “one of them”. Human beings have a powerful bias toward ingroup favoritism, on the one hand, and outgroup derogation, on the other. Everybody knows this has been a common theme in history. Many tribes throughout history named themselves something meaning, “The People” or “The Real People”, while their names for others translated as “strangers” or even “enemies”. The lives of those “others” were generally considered less valuable.

We still have those tendencies. Many American conservatives take it for granted that God is on our side, and that American lives are more valuable than foreign lives. In fact, they may even see this as a moral, patriotic viewpoint. Liberals are less likely to think in those terms, but they’re still prone to biased, ingroup/outgroup thinking. They’re outraged by the misdeeds of conservative politicians, but excuse those of liberal politicians. And of course, conservatives do the same thing in the other direction. Both groups see the other “tribe” as more homogeneous and more extreme than they really are (though as I'll discuss later, tribalism and groupthink really can make groups more ideologically extreme and homogeneous). Like most in-groups, both sides of the political spectrum see themselves as diverse and decent, while seeing others as one-dimensional and sinister. You see this all the time on social media. Someone on one side will post a video of an extremist on the other side and say, “See! They’re all alike!”, as though a right-wing or left-wing zealot were representative of the average conservative or liberal. This is another instance of stereotypical thinking, and an excellent example of a logical fallacy I’ll discuss later, called hasty generalization.

Generally speaking, our cognitive biases cause us to favor ourselves over others, and to favor “us” over “them”. You can sum up several biases in two words: egocentrism and ethnocentrism (ethnocentrism on smaller scales can be called tribalism). Human beings are prone to all these things. We’re predisposed to judge others more harshly than ourselves, and other groups and cultures more harshly than our own. Modern cosmopolitanism has caused many to move away from ethnocentrism, but it’s still a powerful human urge. Its cousin, tribalism, is still pervasive. At the local level, we favor our home basketball team over the other team, and we’re sure they’re committing more fouls than we are. Are those referees blind!? But rival tribes at one level may be part of the same tribe at other levels. Those people in the next town are “the other guys” at the local level, but “one of us” at the state or national level.

One important caveat here is that we aren’t necessarily wrong when we see ourselves, or our group or culture, as being in the right. Sometimes we really are right, and they really are wrong. Is it ethnocentric for me to say that foot binding and female genital mutilation are wrong? If it is, fine--I think they’re wrong. So, it’s not that siding with our own tribe, in-group, or culture is always wrong; it’s just that things aren’t automatically good or right because it’s us doing them (or because we’ve always done them that way). If our group or culture really is in the right, then we need to be able to give reasons--not rationalizations, but good reasons--why that’s true. If we find that we don’t have good reasons, then we need to change. That’s important, because whole groups of people can be terribly, tragically wrong. In fact, they’re often wrong precisely because they’re thinking as groups, and not as individuals. That's what I'll discuss in the next post.
