36 Read: Deeper Dive into Logical Reasoning
Logical Reasoning
The materials below are taken, with full attribution, from the free online Open Educational Resource Exploring Public Speaking: The Free Dalton State College Public Speaking Textbook, 4th Edition, Chapter 14.
Learning Objectives
After reading this chapter, the student will be able to:
- Define critical thinking, deductive reasoning, and inductive reasoning
- Distinguish between inductive reasoning and deductive reasoning
- Identify the four types of inductive reasoning
- Recognize the common logical fallacies
- Become a more critical listener to public speeches and a more critical reader of source material
Chapter Preview
Section 1 – What is Correct Reasoning?
Section 2 – Inductive Reasoning
Section 3 – Deductive Reasoning
Section 4 – Logical Fallacies
Section 1 – What is Correct Reasoning?
In Chapter 13, we reviewed ancient and modern research on how to create a persuasive presentation. We learned that persuasion does not depend on just one mode, but on the speaker using his or her personal credibility and credentials; understanding which important beliefs, attitudes, values, and needs of the audience connect with the persuasive purpose; and drawing on fresh evidence that the audience has not heard before. In addition to fresh evidence, the audience expects a logical speech with arguments they understand and can relate to. These appeals are historically known as ethos, pathos, and logos. This chapter deals with the second part of logos: logical argument and the use of critical thinking to fashion and evaluate persuasive appeals.
We have seen that logos involves composing a speech that is structured in a logical and easy-to-follow way; it also involves using correct logical reasoning and consequently avoiding fallacious reasoning, or logical fallacies.
Although it is not a perfect or literal analogy, we can think of correct reasoning like building a house. To build a house, a person needs materials (premises and facts), a blueprint (logical method), and knowledge of the building trades (critical thinking ability). If you put someone out in a field with drywall, nails, wiring, fixtures, pipes, wood, and other materials and handed them a blueprint, they would still need knowledge of construction principles, plumbing, and reading plans (and some helpers), or no building would go up. Logic could also be considered like cooking: a cook needs ingredients, a recipe, and knowledge about cooking. In both cases, the ingredients or materials must be good quality (the information and facts must be true); the recipe or directions must be right (the logical process); and the cook or builder must know what they are doing.
In the previous paragraph, analogical reasoning was used. As we will see in Section 2, analogical reasoning involves drawing conclusions about an object or phenomenon based on its similarities to something else. Technically, the comparisons of logic to building and cooking were figurative analogies, not literal ones, because the two processes are not essentially the same. A figurative analogy is like a poetic one: “My love is like a red, red rose” (Robert Burns, 1759-1796); love, or a loved person, and a flower are not essentially the same. An example of a literal analogy would be one between the college where the authors work, Dalton State, and another state college in Georgia with a similar mission, similar governance, similar size, and a similar student body.
Analogical reasoning is one of several logical reasoning methods that can serve us well if used correctly, but it can be confusing and even unethical if used incorrectly. In this chapter we will look first at “good” reasoning and then at several of the standard mistakes in reasoning, called logical fallacies. In higher education today, teaching and learning critical thinking skills are a priority, and those skills are one of the characteristics that employers are looking for in applicants (Adams, 2014). The difficult part of this equation is that critical thinking skills mean slightly different things to different people.
Involved in critical thinking are problem-solving and decision-making, the ability to evaluate and critique based on theory and the “knowledge base” (what is known in a particular field), skill in self-reflection, recognition of personal and societal biases, and the ability to use logic and avoid logical fallacies. On the website Critical Thinking Community, in an article entitled “Our Concept and Definition of Critical Thinking” (2013), the term is defined this way:
Critical thinking is that mode of thinking — about any subject, content, or problem — in which the thinker improves the quality of his or her thinking by skillfully analyzing, assessing, and reconstructing it. Critical thinking is self-directed, self-disciplined, self-monitored, and self-corrective thinking. It presupposes assent to rigorous standards of excellence and mindful command of their use. It entails effective communication and problem-solving abilities, as well as a commitment to overcome our native egocentrism and sociocentrism.
Critical thinking is a term with a wide range of meanings, one of which is the traditional ability to use formal logic. To do so, you must first understand the two types of reasoning: inductive and deductive.
Section 2 – Inductive Reasoning
Inductive reasoning (also called “induction”) is probably the form of reasoning we use on a more regular basis. Induction is sometimes referred to as “reasoning from example or specific instance,” and indeed, that is a good description. It could also be referred to as “bottom-up” thinking. Inductive reasoning is sometimes called “the scientific method,” although you don’t have to be a scientist to use it, and use of the word “scientific” gives the impression it is always right and always precise, which it is not. In fact, we are just as likely to use inductive logic incorrectly or vaguely as we are to use it well.
Inductive reasoning happens when we look around at various happenings, objects, behavior, etc., and see patterns. From those patterns we develop conclusions. There are four types of inductive reasoning, based on different kinds of evidence and logical moves or jumps.
Generalization
Generalization is a form of inductive reasoning that draws conclusions based on recurring patterns or repeated observations. Vocabulary.com (2016) goes one step further to state it is “the process of formulating general concepts by abstracting common properties of instances.” To generalize, one must observe multiple instances and find common qualities or behaviors and then make a broad or universal statement about them. If every dog I see chases squirrels, then I would probably generalize that all dogs chase squirrels.
If you go to a certain business and get bad service once, you may not like it. If you go back and get bad treatment again, you probably won’t go back again because you have concluded “Business X always treats its customers badly.” However, according to the laws of logic, you cannot really say that; you can only say, “In my experience, Business X treats its customers badly” or, more precisely, “has treated me badly.” Additionally, the word “badly” is imprecise, so for the conclusion of the generalization to be valid, “badly” should be replaced with “rudely,” “dishonestly,” or “dismissively.” The two problems with generalization are over-generalizing (making too big an inductive leap, or jump, from the evidence to the conclusion) and generalizing without enough examples (hasty generalization, also seen in stereotyping).
In the example of the service at Business X, two examples are really not enough to conclude that “Business X treats customers rudely.” The conclusion does not pass the logic test for generalization, but pure logic may not influence whether or not you patronize the business again. Logic and personal choice overlap sometimes and separate sometimes. If the business is a restaurant, it could be that there is one particularly rude server at the restaurant, and he happened to wait on you during both of your experiences. It is possible that everyone else gets fantastic service, but your generalization was based on too small a sample.
Inductive reasoning through generalization is used in surveys and polls. If a polling organization follows scientific sampling procedures (sample size, ensuring different types of people are involved, etc.), it can conclude that their poll indicates trends in public opinion. Inductive reasoning is also used in science. We will see from the examples below that inductive reasoning does not result in certainty. Inductive conclusions are always open to further evidence, but they are the best conclusions we have now.
For example, if you are a coffee drinker, you might hear news reports at one time that coffee is bad for your health, and then six months later that another study shows coffee has positive effects on your health. Scientific studies are often repeated or conducted in different ways to obtain more and better evidence and make updated conclusions. Consequently, the way to disprove inductive reasoning is to provide contradictory evidence or examples.
Causal reasoning
Instead of looking for patterns the way generalization does, causal reasoning seeks to make cause-effect connections. Causal reasoning is a form of inductive reasoning we use all the time without even thinking about it. If the street is wet in the morning, you know that it rained based on past experience. Of course, there could be another cause—the city decided to wash the streets early that morning—but your first conclusion would be rain. Because causes and effects can be multiple and complicated, two tests are used to judge whether the causal reasoning is valid.
Good inductive causal reasoning meets the tests of directness and strength. The alleged cause must have a direct relationship to the effect, and the cause must be strong enough to produce the effect. If a student fails a test in a class that he studied for, he would need to examine the causes of the failure. He could look back over the experience and suggest the following reasons for the failure:
1. He waited too long to study.
2. He had incomplete notes.
3. He didn’t read the textbook fully.
4. He wore a red hoodie when he took the test.
5. He ate pizza from Pizza Heaven the night before.
6. He only slept four hours the night before.
7. The instructor did not do a good job teaching the material.
8. He sat in a different seat to take the test.
9. His favorite football team lost its game on the weekend before.
Which of these causes are direct enough and strong enough to affect his performance on the test? All of them might have had a slight effect on his emotional, physical, or mental state, but not all of them are strong enough to affect his knowledge of the material if he had studied sufficiently and had good notes to work from. Not having enough sleep could affect his attention and mental processing more directly than, say, the pizza or the football game. We often consider “causes” such as the color of the hoodie to be superstitions (“I had bad luck because a black cat crossed my path”).
Taking a test while sitting in a different seat from the one where you sit in class has actually been researched (Sauffley, Otaka, & Bavaresco, 1985), as has whether sitting in the front or back affects learning (Benedict & Hoag, 2004). (In both cases, the evidence so far says that they do not have an impact, but more research will probably be done.) From the list above, #1-3, #6, and #7 probably have the most direct effect on the test failure. At this point our student would need to face the psychological concept of locus of control, or responsibility—was the failure on the test mostly his doing, or his instructor’s?
Causal reasoning is susceptible to four fallacies: historical fallacy, slippery slope, false cause, and confusing correlation with causation. The first three will be discussed later, but the last is very common, and if you take a psychology or sociology course, you will study correlation and causation in depth. This TED Talk (https://www.youtube.com/watch?v=8B271L3NtAw) explains the concept in an entertaining manner. Confusing correlation and causation is the same as confusing causal reasoning with sign reasoning, discussed below.
Sign Reasoning
Right now, as one of the authors is writing this chapter, the leaves on the trees are turning brown, the grass does not need to be cut every week, and geese are flying towards Florida. These are all signs of fall in this region. These signs do not make fall happen, and they don’t make the other signs—cooler temperatures, for example—happen. All the signs of fall are caused by one thing: the tilt of the earth on its axis as it orbits the sun, which makes for shorter days, less sunshine, cooler temperatures, and less chlorophyll in the leaves, leading to red and brown colors.
It is easy to confuse signs and causes. Sign reasoning, then, is a form of inductive reasoning in which conclusions are drawn about a phenomenon based on events that precede or co-exist with, but do not cause, a subsequent event. Signs are like the correlation mentioned above under causal reasoning. If someone argues, “In the summer more people eat ice cream, and in the summer there is statistically more crime. Therefore, eating more ice cream causes more crime!” (or “more crime makes people eat more ice cream”), that, of course, would be silly. These are two things that happen at the same time—signs—but they are effects of something else: hot weather. If we see one sign, we will see the other. Either way, they are signs, or perhaps two different things that just happen to occur at the same time, but not causes.
Analogical reasoning
As mentioned above, analogical reasoning involves comparison. For it to be valid, the two things (schools, states, countries, businesses) must be truly alike in many important ways–essentially alike. Although Harvard University and your college are both institutions of higher education, they are not essentially alike in very many ways. They may have different missions, histories, governance, surrounding locations, sizes, clientele, stakeholders, funding sources, funding amounts, etc. So it would be foolish to argue, “Harvard has a law school; therefore, since we are both colleges, my college should have a law school, too.” On the other hand, there are colleges that are very similar to your college in all those ways, so comparisons could be valid in those cases.
You have probably heard the phrase “that is like comparing apples and oranges.” When you think about it, though, apples and oranges are more alike than they are different (they are both still fruit, after all). This observation points out the difficulties of analogical reasoning. First, how similar do the two “things” have to be for the analogy to be valid? Second, what is the purpose of the analogy? Is it to prove that because State College A has a specific program (sports, Greek societies, a theatre major), College B should have that program, too? Are there other factors to consider? Analogical reasoning is one of the less reliable forms of logic, although it is used frequently.
To summarize, inductive or bottom-up reasoning comes in four varieties, each capable of being used correctly or incorrectly. Remember that inductive reasoning is disproven by counterexamples and contradictory evidence, and its conclusions are always open to revision by new evidence; that is, they are tentative. Also, the conclusions of inductive reasoning should be stated precisely enough to reflect the evidence.
Section 3 – Deductive Reasoning
The second type of reasoning is called deductive reasoning, or deduction, a type of reasoning in which a conclusion is based on the combination of multiple premises that are generally assumed to be true. It has been referred to as “reasoning from principle,” which is a good description. It can also be called “top-down” reasoning. However, you should not think of deductive reasoning as the opposite of inductive reasoning. They are two different ways of thinking about evidence.
First, formal deductive reasoning employs the syllogism, which is a three-sentence argument composed of a major premise (a generalization or principle that is accepted as true), a minor premise (an example of the major premise), and a conclusion. The conclusion has to be true if the major and minor premises are true; it logically follows from the first two statements. Here are some examples; you may have seen the first one before, since it is the most common.
All men are mortal. (Major premise: something everyone already agrees on)
Socrates is a man. (Minor premise: an example taken from the major premise.)
Socrates is mortal. (Conclusion: the only conclusion that can be drawn from the first two sentences.)
Major Premise: All State College students must take COMM 1110.
Minor Premise: Brittany is a State College student.
Conclusion: Brittany must take COMM 1110.
Major Premise: All dogs have fur.
Minor Premise: Fifi is a dog.
Conclusion: Fifi has fur.
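Because a valid syllogism’s conclusion follows mechanically from its premises, the pattern itself can even be checked by a computer. Here is a minimal sketch, not part of the original chapter, that states the Socrates syllogism in the Lean proof assistant; the identifiers Person, isMan, mortal, and socrates are illustrative names invented for this example.

```lean
-- A minimal sketch (not from the textbook): the Socrates syllogism in Lean 4.
-- All identifiers here (Person, isMan, mortal, socrates) are illustrative.
example (Person : Type) (isMan mortal : Person → Prop)
    (socrates : Person)
    (major : ∀ p : Person, isMan p → mortal p)  -- Major premise: all men are mortal.
    (minor : isMan socrates)                    -- Minor premise: Socrates is a man.
    : mortal socrates :=                        -- Conclusion: Socrates is mortal.
  major socrates minor                          -- Apply the general rule to the particular case.
```

The “proof” is simply the act of applying the general rule (the major premise) to the particular case (the minor premise); if either premise is removed or the minor premise is swapped with the conclusion, the argument no longer checks, which mirrors the problems discussed next.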
Of course, at this point you may have some issues with these examples. First, Socrates is already dead, and you did not need a syllogism to know that; the Greek philosopher lived 2,400 years ago! Second, these seem kind of obvious. Third, are there some exceptions to “All State College students must take COMM 1110”? Yes, there are; some transfer students do not, and certificate students do not. Finally, there are breeds of dogs that are hairless. Some people consider them odd-looking, but they do exist. So while it is true that all men are mortal, it is not absolutely and universally true that all State College students must complete COMM 1110 or that all dogs have fur.
Consequently, the first criterion for syllogisms and deductive reasoning is that the premises have to be true for the conclusion to be true, even if the method is right. A right method with untrue premises will not result in a true conclusion. Equally, true premises with a wrong method will not guarantee a true conclusion. For example:
Major premise: All dogs bark.
Minor premise: Fifi barks.
Conclusion: Fifi is a dog.
You should notice that the conclusion does not follow from these premises. We know other animals bark, notably seals (although it is hard to think of a seal named “Fifi”). For the argument to be valid, the minor premise would have to read “Fifi is a dog,” which leads to the logical conclusion “Fifi barks.” (Also, there are dog breeds that do not bark.) However, by restating the major premise, you have a different argument:
Major premise: Dogs are the only animals that wag their tails when happy.
Minor premise: Fifi wags her tail when happy.
Conclusion: Fifi is a dog.
Another term in deductive reasoning is an enthymeme. This odd word refers to a syllogism with one of the premises missing.
Major premise: (missing)
Minor premise: Daniel Becker is a chemistry major.
Conclusion: Daniel Becker will make a good SGA president.
What is the missing major premise? “Chemistry majors make good SGA presidents.” Why? Is there any support for this statement? Deductive reasoning is not designed to present unsupported major premises; its purpose is to go from what is known to what is not known in the absence of direct observation. If it is true that chemistry majors make good SGA presidents, then we could conclude Dan will do a good job in this role. But the premise, which in the enthymeme is left out, is questionable when put up to scrutiny.
Major premise: Socialists favor government-run health care.
Minor premise: (missing)
Conclusion: Candidate Fran Stokes favors government-run health care.
The missing minor premise, “Fran Stokes is a socialist,” is left out so that the audience will make the connection, even if it is erroneous. Consequently, it is best to avoid using enthymemes with audiences and to be alert to them when other persuaders use them. They are mentioned here to make you aware of how commonly they are used as shortcuts. Enthymemes are common in advertising. You may have heard the slogan for Smucker’s jams, “With a name like Smucker’s, it has to be good.”
Major premise: Products with odd names are good products. (questionable!)
Minor premise: “Smucker’s” is an odd name.
Conclusion: Smucker’s is a good product.
To conclude, deductive reasoning helps us go from known to unknown and can lead to reliable conclusions if the premises and the method are correct. It has been around since the time of the ancient Greeks. It is not the flipside of inductive but a separate method of logic. While enthymemes are not always errors, you should listen carefully to arguments that use them to be sure that something incorrect is not being assumed or presented.
Section 4 – Logical Fallacies
The second part of achieving a logical speech is to avoid logical fallacies. Logical fallacies are mistakes in reasoning, that is, getting one of the methods, inductive or deductive, wrong. There are dozens upon dozens of fallacies, some of which have complicated Latin names. This chapter will deal with eighteen of the most common ones that you should know in order to avoid poor logic in your speeches and to become a critical thinker.
False Analogy
A false analogy is a fallacy in which two things that do not share enough key similarities are compared as if they did. As mentioned before, for analogical reasoning to be valid, the two things being compared must be essentially similar—similar in all the important ways. Two states could be analogous if they are in the same region and have similar demographics, histories, sizes, and other aspects in common. Georgia is more like Alabama than it is like Hawaii, although all three are states. An analogy between the United States and, for example, a tiny European country with a homogeneous population is probably not a valid analogy, although it is a common one. Even in cases where the two “things” being compared are similar, you should be careful to support your argument with other evidence.
False Cause
False cause is a fallacy that assumes one thing causes another when there is no logical connection between the two. A cause must be direct and strong enough to produce the effect, not just prior to it or somewhat related to it. In a false cause fallacy, the alleged cause is not strong or direct enough. For example, there has been much debate over the causes of the recession of 2008. If someone said, “The exorbitant salaries paid to professional athletes contributed to the recession,” that would be the fallacy of false cause. Why? For one thing, the salaries, though large, are an infinitesimal part of the whole economy. Second, those salaries only affect a small number of people. Third, those salaries have nothing to do with the housing market or the management of the large car companies, banks, or Wall Street, which had a stronger and more direct effect on the economy as a whole. In general, while we are often tempted to attribute a large societal or historical outcome to just one cause, that is rarely the case in real life.
Slippery Slope
A slippery slope fallacy is a type of false cause which assumes that taking a first step will lead to subsequent events that cannot be prevented. The children’s book If You Give a Moose a Muffin is a good example of slippery slope; it tells all the terrible things (from a child’s point of view) that will happen, one after another, if a moose is given a muffin. If A happens, then B will happen, then C, then D, then E, F, G, and it will get worse and worse, and before you know it, we will all be in some sort of ruin. So, don’t do A or don’t let A happen, because it will inevitably lead to Z, and of course, Z is terrible.
This type of reasoning fails to look at alternate causes or factors that could keep the worst from happening, and it often becomes somewhat silly when A is linked right to Z. A young woman may say to a young man asking her out, “If I go out with you Thursday night, I won’t be able to study for my test Friday. Then I will fail the test. Then I will fail the class. Then I will lose my scholarship. Then I will have to drop out of college. Then I will not get the career I want, and I’ll be 30 years old, still living with my parents, unmarried, unhappy, with no children or career! That’s why I just can’t go out with you!” Obviously, this young woman has gone out of her way to get out of this date, and she has committed a slippery slope fallacy. Additionally, since no one can predict the future, we can never be entirely certain where a given chain of events will lead.
Slippery slope arguments are often used in discussions over emotional and hot button topics such as gun control and physician-assisted suicide. One might argue that “If guns are outlawed, only outlaws will have guns,” a bumper sticker you may have seen. This is an example of a slippery slope argument because it is saying that any gun control laws will inevitably lead to no guns being allowed at all in the U.S. and then the inevitable result that only criminals will have guns because they don’t obey gun control laws anyway. While it is true criminals do not care about gun laws, we already have a large number of gun laws and the level of gun ownership is as high as ever.
However, just because an argument is criticized as a slippery slope, that does not mean it is a slippery slope. Sometimes actions do lead to far-reaching but unforeseen events, according to the “law of unintended consequences.” We should look below the surface to see if the accusation of slippery slope is true.
For example, in regard to the anti-gun control “bumper sticker,” an investigation of the facts will show that gun control laws have been ineffective in many ways since we have more guns than ever now (347 million, according to a website affiliated with the National Rifle Association). However, according to the Brookings Institution, there are
“…about 300 major state and federal laws, and an unknown but shrinking number of local laws. . . . Rather than trying to base arguments for more or fewer laws on counting up the current total, we would do better to study the impact of the laws we do have” (Vernick & Hepburn, 2003, p. 2).
Note that in the previous paragraph, two numerical figures are used, both from sources that are not free of bias. The National Rifle Association obviously opposes gun restrictions and does not support the idea that there are too many guns; its website gives the background to show how that figure was arrived at. The Brookings Institution is a “think tank” (a group of scholars who write about public issues) that advocates gun control. Its article explains how it came to its number of state and federal laws but admits that it omitted many local laws about carrying or firing guns in public places, so the number is actually higher by its own admission. The Brookings Institution does not think there are too many laws; it thinks there should be more, or at least better enforced ones. Also, it should be noted that the article is based on data from 1970-1999, so the information may be out of date.
This information about the sources is provided to make a point about possible bias in sources and about critical thinking and reading, or more specifically, reading carefully to understand your sources. Just finding a source that looks pretty good is not enough; you must ask important questions about the way the information is presented. An interesting addition to the debate is found at https://www.rand.org/research/gun-policy/essays/what-science-tells-us-about-the-effects-of-gun-policies.html. Although most people have strong opinions about gun control, pro and con, it is a complicated debate that requires, like most societal issues, clear and critical thinking about the evidence.
Hasty Generalization
Making a hasty generalization means making a generalization with too few examples. It is so common that we might wonder if there are any legitimate generalizations. The key to generalizations is how the conclusions are “framed” or put into language. The conclusions should be specific and be clear about the limited nature of the sample. Even worse is when the generalization is also applied too hastily to other situations. For example:
Premise: Albert Einstein did poorly in math in school.
Conclusion: All world-renowned scientists do poorly in math in school.
Secondary Conclusion: I did poorly at math in school, so I will become a world-renowned scientist.
Or consider this example, which college professors hear all the time:
Premise: Mark Zuckerberg dropped out of college, invented Facebook, and made billions of dollars.
Premise: Bill Gates dropped out of college, started Microsoft, and made billions of dollars.
Conclusion: Dropping out of college leads to great financial success.
Secondary conclusion: A college degree is unnecessary to great financial success.
Straw Man
A straw man fallacy is a fallacy that presents only the weaker side of an opponent’s argument in order to more easily tear it down. The term “straw man” brings up the image of a scarecrow, and that is the idea behind the expression: even a child can beat up a scarecrow; anyone can. The straw man fallacy happens when a debater misinterprets, or takes only a small part of, an opponent’s position, then blows that misinterpretation or small part out of proportion and treats it as a major part of the opponent’s position. This is often done through ridicule, taking statements out of context, or misquoting.
Politicians, unfortunately, commit the straw man fallacy quite frequently, but they are hardly the only ones. Someone may argue that college professors don’t care about students’ learning because professors say, “You must read the chapter to understand the material; I can’t explain it all to you in class.” That would be taking a behavior and making it mean something it doesn’t. Similarly, saying “College A is not as good as College B because the cafeteria food at College A is not as good” is a pretty weak argument, one that makes too big a deal of a minor thing, for attending one college over another.
Post hoc ergo propter hoc
This long Latin phrase means “after this, therefore because of this.” Also called the historical fallacy, this one is an error in causal reasoning. The historical fallacy uses progression in time, and nothing else, as the reason for causation: A happens, then B happens; therefore A caused B. The fallacy assumes that because an event takes place first in time, it is the cause of an event that takes place later in time. We know that is not true, but sometimes we act as if it is.
Elections often get blamed for everything that happens afterward. It is true that a cause must happen first or before the effect, but it doesn’t mean that everything or anything that happens beforehand must be the cause. In the example given earlier, a football team losing its game five days earlier can’t be the reason for a student failing a test just because it happened first.
Argument from Silence
You can’t prove something from nothing. If the constitution, legal system, authority, or the evidence is silent on a matter, then that is all you know. You cannot conclude anything about that. “I know ESP is true because no one has ever proven that it isn’t true” is not an argument. Here we see the difference between fallacious and false. Fallacious has to do with the reasoning process being incorrect, not with the truth or falseness of the conclusion. If I point to a girl on campus and say, “That girl is Taylor Swift,” I am simply stating a falsehood, not committing a fallacy. If I say, “Her name is Taylor Swift, and the reason I know that is because no one has ever told me that her name is not Taylor Swift” (argument from silence), that is a fallacy and a falsehood. (Unless by some odd circumstance her name really is Taylor Swift or the singer Taylor Swift frequents your campus!)
Statistical fallacies
There are many ways that statistics can be used unethically, but here we will deal with three. The first type of statistical fallacy is the “small sample,” the second is the “unrepresentative sample,” and the third is a variation of appeal to popularity (discussed below). In a small sample, an argument is being made from too few examples, so it is essentially a hasty generalization. In an unrepresentative sample, a conclusion is based on surveys of people who do not represent, or resemble, the ones to whom the conclusion is being applied. If you ever take a poll on a website, it is not “scientific” because it is unrepresentative: only people who go to that website are participating, and the same people could be voting over and over. In a scientific or representative survey or poll, the pollsters talk to people of different socio-economic classes, races, ages, and genders, and the data gathering is very carefully performed.
If you go to the president of your college and say, “We need to have a daycare here because 90% of the students say so,” but you only polled ten students, that would be small sample. If you say, “I polled 100 students,” that would still be small, but better, unless all of them were your friends who attended other colleges in the state. That group would not be representative of the student body. If you polled 300 students but they were all members of the same high school graduating class and the same gender as you, that would also be unrepresentative sample.
In the end, surveys indicate trends in opinions and behaviors, not the future and not the truth. We have lots of polls before the election, but only one poll matters—the official vote on Election Day.
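If you want to see how much difference sample size alone can make, here is a minimal sketch in Python, not part of the original chapter, that simulates the daycare poll described above. The student body of 5,000, the assumed 40% “true” level of support, and the function name poll are all invented for illustration.

```python
# A minimal sketch (not from the textbook) of the "small sample" problem.
# We invent a student body of 5,000 in which exactly 40% support a daycare,
# then compare what polls of different sizes report.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

POPULATION_SIZE = 5_000
TRUE_SUPPORT = 0.40  # assumed "true" level of support (illustrative)

# 1 = supports a daycare, 0 = does not
population = [1] * int(POPULATION_SIZE * TRUE_SUPPORT)
population += [0] * (POPULATION_SIZE - len(population))

def poll(sample_size: int) -> float:
    """Draw a simple random sample and return the percentage in favor."""
    sample = random.sample(population, sample_size)
    return 100 * sum(sample) / sample_size

for n in (10, 100, 1_000):
    print(f"Sample of {n:>5}: {poll(n):5.1f}% in favor (true value: 40.0%)")
```

A sample of ten can easily report 60% or 20% in favor purely by chance, while larger random samples tend to land much closer to the true 40%. Note, too, that no sample size fixes an unrepresentative sample: if the people being polled are not drawn from the group the conclusion is about, the result is misleading no matter how many are surveyed.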
Non Sequitur
Non sequitur is Latin for “it does not follow.” It’s an all-purpose fallacy for situations where the conclusion sounds good at first but then you realize there is no connection between the premises and the conclusion. If you say to your supervisor, “I need a raise because the price of BMWs went up,” that is a non sequitur.
False Dilemma
This one is often referred to as the “either-or” fallacy. When you are given only two options, and more than two options exist, that is false dilemma. Usually in false dilemma, one of the options is undesirable and the other is the one the persuader wants you to take. False dilemma is common. “America: Love it or Leave It.” “If you don’t buy this furniture today, you’ll never get another chance.” “Vote for Candidate Y or see our nation destroyed.”
Appeal to Tradition
Essentially, appeal to tradition is the argument, “We’ve always done it this way.” This fallacy happens when traditional practice is the only reason for continuing a policy. Tradition is a great thing. We do many wonderful things for the sake of tradition, and it makes us feel good. But doing something only because it’s always been done a certain way is not an argument. Does it work? Is it cost effective? Is some other approach better? If your college library refused to adopt a computer database of books in favor of the old card catalog because “that’s what libraries have done for decades,” you would likely argue they need to get with the times. The same would be true if the classrooms all still had only chalkboards instead of computers and projectors and the administration argued that it fit the tradition of classrooms.
Bandwagon
This fallacy is also referred to as “appeal to majority” and “appeal to popularity,” drawing on the old expression “get on the bandwagon.” Essentially, bandwagon is a fallacy that asserts that because something is popular (or seems to be), it is therefore good, correct, or desirable. In a sense it was mentioned before, under statistical fallacies. You have probably heard it or said it many times: “Everybody is doing it.” Well, of course, not everybody is doing it; it just seems that way. And the fact (or perception) that more than 50% of the population is engaging in an activity does not make it a wise activity.
Many times in history, more than 50% of the population believed or did something that was not good or right, such as believing that the earth was the center of the solar system and the sun orbited around the earth. In a democracy we make public policy to some extent based on majority rule, but we also have protections for the minority, which is a wonderful part of our system. It is sometimes foolish to say that a policy is morally right or wrong or wise just because it is supported by a majority of the people. So when you hear a public opinion poll that says “58% of the population thinks…,” keep this in mind. All it means is that 58% of the people surveyed indicated a belief or attitude, not that the belief or attitude is correct or that it will remain the majority opinion in the future.
Red Herring
This one has an interesting history, and you might want to look it up. A herring is a fish, and it was once used to throw off or distract foxhounds from a particular scent. A red herring, then, is creating a diversion or introducing an irrelevant point to distract someone or get someone off the subject of the argument. When a politician in a debate is asked about his stance on immigration, and the candidate responds, “I think we need to focus on reducing the debt. That’s the real problem!”, he is introducing a red herring to distract from the original topic under discussion. If someone argues, “We should not worry about the needs of people in other countries because we have poor people in the United States,” that may sound good on the surface, but it is a red herring and a false dilemma (either-or) fallacy. It is possible to address poverty in this country and other countries at the same time.
Ad Hominem
This Latin term means “argument to the man,” and generally refers to a fallacy that attacks the person rather than dealing with the real issue in dispute. A person using ad hominem connects a real or perceived flaw in a person’s character or behavior to an issue he or she supports, asserting that the flaw in character makes the position on the issue wrong. Obviously, there is no connection. In a sense, ad hominem is a type of red herring because it distracts from the real argument. In some cases, the “hidden agenda” is to say that because someone of bad character supports an issue or argument, therefore the issue or argument is not worthy or logical.
A person using ad hominem might say, “Climate change is not true. It is supported by advocates such as Congressman Jones, and we all know that Congressman Jones was convicted of fraud last year.” This is not to say that Congressman Jones should be re-elected, only that the fraud conviction is irrelevant to whether climate change is true or false. Do not confuse ad hominem with poor credibility or ethos. A speaker’s ethos, based on character or past behavior, does matter; it just doesn’t mean that the issues they support are logically or factually wrong.
Ad Misericordiam
This Latin term means “appeal to pity,” and sometimes that English term is used instead of the Latin one. There is nothing wrong with pity and human compassion as an emotional appeal in a persuasive speech; in fact, that is definitely one you might want to use if it is appropriate, such as when soliciting donations for a worthwhile charity. However, if the appeal to pity is used to stir emotion and cover up a lack of facts and evidence, it is being used as a smokescreen and is deceiving the audience. If a nonprofit organization tried to get donations by tugging at your heartstrings, that emotion might divert your attention from how much of the donation really goes to the “cause.” Chapter 3 of this book looked at ethics in public speaking, and intentional use of logical fallacies is a breach of ethics, even if the audience accepts them and does not apply critical thinking on its own.
Plain Folks
Plain folks is a tactic commonly used in advertising and by politicians. Powerful persons will often try to make themselves appear like the “common man.” A man running for Senate may walk around in a campaign ad in a flannel shirt, looking at his farm. (Flannel shirts are popular for politicians, especially in the South.) The owner of a large corporation may want you to think his company cares about the “little guy” by showing himself helping on the assembly line. The image that these situations create says, “I’m one of the guys, just like you.” There is nothing wrong with wearing a flannel shirt and looking at one’s farm, unless the purpose is to divert attention from the real issues.
Guilt by Association
This fallacy is a form of false analogy based on the idea that if two things bear any relationship at all, they are comparable. No one wants to be blamed for something just because she is in the wrong place at the wrong time or happens to bear some resemblance to a guilty person. An example would be if someone argued, “Adolf Hitler was a vegetarian; therefore being a vegetarian is evil.” Of course, vegetarianism as a life practice had nothing to do with Hitler’s character. Although this is an extreme example, it is not uncommon to hear guilt by association used as a type of ad hominem argument. There is actually a fallacy called “reductio ad Hitlerum”—whenever someone dismisses an argument by bringing up Hitler out of nowhere.
There are other fallacies, many of which go by Latin names. You can visit other websites, such as http://www.logicalfallacies.info/ for more types and examples. These eighteen are a good start to helping you discern good reasoning and supplement your critical thinking knowledge and ability.
Conclusion
This chapter took the subject of public speaking to a different level in that it was somewhat more abstract than the other chapters. However, a public speaker is as responsible for using good reasoning as she is for organizing the speech, analyzing the audience, and practicing for effective delivery.
Something to Think About
You cannot hear logical fallacies unless you listen carefully and critically. Keep your ears open to possible uses of fallacies. Are they used in discussion of emotional topics? Are they used to get compliance (such as to buy a product) without allowing the consumer to think about the issues? What else do you notice about them?
Here is a class activity one of the authors has used in the past to teach fallacies. With a small group of classmates, create a “fallacy skit” to perform for the class. Plan and act out a situation where a fallacy is being used, and then be able to explain it to the class. The example under Slippery Slope about the young woman turning down a date actually came from the author’s students in a fallacy skit.
Logical Reasoning Chapter Attribution:
Manley, J. A., & Rhodes, K. (2020). Exploring Public Speaking: The Free Dalton State College Public Speaking Textbook, 4th Edition. Manifold. Retrieved from https://alg.manifoldapp.org/read/exploring-public-speaking-the-free-dalton-state-college-public-speaking-textbook-4th-edition/
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 4.0 International License.
Authors and Contributors
Barbara G. Tucker (Editor and Primary Author)
As chair of the Department of Communication at Dalton State College, Dr. Tucker oversees programs in communication, general studies, music, theatre, and interdisciplinary studies. She is Professor of Communication and has worked in higher education for over 40 years. She lives in Ringgold, Georgia, with her husband; they have one son. She is a novelist and playwright. Her research areas are the basic course, open educational resources, historical perspectives on rhetoric, and gratitude.
Matthew LeHew (Editor)
As Assistant Professor at Dalton State College, Matthew LeHew teaches courses in public relations, integrated marketing communication, film studies, and video production. His research interests include various areas of media studies, especially examination of virtual communities for online games. He is currently writing his dissertation for the Ph.D. in Communication (Media and Society track) at Georgia State University. He lives in Marietta, Georgia with his wife, son, and two dogs.