Logic 101: A Primer in Logic (or, How to Be Logical in an Increasingly Illogical World)

It was a while ago now that I first read this Huffington Post article by C. Robert Gibson about the success of Governor Dayton’s economic policies, but I still remember how much the final paragraph incensed me. And it still does. But it’s not for reasons you might expect, i.e., economic or political beliefs/opinions. (Quite frankly, I don’t know enough about economics, nor do I follow politics/current events enough to have a strong opinion on this matter either way; so, for all I know, the facts are correct and the conclusions—save the conclusion in the last paragraph—are valid and accurate.) Rather, it’s for reasons of logical reasoning. There is an apparent critical failure of logic in that final conclusion, and I feel that such logical failings (the cynic in me says that some are intentionally faulty so as to be misleading) are becoming more and more prevalent nowadays. So I wanted to address this issue head on so that we can all be better prepared to not be misled by faulty logic, whether accidental or intentional.

Let’s start with the basics of a logical argument—or, in logical terms, a syllogism. A syllogism is a form of deductive reasoning whereby two (or more) premises are presented in support of a conclusion (e.g., All dogs are mammals; Fido is a dog; therefore, Fido is a mammal). For cleaner presentation, sometimes syllogisms are presented with the premises and conclusions vertically stacked, often with a horizontal line separating the conclusion from the final premise. (With this presentation, it is clear that the final statement is the conclusion, so the “Therefore” is implied and, therefore, may not always be written.)

Example 1:

All dogs are mammals.
Fido is a dog.                               
Therefore, Fido is a mammal.
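(If you happen to think in code, the same structure can be seen with a minimal sketch in Python; the set members below are made up purely for illustration.)

```python
# A minimal sketch of Example 1 using Python sets (membership lists are invented).
mammals = {"Fido", "Rex", "Whiskers", "Dumbo"}   # assume these are mammals
dogs = {"Fido", "Rex"}                           # assume these are dogs

# Premise 1: All dogs are mammals (every member of `dogs` is in `mammals`).
assert dogs <= mammals

# Premise 2: Fido is a dog.
assert "Fido" in dogs

# Conclusion: Therefore, Fido is a mammal -- guaranteed by the two premises.
print("Fido" in mammals)  # True
```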

The above syllogism is an example of a categorical syllogism, because the premises and conclusion pertain to categories and how things belong (or don’t) to those categories. Another type is the conditional syllogism, wherein one of the premises is a conditional statement—that is, an “If…then…” statement. In abstract form, these conditionals are often written with ps and qs representing the different propositions of the conditional premise (e.g., If p, then q; p; therefore, q).

Before discussing errors in logic, we must first establish that there are two ways a logical argument can be erroneous: (1) it can be inaccurate, and/or (2) it can be invalid. Accuracy (or veracity) refers to whether or not the conclusion of a syllogism is true with respect to the way the world actually is. An accurate argument will be a true representation of how the world is. Validity refers to whether or not the conclusion logically follows from the premises. A valid conclusion is one that is logically supported by and follows from the premises—that is, the premises necessarily require that conclusion. Accuracy and validity are orthogonal, however, meaning that you can move in one dimension without moving in the other. (The x- and y-axes of the well-known Cartesian coordinate system are an example of orthogonal dimensions.) Thus, a syllogistic conclusion will fall into one of four categories: valid and accurate; valid, yet inaccurate; invalid, yet accurate; invalid and inaccurate. The table below shows an example of each.

[Table: example syllogisms illustrating each combination of accuracy and validity]

Whereas validity is a property arising from the structure and form of the argument, accuracy is usually a product of the premises and whether they are true or false. A conclusion that logically derives from false premises will, while valid, be inaccurate. What this means, then, is that there are two factors to consider when presenting or assessing arguments: (1) the veracity of the premises (i.e., are the facts true?), and (2) the validity of the argument. So, if you want to dismantle an opponent’s argument, you have two choices: (1) show his/her premises to be untrue, and/or (2) show his/her logic to be invalid.

This also means that we have to pay attention to both veracity and validity when presented with logical arguments so as not to be misled. We humans are susceptible to faulty reasoning on account of our love for mental shortcuts that ease cognitive burden. As a result, we can be misled for a variety of reasons, so we need to be on our guard. For example, we might overlook the logic and accept an invalid conclusion as valid simply because it is true. Such is the case with the accurate but invalid conclusion in the above table (“Some snakes are venomous”): we have a tendency to accept it because it’s true (there are venomous snakes), but it’s not a valid conclusion. Other times, we can be misled by the form and superficial features of a syllogism, such as the parallel phrasing (each statement begins, “Some…”) in Example 2.

Example 2:

Some A are B.
Some A are C.  
Some B are C.

You may have noted that, without real-world categories, it is harder to determine whether a syllogism is valid. When syllogisms become thusly abstract, they can indeed be harder to assess/validate, because we no longer have concrete, real-world information to test them against. In other words, we can no longer use accuracy as a proxy for validity. So it can be a good exercise to work with such abstract syllogisms, for it removes accuracy (and facts) from the equation, forcing us to test the conclusion using solely logical reasoning. A challenging exercise, to be sure—but worth it.

An excellent—and, dare I say, infallible—way to test syllogisms (and a great technique for when they get hard) is to use Venn diagrams. Sometimes, depending on the premises, there can be multiple ways of drawing the circles in the Venn diagram, but the important thing to remember is this: if there is even one way you can diagram the syllogism such that the premises are satisfied and the conclusion is not, then it is not a valid conclusion. Let’s do this with Example 2.

[Figure: four Venn diagrams showing different ways of drawing the sets A, B, and C from Example 2]

In the top left diagram, you can see that both premises are true: sets A and B overlap, meaning that some A are B; sets A and C also overlap, meaning that some A are also C. The conclusion is also true here, because, the way all three overlap, there are some members of set B that also belong to set C. In the top right diagram, we can again see that both premises are true, as well as the conclusion. Even in the bottom left diagram, all three statements (both premises and the conclusion) are true. Yes, in this instance, all members of A belong to set B, but that still entails that at least some do. So, while it is underinformative to say that some A are B in this situation, it is, nonetheless, accurate. Finally, however, in the bottom right diagram, we see that both premises are satisfied, though the conclusion is not. Thus, since there is a way to satisfy the premises without necessarily satisfying the conclusion, the conclusion is invalid.
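Incidentally, this counterexample hunt can be done mechanically as well: enumerate small “worlds” (assignments of a few elements to the sets A, B, and C) and look for one in which both premises hold but the conclusion does not. Here is a rough sketch of that idea in Python; the three-element universe and the helper names are my own illustrative choices, not part of the diagrams above.

```python
from itertools import product

# Each element of a small universe either is or isn't in each of A, B, and C;
# a "world" assigns one of the eight membership patterns to each element.
universe = range(3)  # a small universe is enough to expose a counterexample
patterns = list(product([False, True], repeat=3))  # (in_A, in_B, in_C)

def some(xs, ys):
    """True if at least one thing belongs to both sets."""
    return len(xs & ys) > 0

counterexample = None
for world in product(patterns, repeat=len(universe)):
    A = {i for i, (a, b, c) in enumerate(world) if a}
    B = {i for i, (a, b, c) in enumerate(world) if b}
    C = {i for i, (a, b, c) in enumerate(world) if c}
    # Premises: Some A are B; Some A are C.  Conclusion: Some B are C.
    if some(A, B) and some(A, C) and not some(B, C):
        counterexample = (A, B, C)
        break

# A single world that satisfies the premises but not the conclusion
# means the argument is invalid.
print("Invalid!" if counterexample else "No counterexample found")
print(counterexample)
```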

We can do the same thing with our accurate yet invalid example from the table above:

Example 3:

All snakes are reptiles.
Some reptiles are venomous.  
Some snakes are venomous.

The below Venn diagram shows that there is a way the circles/sets can be drawn such that the premises remain satisfied without the conclusion being satisfied. Therefore, the conclusion is invalid.

[Figure: Venn diagram of Example 3 in which the premises hold but the conclusion does not]
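The counter-model the diagram depicts can also be spelled out with concrete sets (the particular membership lists below are invented for illustration):

```python
# A concrete counter-model for Example 3 (membership lists are illustrative).
snakes = {"garter snake", "corn snake"}
venomous = {"gila monster"}                 # a venomous lizard, not a snake
reptiles = snakes | venomous | {"iguana"}

# Premise 1: All snakes are reptiles.
assert snakes <= reptiles
# Premise 2: Some reptiles are venomous.
assert len(reptiles & venomous) > 0
# Conclusion: Some snakes are venomous -- FALSE in this model, so the argument
# is invalid even though the conclusion happens to be true in the real world.
print(len(snakes & venomous) > 0)  # False
```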

However, as with Example 2, this syllogism could be diagrammed in such a way as to also support the conclusion, as shown below.

[Figure: Venn diagram of Example 3 in which the premises and the conclusion are all satisfied]

Thus, it is important to actively look for a way to diagram a syllogism such that the conclusion is not supported. Human reasoning often succumbs to the confirmation bias, which is the tendency to look for and accept information that supports the beliefs we already have, rather than look for information that might invalidate our beliefs. (Veritasium, a YouTube channel I love and follow, has a fun video on the confirmation bias, if you want to see it in action.) So, in order to not fall prey to the confirmation bias as you test logical arguments, remember two things:

First, focus on validity, not just accuracy;

Second, don’t fall prey to the confirmation bias; instead, actively seek out ways in which the conclusion might be false, even though the premises may be supported.

And now on to conditional syllogisms, so that we can address the logic in the article mentioned at the beginning. Conditional syllogisms can be just as misleading and difficult to assess as categorical ones. However, at least errors with conditional reasoning get fun Latin names. (Errors with categorical syllogisms probably do as well, but I only ever hear the conditional ones go by Latin names.)

As stated above, conditional arguments take the form of “If…then…” statements—or, more accurately, the primary premise takes such a form, as with the abstract example below.

Example 4:

If p, then q.
p.                      
Therefore, q.

In conditional syllogisms, the first premise (the “If…then…” statement) is usually meant to represent some sort of general truth (whether affirmed or supposed) such that proposition q is always true in circumstances where proposition p is true (and, in many cases, is perhaps assumed to be a result of p). (Thus, if you want to counter an opponent’s conditional syllogism, one option is to show that this conditional premise is not a general and absolute truth.) The second premise then states the condition of one of the propositions from the first premise. With there being two propositions, each being either true or false, there are four basic options for premise two: p (i.e., affirming proposition p, the antecedent proposition—that is, saying that it’s true), q (i.e., affirming proposition q, the consequent proposition), not p (sometimes written ¬p; i.e., denying the antecedent proposition—that is, saying that it’s false), and not q (sometimes written ¬q; i.e., denying the consequent proposition).

With conditional syllogisms, there are two valid arguments. The first, modus ponens (a.k.a. affirming the antecedent), was shown in Example 4. The second, modus tollens (a.k.a. denying the consequent), is shown in Example 5. Modus ponens is valid because it follows the general truth established in the first premise: if p always results in q, then, should p be true, we can logically expect q to be true. Modus tollens is valid because it is the contrapositive of the initial conditional: if p always results in q, then the absence of q necessitates that p was also absent; otherwise, q would have to be present.

Example 5:

If p, then q.
¬q.                       
Therefore, ¬p.
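If it helps, both valid forms can be verified by brute force over truth values: an argument form is valid just in case every assignment of true/false to p and q that makes the premises true also makes the conclusion true. Here is a quick Python sketch of that check (the helper names are my own):

```python
from itertools import product

def implies(a, b):
    """The material conditional: 'If a, then b' is false only when a is true and b is false."""
    return (not a) or b

def valid(premises, conclusion):
    """Valid iff no truth assignment makes every premise true and the conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([False, True], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: If p, then q; p; therefore, q.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))        # True

# Modus tollens: If p, then q; not q; therefore, not p.
print(valid([lambda p, q: implies(p, q), lambda p, q: not q],
            lambda p, q: not p))    # True
```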

For each of these two valid forms, there is a corresponding invalid form. The invalid counterpart of modus ponens is the fallacy of affirming the consequent, as shown in Example 6. This argument is invalid because the conditional premise is not bidirectional—that is, it does not say, “If, and only if,…then…” (If the conditional were thusly bidirectional, then, in addition to knowing that, given p, we can expect q, we would also know that, given q, we can expect p, because the two propositions are mutually dependent on one another; you only get one if the other is also there.) In other words, the conditional only says that when you have p, you must also have q; it says nothing about q’s ability to exist or be true apart from p.

Example 6:

If p, then q.
q.                      
Therefore, p.

To see the invalidity of this argument more clearly, let’s look at a real-world example. (This also gives us a concrete example of a proposition. Basically, a proposition is any statement which can have a truth value—that is, can be either true or false.)

Example 7:

If x is an apple, then it is a fruit.
x is a fruit.                                           
Therefore, x is an apple.

Again, as with categorical syllogisms, it is much easier to see the validity or absurdity of logical arguments when we work with concrete, real-world examples. Yes, it is true that if something is an apple, then it is a fruit; so, if I’m holding an apple, it’s true that I’m holding a fruit (modus ponens). However, there are many kinds of fruits, so just because what I’m holding is a fruit does not mean that it is an apple; it could be a banana, a strawberry, an orange, a kiwi, a mango, etc. In other words, the converse is not necessarily true.

On to the last fallacy, the invalid form of modus tollens and the one that leads to the erroneous logic in the Huffington Post article: the fallacy of denying the antecedent (see Example 8).

Example 8:

If p, then q.
¬p.                      
Therefore, ¬q.
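As with affirming the consequent, this fallacy can be exposed by hunting for a single truth assignment that satisfies the premises but not the conclusion; for both fallacies, p = false together with q = true does the job. Here is a quick sketch along the same lines as before (helper names, again, are my own):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def countermodels(premises, conclusion):
    """Truth assignments (p, q) that satisfy every premise but not the conclusion."""
    return [(p, q) for p, q in product([False, True], repeat=2)
            if all(prem(p, q) for prem in premises) and not conclusion(p, q)]

# Affirming the consequent: If p, then q; q; therefore, p.
print(countermodels([lambda p, q: implies(p, q), lambda p, q: q],
                    lambda p, q: p))      # [(False, True)] -> invalid

# Denying the antecedent: If p, then q; not p; therefore, not q.
print(countermodels([lambda p, q: implies(p, q), lambda p, q: not p],
                    lambda p, q: not q))  # [(False, True)] -> invalid
```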

To show the absurdity of this argument, let’s again put it in real-world terms.

Example 9:

If x is an apple, then it is a fruit.
x is not an apple.                              
Therefore, x is not a fruit.

Just as with the fallacy of affirming the consequent, the error in this logic stems from the fact that the conditional is not bidirectional. Because there are many types of fruits, the fact that what I’m holding is not an apple does not entail that it can’t be a fruit. I could be holding a peach, and that would still satisfy both premises (a peach says nothing to disconfirm the conditional statement in premise 1; and, being a peach, it is, in fact, not an apple), yet it would not satisfy the conclusion (being a peach, it is still a fruit), making the argument invalid. Thus, as mentioned with the categorical syllogisms, it is so important to assess both the accuracy and the validity of arguments, being careful to look for ways in which they may be false, being intentional about searching out evidence to disconfirm theories—even your own—and being mindful of the human inclination towards the confirmation bias.

So, how does this connect back to the article mentioned at the beginning? In short, Gibson’s article attempts to disprove the effectiveness of trickle-down economics, which he presents as an implied conditional: though he never states it explicitly, it is implied that he is disproving the claim, “If trickle-down economic policies are used, then the economy will improve.” I’m no economist, so correct me (and forgive me) if I’m wrong, but I know of no a priori reason why this should be considered a bidirectional statement. I think it stands to reason that more than one economic policy is able to yield economic growth. Thus, we have a standard, unidirectional conditional syllogism. The author then goes on to describe how Governor Dayton’s economic policies, which are not trickle-down, have led to positive and widespread economic growth in Minnesota. He then uses this to conclude—rather erroneously, by my estimation—that “trickle-down economics is bunk. Minnesota has proven it once and for all. If you believe otherwise, you are wrong.” With all due respect, Mr. Gibson, it is you who are wrong. What Minnesota has proven is that Dayton’s plan did work, not that other plans don’t.

I can’t claim to know what he is thinking, but, from what he wrote, it would seem that he fell prey to the fallacy of denying the antecedent. It seems as though he believes that, because non-trickle-down economics worked, trickle-down economics does not work. That is, he seems to believe that denying the antecedent leads to a logically valid conclusion. He wants to make that conclusion, but the logic is not on his side. He wants to show that a particular theory (i.e., trickle-down economics) is wrong, yet he only provides evidence—ample and compelling evidence, at that, so kudos to him—that another theory is supported; however, that is not the same as disproving the other theory. Affirmation of the consequent does not—and cannot—entail anything about the antecedent. Sometimes, different means can be used to accomplish the same end, and the fact that one works does not mean that another can’t. Just because I successfully use a microwave to boil a cup of water doesn’t mean that I couldn’t successfully boil water in a pot on the stove.

So let us keep that in mind. Multiple paths may lead to the same destination, even if we can’t see them all. So before we blindly accept a path or theory, remember the confirmation bias, and instead of simply seeking out evidence to support the theory we have or already believe, seek out evidence that might disprove it. As Derek says in the aforementioned Veritasium video, “You can never prove a theory true. […] That is what’s so important about the Scientific Method: we set out to disprove our theories, and it’s when we can’t disprove them that we say, ‘This must be getting at something really true about our reality.’ So I think we should do that in all aspects of our lives. If you think that something is true, you should try as hard as you can to disprove it. Only then can you really get at the truth and not fool yourself” (emphasis added by me, based on his inflection).

Yours truly,
D. R. Meriwether
Renaissance Man

