Rhetorical ploys in evidence presentations

Here is the beginning of a collection of rhetorical ploys in evidence presentations, verbal moves that replace real evidence.

FAUX CONSERVATISM This takes the form of "Our results are conservative; we made conservative assumptions about the model." The claim is that possible biases in the assumptions work against the view advanced by the researcher. This is in fact an implicit error statement. Such claims are sometimes a rhetorical tactic that substitutes verbal fudge factors for quantitative assessments of error. See, for example, the Boeing/Columbia slide with the headline "Review of Test Data Indicates Conservatism for Tile Penetration".

IGNORING SELF-CONTRADICTION See the Boeing slide. See also Richard Feynman's example in his Challenger report, at page 137 of "What Do You Care What Other People Think?".

SIGNIFICANCE This is a pun on "statistical significance." Substantive significance is claimed as a result of statistical significance tests.

BIZARRE LEVELS OF STATISTICAL SIGNIFICANCE The null hypothesis is rejected at the .000000001 level and that extremeness is claimed as evidence for the researcher's alternative hypothesis. The "significance" level is largely a function of the ridiculousness of the null hypothesis, the (sometimes contrived) sample size, and the data results. In these calculations, the assumption of independence of observations is often ignored, thereby overstating the statistical significance.
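
A quick simulation makes the independence point concrete. This sketch (parameters invented for illustration) generates autocorrelated series that contain no true effect, then runs a naive t-test that treats every observation as independent; it rejects the null far more often than the nominal 5 percent level.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def ar1_series(n, rho):
        """An AR(1) series with mean zero: there is no true effect."""
        x = np.empty(n)
        x[0] = rng.normal()
        for t in range(1, n):
            x[t] = rho * x[t - 1] + rng.normal()
        return x

    n_sims, n_obs, rho = 2000, 200, 0.7
    false_positives = 0
    for _ in range(n_sims):
        series = ar1_series(n_obs, rho)
        # Naive test: treats 200 correlated points as 200 independent ones.
        _, p = stats.ttest_1samp(series, popmean=0.0)
        if p < 0.05:
            false_positives += 1

    print(f"nominal level 0.05, actual rejection rate {false_positives / n_sims:.2f}")
    # With rho = 0.7 the rejection rate runs several times the nominal level:
    # ignoring dependence manufactures "significance".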

EDITORIAL ADVERBS PLACED IN FRONT OF NUMBERS Characterizing the data in advance of saying what the data are. Note the two uses of the word "only" in this news report: "Despite the costs, there is a limit to what the public will accept, according to the poll presented to the Harvard forum. Only 41 percent supported a special tax on junk food - an idea Sturm said would be difficult to administer - and only 37 percent said they strongly supported requiring restaurants to list calorie content on their menus." ("Extra Pounds Cost Big Dollars: Rise in U.S. Obesity Labeled a Crisis," by John A. MacDonald, August 16, 2003.) Note also the "only 37 percent said they strongly supported"; how many supported, strongly or not so strongly? How many strongly opposed?

-- Edward Tufte


The use of absolute terms and relative terms is usually a good way to hide poor evidence or inappropriate sample sizes.

Words such as 'No ...', 'Every...', 'Biggest ...', 'Massive ... (usually percentage increase)', 'Less than ...', 'More people than ever before...' are good examples of these sorts of terms.

A good example I came across was in an article on the representation of women in Australian citizens' awards and decorations. Its opening paragraph stated, 'No woman has ever been awarded the Cross of Valour (the highest civilian award for bravery in Australia)...' Quite an impressive opening statement for an article like this. However, nowhere was it mentioned that only three men have been awarded this medal in the 25 years since it began; this in a country of currently 20 million people.

Probably the most abused is 'Less than...' For example, 'Less than 50% of the population does XYZ...' Is it 10%, 45% or 49.9%?

-- Andrew Nicholls (email)


E.T.'s third point about significance and fourth point about bizarre levels of statistical significance bring up the crippling effects that the misuse of statistical hypothesis testing has had on data analysis in science, especially in domains like psychology, medical research, and ecology (my domain of interest).

A paper by Marks Nester in 1996 ("An applied statistician's creed" Applied Statistics 45:401-410 available at www.jstor.org with a subscription) speculates that people use statistical hypothesis tests because:

(a) they appear to be objective and exact
(b) they are readily available and easily invoked in many commercial statistical packages
(c) everyone else seems to use them
(d) students, statisticians, and scientists are taught to use them, and
(e) some journal editors and thesis supervisors demand them.

Nester's analysis - especially point (a) - implies that the use of statistical hypothesis tests in general is a rhetorical ploy - even when the P-value is not outrageously low.

I agree with Nester's analysis. Relying on P-values turns complex questions into simple dichotomies for us to choose between. But the world is not dichotomous; there are always shades of gray and alternative explanations. For statistical hypothesis testing to work, the hypotheses (null and alternative) must be mutually exclusive and jointly exhaustive. Since these conditions are rarely met outside statistics textbooks, we must always be ready for a more complex explanation than a P-value provides.

One of the ridiculous null hypotheses that E.T. alluded to that is frequently tested is of the form "Treatment A will be the same as Treatment B." This assertion is impossible if you are measuring a continuous variable since anything you do to one treatment and not the other will make at least some difference. Thus, Nester boldly but correctly asserts that "paired and unpaired t-tests, analyses of variance (except for estimating variance components), linear contrasts and multiple comparisons, and tests of significance for correlation and regression coefficients should be avoided by statisticians and discarded from the scientific literature." He closes by pointing out that his is one paper of many with a similar message and provides a spate of references to other works.

As the work of E.T. and others like William Cleveland make clear, the higher ground for data analysts is to look at effect sizes and variation in data sets. Means with confidence intervals and sample size are the truly interesting numbers to pull out of data sets, second only to the actual data themselves.
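
As a minimal sketch of that higher ground (two invented treatment samples), report the effect size, a confidence interval, and the sample sizes rather than a bare P-value:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a = rng.normal(10.0, 2.0, size=40)  # hypothetical Treatment A measurements
    b = rng.normal(10.8, 2.0, size=40)  # hypothetical Treatment B measurements

    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    half_width = stats.t.ppf(0.975, a.size + b.size - 2) * se  # approximate 95% CI

    print(f"effect size {diff:.2f}, 95% CI {diff - half_width:.2f} to "
          f"{diff + half_width:.2f}, n = {a.size} + {b.size}")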

-- Anthony Darrouzet-Nardi (email)


I'd add RED HERRINGS -- including "evidence" not germane to the argument -- and DEQUANTIFICATION which, as is the case with visual expressions, serves only to obfuscate reality. Dequantification would include any instances where actual facts are replaced with generalizations or simple adjectives (less, more, about as much as, approximately, etc.). As is the case with omitting the source, the implication here is "trust me": the author knows how much less (or more), but chooses not to share that data. Worse, maybe the author doesn't really know.

Describing these as "verbal ploys" seems to suggest malice when incompetence is also possible (probable?). Shouldn't any logical fallacy in a verbal argument be included in a discussion of how people replace evidence with rhetoric?

A quick Google search for "faults in logical arguments" found http://mason.gmu.edu/~arichar6/logic.htm which summarizes several kinds of fallacious argument (bibliography, endnotes are included in the article).

-- Scott Zetlan (email)


In addition to the red herrings noted by Scott Zetlan and Jeffrey Berg, any maneuver that exploits the naivete, ignorance, or, especially, complacency of the "target" audience will be attempted. Here are a few, in no particular order, that I've seen used to try to gain some measure of rhetorical advantage:

Relying on most folks' inability to distinguish percentage from percentile, or mixing these terms in the same context.

And the same with statistics and statistical.

And mean and median (forget about mode). Being purposely unclear on the averaging method used.

Playing against popular fears of statistics ("lies, damned lies and statistics," as the good Mr. Disraeli told us) and the inability of most to understand why sound statistical methods do work.

Using "fact" in opposition to "theory."

Stating that a conclusion is self-evident in the hypothesis, without showing the work.

Leaving out estimates of error, or disregarding very high or very low error estimates.

Watch out for "As this chart shows ..." and "statistical dead heat" and one I, in a moment of weakness, used myself, "accurate estimate."

Purposely disregarding the incompleteness of a dataset, or claiming that only a small part of the dataset is relevant.

Misstating the source of a citation, quote or reference. Many times the phrase "form follows function" has been ascribed to "the Bauhaus" in attempts to acclaim or discredit the spare, functional International Style in modern architecture, yet the phrase originates with the American architect Louis Sullivan in the late 1800s.

Twisting the context and purpose of a term or attempting to receive sanction by citing its author. "Paradigm," as so lucidly proffered by Thomas Kuhn in the context of scientific discovery, is now used in every venue with Professor Kuhn's name used to lend authority to the claim. ("Thomas Kuhn described the change of paradigms that occur with advances in scientific thought. In a similar manner, we've established a new paradigm in athletic shoes ..."). Also, claiming to be undergoing or recently having undergone a "paradigm shift."

Claiming that a current statement, action or decision will or will not become a precedent for future events.

And out-and-out consistent lying. The bigger the lie and the more often it's said, the harder it is to believe it could be false.

-- Steve Sprague (email)


Hypothesis testing statistics allows yet another trick. Take the contrary evidence a bit at a time, each time declaring it insignificant.

I recently saw someone take the evidence against his favoured null hypothesis (that humans have done nothing to accelerate global warming) one year at a time, declaring each year's evidence insignificant.

-- Matthew Leitch (email)


"The bigger the lie and the more often it's said, the harder it is to believe it could be false."

See "The New Censorship", an essay by Curtis White in the August, 2003 Harper's, excerpted from his coming book "The Middle Mind: Why Americans Don't Think for Themselves".

"The new censorship does not work by keeping things secret."

-- Steven Byers (email)


Representative: Sometimes people will say, "This is a representative sample of the data," without saying what it's representative of. It could be "representative" of the absolute best data.

-- Zen Faulkes (email)


BURDEN OF PROOF ARGUMENTS, OR ASYMMETRY OF EVIDENTIAL RESPONSIBILITY A claim, which may be valid, that one's opponents are required to prove something if their position is to survive.

(A variant form is this: THE ABSENCE OF EVIDENCE IS NOT EVIDENCE OF ABSENCE)

The obvious current example is the Columbia: safety advocates were in effect required to prove that the shuttle was unsafe to fly, rather than launch advocates proving that the shuttle was safe. This was also true in the Challenger case.

Perhaps there is something of a burden-of-proof issue lurking in this sequence of comments by Mark Reilly and ET in the CAIB thread on this board.

Mark Reilly writes: "Is there any independent, scientific evaluation of the learning outcomes of groups exposed to different methods of explanation? For example, what is the objective, credible evidence of the evaluation of the effectiveness of explanation using Powerpoint versus other methods?"

ET then writes: "There is an interesting idea here: should PP be tested by the methods used in evidence-based medicine for treatments and in clinical trials for new drugs? That is, should the safety and efficacy of PP have to be demonstrated before it is forcibly administered millions of times a day to captive audiences? How does it compare with other methods? What are the test criteria? Right now, we are engaged in a massive uncontrolled non-randomized test of PP on millions of distinctly unwilling patients.

Oddly enough, PP is already deep into Phase I Clinical Trials--testing for the Maximum Dose to Toxicity! But normally for a new drug even to reach MDT trials, there must be considerable prior evidence, including test-tube experiments, tests on mice and other animal models, and pharmaceutical reasoning. PP has in effect flunked these prior tests and, if it were a drug, would never even have reached MDT trials. It is amusing, however, to contemplate laboratory mice being forced to watch PP presentations hour after hour.

The experience of the Columbia Accident Investigation Board represents a fairly clean test of PP in action for serious problems--and CAIB reached a vivid conclusion about PP.

More generally, when information technology (IT) has been assessed for outcomes, the evidence has pointed to significant exaggerations of the alleged benefits of IT. Indeed, the economic benefits of IT technologies have been hard to find, at least until perhaps quite recently, in a good many microeconomic studies--as Robert Solow (Nobel prize economist) said, "You can see the computer age everywhere but in the productivity statistics." More on this point in our thread.

-- Edward Tufte


To supplement the remarks about "representative" cases by Kindly Contributor Zen Faulkes, here is a series of articles by William Kruskal and Frederick Mosteller, the great statisticians, on the concept in practice:

Kruskal, W. and F. Mosteller (1979). "Representative sampling, I: Nonscientific literature." International Statistical Review 47: 13-24.

Kruskal, W. and F. Mosteller (1979). "Representative sampling, II: Scientific literature, excluding statistics." International Statistical Review 47: 111-127.

Kruskal, W. and F. Mosteller (1979). "Representative sampling, III: The current statistical literature." International Statistical Review 47: 245-265.

Kruskal, W. and F. Mosteller (1980). "Representative sampling, IV: The history of the concepts in statistics, 1895-1939." International Statistical Review 48: 169-195.

I must have the reprints around somewhere.

-- Edward Tufte


http://www.cnn.com/2003/TECH/space/09/02/asteroid.reut/index.html

Headline-grabbing "mis"information: "Giant asteroid could hit Earth in 2014."

Data buried two paragraphs deep: a 1-in-909,000 chance of a hit!


-- Michael Round (email)


On the UK TV (ITN) news last night in an item about domestic violence:

"Nearly a quarter of women . . . fewer than one in six men"

-- Martin Ternouth (email)


"Post hoc ergo proctor hoc" - AFTER the fact becomes BECAUSE of the fact.

-- David Montgomery (email)


An example of "Post hoc ergo proctor hoc" could be the argument in the lawsuit against MacDonald's Restaurant blaming them for obesity and diabetes in children.

-- David Montgomery (email)


I came across a number of the ploys mentioned above in a brochure put out by the Western Australian Government. On one page they had:

"There were 276,546 persons (15 percent of the population) were born in the main English speaking countries."

"More than 200,000 (11 percent) of Western Australians speak a language other than English at home."

I like the contrast of six significant figures in one statement and one significant figure in the next. They are both based on the same census data.
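
Reporting both figures at a consistent precision is a one-line fix. A sketch using the quoted numbers (the exact count of 214,000 behind "more than 200,000" is invented for illustration):

    # Round both census-derived figures to the same precision before reporting.
    born_english_speaking = 276_546
    other_language_at_home = 214_000  # hypothetical exact count behind "more than 200,000"

    for label, n in [("born in main English-speaking countries", born_english_speaking),
                     ("speak a language other than English at home", other_language_at_home)]:
        # Two significant figures is plenty for a general-audience brochure.
        print(f"about {round(n, -4):,} {label}")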

But the best statement was saved for last:

"Perth's population comprised the highest proportion of people born overseas of all Australian capital cities, after Sydney."

Sounds better than the second highest proportion.

-- Andrew Nicholls (email)


Andrew's point about significant figures reminds me of the comment by Robert Kaplan, professor of accounting and inventor of activity-based costing:

"Never give a costing to six significant decimal places when the first digit is wrong."

-- Martin Ternouth (email)


ARBITRARY ASSUMPTIONS. When trying to prove a point using some sort of estimation process, people will say "Let us assume that..." without giving any reason or justification as to the basis of the assumption.

I saw this on a website (I seem to have misplaced the URL, unfortunately) that discussed road deaths at certain speeds. It used an apparently well-used formula that predicted deaths at various traffic speeds. However, all of the inputs to the calculation were assumptions of unknown values, and no evidence was given for how these assumptions were arrived at. Therefore, the final figures it produced were not of much use.

There may be some merit in taking a range of assumptions, but the more variables you use, the wider the range you need, and all you end up with is a range of possible outcomes that may not bear much relation to reality.
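
When the assumptions cannot be pinned down, the honest move is to make each one an explicit parameter and sweep a plausible range, so the reader sees how much the conclusion depends on them. A sketch with invented numbers (the formula and values are placeholders, not the road-deaths model above):

    import itertools

    def predicted_deaths(traffic_volume, fatality_rate, speed_factor):
        """Placeholder model: deaths = volume x base rate x speed adjustment."""
        return traffic_volume * fatality_rate * speed_factor

    # Each assumption gets a low/central/high value instead of a single guess.
    volumes = [8_000, 10_000, 12_000]
    rates = [0.0001, 0.0002, 0.0004]
    speed_factors = [0.8, 1.0, 1.3]

    outcomes = [predicted_deaths(v, r, s)
                for v, r, s in itertools.product(volumes, rates, speed_factors)]
    print(f"predicted deaths range from {min(outcomes):.1f} to {max(outcomes):.1f}")
    # If the spread is this wide, no single headline number was ever meaningful.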

-- Adam (email)


I always smile when I read about some significant event that has happened for the 'first time' since the last time it happened.

For example, from the Atlanta Journal-Constitution "Dow closes above 10,000 for first time since May 2002"

http://www.ajc.com/business/content/business/1203/11wallstreet.html (requires a one-time only free registration)

-- Craig Pickering (email)


Craig's post also highlights a "big round number". Is this actually a financial milestone regardless of the units that make it round? Would anyone publish the headline "Dow closes above 13,137 Canadian $ for first time since May 2002"?

-- Dave Nash (email)


The article by Professor I.J. Good is instructive -"A classification of fallacious arguments and interpretations," Technometrics, Volume 4, Number 1, Pages 125-132, 1962.

-- AM


I remember several of Good's papers (he published hundreds and hundreds) from graduate school; if this is a good Good paper, can someone get permission and post it here?

-- Edward Tufte


I have been thinking about ET's fifth point "editorial adverbs placed in front of numbers" - characterising the data in advance of reporting them. (For instance, "only 41% support ...".)

Is the use of "only" in this way always bad practice? It leads the reader towards a conclusion, but can't that be legitimate in some circumstances (for instance, a brief newspaper or magazine article)? Surely what is important is that the data is interpreted fairly and the honest intentions?

Similarly Zen Faulkes (see above, 28 August 03) complained of those who write "This is a representative sample..." without explaining what it is representative of. In a short article, you may not have the space to go into specifics, but you can tell the truth.

The point is that "Graphical excellence begins with telling the truth about the data" (VDQI, p. 53). Integrity is vital; detail isn't always practical or useful.

-- Sally Bigwood (email)


There is much to be said for understanding the economics of writing in analyzing these problems. When one thinks about what determines what and how a factual research result is written, one must account for space constraints, intended audience, possible opinion bias, and simple style as determinants of written precision regarding data.

The first point is simple: writers are often given a fixed amount of space through which to convey a story. Interestingly, to gain more space, one must often (not always) pay more for column inches, either in advertising or in direct purchase price. There is an implicit cost problem for the consumer (the reader) of how much they are willing to pay for additional inches. When viewed this way, the reason blurbs are so bad at conveying information is obvious: scientific and research results should be detailed descriptive statements without pejoratives. These are often long, and given space considerations, is it better to allow colloquial compression of results, sacrificing clarity and precision, so that they are reported at all?

In many ways, the second point is a function of the first: individuals are very deliberate about what they read. Many people like a specific newspaper or news magazine. I would guess that subscriptions are heavily correlated with educational background and the amount of intellectual stimulus in current employment. As such, the more widespread and general the audience (Newsweek, e.g.), the less precise the language will be.

The last two issues are what it seems most people have been posting about: content and stylistic bias obfuscating what should be a conservative descriptive statement. How much does it matter that audiences, unless they are looking for scientific fact, want their articles to be less dry than most factual statements are? I wonder whether how much one cares about such obfuscations could be used as a Rorschach test for one's own bias: does one notice obfuscations more when they run against one's own bias (I would guess so), and how sensitive is one to obfuscations congruent with one's own bias?

I guess the point of this posting is to note that some responsibility for loaded adverbs and semantic shortcuts may lie with the reader and the restrictions of the reading medium as much as with the writer.

-- Daniel Egan (email)


The presenter's FALSE DENIALS, especially in the form of "I'm not going to waste time telling you 'blah,' but blah, blah, blah." This technique is frequently coupled with ad hominem attacks.

-- J. D. McCubbin (email)


To be explicit about the problems with FALSE DENIALS: first, they violate E.T.'s advice to focus on the content; second, they waste time; third, they are self-contradictory; fourth, (for me) they immediately reduce the speaker's credibility.

-- J. D. McCubbin (email)


How about "TOTAL INSTEAD OF AVERAGE"?

The total of a small set given rather than the average or the actual data. The canonical example is:

"The group has a total of x years of experience among them."

The ostensible intent of this device is to telescope the presentation, giving the important info rather than taking up space with annoying details. But I see it as manipulative in a number of ways:

- often the number of people the total is taken over is hidden or hard to extract

- it seems like a ploy to hide low outliers

- most importantly, even though one can go to the effort of computing the average or determining the individual data points, it is the extreme value of the exaggerated larger sum that is remembered instead.
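
A tiny worked example of the ploy, with invented numbers:

    years = [20, 1, 1, 1, 1]  # hypothetical team: one veteran, four novices

    total = sum(years)
    print(f'"The group has a total of {total} years of experience among them."')
    print(f"But n = {len(years)}, mean = {total / len(years):.1f}, "
          f"median = {sorted(years)[len(years) // 2]}, minimum = {min(years)}")
    # The impressive total hides that most of the group has one year each.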

-- Mitch Harris (email)


It also masks the quality of the experience; is it twenty years of experience, or one year of experience, twenty times?

-- Clive Rushton (email)


How about THE STRAW MAN? For example any sentence beginning with, "There are some who say..." or "It has been suggested that..." This rhetorical device is often designed to deflect scrutiny of one's own assertions (or motives) onto the putative assertions of others, which are actually veiled assertions you are trying to establish in people's minds about their credibility vs. yours. This is one way of "defining" your opponent. In the current presidential campaign, for example, much is being made of who has an optimistic, or pessimistic, vision for America. Contained in this rhetorical exchange, on one side, is an attempt to portray a challenge to an incumbent as inherently pessimistic, never mind that a challenge to an incumbent, based on the incumbent's record, is fundamental to a truly democratic contest. An inverse variant might be the use of THE ROYAL WE, projecting consensus where there may be none, i.e. "We are all united in the belief..." Few have the courage to be the lonely, vocal skeptic.

But in the discussion of rhetoric here so far, there lies an inherent flaw in the logic, no pun intended. It seems to me that rhetoric, whether it is based on solid evidence or not, tends to prevail on the basis of its emotional thrust, not necessarily its logic. Politicians and lawyers understand this instinctively, and we see many arguments succeed not on the worthiness of their substance but on the persuasive skill of the rhetorician. How often have we despaired as we've watched an obviously contrived appeal to the emotions vanquish rational, methodical arguments? Yes, yes, it is noble and virtuous and right to make arguments based on facts, but I think we have to acknowledge that people make irrational arguments, not simply because they are incompetent or stupid, but because such arguments are very often frighteningly effective in shaping the conclusions people draw. Like magic, which is the art of drawing attention away from what is really happening, the selectiveness and emotional import of rhetoric can be used in analogous ways. Rhetorical obfuscation may not only obscure the truth, it may also conceal motives--protecting the status quo, manipulating opinion or, in the case of NASA, preserving the flow of resources. This often leaves a rational skeptic probing for the motive and working back.

Now the ironic new twist in this obfuscatory rhetoric is the emergence of a presentation style that poses as rational and factual, when in fact it is not. This is the true "power" in PowerPoint. The emotional thrust is reassurance: matters are well in hand, the experts are on top of it. This deception is effected through the projection of data through a veil of apparently knowledgeable, but ultimately incomprehensible, jargon, or through superficially credible-looking shorthand data. Mix with a few weighty-sounding adjectives and, voila, people are afraid to question your ostensible authority on the matter for fear of looking stupid or out of step.

-- Dan Spock (email)


The American Statistical Association has graciously posted I.J. Good's paper referenced above on their website.

http://www.amstat.org/misc/FallaciousArguments.pdf

[link updated January 2005]

-- Niels Olson (email)


THE CONVERT

This ploy starts with a confession that the author used to be a nonbeliever in the very position he is now trying to convince the audience of. He then tells a story of how he changed his mind based on overwhelming evidence, without ever citing that evidence specifically.

-- Branimir Dolicki (email)


I understand how you feel blah blah blah, I felt the same way blah blah blah...but now I have found that blah blah blah.

Feel - Felt - Found

-- Anon (email)


Rhetorical linguistic pet peeve: use of the expression "a fraction of" as if it were not content-free. E.g., "Gasoline now only costs a fraction of what it did a couple of years ago." What fraction? 4/5ths? 5/4ths? It implies a substantial reduction without conveying any actual information.

(Bonus non-rhetorical linguistic pet-peeve: Price/performance is a ratio with price in the numerator and performance in the denominator. So if you increase the price-performance of something, you are getting a worse deal than before!)
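
A two-line illustration with invented prices:

    # Price/performance taken literally: price in the numerator.
    old_deal = 400 / 100  # $400 for 100 units of performance = 4.0 dollars per unit
    new_deal = 600 / 120  # $600 for 120 units of performance = 5.0 dollars per unit
    print(old_deal, new_deal)  # 4.0 5.0 -- the "increased" ratio is the worse deal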

-- Luiz Pires (email)


I am surprised that this thread went through a presidential election without mentioning the most common "rhetorical ploy in evidence presentations ... that replaces real evidence" in American politics today.

MESSAGISM or ON-MESSAGISM: the art/craft of stating one's message, rather than answering the question that was asked.

This rhetorical ploy has an impressive pedigree, being championed by the President of the United States. From the 1994 Texas gubernatorial Bush/Richards debate:

Q:"You're the son of a President of the United States ... You proudly say that you've never held public office, but you sought public office [a congressional seat in 1978] and you got beat. Maybe you're proud of it. I don't know ... isn't it a little disingenuous to portray yourself as an outsider?"

A: "I'm not proud of the fact that I got whipped in '78. I did come in second in a two-man race ... but here's my point. If you want someone to think the way it's been all along in Austin, then they should not be for me. I am not happy with the welfare system ..." And so on through the familiar list.

A popular article in The Atlantic Monthly, by James Fallows, describes this in detail (https://www.theatlantic.com/doc/prem/200407/fallows) (subscription required). Video contrasting Bush's debate demeanor of 1994 with that of 2004 can be found at (http://www.adbuzz.com/bushbuzz/BrainTenYrs*.mov). The President is not the only practitioner of this art, but he is the most highly lauded for his "effectiveness" using the method.

Messagism is among the most subtle tactics of totalitarianism -- the form of question and answer is maintained, but the power of the interviewer is subverted and corrupted. The tail wags the dog, and the respondent selects the subject of his response. How antithetical this technique of argumentation is to the principles of scientific inquiry, namely (1) to construct a specific question and (2) to determine a specific answer.

-- Sean Garrett-Roe (email)


VAGUE NUMBERISM

It isn't a rhetorical ploy so much as a linguistic pet peeve.

e.g.:

"We have worked on this for A NUMBER of years." "We have assigned A NUMBER of people to work on this." "The precedents are important for A NUMBER of reasons."

How many years? How many people? How many reasons? If we know, we should say so.

"We have worked on this for seven years; we have assigned sixteen people to work on this; and our precedents are important for three reasons [state the reasons]."

This can become a rhetorical ploy when, for example, the speaker uses the vague phrase "a number of people" to suggest a relatively or unexpectedly LARGE number when the actual (and unrevealed) number is relatively or unexpectedly SMALL, e.g., "We have assigned one person to work on this."

-- Mike Christenson (email)


Here is an exquisite rhetorical ploy, but what is its proper name? "Non sequitur"? But that is rather general since all logical fallacies are conclusions not deriving from the premises.

"The president's instructions from the beginning were to fully cooperate with the investigation,'' White House spokesman Taylor Gross said. ``As part of cooperating, we are not going to comment on any matters that come up during the investigative process.''

The White House spokesman could have just as well said, "As part of cooperating, we're asking all citizens to stand on their heads."

-- Edward Tufte


Integrity

Last line of an editorial by the editor of the British Medical Journal, Richard Smith:

Competing interests: RS is the editor of the BMJ and accountable for all it contains. He had his expenses paid to travel to York to deliver the talk but was not paid a fee.

I found the bluntness of this statement refreshing. We are taught in the military that responsibility can be delegated, but never accountability. I was further pleased when I used Google to pick the thread where I would make this post and was informed of no fewer than 61 pages on this site containing the word integrity.

-- Niels Olson (email)


Jennifer Loven's recent article on Bush's use of straw-man arguments does a good job of describing and illustrating straw-man rhetorical ploys, how they are used, and why.

In my experience, there are two things to do when presented with a straw man. The first: ask "Who is 'some people'?" or "Who is 'they'?" This is particularly useful if you need the speaker to think logically now and in the future. The second response is to quietly excise the straw man from your own thinking.

-- Cameroon (email)


Wikipedia's list of cognitive biases, link via RobotWisdom.

-- Niels Olson (email)


Wedges in Evidence Presentations

Is it possible that there is a term besides "rhetorical" that could better describe the contents of this thread? The term "rhetoric" has certainly fallen on hard times and is now mostly used in a pejorative sense, but it has a long history of positive uses as well. For many, it is a means of cultivating "a practical discursive power that might enable good deliberation and effective intervention in public and private discourse of all kinds." This is the definition that has been adopted by 20th century rhetoricians like Kenneth Burke, who saw that the practice of "rhetoric" is an essential part of personal "deliberation" and public "intervention" into civil argument and not merely a general term for the abuses of argument.

Since it would not be courteous of me to ask for a name change without suggesting a name, I would say that since the posts here seem to cluster around various kinds of fallacies, a term related to "fallacy" might be appropriate. However, because "fallacy" is itself a difficult term--take ad hominem, for example; is an ad hominem attack on the person and character of Rush Limbaugh (or Al Franken) in a debate about politics the same as an attack on the person and character of Slobodan Milosevic in a debate about race?--it might be better to think of the problems listed here as points where the conversation becomes jammed, or wedges that separate the speaker from reality. Or, in the case of sensitive consumers of presentations, the wedge could be the practice that separates the speaker from his or her audience.

-- John Jones (email)


Paul Meehl wrote a wonderful book chapter that categorically addresses the consequences of rhetoric in evidence presentations: "Why I Do Not Attend Case Conferences." The setting is psychiatry and psychology case conferences at a medical school, but one could easily adapt his comments to any number of subjects. Found via Aaron Swartz's blog.

-- Niels Olson (email)


Misleading odds ratios hyperventilate findings

See "Thou shalt not report odds ratios" from the excellent Language Log.
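
The core of the complaint: for common outcomes, an odds ratio is much larger than the relative risk that readers hear. A minimal sketch with invented prevalences:

    def odds(p):
        return p / (1 - p)

    p_exposed, p_unexposed = 0.60, 0.40  # hypothetical outcome rates in two groups

    relative_risk = p_exposed / p_unexposed           # 1.50: "1.5 times as likely"
    odds_ratio = odds(p_exposed) / odds(p_unexposed)  # 2.25: reads as much scarier

    print(f"relative risk {relative_risk:.2f}, odds ratio {odds_ratio:.2f}")
    # For rare outcomes the two nearly agree; for common outcomes the odds
    # ratio hyperventilates the finding.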

-- Edward Tufte



