Interesting page.
Paragraph 4 : Does the word "Significant" carry any baggage that is not intended in this context?
Graphic : One has to wonder how much of the repackaging is based on
visual components such as the charts, diagrams, etc.
Repackagers : Could these be considered "intellectual middlemen," adding no extra value and only complicating the relationship between the creator and the consumer?
English example : Former Downing Street press chief Alastair
Campbell did not “sex-up” the Iraq dossier. See the BBC for more.
Sidenote 2 : New York?
It might be useful to carefully distinguish between events or raw facts and our explanations of events. Though definitions can be difficult, one operational test we can apply is how insane it feels to deny events or raw data, vs. the relatively free choice we have in determining explanations.
Once we label something "evidence" we have started to attach interpretations to events. I would propose that evidence is always mediated and, within the narrow scope of the speaker's objectives, marketed.
For the purposes of investigation, let "weak corruption" be the type that lulls someone into believing one's marketing message outside the original scope of objectives. Let "strong corruption" be the type that denies anything outside its scope of objectives. While those corrupted in the first sense may occasionally make unwise leaders, those corrupted in the second sense may become self-righteously and systematically destructive of relationships, culture, society, etc.
We rarely pay much attention to anyone who hasn't shaped events with a story. We seem to be innately interested in well-crafted interpretations of events. Maybe a better measure of corruption is the scope of objectives respected by the dialog surrounding the story, rather than the degree to which the story supports a narrow scope of objectives. Good information architecture can often be said to be about respecting the dialog surrounding the story.
Thanks to Kindly Contributor Tchad for spotting "significant" and tactfully pointing out its problems.
On Tchad’s English example of the “sexed up” case for British intervention in Iraq: in my last paragraph the phrase “evidence fixed around policy” is a quiet reference to a phrase in the notorious Downing Street memo describing US WMD intelligence practices 6 months before the war started. The WMD affair is just one more dreary episode in a long history of chronic problems in intelligence agency evidence-processing, as those agencies tend to report what political authorities want to hear. The political science literature on intelligence agencies for at least 50 years has contemplated this exact issue: how pitching out corrupts within.
Dear Dr. Tufte,
Is the citation (2) for the Martin Amis book (bottom of your page) a typo “New Yori 2001”?
This looks to be another fascinating discussion thread; I cannot wait for the book.
Cordially,
Alex
Is this part of the “Corrupt Techniques in Evidence Presentations” chapter?
A few thoughts:
This section concentrates on the effects of distortions, purposeful or not, in evidence as it is re-presented to ever widening populations. Such programs take advantage of our native desire to believe in the goodness of people: “they wouldn’t say it if it wasn’t true.” They also prosper on our belief that laws and regulations will prevent such obfuscation: “they couldn’t say it if it wasn’t true.”
There is also the refocusing of mediated evidence from a wider population back to the individual, especially with regard to public opinion and public policy. A report is generated (let's say it is properly done, without prejudice for a particular viewpoint). Parts of this report are then interpreted (cherry-picked) by a small number of people, or an organization, with a particular viewpoint to promote, and broadcast to a wider, perhaps pre-selected, audience. That audience exerts some form of political pressure, which passes back through a handful of individuals (lobbyists, congressional staffers), who reinterpret the pressure as evidence and re-present it to a policy maker, who then acts on it by, among other things, crafting legislation (or at least attaching his or her name to a bill) or voting one way or the other on pending legislation. This process may go through several generations of refocusing (back out to the wider population, back in to the policy maker, and so on), a sort of evidentiary laser-generating process, until the latest iteration is taken as (or passed off as) an original report.
Secondary presenters may not state who’s funding their operations. They also may cite related research (“performed at a major university”) in support of their program without providing proper references to said research. Alternatively, they may provide such an overwhelming display of citations, disclaimers, references and other fine-print justifications that we accept their headline claims since all those notes must mean they did a really good job of research and reporting.
Leaks to journalists can be a risky road to travel with regard to the dissemination of original information. While it may be the only way to get sensitive information past higher levels of information control, it does have its pitfalls. When you leak, you give up your authorship, at least for a time, and hand it over to the journalist, necessarily a re-interpreter; you're cited only as an unnamed source. The veracity of your information, in the public mind, rests solely on the reputation and integrity of the writer and the paper (for instance) and counts on readers to believe it because "they read it in the paper." If they don't believe the paper, they won't believe you either.
One of the more amusing — and informative — abuses of the scatterplot that I’ve seen was
in a college embryology lecture. In the semester’s first lecture, the instructor drew on the
chalk board an abscissa with three categories: Primary Literature, Review Articles, and
Textbooks. The ordinate was labeled “truth”. A steeply descending line was then drawn
through the graph-space.
The message could not have been clearer. The assigned reading included a couple of book
chapters and several research papers a week. The tests were impossible if you hadn’t read
the papers.
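For anyone who wants to sketch that chalkboard graph for themselves, here is a minimal matplotlib reconstruction. It is my own guess at the drawing, not the instructor's original; the values and the steepness of the line are invented, since the chalkboard version had no numeric scale:

```python
# A reconstruction of the described chalkboard graph: three source
# categories on the abscissa, an unscaled "truth" ordinate, and a
# steeply descending line through the graph-space.
import matplotlib.pyplot as plt

categories = ["Primary Literature", "Review Articles", "Textbooks"]
truth = [1.0, 0.5, 0.1]  # illustrative values only

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.plot(range(len(categories)), truth, marker="o", color="black")
ax.set_xticks(range(len(categories)))
ax.set_xticklabels(categories)
ax.set_ylabel("truth")
ax.set_yticks([])  # the original had no scale on the ordinate
plt.tight_layout()
plt.show()
```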
These are the new concluding pages of a long chapter on evidence corruption (elsewhere
on this thread); some examples in that chapter are relevant. That chapter is at https://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001et&topic_id=1&topic=
The theory here is based, in part, on the evidence collected (largely by several editors of the major medical journals, including The Lancet and NEJM) on the process for testing new drugs and the big compromises about the distribution of drug-test evidence that medical schools make in exchange for payments for running clinical trials. In an early draft of the essay above, I listed all the restrictions that drug companies place on evidence collected in new drug research and then suggested that alert consumers of evidence should not trust evidence that emerged under that set of restrictions. Earlier in my chapter on evidence corruption, there is definitive evidence about the relation between the quality of research designs in drug testing and the enthusiasm for the benefits of the drug (the worse the design, the greater the enthusiasm). And a few other examples in that chapter might be relevant.
What happens to evidence about new drugs is a strategic case: an important issue of
broad scope, intense evidence collection with restricted distribution, sometimes
undistinguished monitoring by the Feds, and an enormous amount of material produced
by the bureaucracies of secondary presentations.
One other inspiration was a NASA scientist who complained to me that every report of
scientific evidence had to go through the PR department.
Another inspiration came from the NASA emails going around Houston when the Columbia was damaged but still flying: those emails referred to the Boeing analysis of Columbia's plight as the "Boeing PowerPoint Pitch". The pitch language had penetrated all the way into the most serious analysis being done. The WhatPoint Pitch? The PowerWhat Pitch? The PowerPoint What? (I'm saving this anecdote for the PP chapter).
In several ways, the PP essay (which follows the corruption chapter) provides evidence about the bureaucracies of secondary presentations, especially the section on PP guidelines at the Harvard School of Public Health.
On intelligence agencies and their problems in distinguishing between detective work and
pitching: there is some 50 years of political science literature on intelligence agencies about exactly this problem.
Perhaps a sidenote delineating the recent work on evidence restriction in big pharma would be useful. And perhaps one pointing to the political science literature on intelligence agencies.
To develop the theory fully, however, would require a long essay, similar to the
evidence in the essay on the cognitive style of PP. Or another book. By someone else.
For Kindly Contributor Alex Merz: How wonderful! Any chance of contacting the creator of
the scatterplot linking truth to distance from the primary source for an original sketch?
I took a course called Drug Action and Design in the spring; the professor, an immunologist currently working at the National Institute for Alternative Medicine, used primary sources in a way similar to that of Professor Merz's embryology prof, and she expressed the same skepticism of review articles and textbooks. Our assignments were to present articles to the class, some of her choosing, some of our choosing. Her stated goal was to teach us how to discern good original research from bad, and she regularly filleted our articles. She was much more polite in her admonishment of the student presentations, but she wasn't going to leave the students with bad analysis. Just about the time I could get through a NEJM article with only occasional reference to a dictionary, I'm slowed down again trying to figure out what a Wilcoxon matched-pair signed-rank test is, and whether its use indicates obfuscation by the investigator.
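For others slowed down at the same spot: the Wilcoxon matched-pair signed-rank test is a standard nonparametric alternative to the paired t-test; it ranks the paired differences instead of assuming they are normally distributed. A minimal sketch with invented numbers:

```python
# Wilcoxon matched-pair signed-rank test on invented before/after data
# (e.g., a measurement on ten patients before and after treatment).
from scipy.stats import wilcoxon

before = [142, 150, 138, 160, 155, 148, 152, 145, 158, 149]
after  = [138, 147, 140, 151, 149, 143, 150, 141, 152, 146]

stat, p = wilcoxon(before, after)  # ranks the paired differences
print(f"W = {stat}, p = {p:.3f}")
```

The test itself is unremarkable; its use signals possible obfuscation only when the data would have supported a simpler, more transparent comparison.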
On the completely opposite end of the spectrum is Paul Starr's The Social Transformation of American Medicine. His opinion of medicine in America is so obvious from the first page that I actually found his bias helpful in separating fact, opinion, and manipulation of the facts.
Interestingly, both my professor and Starr are opinionated and know it. It made for lively conversation, lively reading, and healthy skepticism.
On pharma in particular, here's what I recently posted on my blog:
Having interviewed with some of the pharma companies before getting into Tulane, I'm pretty much convinced their young, motivated sales reps are brainwashed by headquarters to buy into some morally corrupt ideas about education (sales pitches) without any obligation to make all the relevant facts known to their students (your doctors); indeed, an obligation to withhold disparaging information (that might affect their commission) is strongly advocated (enforced).
I think one of the key things the pharma companies do is to get some of these reps before they're 25, before that critical bit of prefrontal cortex develops that is responsible for risk analysis and judgement. I was amazed by how innocent and forthright the young sales representatives were. I think they truly believe in what they're doing. I can easily see the pharma HR and marketing departments holding meetings along these lines:
Marketing: You've got to target the new college grads for these sales rep jobs.
HR: Why?
Marketing: Our indoctrination works best when they're young, and they make tons of money for their age bracket, enough to qualify for a mortgage. If we can get them saddled with debt before they figure it out, we'll have guaranteed buy-in when the time comes for the next hard sell. And that makes for less turnover.
HR: We'll have a booth at State U tomorrow.
And you should see the booths. Bigger, brighter, all the graphs have a positive slope, and the reps on hand are always young and beautiful.
Here's a good one: the R & D office at the EPA wants to pay PR consultants up to $5,000,000 to ghostwrite articles "for publication in scholarly journals and magazines," among other things: http://www.nytimes.com/2005/07/18/politics/18contracts.html?oref=login&pagewanted=all
From the NY Times story: “Donald Kennedy, the editor of Science magazine and a former
head of the Food and Drug Administration, said in a telephone interview on Saturday that
he found the idea of public relations firms ghostwriting for government scientists
“appalling.”
“If we knew that it had been written by someone who was not a scientist and submitted as
though it were the work of a scientist, we wouldn’t take it,” Mr. Kennedy said. “But it’s
conceivable that we wouldn’t know, if it was carefully constructed.”
He added that the practice of putting public relations polish on scientific work has already
been practiced by industry. “We had seen it coming in the pharmaceutical industry and
were sort of wary about it,” he said. “The idea that a government agency would feel the
necessity to do this is doubly troubling."
Here’s the permanent link to the NYT article ET cites above. Courtesy of Aaron Swartz’s NYT Link Generator.
Thank you Kindly Contributor Niels, it’s good of you to point out
Aaron’s link generator and its great value. I didn’t know about it.
Aaron visited here yesterday and we had a good long walk and
talk.
An interesting article on (self-reported) bad behavior among US scientists: http://www.nature.com/nature/journal/v435/n7043/full/435718b.html
From the article: “Of 3,247 early- and mid-career researchers who responded, less than 1.5% admitted to falsification or plagiarism, the most serious types of misconduct listed. But 15.5% said they had changed the design, methodology or results of a study in response to pressure from a funding source; 12.5% admitted overlooking others’ use of flawed data; and 7.6% said they had circumvented minor aspects of requirements regarding the use of human subjects.”
It would be good to have similar data for secondary
presentations of the primary research. Here are some research designs:
Method 1: start with a sample of secondary presentations, then match them up against the primary source. Example: Richard Feynman's experience on the California textbook review committee, recounted in Surely You're Joking, Mr. Feynman! Here is a link to Feynman's discussion: http://www.textbookleague.org/103feyn.htm The link leads to The Textbook League, a newsletter which reviews school textbooks and finds lots of foolishness.
Method 2, the other way around: start with a primary source and then track all the secondary presentations of that single primary.
I once took George Miller’s “Magical Number 7 +/-2” paper and
tried to track down some secondary accounts. This informal census yielded the commentary in the “Cognitive Style of PowerPoint” and in our thread https://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0000U6&topic_id=1
A variation on Method 2: start with classic primary reports in various fields and then see how textbooks, review articles, newspaper accounts, weblogs, and so on then interpret the classic. The result: a series of studies in the degree of corruption in secondary presentations. There is surely a research literature already on this topic.
Sometimes the secondary communications are confronted; problems may arise: "In Training Video, Merck Said Vioxx Did Not Increase Risk of Heart Attack," The New York Times, July 21, 2005: http://www.nytimes.com/2005/07/21/business/21vioxx.html?oref=login
The archive link to the article is: http://www.nytimes.com/2005/07/21/business/21vioxx.html?ex=1279598400&en=3c8e6e628004bcb2&ei=5090&partner=rssuserland&emc=rss
Near the end of the story, there is an exchange which suggests it may not be a good course of action for a witness to spar with an experienced litigator (the witness says that the lawyer is playing games; the lawyer for the opposing side responds, "this is not a game, my client is dead"). Long ago I assisted with an antitrust case run by a sharp and quick lawyer. His staff recounted the following story demonstrating his articulate quickness: the opposing side had not produced a possibly harmful document early in the case; then it appeared that the document might be helpful and the document, lo and behold, showed up. The antitrust lawyer asked what accounted for this runaround. The reply came: "A secretary had stored the document in an inaccessible place." The lawyer immediately responded, "How did she get it there?"
The same articulate quickness by experienced litigators is sometimes found when they cross-examine very distinguished economists in anti-trust cases.
I often find it revealing to ask:
“Is this an evidence-based decision, or decision-based evidence?”
I can’t remember where I first heard this, but I think it was about the British Labour government, quoting “policy” rather than “decision”.
This Lancet meta-analysis presents a particularly persuasive tree diagram on page five arguing that Merck had data indicating the risk of myocardial infarction in 2000 and that the risk was well documented by the end of 2000. The meta-analysis is republished here on the site of attorney F.G. Vaughn Marshall. Basically, as soon as they went to phase three trials with lots of patients, the risk revealed itself definitively. The cumulative number of patients indicates that Bombardier's study of 8076 patients in early 2000 was the grand piano that broke the camel's back.
The Bombardier study's effect also highlights the arbitrary nature of the 95% confidence interval. The authors declare the findings against Vioxx to be unambiguous after one more study, that of Geba. Geba's contribution to this meta-analysis, however, with fewer than a thousand patients, amounts to not much more than a rounding error in Bombardier's wake.
On page 5, note the drift of the effect and tightening up of the error bars over time.
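To make that tightening concrete, here is a hedged sketch of a cumulative fixed-effect meta-analysis. The numbers are invented, not the Lancet data, but they show why an 8,000-patient trial dominates the inverse-variance-weighted pooled estimate, and why a later sub-1,000-patient study barely moves it:

```python
# Cumulative inverse-variance (fixed-effect) pooling of log relative risks.
# (log RR, standard error) pairs are illustrative, loosely patterned on the
# discussion above; not the actual Vioxx trial data.
import math

studies = [
    ("small early trial",                math.log(1.8), 0.60),
    ("another small trial",              math.log(2.2), 0.55),
    ("large phase III trial (n ~ 8000)", math.log(2.0), 0.20),
    ("small late trial (n < 1000)",      math.log(1.9), 0.50),
]

weight_sum = weighted_sum = 0.0
for name, log_rr, se in studies:
    w = 1.0 / se ** 2                      # inverse-variance weight
    weight_sum += w
    weighted_sum += w * log_rr
    pooled = weighted_sum / weight_sum
    half = 1.96 / math.sqrt(weight_sum)    # 95% CI half-width, log scale
    print(f"after {name}: RR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - half):.2f} to {math.exp(pooled + half):.2f})")
```

Running it shows the error bars tightening at each step, with the large trial doing nearly all the work.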
This is a fine example of the Evidence Decay Cycle described in the Beautiful Evidence chapter on evidence corruption: https://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001et&topic_id=1&topic=
Most contributors to this thread have cited studies in medical/biological fields, hardish sciences that often fall under the limelight of public scrutiny, and thus the need for vulgarization (for which word the thesaurus can't make up its mind between adulteration and clarification …). I would venture that the Evidence Decay Cycle (E.D.C.) is less sharp in presentations of Physics or Chemistry experiments.
At the other end of the spectrum, evidence in soft sciences exhibits short half-lives, the E.D.C. being rapidly followed by the "Everybody knows", aka "Studies show", aka "Family Feud" phenomenon. Here I must confess to a cardinal sin: being a management consultant. As such, I'm often pressed to base my problem solving, my presentations, and my conclusions on "Fact", and preferably on "Accepted Fact".
Two of my favorites are the following:
Exhibit A:
Studies show that out of everything that you worry about:
40% will never happen,
30% has already happened,
12% is about your health,
10% are problems whose effects will cancel out
and only 8% are legitimate and need attention.
Exhibit B:
Studies show that we remember:
10% of what we Read,
20% of what we Hear,
30% of what we See,
50% of what we See and Hear,
80% of what we reformulate, and
100% of what we Reformulate while in situation
Needless to say, my (less than strenuous) research has never uncovered said studies. And I will not go into a detailed analysis – at least not in this forum – of why these "Facts" are obviously not "True" (hint: can you say 7 +/- 2?).
However, it is interesting to note that when I’ve attempted to raise the point with my colleagues, I’ve been rapidly catalogued as the resident nerd.
Moreover, I've actually given a seminar where those two sets of numbers were used, and trainees dutifully wrote them down: fortunately for me, no one started discussing the numbers … which shows the level of mid-range management's gullibility / lack of a critical eye.
A final point on the use of such "Fact" as a backbone to my job, and to go back to "Pitching Out" and "Corruption". It is a given that my firm will never win a contract if we cannot gain the confidence of our future client. This confidence comes from many things: a few white-headed consultants, charisma, good delivery, knowledge of the subject at hand … but I've never seen it won so rapidly as with a few well-placed "studies show…".
I've taken the "moral" position that as long as, in my day-to-day job, I'm professional, honest, and trustworthy with my clients, it doesn't matter if I've had to bend a few rules to convince their bosses to hire me. Of course, a drug marketer who does the same thing is an amoral, inhuman beast … fine lines.
ET suggests that creators of evidence-based reports “…should prepare their own secondary reports to replace the repackagings.” The first repackaging and the best defense should be the “executive summary.”
Replace some of the secondary repackagings by presenting a good, accurate executive summary, perhaps a little longer than the term generally implies. Even if the consumer (at least the government official and the journalist) has the whole report, all that 90 percent of them will read is the executive summary. Most will just lift whole sentences and paragraphs from it, which mitigates mischief to some degree.
My essay mentions, as an example of how pitching out corrupts within, "the chronic problem of government intelligence agencies: once the collection and selection of evidence starts to become fixed around a pre-determined policy line, intelligence agencies may become permanently unintelligent, confused about the boundaries between detective work and marketing."
See Fred Kaplan, “Beyond the NSA Scoop: A Tale of Intelligence Fiascos,” a review of James
Risen’s book, State of War:
http://www.observer.com/20060123/20060123_Fred_Kaplan_culture_books.asp
The CIA Center for the Study of Intelligence has some interesting publications that touch on the issue.
http://www.cia.gov/csi/pubs.html
I find it interesting to read some of the discussions about policy versus analysis. See the June 2004 roundtable and the 1999 Psychology of Intelligence Analysis.
I found the original text, and the responses/conversation here, pretty fascinating … I'm an artist, about to open a show exploring war and the question: how, as a nation, do we decide what to kill and die for?
in trying to say something not available on the nightly news, and avoid spending energy repeating liberal or conservative perspectives on today’s war, we have found ourselves deconstructing story
and
asking questions about the relationship between a citizenry hooked on narrative and a phenomenon far too complex to be explored & ethically wrestled with through narrative
and
the relationship between
pitching one’s leadership
and
pitching a construction of global affairs dependent on war as plot
mediation of identity, evidence as backstory
marketing as national dialogue…
thanks for food for thought.
http://www.sojourntheatre.org
Credibility of analysts of information technology, an interesting Information Week article by
Paul McDougall and Larry Greenemeier at
http://www.informationweek.com/story/showArticle.jhtml?articleID=178601879
The article suggests that there are differences between IT research firms and, say, Consumer
Reports.
See CBS News, “Rewriting the Science:”
http://www.cbsnews.com/stories/2006/03/17/60minutes/printable1415985.shtml
See Haditha Evidence.
What is your thinking on this Haditha article: that the early statement, put out like so many other "first and last report on this incident" statements, may have created a sense among the early actors, like Capt. Pool, that they had to defend the initial position as further facts came back from the unit and the community, no matter how absurdly weak that position was becoming?
Here’s an abstract from a paper that has just been published, which refers to ET’s essay. It
may be of interest to some of you.
In lecture halls, in secondary school classrooms, during training workshops, and at
research conferences, PowerPoint is becoming a preferred method of communicating,
presenting, and sharing knowledge. Questions have been raised about the implications of
the use of this new medium for knowledge dissemination. It is suggested PowerPoint
supports a cognitive and pedagogical style inconsistent with both the development of
higher analytical thinking skills and the acquisition of rich narrative and interpretive
understanding. This paper examines how PowerPoint invites and seduces educators to
reshape knowledge in particular ways, and subsequently how this knowledge is presented
to students in the classroom. The particular forms of knowing, relating, and presenting
with PowerPoint are decided in part by teacher habituation to the software tool’s default
patterns, but also by the very nature of the presentation medium itself.
Adams, C. (2006) PowerPoint, habits of mind, and classroom culture. Journal of
Curriculum Studies, 38 (4) 389-411.
The New York Times has an interesting article on Weighing the Costs of a CT Scan’s Look Inside the Heart (http://www.nytimes.com/2008/06/29/business/29scan.html?hp). There is a debate amongst cardiologists on the utility – or lack thereof – of CT scans of the heart.
Some of the proponents of using CT scans of the heart happen to be those who have purchased the machines and need to utilize them.
I was struck by the following statement from Dr. Hecht (page 5 of the article), who supports the use of CT scans for diagnosis:
Cardiologists like Dr. Brindis hurt their patients by being overly conservative and setting unrealistic standards for the use of new technology, Dr. Hecht said.
“It’s incumbent on the community to dispense with the need for evidence-based medicine,” he said. “Thousands of people are dying unnecessarily.”
I have been following this thread with interest and am reminded of the essay by Stephen Jay Gould on the "creeping fox terrier clone". This is a classic example of how an original phrase can be repackaged and cloned over successive generations of textbooks until the information becomes 10th-hand and passes into accepted wisdom. The fox terrier was first used in a book some 100 years ago as the most appropriate approximation for the size of one of the earliest horse types, Eohippus. It is unlikely that most readers today have the remotest idea how big a fox terrier is, yet the analogy is remarkably persistent; see, for example, http://www.nature.ca/notebooks/english/dahorse.htm (accessed 29.6.08). Interestingly, however, Wikipedia cites the Gould essay and explains the reasoning behind the original comparison (http://en.wikipedia.org/wiki/Eohippus , accessed 29.6.08). Perhaps an example of Wikipedia's reliability?