An account of current online prediction markets by Brendan I. Koerner at Slate.
So far the validation of such market predictions seems unsystematic, anecdotal, and based on economisting theology. It is difficult to figure out a good research design that would assess the quality of the predictive information deriving from such betting.
Would the same claims for predictive accuracy be made for betting on horse racing? The stock market? Insurance? Gambling? When does participation in predictive markets differ from lotto--a stupidity tax? How do we get good comparison predictions made by other methods? In the face of no other information, the average of all independent predictions should in the long run do well. When is this the case? How well? Under what conditions? How to make quantitative estimates? Are betting pools like taking the average?
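The averaging question can be sketched in a small simulation (all numbers here are invented for illustration, not drawn from any real market): when 500 bettors make independent, unbiased guesses, the pooled average almost always beats the typical individual.

```python
import random

random.seed(0)
TRUE_VALUE = 100.0    # the quantity being predicted
N_PREDICTORS = 500    # independent bettors per event
N_TRIALS = 1000       # repeated prediction events

avg_wins = 0
for _ in range(N_TRIALS):
    # each guess = truth + independent zero-mean noise
    guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N_PREDICTORS)]
    pool_error = abs(sum(guesses) / N_PREDICTORS - TRUE_VALUE)
    errors = sorted(abs(g - TRUE_VALUE) for g in guesses)
    median_individual_error = errors[N_PREDICTORS // 2]
    if pool_error < median_individual_error:
        avg_wins += 1

print(f"pooled average beat the median individual in {avg_wins / N_TRIALS:.0%} of trials")
```

The pool's error shrinks like 1/sqrt(N) while the median individual's does not; the interesting empirical questions above are about when the independence and unbiasedness assumptions fail.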
Does it matter who participates at what level of investment? What about dependencies among the choices of the participants? Purely strategic betting? How is the prediction affected by the framing of the betting question, as will surely be the case? Should some predictions receive more weight than others in the aggregation, just as higher quality studies receive more weight in meta-analysis? Size of bet does not seem to be a good proxy for how informed the bet is; indeed there might be an inverse relationship! Can knowledge of the prediction affect the predicted outcome, as is often the case in human affairs?
How about a market predicting the conclusions of studies of prediction markets--want to bet on the findings of studies from the University of Chicago School of Business compared to those from a Department of Sociology in France? Or classical vs. behavioral economists?
Prediction markets seem a bit like factor analysis or data mining in statistics--techniques to try when you don't have any good ideas.
-- Edward Tufte
As an economist, I think E.T. somewhat misunderstands the purpose of such markets. It is not to produce more accurate forecasts. The value of such markets is to more efficiently allocate risks. If a risk matters to an individual, say the arrest date of Saddam, and if you believe it will be later than most other people, and if you are willing to pay something to avoid later arrest dates and if somebody else is willing to pay to get later arrest dates, then all can gain by trading based on these conditions. Essentially, you would be paying somebody that is more willing to tolerate late arrest dates than you are. It is largely irrelevant whether your or their (or anybody else's) guesses are more accurate than any other prediction.
Having said that, E.T. raises some valid concerns about the operation of such markets. To the extent that it satisfies a taste for gambling rather than real values placed on the risks, the market may be efficient but not serve the public interest. To the extent that bids can be manipulated, etc., this market may not work very well at all. But, the evaluation of such markets should look at what their purpose is - the reallocation of risk - and not on the accuracy of different predictions of the future.
-- Dale Lehman (email)
Good point. It is, however, the advocates of prediction markets who are confounding and stretching the use of the word "market". According to the Slate article and to the rationale of the abandoned ARPA/DOD project, some of the prediction "markets" are in fact for prediction, as if the bets were a kind of meta-analysis and summarizing of the views of informed observers who happen to be betting, sometimes with real money and sometimes with play money. ARPA/DOD was not seeking to insure against event outcomes; it was seeking to forecast them. Also the claim of the elections prediction market is that it is better at forecasting elections than pre-election surveys (the participants aren't trying to insure themselves in the event of election of a particular candidate).
To avoid this confounding, maybe they shouldn't call themselves a "market" in the sense of economics. How about "office pool"? Or "focus group"?
-- Edward Tufte
I have two lines of speculation on the use of these markets to get predictions.
Firstly, I wonder if their accuracy rests on whether the results to be predicted depend on popular opinion or not. For example, election results and the success of movie releases depend on what lots of people think. The timing of the next terrorist attack is usually kept as secret as possible.
Secondly, evaluating the accuracy or otherwise of probability judgments is very hard. In an article called "Practical Probability" on my website www.dynamicmanagement.me.uk I reviewed the history of attempts to define what an accurate probability is before concluding that the best test seems to be success in real or simulated prediction tasks. When do prediction markets provide such a test? Sometimes but not always it seems from the article that prompted your posting.
-- Matthew Leitch (email)
The Iowa Electronic Markets (IEM), run by the University of Iowa, quite regularly outperform polls as predictors of election outcomes.
Certain markets for Florida oranges over time have been far better predictors of the weather than professional forecasters. If my memory serves me correctly, markets within Hewlett-Packard were better predictors of the quantity of printer sales than statistical forecasts.
Putting all of that aside, though, the purpose of such "unconventional" markets is not to "predict." The utility of markets is in their ability to aggregate information that might otherwise never be aggregated. These markets allow us to see phenomena that might not otherwise be observed in disaggregated information.
-- Sean (email)
MIT Technology Review just launched Innovation Futures, based on the same premises discussed above. To me, it looks like a cross between the original DARPA idea and a fantasy football league. It looks like you can form teams and actually win prizes too.
[link updated February 2005]
-- Kendrick Hang (email)
As the CEO of a leading provider of prediction markets (NewsFutures powers MIT Technology Review's Innovation Futures, among others), I can give some answers to some of professor Tufte's excellent questions.
1) Should these things even be called markets? Yes, these are markets where people trade predictions, just like they would any other commodity.
2) Are markets more accurate than "average predictions" (aka polls)? Yes, this has been shown over and over, famously by the IEM in the case of elections. Why is it so? There are two reasons. The first is that, for instance in the case of elections, polls must rely on "representative" population samples, which are difficult to gather and qualify properly. The second, more fundamental reason is that polls average opinions that are expressed independently of each other, whereas markets force people to engage in a conversation and come to a consensus, aka the trading price. This can make a huge difference, even when everything else is controlled for. In a real-world experiment we ran comparing the predictions of award nominees made either through polling or trading, polls achieved 50% accuracy compared to 75% for markets. The striking thing is that the several hundred people who participated were the same in both cases.
3) In another real-world experiment we recently conducted with Stanford's Justin Wolfers and Overture's David Pennock, we compared the accuracy of predictions made with play-money (on our NewsFutures.com site) or real-money (on the Ireland-based Tradesports.com site) over more than 200 individual NFL games in the last season. The play-money predictions were just as good as the real-money ones, which shows that the really important thing is to involve traders who are knowledgeable, and not, as is traditionally assumed, that they "put their money where their mouths are". Those market predictions also compared well to those of 1810 individual sports fans: Over the course of the NFL season, only 5 individuals outperformed the markets.
Finally, I want to thank Edward Tufte for proposing a fascinating vast research program. It is true that the current media hype about prediction markets tends to overshadow how much we still have to learn about them.
-- Emile Servan-Schreiber, Ph.D. (email)
Strategic bids or "predictions" undermine the assumptions of prediction markets. Here is some strategizing about the Google market by the consistently interesting and insightful Henry Blodget:
Note the dominant issue in the Google auction is the framing of the market.
-- Edward Tufte
On August 3, 2004 Edward Tufte wrote that "Strategic bids or `predictions' undermine the assumptions of prediction markets."
To the contrary, strategic traders on net improve the accuracy of predictions. This is the message of a simple economic model, and of laboratory experiments that confirm this model. The intuition is simple. Strategic traders are in essence noise traders, since they trade for reasons other than their best estimate of asset value. And adding noise traders to a market induces other traders to become better informed.
-- Robin Hanson (email)
I'm grateful that Robin Hanson posted his comment and the links to his papers with Ryan Oprea. Some of their colleagues provided earlier contributions to this thread. The material proved inspiring:
In effect, strategizing participants are reframing the prediction task, and causing other participants to reframe the prediction task because of their knowledge about the existence of strategizing participants and their possible effects on the price. It is faint reassurance that everything will be all right at equilibrium. In addition, there is all sorts of empirical evidence that variations in framing have an enormous impact on predictions. Indeed, that is probably the most secure empirical finding of behavioral economics, a finding which, in my view, thoroughly complicates and undermines microeconomic models.
And, of course, those setting up the prediction market have framed the original prediction question in a particular way; a different framing will yield different predictions, as is well known in questionnaire design and in behavioral economics.
There is a world of difference between a "simple model" and the realities of prediction markets; what overlap there is results from puns. That is, everyday words that, when used in economic theory, have extremely narrow meaning (often only as a mathematical symbol) and no relationship at all to what the everyday word describes other than the punning relationship. Yet the inferences ground out from the mathematical model are made about the everyday world! After all, the title of the paper is "Manipulators Increase Information Market Accuracy;" yet there is no credible link, indeed no link at all, between these words and the mathematics of the paper.
The famous beauty contest remarks of Keynes (The General Theory, pages 154-156) are relevant:
"It might have been supposed that competition between expert professionals, possessing judgement and knowledge beyond that of the average private investor, would correct the vagaries of the ignorant individual left to himself. It happens, however, that the energies and skill of the professional investor and speculator are mainly occupied otherwise. For most of these persons are, in fact, largely concerned, not with making superior long-term forecasts of the probable yield of an investment over its whole life, but with foreseeing changes in the conventional basis of valuation a short time ahead of the general public. They are concerned, not with what an investment is really worth to a man who buys it for keeps, but with what the market will value it at, under the influence of mass psychology three months or a year hence.
Or, to change the metaphor slightly, professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not the faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one's judgement, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.
If the reader interjects that there must surely be large profits to be gained from the other players in the long run by a skilled individual who, undeterred by the prevailing pastime, continues to purchase investments on the best genuine long-term expectations he can frame, he must be answered, first of all, that there are indeed such serious-minded individuals and that it makes a vast difference to an investment market whether or not they predominate in their influence over the game-players. But we must also add that there are several factors which jeopardise the predominance of such individuals in modern investment markets. Investment based on genuine long-term expectation is so difficult to-day as to be scarcely practicable."
The Hanson-Oprea mathematical model comes with an enormous list of assumptions (it would be helpful to see a complete list): "a Kyle-style market microstructure, by assuming risk neutrality" and a strategic trader who has "an additional quadratic preference regarding the price." It is my view that, unfortunately, such models have no evidential value with regard to the empirical world, whatever their internal validity to microeconomics. To assess this point, the reader should take a look at the economic model under discussion at http://hanson.gmu.edu/biashelp.pdf
(To my steadily deteriorating eye for mathematical style, the paper's math appears clunky and heavy-handed. Perhaps the math could be tightened up before submitting the manuscript for publication; a good mathematical economist might provide a helpful look at the manuscript.)
The noise trader metaphor appears to be wishful thinking. Why would strategizing traders foolishly bother to strategize if their only expected effect was to raise the IQ of the other participants? Why have strategizing traders suddenly become foolish about their consequences? And no one ever makes market gains because of clever strategies? Are all those traders out there wasting their time? And how do the other participants know that there are strategic traders in their midst? When? What proportion? And so on.
Assume that there are 10 objections to prediction markets, that each objection is contradicted by a simple microeconomic model, and that each of the 10 models has 50 assumptions. Now what happens when the 10 models and 500 assumptions meet up?
It is a helpful exercise to place the complex reasoning of the very smart trader and analyst Henry Blodget against the untestable and superstitious theological assumptions of a microeconomic model. So model Blodget. By the way, he has an interesting risk function: he is trying to make a market prediction (and therefore give investment advice) but avoid going to jail for doing so!
If my Yale colleagues Donald P. Green and Ian Shapiro ever write a second edition of their brilliant book Pathologies of Rational Choice Theory, I predict there will be a chapter on prediction markets. Their book is about the discrepancies between the strong claims of rat choice models applied to politics and the thin, absent, and nullifying empirical evidence for those claims. Need the paradox of voting be mentioned?
I hope Hanson and Oprea will read Green and Shapiro's book.
Now that everyone has read the 2 articles posted by Hanson and Oprea, and my long comment, perhaps it is time for the thread to change direction.
-- Edward Tufte
Two papers to read about the Iowa markets:
Worth a look.
[Editor's note added January 10, 2007: Christopher Wlezien reports:
"Here is a paper I've written with Bob Erikson. It challenges the commonly held assumption that prediction markets are better than polls as election predictors."]
-- David Passmore (email)
Since the request was to change direction, here's something that's not quite on topic:
The saga of Long-Term Capital Management's ultimately catastrophic experience employing the Nobel-prize-winning Black-Scholes model was covered by NOVA in 2000. See The Trillion Dollar Bet.
Here's a thought: could something like Heisenberg's uncertainty principle be applied to explain the relationship between a predictive market and reality?
-- Niels Olson (email)
An interesting essay by Louis Menand based on Philip Tetlock's new book, Expert Political Judgment: How Good Is It? How Can We Know?
-- Edward Tufte
As I understand it, averaging judgements has a fairly long history. Larrick's recent paper "Intuitions about combining opinions: Misappreciation of the averaging principle" discusses some of the history in the lit review. (I've only read a working draft.) He notes that the average judge is not isomorphic with the average of the judges. Apparently, quite a bit of research was performed in the early 20th C. on averaging estimates from groups of people. A well-known example is guessing the number of jelly beans in a jar, where averaged group guesses tend to outperform most individual guesses.
From Larrick's article, I gather that a guess is part good information and part noise. If the noise is distributed on both sides of the correct figure, it cancels out to some degree. This is distinct from predicting lotto results from lottery tickets, because there is no good information to be had beforehand, since the process is wholly random (or at least close enough). A horse race or football game contains quite a bit of randomness that plays with the results; however, it seems hard to argue that horse races and football games are wholly random. For that to be true, wouldn't high-school football teams qualify for the Superbowl now and again?
I don't see how the time variable creates a qualitative shift. No person guessing the number of jellybeans in a bowl has complete knowledge about the bowl's contents, but they have varying levels of accurate information based on how they calculate their estimates. (Presumably, a series of randomly drawn numbers averaged out would not be particularly accurate.) Before a football game, let's say, there are factors that can be known that will influence the outcome of the game. If those factors influence the outcome, then they should have some predictive value.
Consider another situation: predicting the weather. If we took twenty people and had them predict tomorrow's high temperature, the average of the estimates would presumably be more accurate than most of the individual guesses. Each person has her own experience with weather and has some information and some noise. However, predicting the high temperature for a year ahead is going to be considerably more problematic because, except for seasonal ranges that will give some clues, there is no useful information.
As I understand it, a prediction market (or office pool, if one prefers) is a tool for solving the coordination problem of aggregating information. There exist for many questions myriad people with bits of information about a question. The accuracy of the information will, of course, be less accurate as the time horizon goes into the future, so relatively short-term questions may be best addressed (I suppose). I think the prediction market is analogous to just averaging guesses with the jellybean contest. People buy and sell based on some accurate information and lots of noise. Unless a pathology obtains, the noise cancels out to some degree and leaves a more accurate prediction as a result.
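One concrete pathology is correlated error. In a small sketch (hypothetical numbers throughout): if every participant shares a common bias, say from reading the same misleading report, averaging cancels the independent noise but leaves the shared bias untouched, no matter how many people participate.

```python
import random

random.seed(1)
TRUE = 100.0
N = 500

# independent noise: errors cancel as N grows
indep = [TRUE + random.gauss(0, 20) for _ in range(N)]
err_indep = abs(sum(indep) / N - TRUE)

# shared bias: everyone, say, read the same misleading report (+15 to all guesses)
SHARED_BIAS = 15.0
dep = [TRUE + SHARED_BIAS + random.gauss(0, 20) for _ in range(N)]
err_dep = abs(sum(dep) / N - TRUE)

print(f"independent-noise pool error: {err_indep:.2f}")  # shrinks like 1/sqrt(N)
print(f"shared-bias pool error:       {err_dep:.2f}")    # stays near 15 regardless of N
```

This is one way to read ET's earlier questions about dependencies among participants: the jellybean result depends on independence, and a market in which everyone watches the same news feed is not a jar of independent guessers.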
Clearly pathologies can, and will, obtain; however, they do not discredit the idea in some naive Popperian sense. If that were the case, we could drop a hammer and a feather and disprove Galileo's work on falling bodies! But they do point out shortcomings in the theory and problems to be solved and understood. It may be that prediction markets will turn out to be a dead end, some sort of mirage. However, the general accuracy of horse-betting odds, jellybean guessing, and the IEM seem to suggest something may be there worth utilizing.
At the risk of being obnoxious, didn't a great author once write "Correlation is not causation, but it sure is a hint"? :^D
-- John Hull (email)
Hurricane Prediction Markets
From the most recent National Hurricane Center Discussion message on Tropical Storm Ernesto (5 PM EDT Sunday, 27 August):
THE OFFICIAL TRACK FORECAST IS ABOUT THE SAME AS THAT FROM THE PREVIOUS ADVISORY. THIS IS BETWEEN THE GFDL AND U.K. MET OFFICE TRACKS TO THE WEST...AND THE NOGAPS AND GFS TRACKS TO THE EAST. THE GFS HAS SHIFTED EAST FROM THE EARLIER RUN AND THIS MODEL HAS NOT BEEN VERY CONSISTENT FROM RUN TO RUN THUS FAR. THE OFFICIAL FORECAST LEANS A LITTLE TO THE LEFT OF THE CONSENSUS...MAINLY OUT OF RESPECT FOR THE USUALLY RELIABLE GFDL MODEL. THIS NHC FORECAST IS ALSO VERY CLOSE TO THE LATEST FSU SUPERENSEMBLE TRACK.(ellipses in original)
In contrast to terrorism prediction markets, it seems unlikely speculators could do much to change the course of a natural event. Would a predictive hurricane market be of value? What if the traded commodity was accuracy of the available forecasting models? Could market performance be an incentive to develop better models?
-- Niels Olson (email)
I'm trying to figure out the idea of "The Wisdom of Crowds" (a book by Surowiecki). In particular, I'm trying to figure out why many people think that the betting markets predict election outcomes better (i.e., more consistently and with better accuracy) than polling. Consider the following three points.
1. Polls are conducted using refined statistical techniques, whereas bets are placed by self-selected individuals who are mostly just playing and having some fun. [I think the number of people typically polled on an issue is about the same as the number of people betting on an event - i.e. about 500-1000 - as in the IEM.]
2. A poll asks people who they will vote for whereas the bettor is trying to predict the outcome which is based on his estimate of how *others* will vote. Clearly, trying to estimate other people's actions could not possibly be as accurate as trying to predict one's own action on election day!
3. And, importantly, bettors may bet on a candidate they would not be voting for in order to hedge - i.e., to offset the psychic (emotional) loss if his candidate loses with the financial gain from the bet.
Wouldn't this sort of risk-averse behavior tend to reduce the predictive power of the betting markets?
-- Jaffer Qamar (email)
My dear friend and former colleague Michel Balinski, now at Ecole Polytechnique, has an interesting paper possibly relevant to the theory of prediction markets.
See Michel Balinski and Rida Laraki, "A Theory of Measuring, Electing, and Ranking" at
-- Edward Tufte
A market of markets? And faux validation.
Since there are now multiple prediction markets betting on the same prediction, it is surely time for a meta-market that averages over prediction markets within the same domain. Multiple markets within the same domain also increase the chances that at least one PM will predict successfully. If we only knew in advance which one.
For the multiple markets (producing, oddly enough, divergent predictions!) on the 2008 election, see
In addition, the validation of prediction markets has been based on their last-second convergence to the outcome value, which is not a very strong test, as prediction markets exploit information not available to other methods of prediction (such as surveys or models).
-- Edward Tufte
If prediction markets are in fact markets, a question which (to me anyway) is still open, and exactly the same object is traded in two different places at two different prices, and the transaction costs and barriers to trade are minimal, then there should be no need for a meta-market. Through arbitrage the prices will converge to a single price.
For example, suppose apples cost a dollar apiece on the 10th Street market and 2 dollars on the 11th Street market, and there are no barriers to buying or selling. An arbitrageur can buy the apples on 10th Street, walk one block, and sell them for 2 dollars. To make absolutely sure she makes the sale, she might sell them instead for $1.90. Enough people engaging in this activity will cause a run on the 10th Street market (driving up the price on 10th Street) and a glut of apples on the 11th Street market (driving down the price on 11th Street). Eventually, the only difference in price will be the marginal cost to the public of being bothered to walk an additional block.
It is for this reason there is virtually no discrepancy in the price that, say, one Euro costs to buy in London as in New York, Chicago, Tokyo, Hong Kong, Singapore, or Dubai. As soon as the prices get out of agreement, lots of traders simultaneously short sell on the overpriced market and buy on the underpriced market, forcing the prices back into agreement.
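The convergence described in the apple example can be sketched as a toy price-impact loop (the impact and transaction-cost figures are invented for illustration, not a model of any real market):

```python
def arbitrage(price_low, price_high, impact=0.01, cost=0.05):
    """Buy on the cheap market, sell on the dear one, until the gap
    is smaller than the cost of trading. Each trade nudges the two
    prices together (a toy linear price-impact model)."""
    trades = 0
    while price_high - price_low > cost:
        price_low += impact    # buying pushes the cheap market up
        price_high -= impact   # selling pushes the dear market down
        trades += 1
    return price_low, price_high, trades

low, high, n = arbitrage(1.00, 2.00)   # apples at $1 and $2
print(f"after {n} trades: ${low:.2f} vs ${high:.2f}")
```

Each round of arbitrage narrows the gap until it is smaller than the cost of trading, which is why persistent price gaps between mature, liquid markets are rare.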
Arbitrage opportunities usually exist in immature markets, where information is poor and trading levels are low. This is likely the case in prediction "markets" now, but not for long.
-- Eric Howell (email)
Here is a paper I've written with Bob Erikson. It challenges the commonly held assumption that prediction markets are better than polls as election predictors.
-- Christopher Wlezien (email)
Interesting studies of prediction markets from The Wharton School:
"The Fizz-dom of Crowds: If Prediction Markets are so Great, Why have they been so Wrong Lately?" by James Ledbetter
More anecdotes from elections, this time contrary to earlier anecdotes. Microeconomics needs to move beyond case studies to better evidence.
Some of the issues with prediction markets (particularly where the output is a probability) stem from a misunderstanding of probability. Not every favorite will win; if every favorite did win, favorites were underpriced!
I've written about this (in context of the election prediction markets) on my blog here: http://blog.mercury-rac.com/2008/01/25/how-to-interpret-prediction-market-results-on-elections/
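The calibration point can be made concrete with a small simulation (hypothetical prices): even in a perfectly calibrated market, favorites priced at 90% lose one time in ten, so a handful of upset results is not, by itself, evidence that the market was wrong.

```python
import random

random.seed(2)

# A perfectly calibrated market: an outcome priced at p occurs with
# long-run frequency p. Even so, 90% favorites lose 10% of the time.
PRICES = (0.6, 0.7, 0.8, 0.9)
events = []
for _ in range(10_000):
    p = random.choice(PRICES)
    events.append((p, random.random() < p))

calibration = {}
for p in PRICES:
    outcomes = [won for price, won in events if price == p]
    calibration[p] = sum(outcomes) / len(outcomes)
    print(f"priced at {p:.0%}: favorite won {calibration[p]:.0%} of the time")
```

Judging a market requires this kind of bucketed comparison over many events, not a tally of whether individual favorites happened to win.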
-- Jed Christiansen (email)
Effort to distort the prediction market for the presidential election?
Why, for the last 6 weeks, was the Intrade prediction market (widely distributed on Real Clear Politics) so slow in reacting to presidential campaign events? Did the market know something we didn't know or was the market distorted by a partisan advocate?
Here's an intriguing report by Josh Rogin, CQ Politics, "Trader drove up price of McCain 'stock' in online market"
Note the relevance of multiple prediction markets in Rogin's account, similar to how we identify biased or high-variance survey polling operations (see pollster.com) by comparisons with other surveys.
Some questions provoked by this story: Why is any bet, before the fact, ever a bad bet in a market? Since there were several other non-manipulated markets (maybe), can the distorted prediction market (Intrade) be played off financially against the undistorted markets (Betfair, IEM)? What are the effects of US anti-gambling laws (which create a government monopoly on gambling) on prediction market operations (where the eligible bettors now consist of those bold enough to bet real money at an Irish website)?
How can "market" and "market prediction" be defined untendentiously? Will prediction markets go the way of such over-reaching fads as focus groups and data mining?
An issue in polls vs prediction markets vs statistical models vs guru predictions: In assessing the accuracy of election predictions by different methods, when is the line drawn for final predictions for polls and for prediction markets? Suppose someone made a highly accurate prediction of the final outcome of vote shares for every state 6 months ago? Or a few weeks ago? Bill Clinton, for example, said at the height of the short-lived Palin bounce that Obama would win in a landslide. Or that the statistical models of elections on average predicted, months ago, an Obama win by 52-48 or 53-47?
I continue to be concerned about the operations of rough real-world markets vs the fantasies of classical theories of markets. The once widely celebrated genius of derivative markets, now fully revealed, provokes additional concern. More generally, I believe that the principle "It's more complicated than that" almost always trumps hedgehog, true- believer models in serious real-world issues.
Surely some Kindly Contributors can help us with this.
-- Edward Tufte
The Real World Economics Review (formerly the Journal of Post Autistic Economics) is holding open voting for their Dynamite Prize (formerly the Ignoble Prize), for the economists who are most responsible for blowing up the world economy.
Many of the names have appeared in this forum.
-- Niels Olson (email)