Evaluating intelligence is tricky. Really tricky.1 Sherman Kent, one of the foremost early thinkers on the analytic process in the U.S. national security Intelligence Community, wrote in 1976, "Few things are asked the estimator more often than 'How good is your batting average?' No question could be more legitimate – and none could be harder to answer." So difficult was the question that Kent reported not only the failure of a three-year effort in the 1950s to establish the validity of various National Intelligence Estimates but also the immense relief among the analysts in the Office of National Estimates (forerunner of the National Intelligence Council) when the Central Intelligence Agency (CIA) "let the enterprise peter out."

Unfortunately for intelligence professionals, the decisionmakers whom intelligence supports have no such difficulty evaluating the intelligence they receive. They routinely and publicly find intelligence to be "wrong" or lacking in some significant respect. Abbot Smith, writing for Studies in Intelligence in 1969, cataloged many of these errors in "On the Accuracy of National Intelligence Estimates." The list of failures at the time included the development of the Soviet Union's hydrogen bomb, the Soviet invasions of Hungary and Czechoslovakia, the Cuban Missile Crisis, and the Missile Gap. The Tet Offensive, the collapse of the Soviet Union, and the weapons of mass destruction (WMD) fiasco in Iraq would soon be added to the list of widely recognized (at least by decisionmakers) "intelligence failures."

Nor was the U.S. Intelligence Community the only one to suffer such indignities. The Soviets had their Operation RYAN, the Israelis their Yom Kippur War, and the British their Falkland Islands venture. In each case, after the fact, senior government officials, the press, and ordinary citizens alike pinned the black rose of failure on their respective intelligence communities. To be honest, in some cases the intelligence organization in question deserved the criticism, but in many cases it did not, or at least it did not deserve the full measure of fault it received. Whether the blame was earned or not, however, in the aftermath of each of these cases, commissions were duly summoned, the causes of the failure investigated, recommendations made, and changes, to one degree or another, ratified regarding the way intelligence was to be done in the future. In contrast, while much of the record is still out of the public eye, intelligence successes have rarely received such lavish attention.

Why do intelligence professionals find intelligence so difficult, indeed impossible, to evaluate while decisionmakers do so routinely? Is a practical model available for thinking about the problem of evaluating intelligence? What are the logical consequences of this model for both intelligence professionals and decisionmakers? Finally, is there a way to test the model using real-world data? Prior to answering these questions, a story seems relevant.