If economist John R. Lott didn’t exist, pro-gun advocates would have had to invent him. Probably the most visible scholarly figure in the U.S. gun debate, Lott has given an immense boost to the arguments of the National Rifle Association with his densely statistical work. His 1998 book “More Guns, Less Crime” — which extolled the virtues of firearms for self-defense and has sold some 100,000 copies in two editions, quite an accomplishment for an academic book — has served as a Bible for proponents of “right to carry” laws (also known as “shall issue” laws), which make it easier for citizens to carry concealed weapons. Were Lott to be discredited, an entire branch of pro-gun advocacy could lose its chief social scientific basis.
That may be happening. Earlier this year, Lott found himself facing serious criticism of his professional ethics. Pressed by critics, he failed to produce evidence that he had ever conducted a survey cited in the second edition of “More Guns, Less Crime,” one that supposedly found that “98 percent of the time that people use guns defensively, they merely have to brandish a weapon to break off an attack.” Lott then made matters even worse by posing online as a former student, “Mary Rosh,” and using the alias to attack his critics and defend his work. When an Internet blogger exposed the ruse, the scientific community was outraged. Lott had created a “false identity for a scholar,” charged Science editor-in-chief Donald Kennedy. “In most circles, this goes down as fraud.”
Lott’s recent baggage makes him an impeachable witness in the push to pass state-level right to carry laws, and raises questions about his broader body of work. Kennedy and others have even likened Lott to Michael Bellesiles, the Emory University historian who could not produce the data at the heart of his award-winning 2000 book “Arming America”, which had seemed to undermine the notion that there was widespread gun ownership and usage in colonial America. But while Bellesiles resigned after a university panel challenged his credibility, thus far Lott has escaped a similar fate. An academic rolling stone, Lott has held research positions at the University of Chicago and Yale law schools, but currently works at the American Enterprise Institute (AEI), a Washington think tank much smiled upon by the Bush administration. AEI will not say whether it will investigate its in-house guns expert; by e-mail, AEI president Christopher DeMuth declined to comment on the possibility.
Lott’s defenders rightly point out that the missing survey — which was completely lost in a computer crash, Lott says — isn’t central to the argument of “More Guns, Less Crime”. But as Harvard economist David Hemenway wrote in a recent critique of Lott’s latest book, “The Bias Against Guns”, one must have “faith in Lott’s integrity” before accepting his statistical results. That is because in the dauntingly complex subfield of econometrics, statistical manipulation is a constant concern. In a recent attempt to rescue his beleaguered “More Guns, Less Crime” hypothesis from criticism, Lott has been caught massaging his data to favor his argument. In subsequent exchanges with Mother Jones, he changed his story several times about a key data table that was misleadingly labeled — and then surreptitiously amended — on his website. Nevertheless, most pro-gun scholars and political conservatives have yet to call Lott to account.
Lott’s colleagues credit him with a brilliant empirical mind, an impressive array of scholarly papers, and a pioneering willingness to make his data available on the Internet. Yet Lott is also known for a fiery personality. Yale economist Ian Ayres, who helped Lott get a research job at the Yale Law School but has since criticized his former colleague’s work, says: “A lot of people would say, thank God Lott is still in the academy, but thank God he’s not at my school.”
Lott made his name as a guns expert in the standard academic way: by publishing in a peer-reviewed journal. In an influential 1997 article in the Journal of Legal Studies, Lott and co-author David Mustard examined crime data from all 3,054 U.S. counties from 1977 to 1992 to test the impact of right to carry laws. During those years ten states passed such legislation, and Lott and Mustard’s regression analyses — complex statistical techniques used to uncover apparent causal links by controlling for other variables — found that right to carry laws had a stunning deterrent effect on violent crime, particularly rape and murder. Their study, they wrote, showed concealed handguns to be “the most cost-effective method of reducing crime thus far analyzed by economists.”
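To give a rough sense of what such a regression looks like, here is a minimal sketch in Python using simulated data and hypothetical column names. The published Lott-Mustard models are far more elaborate, with dozens of demographic and arrest-rate controls, so this illustrates only the general technique, not their actual specification.

```python
# Illustrative sketch only: a fixed-effects panel regression of the general
# form used in the right-to-carry literature. The data are simulated and the
# column names hypothetical; this is not the Lott-Mustard specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated county-year panel: a log crime rate and a shall-issue law dummy.
counties, years = range(50), range(1977, 1993)
df = pd.DataFrame([(c, y) for c in counties for y in years],
                  columns=["county", "year"])
df["shall_issue"] = ((df["county"] % 2 == 0) & (df["year"] >= 1987)).astype(int)
df["log_violent_rate"] = rng.normal(5.0, 0.3, len(df)) - 0.05 * df["shall_issue"]

# Regress log crime on the law dummy with county and year fixed effects,
# so each county is effectively compared with itself before and after the law.
model = smf.ols("log_violent_rate ~ shall_issue + C(county) + C(year)",
                data=df).fit()
print(model.params["shall_issue"], model.pvalues["shall_issue"])
```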
In a country with over 200 million guns in circulation and some 29,000 gun deaths a year, Lott’s work fed into a fraught political debate. U.S. firearms researchers, notes University of California-Berkeley criminologist Franklin Zimring in a recent article, find themselves “organized into sectarian groups” even on seemingly straightforward empirical questions, such as the number of times per year that guns are used for self-defense. In this fray, Lott portrays himself as a dispassionate scientist rifling through mounds of data. “My only objective is to study the measurable effect that gun laws have on incidents of violence,” he writes in “The Bias Against Guns”.
But this is not the first time Lott has been accused of overstating his results. In early 1997, Lott testified before Nebraska lawmakers with advance galleys of his Journal of Legal Studies article in hand, claiming to have proven a causal link between right to carry laws and lower crime. Yet soon afterwards, in the same journal, economist Dan Black and criminologist Daniel Nagin showed that slight alterations to Lott’s data and model dramatically changed the results. For instance, removing Florida from the analysis caused the beneficial impact of right to carry laws on murder and rape to vanish entirely.
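Black and Nagin’s point was about fragility: if dropping a single state from the sample erases the result, the result was never robust. A hedged sketch of that kind of check, reusing the hypothetical data frame from the earlier example, might look like this:

```python
# Illustrative sketch only: a leave-one-unit-out robustness check of the sort
# Black and Nagin applied when they re-ran the model without Florida.
# Reuses the hypothetical `df` and formula from the previous sketch.
import statsmodels.formula.api as smf

formula = "log_violent_rate ~ shall_issue + C(county) + C(year)"
effects = {}
for dropped in df["county"].unique():
    subset = df[df["county"] != dropped]
    fit = smf.ols(formula, data=subset).fit()
    effects[dropped] = fit.params["shall_issue"]

# If the estimated effect swings wildly (or changes sign, or loses
# significance) when one unit is excluded, the headline finding is fragile.
print(min(effects.values()), max(effects.values()))
```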
Lott had an answer for Black and Nagin, as he has had for each subsequent critic. His rebuttals tend to be mind-bogglingly complicated, involving things like ordinary least squares and Poisson distributions. In calling Lott’s overall thesis junk science, Skeptical Inquirer magazine noted his tendency to make “arguments so complex that only other highly trained regression analysts can understand, let alone refute, them.” This was not meant as praise.
Still, economists like Stanford’s John Donohue and Georgetown’s Jens Ludwig say that when first published in 1997, Lott’s work was novel and even cutting edge. But the intervening years — and increased scholarly scrutiny — have not been kind to the “More Guns, Less Crime” idea. In fact, social scientists have turned away from the thesis even as Lott has stuck by his original conclusions. Maintaining that argument has required Lott to go to considerable lengths, as demonstrated by a recent brouhaha over a massive critique of his work in the Stanford Law Review.
The Stanford Law Review critique, authored by Yale’s Ayres and Stanford’s Donohue, analyzed more recent crime statistics, extending Lott’s original 1977-1992 crime dataset through the late 1990s. As it turned out, after 1992, partly due to the end of the 1980s’ crack cocaine-related crime wave, crime rates dropped dramatically in states with large urban centers, many of which had not passed right to carry laws. That fact is highly inconvenient for the “More Guns, Less Crime” argument. After testing Lott and Mustard’s analysis with more years of data and different econometric tweaks, Donohue and Ayres conclude, “No longer can any plausible case be made on statistical grounds that shall-issue laws are likely to reduce crime for all or even most states”; their analysis even suggested such laws might increase violent crime.
This may seem like an ordinary scholarly dispute, but it quickly devolved into the sort of controversy that has followed much of Lott’s recent work. Lott was invited to write a response to Ayres and Donohue, scheduled to run alongside their critique in the Stanford Law Review. He accepted the invitation, but then abruptly withdrew his name from the response as the editorial process wound down. The cause, according to then-Stanford Law Review president Benjamin Horwich, was a minor editing dispute involving literally one word; Lott, however, complains of an editorial “ultimatum” from the journal.
And so Lott’s response was published under the names of two co-authors, economists Florenz Plassmann and John Whitley. They accused Donohue and Ayres of having “simply misread their own results” and, in a feat of statistical one-upmanship, claimed to extend the crime data even further, through 2000, rescuing the “More Guns, Less Crime” hypothesis in the process. But when Ayres and Donohue analyzed this new data, they say they found severe coding errors that, once corrected, thoroughly obliterated the attempt to confirm the “More Guns, Less Crime” thesis. Similar coding errors, wrote Donohue and Ayres, have cropped up elsewhere in Lott’s work, including in his new book, “The Bias Against Guns”.
A charge of coding errors, while not unheard of, is embarrassing, since it implies that only by using mistaken data can Lott preserve his thesis. The errors might have been accidental, but since the Stanford Law Review exchange, Lott has continued to defend the erroneous work. “There’s a bit of concern over making the error, but now there’s huge concern over not backing away from the results now that it has been pointed out,” says Ayres.
In May, Lott told the Chronicle of Higher Education that the claim of coding errors had not been reviewed by a third party. Now, though, he admits the errors but calls them “minor” and claims they don’t appreciably affect the results of the Plassmann-Whitley paper (which is, of course, really his own). “I knew he was going to say that,” says Donohue when informed of Lott’s response.
To get to the bottom of the dispute — which goes to the heart of the continuing validity of “More Guns, Less Crime” — Donohue and Ayres responded to Plassmann and Whitley by contrasting two key tables, one using their (read: Lott’s) miscoded data and one correcting the coding errors. The first table, using the miscoded data, shows statistically significant decreases in murder, rape, and robbery. The second, using corrected data, shows statistically insignificant decreases in murder, rape, and robbery, along with statistically significant rises in property crimes, auto theft, and larceny, which Plassmann and Whitley had also noted in their paper.
In the face of this evidence, how can Lott continue to claim the coding errors don’t matter? In an interview conducted on August 18 (transcript), Lott told me that he had posted “corrected” tables on his website for all to see. But when I downloaded Lott’s “corrected” version of the contested table, it showed the same numerical values as Donohue and Ayres’s — that is, the coding errors were gone — yet bizarrely claimed the properly coded data still indicated statistically significant drops in murder, rape, and robbery. That’s because Lott had introduced a new twist: rather than simply fixing the incorrectly coded data, he omitted a key calculation regarding statistical significance used in the Plassmann-Whitley paper. (For statistics geeks, it’s called “clustering at the state level.”) With no other way to save his thesis, Lott, you could say, changed the rules — rules his own team had laid down — in the middle of the game.
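For readers who want to see concretely what omitting that calculation does, here is a minimal sketch, again with the hypothetical data from the earlier examples: the estimated effect is identical with and without clustering, but clustered standard errors are typically larger, so a drop that looks statistically significant without clustering can become insignificant with it.

```python
# Illustrative sketch only: the same toy regression fit twice, once with
# default standard errors and once with errors clustered by state (the step
# omitted from the "corrected" table). The coefficient is identical in both
# fits; the standard errors, and hence the significance, need not be.
import statsmodels.formula.api as smf

df["state"] = df["county"] % 10  # hypothetical assignment of counties to states

formula = "log_violent_rate ~ shall_issue + C(county) + C(year)"
plain = smf.ols(formula, data=df).fit()
clustered = smf.ols(formula, data=df).fit(cov_type="cluster",
                                          cov_kwds={"groups": df["state"]})

print(plain.pvalues["shall_issue"], clustered.pvalues["shall_issue"])
```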
Lott’s behavior when confronted with this raises even more questions. On the website, Lott claimed the “corrected” table used “clustering,” when it did not. In a heated interview on August 19 (transcript), Lott said this labeling claim must be an error. But the very next day, he e-mailed a file containing precisely the same table, claiming that all the tables on his website were “clearly and properly labeled.”
On September 2, Lott changed his story yet again, e-mailing me that “the file should now be returned to what had been up there before.” But in the new file, the key table had been altered to remove the questionable clustering assertion, yet it had inexplicably reverted to the miscoded Plassmann-Whitley findings that Donohue and Ayres had long since debunked and that Lott himself had admitted to me were incorrectly coded. And despite all these changes, as of October 13, Lott’s website still labels the table as last corrected on “April 18, 2003.”
Perhaps because correcting the coding errors sinks his latest attempt to revive the “More Guns, Less Crime” hypothesis, Lott has since taken to criticizing the Stanford Law Review for not being “a refereed academic journal,” as he put it in an e-mail. That’s true: The nation’s most prestigious law reviews are run and edited by students, which hardly keeps leading academics from publishing in them. Yet Lott’s critique is once again misleading: His own newspaper op-eds aren’t peer reviewed, and Lott admits that Regnery Press, his latest book publisher, does not use peer review. Furthermore, now that Lott has left academia and has an ethics cloud over his head, he may have difficulty being published in peer-reviewed journals. “It’s strange that he’s putting so much of his weight on the fact that Stanford is not a refereed journal,” says Ayres, “because there’s a possibility that this is where he’s going to be moving towards himself.”
Given all the questions about Lott’s ethics — and his stubborn reluctance to back away from his mistakes — pro-gun scholars might feel an intellectual obligation to challenge him. Some do: Randy Barnett, a “pro-gun rights” legal scholar at Boston University, insists that a non-politicized investigation is needed to determine whether the missing defensive gun use survey actually existed, since “fraud is what is on the table.” One of Michael Bellesiles’ most dogged critics, Northwestern University law professor James Lindgren, also prepared a report investigating Lott’s survey claims. “I have serious doubts whether he ever did the study,” says Lindgren, “and the only evidence that he’s brought forward for having done the study is ambiguous” — an NRA activist who claims to remember having been called and asked about defensive gun uses.
But many gun rights conservatives have taken a pass on the Lott issue. A glowing review of “The Bias Against Guns” in National Review — a magazine that made much of the Bellesiles affair — failed to mention Lott’s recent difficulties in corroborating the existence of his survey. “It’s so interesting that Michael Bellesiles gets hung from the highest tree, while Lott, if anything, he’s been more prominent in the last couple of months,” says Donohue.
The right has good reason to stick by Lott: “The entire ideology of the modern gun movement has basically been built around this guy,” says Saul Cornell, an Ohio State University historian who has written widely on guns. Over the years the pro-gun intellectual agenda has had two prongs: defending a revisionist legal understanding of the Second Amendment in constitutional law, and refuting social scientists and public-health researchers who argue that the widespread availability of guns in America plays a key role in the nation’s staggering number of homicides and suicides. Without Lott’s work, the latter argument becomes much harder to make.
More conservative soul-searching may result from a forthcoming National Academy of Sciences report from an expert panel dedicated to “Improving Research Information and Data on Firearms.” Scheduled for release in late fall, the panel’s report will address Lott’s work. Duke University economist Philip Cook, co-editor of the Brookings Institution book “Evaluating Gun Policy”, draws a historical analogy: In the late 1970s, after economist Isaac Ehrlich published a complex analysis supposedly proving that every execution in America deters about eight murders, the NAS released a devastating expert report debunking Ehrlich’s findings. The same thing could happen to Lott.
If it does, we can be reasonably sure of one thing: Lott will have a response ready. “Lott will never say, ‘that’s a good point.’ Lott will offer you some rebuttal,” says Georgetown gun policy expert Jens Ludwig. But if Lott won’t fully address the errors that undermine his thesis, it may fall to someone else — his conservative peers, the American Enterprise Institute, perhaps — to step in and do it for him.