US juries get verdict wrong in one of six cases: study
Jun 28 09:55 AM US/Eastern
So much for US justice: juries get the verdict wrong in one out of six criminal cases and judges don't do much better, a new study has found.
And when they make those mistakes, both judges and juries are far more likely to send an innocent person to jail than to let a guilty person go free, according to an upcoming study out of Northwestern University.
"Those are really shocking numbers," said Jack Heinz, a law professor at Northwestern who reviewed the research of his colleague Bruce Spencer, a professor in the statistics department.
Recent high-profile exonerations of scores of death row inmates have undermined faith in the infallibility of the justice system, Heinz said.
But these cases were considered relative rarities given how many checks and balances - like rules on the admissibility of evidence, the presumption of innocence and the appeals process - are built into the system.
"We assume as lawyers that the system has been created in such a way to minimize the chance we'll convict the innocent," he said in an interview.
"The standard of proof in a criminal case is beyond a reasonable doubt - it's supposed to be a high one. But judging by Bruce's data the problem is substantial."
The study, which looked at 290 non-capital criminal cases in four major cities from 2000 to 2001, is the first to examine the accuracy of modern juries and judges in the United States.
It found that judges were mistaken in their verdicts in 12 percent of the cases while juries were wrong 17 percent of the time.
More troubling, an innocent defendant had a 25 percent chance of being convicted by a jury and a 37 percent chance of being wrongfully convicted by a judge.
The good news was that the guilty did not have a great chance of getting off: there was only a 10 percent chance that a jury would let a guilty person go free, while a judge wrongfully acquitted a defendant in 13 percent of the cases.
But that could have been because so many of the cases ended in a conviction: juries convicted 70 percent of the time while the judges said they would have found the defendant guilty in 82 percent of the cases.
The study did not look at enough cases to prove that these numbers are true across the country, Spencer cautioned.
But it has provided insight into how severe the problem could be, and has also shown that measuring the problem is possible.
"People have to have some faith in the court system. We have to know how well our systems are working," Spencer said in his suburban Chicago office.
"We know there are errors because someone confesses after the fact or there's DNA evidence," he said.
"What's the optimal tradeoff given that juries ultimately will make mistakes? ... Are those balances something society is okay with?"
Spencer's study does not examine why the mistakes were made or which cases ought to be overturned.
Instead, he determined the probability that a mistake was made by looking at how often judges disagreed with the jury's verdict.
"If they disagree they can't both be right," he explained.
Spencer found an agreement rate of just 77 percent, which means a lot of mistakes were being made.
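As a rough illustration (a toy model, not the paper's actual method), suppose the judge and jury err independently given the true facts. Then the chance they agree is the chance both are right plus the chance both are wrong, and the article's stated error rates imply an agreement rate in the neighborhood of the observed 77 percent:

```python
# Toy independence model: agreement = P(both correct) + P(both wrong).
# The accuracy figures are the article's (judges wrong 12%, juries
# wrong 17%); the independence assumption is ours, not the study's.
judge_acc = 1 - 0.12
jury_acc = 1 - 0.17

agreement = judge_acc * jury_acc + (1 - judge_acc) * (1 - jury_acc)
print(f"implied agreement rate: {agreement:.1%}")  # about 75%, near the observed 77%
```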
Spencer hopes to find funding for a much larger study whose results could be representative of the overall system.
Finding a solution will be much harder to do than quantifying the problem, Heinz warned.
"The sources of the errors are quite resilient to correction," he said.
"They have to do with all sorts of biases and the strong presumption of guilt when someone is arrested and brought to trial."
The study will be published in the July edition of the Journal of Empirical Legal Studies.
I've read this three times and still don't understand the methodology (chalk that up to poor reporting -- but hey, why let facts get in the way of a good lede?). Near as I can tell, though, the "error rate" was determined by the number of times a judge disagreed with a jury. Okaaaaay ...
So I guess the times a judge has told me he thought the defendant was guilty count . . . how?
Did anybody actually think, at one time, that our juror system was error free? Did we need a "study" to prove it was not?
The fact that we are humans seems to naturally dictate that, of course, errors occur. That's why we have appeals, and writs, and pardon procedures, and . . . oh forget it. I'm so sick of this issue.
This is the worst example of data analysis I have ever seen. "If they disagree, they can't both be right." SO WHO IS RIGHT? WHOEVER VOTES FOR NOT GUILTY--AND THEN THOSE ARE CONSIDERED THE INNOCENTS WRONGLY CONVICTED?
And the most important part: "not guilty beyond a reasonable doubt" and actually not being the person who did the crime are two COMPLETELY DIFFERENT THINGS!
Which begs the question: "What was 'wrong' about the verdict?" If the judge thought guilty, but the jury acquitted, was that a wrongful acquittal, or was that the judge getting it wrong? If the judge thought the defendant was guilty, but the jury acquitted because the judge suppressed evidence which would have resulted in a conviction if heard, is the judge "wrong" to say the defendant was guilty, or is the jury verdict "wrong" albeit the only proper legal verdict?
Also, the study presumably only looked at jury trials, because in bench trials there would be no jury to compare the court's ruling with. Do the judge's decisions in those cases not count at all? How about this: we can probably assume that bench trials occur with greater frequency in courts where the judge is known to be more likely to acquit, and more jury trials will occur in courts where the defense feels the judge is more likely to convict. Therefore the numbers will be skewed by the fact that the trials they are looking at will be more likely to occur in courts where the judge is known to favor conviction in close cases.
Given the description of the study in the article, drawing any conclusions whatsoever from this data seems an exercise in futility.
The study itself may be found here.
I hate to admit that I actually looked this stuff up . . .
Here's a link to the actual report:
In the report, they looked at 290 cases. The basis for determining a possible "error" was whether the judge hearing the case would have reached the same result as the jury. According to the raw data (again, I can't believe I actually read this stinking thing), the judge and the jury reached different results 62 times. In 47 of those 62 cases, the judge would have convicted when the jury did not; only 15 times out of 290 did a jury convict when the judge would not have.
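Running the quoted raw numbers through a quick sanity check (my arithmetic on the counts above, not anything from the paper):

```python
# Raw counts quoted above: 290 cases, 62 judge/jury disagreements,
# 15 of them cases where the jury convicted but the judge would not have.
total = 290
disagreements = 62
jury_convict_judge_acquit = 15
judge_convict_jury_acquit = disagreements - jury_convict_judge_acquit

agreement_rate = (total - disagreements) / total
print(judge_convict_jury_acquit)                # 47
print(f"agreement rate: {agreement_rate:.0%}")  # 79% on these raw counts
```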
It's also interesting (as much as this can be interesting) that the study puts the jury error rate at 1 in 8, while the news article says 1 in 6.
The article also fails to mention the study's ultimate conclusion: that the accuracy of jury verdicts appears to be something that could be measured, if somebody wanted to take the effort to study it in more detail.
My test would have to be whether someone else came forward and provided sufficient evidence to show the guilt of someone other than the person accused. I get the impression that does not happen very often. Credible retractions of accusations would fit in there somewhere too. Otherwise, you could study this issue to death and still not know who was right and who was wrong. I certainly believe that wrongful acquittals are more prevalent than this piece claims (assuming the state's burden of proof were ignored in determining truth).
The paper actually makes a distinction between a "procedurally correct" verdict, defined in the paper as "one which applies the legal standards correctly: if proof is not demonstrated to the standards prescribed by the law, the defendant should be acquitted," and a verdict which is correct from an "omniscient viewpoint," which is "one that would be reached by an impartial and rational observer with perfect information (including complete and correct evidence) and complete understanding of the law. If the defendant committed the crime, the correct decision is guilt, regardless of the strength of evidence." But here's where I think something came off the tracks: "From the observed agreement rates, the probability of a correct verdict by the jury is estimated at 87% for [one data set] and 89% for [the other]. Those accuracy rates correspond to error rates of 1 in 8 and 1 in 9, respectively. The accuracy rates apply to both the 'procedural' and the 'omniscient' interpretations of correct verdict noted earlier."
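For what it's worth, the "1 in 8" and "1 in 9" figures are just the quoted accuracy figures inverted, which checks out arithmetically:

```python
# error rate = 1 - accuracy; "1 in N" means N = 1 / error, rounded.
for accuracy in (0.87, 0.89):
    error = 1 - accuracy
    print(f"accuracy {accuracy:.0%} -> error 1 in {round(1 / error)}")
```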
The math may work out that way, but that can't be true. Assuming evidence was suppressed or otherwise not presented to the jury, the judge is far more likely to know what that evidence was than the jurors are, whereas the evidence leading to the "procedurally correct" verdict presumably would be the same for the judge and the jury. Therefore, when the result is "incorrect" under the omniscient viewpoint, the judge would be far more likely to recognize that and disagree with the jury. Since this disagreement percentage plays a role in the calculation of the percentages of error, the rates would have to be significantly different for "omniscient" and "procedural" analyses of correctness. I just can't see how that could not be true, regardless of how the math works out.
© TDCAA, 2001. All Rights Reserved.