Emperors With No Clothes and Junk Science

April 25, 2017
By Hanging Out with Carl Gunn

BLOG BULLETS:

  • A recent report on forensic science by the (former) President’s Council of Advisors on Science and Technology notes that various developments and studies over the past two decades have raised increasing doubt about the validity and reliability of some important forms of forensic evidence.
  • In addition to raising doubts about some forms of forensic evidence less commonly used in federal court, the report concludes, as to DNA, that analysis of “complex DNA mixtures” has not been established to be valid and reliable, and, as to fingerprints, (a) finds “substantial” false positive rates that are likely higher than many jurors expect and (b) suggests standards for fingerprint testing to lessen the risk of misidentification.
  • The report ends with a list of recommendations, including recommendations that federal judges (a) take “appropriate scientific criteria” into account in determining foundational validity under Rule 702 of the Federal Rules of Evidence and (b) ensure expert testimony is limited to what the empirical evidence supports.


NOW THE BLOG:

Another e-mail with another interesting report, about forensic science testimony, recently came across my computer – a report by the President’s [the old one, I’m afraid] Council of Advisors on Science and Technology, entitled “Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods.”  The report, which is attached in full here, is about “feature-comparison methods,” which it defines as “methods that attempt to determine whether an evidentiary sample (e.g., from a crime scene) is or is not associated with a potential ‘source’ sample (e.g., from a suspect), based on the presence of similar patterns, impressions, or other features in the sample or source.”  As examples, the report lists DNA, hair, latent fingerprints, firearms and spent ammunition, toolmarks and bitemarks, shoeprints and tire tracks, and handwriting.

In explaining the reason for the report, the first paragraph of the Executive Summary states, in what might be a nice introduction for a motion challenging the admissibility of expert testimony:

Developments over the past two decades – including the exoneration of defendants who had been wrongfully convicted based in part on forensic-science evidence, a variety of studies of the scientific underpinnings of the forensic disciplines, reviews of expert testimony based on forensic findings, and scandals in state crime laboratories – have called increasing attention to the question of the validity and reliability of some important forms of forensic evidence and of testimony based upon them.

The report goes on to cite, at page 4 of the Executive Summary, a 2009 National Research Council report it describes as “the most comprehensive review to date of the forensic sciences in this country” that

made clear that some types of problems, irregularities, and miscarriages of justice cannot simply be attributed to a handful of rogue analysts or underperforming laboratories, but are systemic and pervasive – the result of factors including a high degree of fragmentation (including disparate and often inadequate training and educational requirements, resources, and capacities of laboratories), a lack of standardization of the disciplines, insufficient high-quality research and education, and a dearth of peer-reviewed studies establishing the scientific basis and validity of many routinely used forensic methods.

And the 2009 report found these shortcomings “especially prevalent” among the feature-comparison disciplines.

This discussion of the 2009 report follows a page on which the new President’s Council report cites several examples of problems established by prior studies, including the following:

• a study in which DNA testing revealed that 11 percent of hair samples found to match actually came from different individuals;
• a study that found “there was insufficient research and data to support drawing a definitive connection between two bullets based on compositional similarity of the lead they contain”;
• a study in which a committee found that “confirmation bias,” defined as “the inclination to confirm a suspicion based on other grounds,” contributed to a fingerprint misidentification in a Spanish terrorist bombing case (the 2004 Madrid train bombing); and
• studies concluding that “current procedures for comparing bitemarks are unable to reliably exclude or include a suspect as a potential biter.”

The President’s Council report then goes on to discuss the importance of forensic science techniques having both “foundational validity,” meaning that a method is “in principle, . . . reliable” (emphasis in original), and “validity as applied,” meaning that the method is “reliably applied in practice” (emphasis in original).  Foundational validity requires empirical testing showing that the method’s results are reproducible and providing valid estimates of its accuracy.  Validity as applied requires that the examiner have been shown both to be capable of reliably applying the method and to have actually applied it reliably.  Validity as applied also requires that any assertions of accuracy or degree of confidence be scientifically valid; the report emphasizes that testimony should focus on the error rates shown by empirical studies, not on general expressions of confidence or consensus among experts.

The report then goes on to discuss some of the specific types of feature comparison, beginning with DNA.  It acknowledges the foundational validity of DNA testing of samples from a single individual or just two individuals, but notes “the chance of human error is much higher” and that there is a need for proficiency testing.  It then goes on to consider the testing of “complex DNA mixtures” where there is an unknown number of contributors and concludes that “subjective analysis of complex DNA mixtures has not been established to be foundationally valid and is not a reliable methodology.”

The report then goes on to consider bitemark analysis, fingerprint analysis, firearms analysis, footwear analysis, and hair analysis, and offers the following conclusions on those types of feature comparison:

• For bitemark analysis, the report concludes “[c]urrent protocols do not provide well-defined standards”; the few studies that have been undertaken show false positive rates of 10% and higher; and examiners cannot even consistently agree on whether an injury is a bitemark.  As a result, bitemark analysis is “far from meeting scientific standards of foundational validity.”
• For fingerprint analysis, the report notes the long use of this method without any empirical studies of its error rate; two recent studies that do establish foundational validity, but with false positive rates that are “substantial” and “likely to be higher than expected by many jurors” – specifically, 1 error in 306 cases in one study and 1 error in 18 cases in the other; concern about “confirmation bias,” which suggests examiners should document their analysis of the latent print before looking at any known print; concern about “contextual bias,” which makes it important that the examiner not be exposed to other information about the case; and the need for better proficiency testing.  The report then concludes that scientific validity requires that the expert (1) have undergone relevant proficiency testing and report the results of that testing; (2) disclose whether the features of the latent print were documented in writing before the known print was examined; (3) provide a written analysis of the comparison; (4) disclose whether he or she was aware of any other facts about the case; and (5) verify that the latent print is similar in quality to the range of latent prints considered in the foundational studies.
• For firearms analysis, the report notes that most pre-2009 studies were “inappropriately designed” to assess foundational validity and “underestimated the false positive rate by at least 100-fold”; that there has been one “appropriately designed” study since 2009, which estimated the false positive rate at 1 in 66, with a confidence bound indicating it could be as high as 1 in 46 (the short sketch after this list shows how such a bound is computed); and that “the current evidence still falls short of the scientific criteria for foundational validity” because more than one valid study is needed and the studies should be published in peer-reviewed literature.
• For footwear analysis, the report found there were no appropriate studies and that footwear analysis is “unsupported by any meaningful evidence or estimates of . . . accuracy and thus [is] not scientifically valid.”
• For hair analysis, the report notes a “handful” of studies from the 1970s and 1980s that subsequent studies found to have “substantial flaws in the methodology,” as well as a 2002 FBI study finding that in 9 of 80 cases in which the FBI lab had found hairs microscopically indistinguishable, DNA analysis showed the hairs came from different individuals.
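
To make these error-rate numbers concrete, here is a minimal sketch of the standard calculation behind a statement like “estimated at 1 in 66, with a confidence bound indicating it could be as high as 1 in 46”: a point estimate plus a one-sided 95% upper confidence bound on the false positive rate.  I use the common Clopper-Pearson method here, and the counts are hypothetical placeholders, not the actual data from the studies the report reviews.

```python
# A minimal sketch (Python, using SciPy) of the statistics behind the
# error-rate statements above.  All counts are hypothetical placeholders,
# NOT the actual data from the studies the report reviews.
from scipy.stats import beta

def upper_bound_95(false_positives: int, comparisons: int) -> float:
    """One-sided 95% Clopper-Pearson upper confidence bound on a false
    positive rate, given k errors observed in n known-nonmatch comparisons."""
    if false_positives >= comparisons:
        return 1.0
    # The exact upper bound is a quantile of a Beta(k + 1, n - k) distribution.
    return beta.ppf(0.95, false_positives + 1, comparisons - false_positives)

# Hypothetical study: 33 false positives in 2,200 known-nonmatch comparisons.
k, n = 33, 2200
print(f"Point estimate:  1 in {n / k:.0f}")                  # ~1 in 67
print(f"95% upper bound: 1 in {1 / upper_bound_95(k, n):.0f}")  # a noticeably worse rate

# Why a "small" rate still matters: the chance of at least one false
# positive across many independent comparisons grows quickly.
rate, cases = 1 / 306, 100
print(f"P(at least one error in {cases} comparisons): {1 - (1 - rate) ** cases:.0%}")  # ~28%
```

The broader point is the report’s own: with studies of modest size, the plausible true error rate can be meaningfully worse than the headline estimate, which appears to be why the report gives confidence bounds alongside point estimates.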

The report then ends with a list of recommendations, including some for DOJ and some for the courts.  I’m not optimistic about the recommendations to DOJ being adopted by our new administration, which seems less focused on justice and more focused on “getting criminals off the streets,” but you could potentially use the recommendations for the courts.  One of those is that “[f]ederal judges should take into account the appropriate scientific criteria for assessing scientific validity,” including both foundational validity, under Rule 702(c) of the Federal Rules of Evidence, and validity as applied, under Rule 702(d).  Another is that federal judges “ensure that testimony about the accuracy of the method and the probative value of proposed identifications is scientifically valid in that it is limited to what the empirical evidence supports,” and that statements suggesting greater certainty be prohibited.

You have the full report linked above to read in depth, but it seems one could use it for several things.  It may suggest Daubert challenges – to the reliability of the evidence in general, to its quality in the case at bar, or to the qualifications or methodology of the particular expert.  It may suggest avenues of cross-examination and/or discovery to seek.  It may suggest rebuttal evidence – possibly even introduction of the report itself, as an admission of a party-opponent in the form of a “statement” by a government agency with expertise.  (For some ideas and cases to support making this argument, see the post entitled “Government Confessions!  Or at Least Admissions,” in the June 2014 link at the right.)  And I’m sure others will come up with other ideas.  Keep pushing the envelope and see what you get.
