- Low drug testing error rates don’t necessarily mean most positive tests are accurate.
- A Ninth Circuit opinion, not just statisticians, explains this.
- So think more about challenging positive drug test results.
NOW THE BLOG:
A couple of months ago, the FPD office here circulated an e-mail describing some experiences a couple of the DFPDs had had with mistakes in drug testing. It brought to mind what seems obvious about any testing process, certainly any testing process that relies on human beings at some point: there's always a chance of a mistake, or, put a little more technically, an "error rate."
I have a statistics background from college and am still very good friends with my former statistics professor, and the e-mail reminded me of a statistical phenomenon he once explained to me, one that seems counterintuitive at first but is easily shown mathematically when you think it through. It also suggests an argument we might want to make in cases where clients are denying the drug use that a dirty drug test suggests.
The phenomenon is called "Bayes' theorem," Bayes being some old, famous statistician or mathematician, and a theorem being what those folks call their theory of the case. (Maybe that analogy isn't completely fair, since mathematics and statistics are probably a little more definitive in their proofs than we lawyers are in our cases, especially on the defense side.) It works like this. Suppose there's an error rate of 1%, or 1 in 100, so out of 100 tests that are really negative, 1 will be erroneously found to be positive, which we'll call a "false positive," and out of 100 tests that are really positive, 1 will be erroneously found to be negative, which we'll call a "false negative." (Note that you could conceivably have different error rates for positives and negatives, but we won't complicate the analysis with that possibility for now.) Then suppose that the "true positive" rate in your samples is also 1%, or 1 out of 100, so that 99 out of every 100 samples provided are really negative and 1 is really positive.
Now consider what that means for a sample of 100 tests. First, 99 of those tests will be really negative, which we'll call "true negatives," and 1 will be really positive, which we'll call a "true positive." Since there's only a 1% error rate, that "true positive" has a 99% chance of testing positive, so let's assume it shows up in the testing as a positive. Next consider the testing of the 99 "true negative" samples. Applying our 1% error rate to those 99 samples gives us, on average, about one error, that is, one negative sample testing positive even though it's really negative. That's what we called a "false positive" up above.
Now look at what we have. First, we have only two positive test results: the positive result for the "true positive" and the "false positive" result for one of the "true negatives." Second, one out of those two positives, or 50%, is a false positive. In other words, even though there's only a 1% overall error rate, which seems pretty good, 50% of the positive tests are false positives. That doesn't sound like a very reliable basis for violating your client's probation or supervised release and putting him or her in prison.
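For readers who want to check the arithmetic, here is a short illustrative sketch of the hypothetical above, using the numbers the post assumes (a 1% error rate and a 1% "true positive" rate) and applying Bayes' theorem to ask what share of positive results are false:

```python
# Sketch of the 1%/1% hypothetical: what fraction of positive
# drug test results are actually false positives?

error_rate = 0.01   # chance a test result is reported wrong
prevalence = 0.01   # fraction of samples that are really positive

# Probability a randomly chosen sample tests positive:
# true positives that test positive, plus true negatives that test positive.
p_positive = prevalence * (1 - error_rate) + (1 - prevalence) * error_rate

# Bayes' theorem: P(really negative | tested positive)
p_false_given_positive = (1 - prevalence) * error_rate / p_positive

# prints "Share of positive results that are false: 50.0%"
print(f"Share of positive results that are false: {p_false_given_positive:.1%}")
```

Even though only 1 test in 100 is wrong, fully half of the positive reports are wrong, because the true positives are so badly outnumbered by the true negatives.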
Lest you think no judge would buy into this mathematical analysis, let me point out that this theorem is expressly discussed and acknowledged – albeit in a non-criminal context – in a Ninth Circuit opinion written by Judge Kleinfeld – Gonzalez v. Metropolitan Transportation Authority, 174 F.3d 1016 (9th Cir. 1999). I’ll lay it out exactly the way he says it in that opinion:
A more complete record can also illuminate another aspect of efficacy, the Bayes’ theorem problem that affects any random test given to a low incidence population. Nothing in this world is perfect. Suppose the combination of errors in the tests, including containers marked with someone else’s name or number than the person who urinated into them, typographical errors in the reports of test results and identifications of which employees produced which results, anomalous chemical reactions with other substances in people’s bodies such as medications and foods, and other random errors, cause an error rate such that one person out of 500 gets a report of “dirty” urine when it was actually “clean.” Suppose that there is a high rate of alcohol drug use among the employees . . . , and on any particular day one worker in 10 has alcohol or drugs in his blood. Then with a 1/500 false positive rate, out of 1,000 tests, 2 will be positive even though the employee’s urine was clean, and 100 will be positive correctly. Only one of the positives out of every 51 is false. Fifty out of 51 are accurate. That is a fairly effective test, in terms of reliability.
But if the workers are generally “clean,” the reliability of the test goes way down. Suppose on a particular day only one worker in 500 has ingested drugs or alcohol. Then with a 1/500 false positive rate, out of 1,000 tests, 2 will be correct positives and 2 will be false positives. Half the employees who get a “dirty” urinalysis report are unjustly categorized. A positive result is as likely to be false as true on so clean a population, even though the test is identical to the one that was quite effective for a population with a higher incidence of drug and alcohol usage.
Id. at 1023.
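Judge Kleinfeld's two scenarios can be reproduced the same way. The sketch below just plugs his numbers into the same calculation, a 1/500 false-positive rate applied first to a population where 1 worker in 10 uses and then to one where 1 in 500 uses; following his passage, it ignores false negatives (every real user is assumed to test positive):

```python
def positives_per_1000(prevalence, false_positive_rate, n_tests=1000):
    """Expected true and false positives out of n_tests, assuming no false negatives."""
    true_positives = n_tests * prevalence
    false_positives = n_tests * (1 - prevalence) * false_positive_rate
    return true_positives, false_positives

# High-incidence population: 1 worker in 10 uses, 1/500 false-positive rate.
tp, fp = positives_per_1000(1 / 10, 1 / 500)
print(tp, fp)  # 100 true positives, about 2 false: roughly 1 in 51 positives is false

# Low-incidence population: 1 worker in 500 uses, same test.
tp, fp = positives_per_1000(1 / 500, 1 / 500)
print(tp, fp)  # about 2 true and 2 false positives: half the positives are false
```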
Judge Kleinfeld’s caveat about “a population with a higher incidence of drug and alcohol usage” of course triggers the thought that our clients on probation and supervised release are such a population. But granting that our clients’ incidence of use may be higher, how high is it? (Note that their incentive not to use is higher as well, since they can go to prison if they do use.) If you think about just the clients you see again in violation hearings, you might think it’s pretty high – maybe a lot higher than that 1% “true positive” rate I use in my hypothetical above. But what about all the clients we never see again because they’re submitting 4, 6, or 8 clean tests every month? If we add those clients into the mix, that 1% “true positive” rate starts becoming more plausible.
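To see how much that "true positive" rate matters, one can hold the 1% error rate fixed and vary the prevalence. These figures follow from the same Bayes calculation as above; they are illustrative, not empirical data about any actual supervised population:

```python
# Share of positive results that are false, at a fixed 1% error rate,
# for a range of hypothetical "true positive" (prevalence) rates.
error_rate = 0.01
for prevalence in (0.01, 0.05, 0.10, 0.25):
    p_pos = prevalence * (1 - error_rate) + (1 - prevalence) * error_rate
    false_share = (1 - prevalence) * error_rate / p_pos
    print(f"prevalence {prevalence:>4.0%}: {false_share:.0%} of positives are false")
```

The cleaner the population, the less a positive result means; that is the heart of the argument.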
I'm not sure where to get the statistics on the rates that matter, the "true positive" rate and the error rate, but I think it's worth thinking about. And even if we don't get reliable evidence about those rates, doesn't this counterintuitive "Bayes' theorem" raise more of a doubt about drug test results than we, or better yet the court we want to argue this to, might initially have had?