99% of Credit Reports Are Void of Material Errors!
No, that’s not a fictitious headline designed to infuriate you into reading this piece. It’s actually part of the findings of a recently published study on the accuracy of credit reports by an organization called the Policy and Economic Research Council, or “PERC.” According to their research, 0.93% of credit reports had an error that, when corrected, resulted in an increase of more than 25 credit score points, which is what they considered a “material error.”
The PERC study, which I encourage you to download from their website here, is one of those studies that gives pause because, frankly, the results are almost completely unbelievable. I mean, who honestly believes that 0.93% of credit reports contain material errors? Can that really be true?
First things first: who is PERC? PERC is a think tank, and most of their experts have the letters “Ph.D.” after their names, which means they’re some of the sharpest knives in the drawer. I spent hours reviewing the results of their study and another hour interviewing PERC’s President, Dr. Michael Turner, for this piece. I came away with conflicting thoughts about the study results. On one hand, they’re still very hard to believe. On the other hand, this is exactly the kind of brainpower you want doing these types of studies.
I don’t expect any of you to simply believe that such a small percentage of credit reports contain material errors. It’s healthy to question their results, which is exactly what I did. The study and the methodology certainly include evidence suggesting that the results are, at the least, questionable and perhaps don’t reflect the reality outside of their relatively small sample. Here are some things to keep in mind as you chew on “less than 1%.”
PERC is funded by the credit industry.
Dr. Turner confirmed what’s already overtly disclosed on their website, which is that PERC receives funding from not only all three of the credit reporting agencies but also their trade association, the CDIA. I’m certainly not insinuating that this had any influence on their study results, but I think it’s fair to point out who helps keep their lights on. They also receive funding from a number of foundations and non-profits, including United Way.
The PERC study was funded by a grant from the Consumer Data Industry Association.
This is the CDIA, the trade association of the credit bureaus. Again, PERC overtly discloses this on page four of the study results. I know the CDIA fairly well, as I’ve interviewed them several times and I’m currently working my way through their online “Fair Credit Reporting Act Certification” training course and offering suggestions on the content and flow of its user interface.
PERC used VantageScore instead of FICO scores.
When I posed the “why” question to Dr. Turner, his response was, “Using a different score – FICO or otherwise – would have no bearing upon the direction or magnitude of our findings.” True, but then why not simply choose the industry standard that everyone understands, which is FICO?
Here’s the reason they should have chosen the FICO score: “The researchers may argue that (the score choice) is academic, but it’s difficult to estimate ‘material impact’ from a score most lenders don’t use,” according to Cordell Wise, former Principal Scoring Consultant for FICO. “Consumers may be assured from this report that their Vantage score is unlikely to be affected, but it won’t mean much when they go to apply for a loan and their creditor is using a FICO score.”
The sample for their study was relatively small and non-random.
This is where Dr. Turner and I aren’t going to agree. There are well over 600 million consumer credit files across the big three credit bureaus. That’s over 200 million consumers, times three credit bureaus. PERC used a non-random sample of 2,338 compensated consumers sourced from a company called Synovate, and 3,876 of their respective credit files.
That’s roughly a 0.0012% sample of consumers and a 0.00065% sample of credit files. Dr. Turner assured me that the sample size for their study was “more than adequate.” And, in fact, the Federal Trade Commission is planning a similar study, due in 2012, and their sample is apparently going to be only 1,000 consumers. Incidentally, credit-scoring models are built using stratified samples as large as several million credit files. If it were possible, I would have liked to see a much larger sample size.
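For anyone who wants to check the sample-size arithmetic, here is a quick sketch using the figures quoted above; the population totals are the article’s approximations, not official bureau counts.

```python
# Sample-size arithmetic from the figures quoted in the article.
# Totals are the article's approximations, not official bureau counts.
consumers_total = 200_000_000   # ~200 million consumers
files_total = 600_000_000       # ~600 million credit files across three bureaus
sample_consumers = 2_338        # compensated study participants
sample_files = 3_876            # their credit files examined

pct_consumers = sample_consumers / consumers_total * 100
pct_files = sample_files / files_total * 100
print(f"Consumers sampled: {pct_consumers:.4f}%")  # ~0.0012%
print(f"Files sampled: {pct_files:.5f}%")          # ~0.00065%
```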
Is a “material error” really one that, when corrected, results in a 25-point score increase?
In all fairness to Dr. Turner, he does point out, after my badgering, that “A one point change could be significant depending upon the score relative to the risk tier (score) cut-off point.” I don’t want to put words in his mouth, but what he’s saying is this: if you have a 619 score (with an erroneous credit file), your score changes to a 620 after the error is corrected, and that 620 qualifies you for a loan, then that one point was, in fact, very meaningful.
According to their study sample, 3.1% of consumers saw a score increase of at least 1 point and 4.4% saw their scores change one way or the other, which is important because the study was about overall accuracy, not accuracy just in favor of consumers. Apply that 3.1% to the roughly 600 million credit files out there and you get 18.6 million files with errors depressing credit scores by at least 1 point.
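The extrapolation above is simple enough to verify; a minimal sketch, assuming the article’s approximate 600 million file total:

```python
# Extrapolating the study's 3.1% figure to the full credit-file population.
# 600 million is the article's approximate total, not an official count.
files_total = 600_000_000
rate_depressed = 0.031   # share whose score rose at least 1 point after correction
affected_files = files_total * rate_depressed
print(f"{affected_files / 1_000_000:.1f} million files")  # 18.6 million
```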
Study participants were given a toll-free number to one of the three credit bureaus.
I understand why this was done. They needed to be able to track the progress and results from the 2,338 consumers who participated, and the toll-free number was one of the ways to do so. According to Dr. Turner, the results from the one credit bureau that provided the toll-free number didn’t vary in “any meaningful way” from the results from the other two.
I’m willing to look the other way on this but wow, wouldn’t it be cool if everyone who had errors on their credit reports could just pick up a phone and talk to someone in the United States? I point this out because their study concludes that 95% of the study participants were “satisfied with the outcomes of their disputes, suggesting widespread satisfaction among participants with the FCRA dispute resolution process.” To me that figure is as difficult to believe as the “fewer than 1% have material errors” figure.
The credit bureaus and the CDIA were involved in the study.
PERC thanks the three credit bureaus for providing “numerous insights, guidance, and invaluable assistance with the implementation of the research.” Additionally, Dr. Turner confirmed that the CDIA was “provided opportunities to review drafts of (their) study and provide comments.” He also made it clear that the results of the research were PERC’s, not the CDIA’s, and that PERC was “under no obligation to accept or integrate (their) feedback.” Again, this fuels the perception, right or wrong, that the credit-reporting players were influential. I didn’t ask for redlined versions of their study because I knew they would decline.
PERC announced the results of their study via a press release on May 5th. Today is May 23rd, and there has been sparse coverage of their study by the big-time mainstream media. I believe this speaks volumes about the perception of the results, which I likened to someone changing an “F” on a report card to an “A.” It’s just so counterintuitive that fewer than 1% of credit reports have meaningful errors and that such a large percentage of consumers are satisfied with the dispute resolution process that, even if it’s true, very few will actually believe it.
The results of the study have infuriated some consumer advocates who feel they fly in the face of the realities of credit report accuracy. “These (study) results disgust me, and the more I read them the more they read like a bad joke,” according to Michael Citron, CEO of DisputeSuite, a credit repair software and training provider. “We’ve got access to a data set of well over 100,000 consumers who have disputed information in their credit files during the past six months and have seen an average of eight corrections or deletions per consumer credit file during that time frame.”
Further, in June of 2004 an organization called the U.S. Public Interest Research Group (or “PIRG”) published a study that concluded 79% of credit reports contain some sort of error, serious or otherwise. PIRG, which didn’t respond to my request for an interview, is a consumer advocacy group, and their study methods were also questionable. Their sample consisted of 197 consumers, which is 100 fewer people than were on my flight to San Francisco last week.
The bottom line is that credit reports contain errors. And I firmly believe they contain far more serious errors than the PERC study concludes and far fewer overall errors than the PIRG study concludes. Until the FTC performs its accuracy study, we’re limited to the battle of the four-letter organizations: PERC v. PIRG. Regardless, each of us is just one consumer, and the exact percentage and magnitude of errors is irrelevant if our credit reports are inaccurate and cost us a loan, insurance or a job.
I want to thank Dr. Turner and his staff who were all very cordial and forthcoming with me for this piece. My pattern of inquiry clearly indicated that I considered their study results to be very questionable. They could have given me “no comment” but did the exact opposite. Now it’s your job to review their study results and draw your own conclusions while you’re hoping to avoid becoming “material.”
John Ulzheimer is the President of Consumer Education at SmartCredit.com, the credit blogger for Mint.com, and a contributor for the National Foundation for Credit Counseling. He is an expert on credit reporting, credit scoring and identity theft. Formerly of FICO, Equifax and Credit.com, John is the only recognized credit expert who actually comes from the credit industry. The opinions expressed in his articles are his own and not those of Mint.com or Intuit.