For nearly eight years, I have worked as a program evaluator and policy analyst. The premise is simple: every social service program or public policy has a purpose, and it takes the deliberate effort of statisticians like me to analyze the outcomes and determine whether that purpose was achieved. Programs and policies can fail, and they can have unexpected consequences, both good and bad. The results never cease to surprise, even for a veteran analyst.

In one of the policy analysis lectures I give to university seniors, we play a game called “Good Policy/Bad Policy.” I list two social service programs or policies and explain their respective goals. One is a success, the other a failure, and I invite the students to tell me which they think succeeded and which failed. I compare the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) to the Supplemental Nutrition Assistance Program (SNAP, or food stamps), and child care subsidies to Head Start. When I explain that the limited food options provided by WIC have been shown to reduce Medicaid costs 3 to 1, while the more diverse SNAP program lacks compelling health benefits (and is rife with fraud), the students are surprised. When I explain that parents who use child care subsidies are more likely to obtain and keep gainful employment, while Head Start costs nearly four times as much with no long-term educational benefits, many admit they had predicted the opposite. I use this exercise to demonstrate the importance of gathering and analyzing objective data in pursuit of objective truth. It does not matter whether you oppose or support a program or policy; objective data is critical to determining whether that policy met or failed to meet its objectives.

This is why I am particularly disturbed by problems with abortion data in the United States. When the Centers for Disease Control and Prevention (CDC) released its report at the end of November showing a 5% nationwide decrease in abortions, pro- and anti-abortion advocates alike were quick to credit themselves with the decrease. Pro-life organizations insisted that anti-abortion laws, changing public attitudes, or abstinence education explained the drop. Pro-abortion-rights organizations were just as eager to credit comprehensive sex education, greater access to contraception, or a lack of abortion providers and restrictive laws. Without policy analysis and evaluation, how can we know who is correct?

Rather than exploit the decline for good public relations, my colleagues at the Charlotte Lozier Institute (CLI), a pro-life think tank in Washington, D.C., decided to take a look at the data and form an evidence-based conclusion. Their conclusion: the data we have is insufficient to support any conclusions. Their report notes that submission of data to the CDC is voluntary and often inconsistent, and that even in states with laws mandating abortion reporting, the requirements lack common denominators across states that would allow valid aggregation or meaningful interpretation. They further explain that the only other source against which to compare the CDC calculations is the Guttmacher Institute (GI), an agenda-driven organization to which reporting is also voluntary. CLI found an enduring 32% difference between CDC and GI reports, with GI reporting approximately 1.2 million abortions per year to the CDC’s approximately 825,000. There is insufficient evidence to suggest that either entity’s figures are correct, which leads to CLI’s conclusion that state abortion reporting laws are in dire need of reform.

Shortly after the report was released, Charlotte Lozier Institute President Charles A. Donovan penned an op-ed in the New York Times calling for common ground in passing reforms to address these problems. Since accurate data serves to inform both sides of the abortion debate equally, such reforms would benefit everyone involved in reproductive health research and policy. Donovan concludes that without this information, we are “debating in darkness.”

While an op-ed in the New York Times expressing an anti-abortion point of view is quite rare, the reaction to the article by one abortion rights supporter was equal parts hostile and dismissive. Anna North of Buzzfeed.com rejects Donovan’s call for common ground, disagreeing with his premise that both sides wish to lower the abortion rate. She cites Planned Parenthood and quotes late-term Dallas abortion provider Curtis Boyd, who says, “The number of abortions needed are the number that women want.” North allows that many would like to see fewer abortions as a result of more contraception or sex education, but says anything else can “seem like an effort to limit women’s options.” Ultimately, she asserts that having accurate numbers on abortion is not important.

Obviously, North does not understand that accurate numbers on abortion are the only way to know whether more contraception or sex education is the reason for fewer abortions. She apparently does not understand that the only way to determine whether a decrease is due to things she finds desirable (fewer unwanted pregnancies) or things she finds undesirable (women continuing unwanted pregnancies) is to gather accurate and complete data. If North and her like-minded allies were truly invested in efforts to lower unwanted pregnancies, they would need to know whether those efforts worked. Likewise, those truly appalled at women lacking access to abortion would also need to know whether this was indeed the case.

I have always said that an unwillingness to gather information and evaluate a program or policy can have only three possible explanations. The first is sheer arrogance: a person is so assured of being correct and successful that an evaluation seems unnecessary. The second is apathy: a person is completely indifferent toward a program or policy and does not consider it worth the effort to evaluate. The third is fear: a person is apprehensive about whether their program or policy works and is therefore scared to conduct an evaluation that may yield undesirable results. North’s opinion appears to be a hybrid of all three: certainty that sex education and contraception do lower the number of unintended pregnancies, a callous dismissal of the need to know whether women are harmed by a lack of abortion access, and, moreover, a fear that this information may lead to results unfavorable to her position. None of these reasons justifies opposing efforts to simply ensure that data on abortions in the U.S. is both accurate and complete.

Fellow Charlotte Lozier Institute scholar Michael New recently pointed out this problem at National Review Online. Taking issue with North’s response, he writes, “Better abortion-reporting standards should interest all parties in the abortion debate. Improved data on the incidence of abortion could reveal insights about the impact of various pro-life laws, contraception programs, and sex-education classes.” In sum, regardless of whether the goal is to decrease the number of abortions, there is still a need for accurate data. Yet the comments on the article echoed the fears North voiced about having accurate information, along with unfounded concerns that these statistics might somehow be used against the abortion industry. The word “terrorism” is thrown around (yes, numbers can be terrifying). Some say, “What's to study about abortion? It's an age old medical procedure,” or call the demand for accurate abortion data “unnecessary spending.” There is either a fear of correct data or no desire whatsoever to gather it, even though such data is used to make policies that affect millions of people each year.

I do wonder: had a pro-abortion-rights organization made the call for accurate data, would anti-abortion organizations express the same degree of cynicism and hostility? While the ongoing problems with biased, agenda-driven studies discredited nearly every week at ReproductiveResearchAudit.com would make me skeptical of such a proposal, what Donovan suggests involves removing agenda-driven forces from data reporting in order to gather valid information that could be used to advocate any position. Since so many are lining up against this call for better data, I must ask why. Is it arrogance, apathy, or fear?