Weighing the options for evaluation data: Are routine health data enough?

This Science Speaks blog post discusses using existing data to conduct an evaluation of the Tibu Homa Project in Tanzania and shares lessons learned.

By Emily Weaver, PhD

Health projects and donors in low-resource countries increasingly aim to leverage existing data to evaluate strategies and programs. Under the right circumstances, this approach can produce faster results and capitalize on investments already made in data systems. Both are worthy goals. Existing data, however, aren’t always enough to meet core evaluation needs: a sample adequate to represent the program population, data gathered at both the beginning and the end of the intervention period, and a framework devised and applied to capture the required information.

A recent evaluation of the Tibu Homa Project in the Lake Zone of Tanzania is an instance in which the existing data proved inadequate, but we learned valuable lessons that can inform the design of routine and program data (quality, content, and structure) so that they are more useful for evaluation purposes.

MEASURE Evaluation, funded by USAID, conducted an evaluation of the Tibu Homa Project, which focused on improving the quality of health care for children age five and under. The evaluation began after the project ended, so we hoped to use existing data sources to assess changes in program outcomes over time. Luckily, as part of its performance management system, the project had extracted thousands of data points from health registers and patient records during monthly visits to health facilities. As a result, the project database contained data that were reported regularly into the routine health information system, plus additional relevant data from facility registers and patient records.

Because of the cost advantages of using existing data, the first thing we did was attempt to leverage this extensive database. Beyond cost, routine health information system data are generally collected with greater frequency than would be feasible with primary data collection. For the Tibu Homa Project, routine health information system data recorded monthly provided rich information that is generally not available from standard baseline and endline evaluation surveys. These data also covered all patients in program facilities, not just a patient sample, as would likely be the case with primary data collection.

That was the good news. The less-good news was that the project database had several drawbacks from an evaluation perspective. As would be true with most secondary data sources, our evaluation team had not had input into the data collected. As a result, the database did not contain all the information we wanted. Understandably, the project recorded the information it needed for its work, which was not necessarily the same information we needed for the evaluation. Further, the database was in a format that served the Tibu Homa Project but did not accommodate the evaluation analyses we would require.

Another challenge we discovered was that the number of facilities in the database changed each month, which meant information was missing. Because the project had ended, there was no clear way to understand why some facilities were absent. That was a chief drawback, because incomplete information compromises the validity of a study, leading to results that may not be representative of the intended population. Without knowing more about why information was missing, we could not assess whether these data would yield valid results.
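As a concrete illustration, the sketch below shows one way such a completeness check can be run. It is a hypothetical example, not the project’s actual code: the column names (facility_id, month) and the sample values are assumptions for illustration.

```python
# Minimal sketch of a facility-completeness check, assuming a pandas
# DataFrame with one row per facility-month report. Column names and
# values are illustrative, not from the actual project database.
import pandas as pd

reports = pd.DataFrame({
    "facility_id": ["F01", "F01", "F02", "F03", "F03", "F03"],
    "month": ["2014-01", "2014-02", "2014-01",
              "2014-01", "2014-02", "2014-03"],
})

# Build a facility-by-month presence matrix: counts > 0 mean "reported".
presence = pd.crosstab(reports["facility_id"], reports["month"])

# Facilities that did not report in every month are potential sources of bias.
expected_months = presence.shape[1]
incomplete = presence[(presence > 0).sum(axis=1) < expected_months]
print(f"{len(incomplete)} of {presence.shape[0]} facilities have gaps:")
print(incomplete)
```

A check like this is easy to run; the hard part, and the part that mattered for validity, was explaining why facilities dropped in and out of the data once the project was no longer there to ask.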

Further, because we had not collected the data ourselves, we had no control over data quality. Although we used several techniques to assess data quality, we had unresolved questions about certain data characteristics. Again, because the program had ended, there was no one on hand to clarify them.
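The techniques we used are detailed in the full report; for readers unfamiliar with this kind of assessment, the sketch below shows the general flavor of such checks. It is purely illustrative: the field names, plausibility ranges, and sample records are assumptions, not the project’s actual variables.

```python
# Hypothetical sketch of basic quality checks on secondary data;
# field names, ranges, and records are assumptions for illustration.
import pandas as pd

records = pd.DataFrame({
    "patient_age_months": [14, 3, 700, 28, 28],   # 700 is implausible
    "visit_date": ["2014-02-03", "2014-02-03", "2014-02-10",
                   "2014-02-11", "2014-02-11"],
    "patient_id": ["P1", "P2", "P3", "P4", "P4"],  # P4 appears twice
})

# Plausibility check: ages should fall within the under-five target group.
implausible = records[~records["patient_age_months"].between(0, 60)]

# Duplicate check: the same patient recorded twice on the same date.
duplicates = records[records.duplicated(["patient_id", "visit_date"],
                                        keep=False)]

print(f"Implausible ages: {len(implausible)}; "
      f"duplicate records: {len(duplicates)}")
```

Checks like these can flag suspect values, but they cannot explain them; that requires access to the people and processes that produced the data.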

With these strengths and drawbacks in mind, we decided to use the routine health information system and patient data as a complement to the primary data our team would collect, rather than as the foundation for the evaluation.

USAID and country partners have invested time and money in improving the quality, content, and structure of routinely collected program data. Data quality assessments of routine health information system data are increasingly routinized and used to improve quality. As awareness grows of the database structures and data content that evaluations require, the scales are tipping in favor of using these data more frequently. The Data for Impact (D4I) project was designed to support these kinds of opportunities: it helps USAID missions and countries increase the use of existing data for analytics and strengthens implementers’ and partners’ abilities to do this kind of analysis. For more information, visit https://www.data4impactproject.org/.

Read the report: Assessing Training Approaches and a Supportive Intervention for Managing Febrile Illness in Tanzania – Tibu Homa Performance Evaluation Report

Republished with permission from Science Speaks
