Waves of Opportunity from Big Data, and the Wake of Questions Left Unanswered

A guest post from Brittany Iskarpatyoti discusses big data, a key topic at the American Evaluation Association annual meeting, including the questions it raises.

By Brittany Iskarpatyoti, MPH

Photo by MEASURE Evaluation: Kathryn Newcomer, AEA president, at the 2017 AEA conference
WASHINGTON, DC—Every year, the American Evaluation Association (AEA) holds an annual conference for evaluators from across different sectors (health, education, agriculture, government, etc.) to discuss the current state of, and new opportunities for, the field. In her opening keynote November 8, AEA President Kathryn Newcomer discussed the tsunami of data that has become available.

“Big data” has volume, velocity, and variety, and it is quickly being embraced as the next big thing for understanding the world around us. But Newcomer was quick to remind us that “simply having it isn’t the same as learning from it.” As evaluators, we have something to offer when it comes to big data: experience. We don’t need to replace existing knowledge and skills, but rather build on them with new ideas.

MEASURE Evaluation’s Carolina Mejia discussed this idea, and its limitations, in her presentation November 9, “Big Data Analytics for Measuring Gender Norms: Too Big to Ignore.” She discussed an ongoing activity to understand the methodological challenges in using social media data (from Twitter) to understand public attitudes and behaviors. Social media offers quick and cost-effective data for understanding certain trends, such as the frequency of tweets over time, which may be helpful for understanding when and where people are talking about an issue.
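As an illustration (not from the presentation itself), counting tweet frequency over time can be sketched in a few lines of Python. The timestamps here are invented; in practice they would come from the Twitter API or an exported dataset:

```python
from datetime import datetime

# Hypothetical timestamped tweets (timestamp, text) for illustration only.
tweets = [
    (datetime(2017, 11, 1, 9, 15), "..."),
    (datetime(2017, 11, 1, 14, 2), "..."),
    (datetime(2017, 11, 2, 10, 30), "..."),
]

# Count tweets per day to see when people are talking about an issue.
counts = {}
for ts, _text in tweets:
    day = ts.date()
    counts[day] = counts.get(day, 0) + 1

for day in sorted(counts):
    print(day, counts[day])
```

A real analysis would pull far larger volumes and might bucket by hour or week, but the core trend question, "when are people talking about this?", reduces to a count like this one.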

Photo by MEASURE Evaluation: Carolina Mejia, PhD, MPH, MEASURE Evaluation M&E and gender advisor, presenting at the 2017 AEA conference
But there is so much we don’t know from standard analytics. How are people talking about an issue? What is the content or sentiment of this data? While sentiment analysis is available from social media analysis programs such as Crimson Hexagon, Mejia found that when comparing computer algorithm-generated sentiment to sentiment coded by a human evaluator, there was only a 41 percent match. So, while the data may be available, qualitative understanding may still require time for people to code and analyze the data.
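The percent-match comparison Mejia describes can be sketched simply. The 41 percent figure is hers; the labels below are invented for illustration:

```python
# Hypothetical sentiment labels for the same five posts, one set from an
# algorithm and one from a human coder (invented data, not Mejia's).
algorithm = ["positive", "negative", "neutral", "positive", "negative"]
human     = ["positive", "neutral",  "neutral", "negative", "negative"]

# Percent match: the share of posts where both coders agree.
matches = sum(a == h for a, h in zip(algorithm, human))
percent_match = 100 * matches / len(human)
print(f"{percent_match:.0f}% match")  # 3 of 5 pairs agree -> 60% match
```

Raw percent agreement is the simplest comparison; a fuller evaluation might also report a chance-corrected statistic such as Cohen's kappa, since some agreement occurs by chance alone.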

Furthermore, while evaluators have been well trained in research ethics and navigating institutional review boards (IRBs) with traditional data sources, big data often doesn’t follow traditional norms and leaves several questions unanswered. Users create information for public consumption, but does that consent ethically extend to research? When and how are data de-identified? IRBs are challenged to make decisions about protecting user rights and the confidentiality of public data. And laws surrounding issues such as copyright ownership of works such as blogs, pictures, videos, and recordings are evolving and should be carefully monitored.

So while we evaluators certainly have a lot to offer, we also have a lot to learn. As both a user and an evaluator of big data, I’m excited to explore the possibilities.

Follow me on Twitter for live reflection on AEA Evaluation 2017 at @bschriv.

Learn more about MEASURE Evaluation's presence at Evaluation 2017. Read additional blogs from Evaluation 2017 on how health information systems can improve program evaluation and on equity in evaluation.
