As Data Overflows Online, Researchers Grapple With Ethics

by Vindu Goel, The New York Times

Scholars are exhilarated by the prospect of tapping into the vast troves of personal data collected by Facebook, Google, Amazon and a host of start-ups, which they say could transform social science research.

Once forced to conduct painstaking personal interviews with subjects, scientists can now sit at a screen and instantly play with the digital experiences of millions of Internet users. It is the frontier of social science — experiments on people who may never even know they are subjects of study, let alone explicitly consent.

“This is a new era,” said Jeffrey T. Hancock, a Cornell University professor of communication and information science. “I liken it a little bit to when chemistry got the microscope.”

But the new era has brought some controversy with it. Professor Hancock was a co-author of the Facebook study in which the social network quietly manipulated the news feeds of nearly 700,000 people to learn how the changes affected their emotions. When the research was published in June, the outrage was immediate.

Now Professor Hancock and other university and corporate researchers are grappling with how to create ethical guidelines for this kind of research. In his first interview since the Facebook study was made public, Professor Hancock said he would help develop such guidelines by leading a series of discussions among academics, corporate researchers and government agencies like the National Science Foundation.

“As part of moving forward on this, we’ve got to engage,” he said. “This is a giant societal conversation that needs to take place.”

Scholars from M.I.T. and Stanford are planning panels and conferences on the topic, and several academic journals are working on special issues devoted to ethics.

Microsoft Research, a quasi-independent arm of the software company, is a prominent voice in the conversation. It hosted a panel last month on the Facebook research with Professor Hancock and is offering a software tool to scholars to help them quickly survey consumers about the ethics of a project in its early stages.

Although the Federal Trade Commission, which regulates companies on issues like privacy and fair treatment of Internet users, declined to comment specifically on the Facebook study, the broader issues touch on principles important to the agency’s chairwoman, Edith Ramirez.

“Consumers should be in the driver’s seat when it comes to their data,” Ms. Ramirez said in an interview. “They don’t want to be left in the dark and they don’t want to be surprised at how it’s used.”

Facebook, which has apologized for its experiment, declined to comment further, except to say, “We’re talking with academics and industry about how to improve our research process.”

Much of the research done by the Internet companies is in-house and aimed at product adjustments, like whether people prefer news articles or cat videos in their Facebook feeds or how to make Google’s search results more accurate.

But bigger social questions are studied as well, often in partnership with academic institutions, and scientists are eager to conduct even more ambitious research.

The Facebook emotion experiment was in that vein. The brainchild of a company data scientist, Adam D. I. Kramer, but shaped and analyzed with help from Professor Hancock and another academic researcher, Jamie E. Guillory, it was intended to shed light on how emotions spread through large populations. Facebook deliberately changed the number of positive and negative posts in the subjects’ news feeds over a week in January 2012, then looked at how the changes affected the emotional tone of the users’ subsequent Facebook posts.

In another well-known experiment, Facebook sent voting reminders to 61 million American users on Election Day in 2010. Some users also saw a list of their friends who said they had already voted, and the researchers found that the specific social nudge prompted more of those people to go to the polls. The study prompted some to suggest that Facebook had the power to sway election results.

More recently, the dating site OkCupid conducted experiments on its users, including one in which it hid profile text to see how it affected personality ratings, and another in which it told some daters they were a better or worse potential match with someone than the company’s software actually determined.

Such testing raises fundamental questions. What types of experiments are so intrusive that they need prior consent or prompt disclosure after the fact? How do companies make sure that customers have a clear understanding of how their personal information might be used? Who even decides what the rules should be?

Existing federal rules governing research on human subjects, intended for medical research, generally require consent from those studied unless the potential for harm is minimal. But many social science scholars say the federal rules never contemplated large-scale research on Internet users and provide inadequate guidance for it.

Institutional review boards can help vet Internet research projects conducted by university scholars. Corporate researchers like those at Facebook, however, face no such formal reviews.

Sinan Aral, a professor at the Massachusetts Institute of Technology’s Sloan School of Management who has conducted large-scale social experiments with several tech companies, said any new rules must be carefully formulated.

“We need to understand how to think about these rules without chilling the research that has the promise of moving us miles and miles ahead of where we are today in understanding human populations,” he said. Professor Aral is planning a panel discussion on ethics at an M.I.T. conference on digital experimentation in October. (The professor also does some data analysis for The New York Times Company.)

Mary L. Gray, a senior researcher at Microsoft Research and associate professor at Indiana University’s Media School, who has worked extensively on ethics in social science, said that too often, researchers conducting digital experiments work in isolation with little outside guidance.

She and others at Microsoft Research spent the last two years setting up an ethics advisory committee and training program for researchers in the company’s labs who are working with human subjects. She is now working with Professor Hancock to bring such thinking to the broader research world.

“If everyone knew the right thing to do, we would never have anyone hurt,” she said. “We really don’t have a place where we can have these conversations.”

Dr. Gray advocates a simple litmus test for researchers: If you’re afraid to ask your subjects for their permission to conduct the research, there’s probably a deeper ethical issue that must be considered.

For Professor Hancock, solutions could include an opt-in process for projects that involve big changes in an Internet user’s experience, and a debriefing system to inform users about smaller tests after the fact.

Companies will not willingly participate in anything that limits their ability to innovate quickly, he said, so any process has to be “effective, lightweight, quick and accountable.”

While some would say the risks of the Facebook study were obvious, Professor Hancock said the researchers did not realize that manipulating the news feed, even modestly, would make some people feel violated.

He learned otherwise from hundreds of anguished and angry emails he received after the work was published. “They said: ‘You can’t mess with my emotions. It’s like messing with me. It’s mind control.’ ”
