A recent Gallup survey shows that Americans are becoming more distrustful of the media over potential bias in news content. However, they should also be worried about their own consumption habits and their own biases, which distort their ratings of news content. In fact, people who are the most distrustful of the news media tend to be the most biased readers, according to research conducted by Gallup in partnership with the Knight Foundation in 2017. That leads us to an equally vexing question: how do we study bias?
Data scientists and social scientists have devised a number of ways to study and understand bias, which is notoriously hard to measure. A well-known study by economists Claudia Goldin and Cecilia Rouse showed that the admission rate of women to major orchestras increased significantly when musicians auditioned behind a curtain and evaluators could not see them. The lesson from this and similar studies is that to evaluate solely on the "quality" of auditioning musicians, the evaluation needs to be blind.
In the context of biased news, Gallup and the Knight Foundation partnered in 2017 to create an experimental news platform as part of a larger research endeavor. The platform pulled news content and related articles from diverse media outlets and invited a random sample of Americans who had taken Gallup surveys to rate the trustworthiness of the content. Half of the participants (the Experimental Group) were not allowed to see the source of the content, while the remaining half (the Control Group) saw the source as they would on a typical website. A total of 3,081 participants rated 1,645 different articles, each originally published by one of seven well-known outlets.
The findings from the Gallup-Knight experiment were startling. Participants in the Experimental Group who identified themselves as Republicans rated content from The New York Times and Vox as more trustworthy than did self-identified Republicans in the Control Group, who read the identical content knowing its source. Ditto for self-identified Democrats: participants in the Experimental Group rated content from Fox News as more trustworthy than did those in the Control Group. Now, consider a reader's trustworthiness score as the sum of the article's quality, the reader's personal views, and brand bias. Because participants were randomly assigned, the two groups are alike on the first two sources of variability--article quality and personal views--leaving brand bias as the only factor reflected in the score difference. An individual's brand bias (for a Control Group member) is thus estimated as the absolute difference between his or her trustworthiness score and the mean trustworthiness score given to the same article by the blind Experimental Group. On a scale of 1 to 5, 35 percent of participants exhibited large bias, meaning their trustworthiness scores differed by as much as 1.5 points from those of the corresponding participants in the blind group. Add the names Clinton and Trump to the content, and the gap in trustworthiness ratings between the Experimental Group and the Control Group jumps significantly.
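The brand-bias estimate described above can be sketched in a few lines of pandas. The data, column names, and the 1.5-point "large bias" threshold below are illustrative assumptions, not the study's actual dataset or code:

```python
import pandas as pd

# Hypothetical ratings: each row is one participant's 1-5 trustworthiness
# score for one article. Column names are assumptions for illustration.
ratings = pd.DataFrame({
    "article_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "group":      ["blind", "blind", "control", "control",
                   "blind", "blind", "control", "control"],
    "score":      [4, 4, 2, 5, 3, 3, 3, 1],
})

# Mean trustworthiness given by the blind (Experimental) group, per article.
blind_mean = (ratings[ratings["group"] == "blind"]
              .groupby("article_id")["score"].mean())

# Brand bias for each Control Group rating: the absolute difference from
# the blind group's mean score on the same article.
control = ratings[ratings["group"] == "control"].copy()
control["brand_bias"] = (control["score"]
                         - control["article_id"].map(blind_mean)).abs()

# Share of Control Group ratings showing "large" bias (1.5+ points).
share_large = (control["brand_bias"] >= 1.5).mean()
```

With the toy numbers above, the blind-group means are 4.0 and 3.0, so the four control ratings carry biases of 2.0, 1.0, 0.0, and 2.0, and half exceed the 1.5-point threshold.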
Bias is not a one-way street. It is complex, and for a data scientist or social scientist, measuring and interpreting it becomes more of an art than a science.