Thousands of games later, Fake News: The Game is still going strong. From the start, we couldn’t wait to get our hands on the gameplay analytics, so we turned to the SQL visualization service Periscope Data. We spent some time sifting through different data points to see what inferences we could make about news literacy, and we’re thrilled to present our findings.

But first, some background.

 

Fake News Is More Than a Game

We branded, designed, and launched Fake News as a game (in fact, it’s formally called Fake News: The Game!). That said, while we were concepting it months before launch, we knew there were greater implications in how users – almost entirely voting-age Americans – would interact with our game.

From the start, our marketing, engineering, and data analysis teams worked closely to ensure that the Fake News database tracked specific events, and that we could visualize that data with services like Periscope and Firebase.

What exactly were we looking for? Accuracy. We wanted to track how often users guessed correctly, and which other metrics correlated with a likelihood to guess incorrectly.
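As a rough sketch of what that kind of tracking looks like in SQL (the table and column names here are hypothetical, not our actual schema), the core metric is just the share of guesses that were correct:

    -- Hypothetical event table (names are illustrative only):
    -- guesses(user_id, session_id, headline_id, category, is_fake, guessed_correctly, played_at)
    SELECT
      AVG(CASE WHEN guessed_correctly THEN 1.0 ELSE 0.0 END) AS pct_correct
    FROM guesses;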

Nearly one month after launch, here’s what we found. 

 

Our Findings

Which topics were most challenging to guess correctly?

Fake News contains hundreds of headlines from around the web. Players guessed correctly over three quarters of the time when the headline covered the environment or travel.

[Graph: correct-guess rate by headline topic]
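For a sense of how a breakdown like this can be pulled, a grouped aggregate over the same hypothetical guesses table is all it takes:

    -- Correct-guess rate by headline topic, highest first
    SELECT
      category,
      AVG(CASE WHEN guessed_correctly THEN 1.0 ELSE 0.0 END) AS pct_correct
    FROM guesses
    GROUP BY category
    ORDER BY pct_correct DESC;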

Takeaway

After seeing this trend, we discovered that fake headlines about the environment are largely conspiratorial (more so than other categories excluding politics), which we believe makes most players more skeptical. For example, “Executive branch agents raided and shutdown the EPA, Collected material documenting that climate change is a hoax.”

Additionally, fake headlines covering travel are very political – e.g. “Breaking: Crew of Air Force One Refuses to Fly Obama 6000 miles ‘Just to Play Golf’” – while the real headlines are more concerned with safety and in-flight human interest stories – e.g. “Plane evacuated after passenger ‘passed gas’.” The conclusion here is that players recognized when fake stories about travel were being overly politicized for the sake of hyperbole.

 

Did the player’s ability to identify fake news improve over time?

The short answer is yes: average scores improved by nearly 200 percent as users played the game more.

[Graph: average score by number of plays]

It’s possible, however, that some users played so many times that they saw repeats, knew the answer, and improved their scores. There are hundreds of headlines in the Fake News database, so while a few outliers may have skewed this data by playing a dozen or so times, it’s safe to assume that players were still improving.    
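As a sketch of how a trend like this can be computed – still assuming hypothetical table and column names – each user’s sessions are numbered in order and scores are averaged at each play count:

    -- Average score by how many times a user has played
    WITH numbered AS (
      SELECT
        user_id,
        score,
        ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY played_at) AS play_number
      FROM sessions  -- hypothetical: one row per completed game
    )
    SELECT
      play_number,
      AVG(score) AS avg_score,
      COUNT(*)   AS games_at_this_play_count
    FROM numbered
    GROUP BY play_number
    ORDER BY play_number;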

Takeaway

Most resources available covering fake news (and how to identify it) aren’t interactive, nor do they integrate ways to track progress or gather data. Our goal in gamifying fake news was to lower the barrier to entry for education around fake news. In doing so, we were able to see that scores improve as the game is played, which is an indicator that players were getting better at identifying patterns in fake (and real) news.

 

Did players guess correctly more often when presented with real or fake headlines?

Real headlines were easier to identify by 11 percentage points on first play (65 percent of real headlines were guessed correctly, versus 54 percent of fake headlines).

[Graph: first-play correct-guess rate, real vs. fake headlines]
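For the curious, a first-play split like this one can be sketched in SQL by restricting to each user’s earliest session and grouping by whether the headline was real or fake (table and column names remain hypothetical):

    -- Correct-guess rate on first play only, split by real vs. fake headlines
    WITH ranked AS (
      SELECT
        is_fake,
        guessed_correctly,
        -- assumes session_id increases over time, so rank 1 = first play
        DENSE_RANK() OVER (PARTITION BY user_id ORDER BY session_id) AS session_rank
      FROM guesses
    )
    SELECT
      is_fake,
      AVG(CASE WHEN guessed_correctly THEN 1.0 ELSE 0.0 END) AS pct_correct
    FROM ranked
    WHERE session_rank = 1
    GROUP BY is_fake;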

Takeaways

This isn’t a big margin, but there’s one important insight: we shouldn’t underestimate fake news and its ability to deceive. Roughly half of all users failed to distinguish fake from real on their first play. For all the skeptics out there who believe they’ve never been affected by fake news: more than likely, you read a fake headline during the 2016 election cycle and believed it to be true.

 

So now what?

Panic!

Kidding. There’s hope.

Social media companies disrupted traditional media distribution by creating platforms that enable anyone to disseminate “news” without any corroboration. The darker side of human nature showed itself as individuals (and often formal organizations) created content to deceive massive audiences. With any new technology, someone will find a way to use it for personal gain and, in doing so, expose the technology’s inherent flaws.

Now for the bright side: social media is finally doing something about it (albeit, maybe a little too late). Facebook is (very) publicly making strides to curb fake news. The network will start using “updated machine learning to detect possible hoaxes and send them to fact checkers, potentially showing fact-checking results under the original article,” according to Reuters. Twitter is integrating new ways for users to report false information. And websites like PolitiFact and Snopes have entire teams dedicated to sniffing out the truth and exposing the fakes (it should also be noted that both of these websites were vital in the creation of Fake News: The Game).

Social media companies are starting to realize that they are media companies, which means they have a duty to broadcast the truth, not false rhetoric and sensationalism. According to Pew, a majority of Americans now use social media as a primary source of news. Without the safeguards most traditional media organizations have – e.g. journalistic ethics, rigorous editorial standards, professional reporters – false information will continue to spread.

The fallout from old media’s disruption is coming to a head, and the tech companies responsible are now picking up the pieces.

 

Let’s talk

If you have questions or want to discuss fake news (the phenomenon or the game) further, drop me a line: js@isl.co. Also, a quick plug: Fake News is now available for Alexa! Enable it today.