Secret Facebook experiment on users – do emotions spread through networks?

According to this article on Slate, Facebook secretly conducted a psychological experiment on around 689,000 users to see if emotions can be transmitted through social networks. This is an awesome experiment, but some deem it unethical as users did not explicitly give permission for this to happen.

What was the experiment?

The phenomenon of emotional states passing from person to person, known as “emotional contagion”, is a by-product of human empathy in relationships and is discussed extensively in academic literature. However, it had been believed that in-person interactions and non-verbal cues were essential for the transfer of an emotional state from person to person.

This experiment sought to test whether emotions could in fact be transmitted from person to person through their networks, solely via written updates viewed on a user’s Facebook News Feed page. On Facebook, the constant stream of status updates is far too large for any user to see every post, so Mark Zuckerberg’s engineers designed an algorithm that filters, selects and displays the status updates that it believes are the most relevant and interesting for the user.
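
Facebook has never published the details of this ranking algorithm, so purely as an illustration of the general idea, a feed filter of this shape might score each candidate update and keep only the top-ranked ones. Every name, score component and weight in the sketch below is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class StatusUpdate:
    author: str
    text: str
    likes: int
    hours_old: float


def relevance_score(update, affinity):
    """Toy relevance score: closeness to the friend and post popularity,
    decayed by the age of the post. The real weighting is unknown."""
    friend_weight = affinity.get(update.author, 0.1)  # default: distant contact
    popularity = 1.0 + update.likes
    recency = 1.0 / (1.0 + update.hours_old)  # newer posts score higher
    return friend_weight * popularity * recency


def build_feed(updates, affinity, limit=20):
    """Rank all candidate updates and display only the top `limit`,
    so most of the stream is never shown to the user."""
    ranked = sorted(updates, key=lambda u: relevance_score(u, affinity),
                    reverse=True)
    return ranked[:limit]
```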

The investigators in this experiment worked with Facebook’s Core Data Science team to modify the algorithm. One group of people saw more emotionally positive updates than usual, another saw more emotionally negative updates, and a third saw fewer emotional updates overall. The users’ own status updates over the following days were then analysed for their emotional content, in order to investigate whether there was a correlation.
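
The published paper reports that updates were classified as positive or negative using LIWC word counts, and that emotional updates were omitted from the manipulated feeds with a certain probability. As a minimal sketch of that kind of word-list classification and probabilistic omission (the tiny word lists and omission rate below are invented stand-ins, not the study’s actual dictionaries or parameters):

```python
import random

# Tiny stand-in word lists; the study used the much larger LIWC dictionaries.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "lonely"}


def emotional_label(text):
    """Label an update 'positive' or 'negative' if it contains at least
    one word from the corresponding list, otherwise 'neutral'."""
    words = set(text.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"


def filter_feed(updates, suppress, omit_prob):
    """Drop updates of one emotional class with probability `omit_prob`,
    mimicking the reduced-positivity / reduced-negativity conditions."""
    kept = []
    for text in updates:
        if emotional_label(text) == suppress and random.random() < omit_prob:
            continue  # omitted from this user's News Feed
        kept.append(text)
    return kept


# Example: a feed where half of the negative updates are filtered out.
feed = filter_feed(["I love this!", "feeling lonely today", "meeting at 3pm"],
                   suppress="negative", omit_prob=0.5)
```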

Intriguingly, the researchers did indeed see that people were more positive or negative after seeing more positive or negative updates from their friends, and less emotional in general if the emotional content of their News Feed was reduced. The effects were small but measurable and statistically significant.

What’s the problem?

Experiments on human test subjects can sometimes result in physical or mental harm. For an experiment to be carried out ethically, the subjects must be made aware that some harm could arise, and the scientists or experimenting agents must obtain their “informed consent”.

In this case, the permission was not explicitly given by the users for this to occur. Rather, it was taken as implicit from a short clause buried deep in Facebook’s Data Use Policy.

Of course, in an experiment where the investigators are testing whether people develop negative emotions when exposed to the negative emotions of others, the absence of informed consent has serious consequences. Apart from the immediate pain that the users unwittingly suffer from being exposed to more negativity than they would otherwise have experienced, there is the possibility of additional knock-on consequences.

For example, the users might have been in a mentally fragile state at the time of using Facebook. The increased negativity may have caused a level of psychological pain that led them to behave in ways damaging to themselves or others, such as violence, self-harm, or even suicide.

Is it fair of Facebook to have subjected users to this possibility without their explicit consent?

My personal opinion on the experiment and its ethics

For me, there is no doubt that this is an excitingly innovative use of a social network to test a real-world question about how human interactions change people’s emotional states.

Although the consequences were potentially negative, I personally think that this is a fair use of the system, and it makes a very interesting point: people place corporations like Facebook in an extraordinary position of power.

When you sign up to a service like Facebook and log in to absorb information from your personal News Feed, in essence you are handing over the keys to your mood.

In this case, Facebook did not explicitly gain permission for the experiment, but they were well within their rights to conduct it, as their users had already signed up to this possibility.

For those who protest that users will have seen more emotionally negative updates than they usually would have done, the key phrase I suggest needs to be examined is “usually would have done”. The usual state of affairs implied by “usually” is determined by an algorithm that the user has already designated as a key agent in the provision of knowledge. They have given it permission to process and serve that knowledge under a variety of conditions, and this experiment is simply one of them.

What can we learn from this experiment?

  1. Be happy and positive on Facebook (and in life generally), as this experiment adds to the evidence that your emotions are mirrored by the people around you. So if you like people and want them to be happy, be happy yourself!
  2. The content of your personal Facebook News Feed is determined by an algorithm. Given the amount of time that Facebook users spend viewing this information, and the documented impact it has on their emotions, one should think carefully and critically about how much time to spend on Facebook and how much to pay attention to what one sees there.
  3. Pay attention to the small print when you register for websites and think deeply about what the possible implications could be. What uses could the website’s creator possibly have in mind for your data and how could it affect you in real life?
  4. Be selective about the algorithms that you allow into your life and the power that you give them over your well-being.
  5. On a scientific level, in-person interaction and non-verbal cues are not preconditions for emotional transfer between people: written communication alone is enough for emotional contagion to occur. The effects are small, but statistically significant.

Where can I read the academic paper?

The short paper, published as a collaboration between Facebook, the University of California, San Francisco, and Cornell University, can be read for free online here.