When society is divided and tensions run high, those divisions play out on social media. Platforms such as Facebook hold up a mirror to society – with more than 3 billion people using Facebook’s apps every month, everything that is good, bad and ugly in our societies will find expression on our platform. That puts a big responsibility on Facebook and other social media companies to decide where to draw the line over what content is acceptable.
Facebook has come in for much criticism, both for its decision to allow controversial posts by United States President Donald Trump to stay up and for misgivings on the part of many people, including companies that advertise on our platform, about our approach to tackling hate speech. I want to be unambiguous: Facebook does not profit from hate. Billions of people use Facebook and Instagram because they have good experiences – they don’t want to see hateful content, our advertisers don’t want to see it and we don’t want to see it. There is no incentive for us to do anything but remove it.
More than 100 billion messages are sent on our services every day. Of all those billions of interactions, only a tiny fraction are hateful. When we find such content, we take a zero-tolerance approach and remove it. When content falls short of being classified as hate speech – or of violating our other policies aimed at preventing harm or voter suppression – we err on the side of free expression, because the best way to counter hurtful, divisive, offensive speech is more speech. Exposing it to sunlight is better than hiding it in the shadows.
Unfortunately, zero tolerance doesn’t mean zero incidents. With so much content posted every day, rooting out the hate is like looking for a needle in a haystack. We invest billions of dollars each year in people and technology to keep our platform safe.