The ethics of advertising hate speech

By Katherine Reedy

Facebook and Google have recently come under fire for allowing advertisers to target ads towards users who express an interest in hate speech or racist sentiments. Facebook has also been criticized for allowing Russian-linked accounts to purchase thousands of ads intended to influence the presidential election.

Bret Giles, professor of practice in marketing in the W. P. Carey School of Business at Arizona State University, shares his insights on the ethics of digital advertising in light of these events.

Question: What responsibility do companies like Facebook and Google have to consumers when it comes to monitoring and regulating the use of their advertising platforms?


Answer: Just as companies of any kind have a responsibility to create a safe environment for customers and employees, so too do Facebook and Google have an inherent obligation to create such an environment within their platforms. For digital platforms such as these, much of what can be done to manipulate them beyond their intended purpose is still being learned, is difficult to anticipate and is evolving rapidly. This doesn’t negate any responsibility on the part of Google or Facebook, but it does highlight the difficult balancing act of providing scalability through technology and machine learning while maintaining accountability and oversight.

Q: What business risks do these large tech companies take by becoming associated with advertisers that target users open to hate speech?

A: Facebook’s and Google’s ad platforms are designed to deliver the most relevant advertising possible at an individual level, ideally giving people a chance to discover products and services they would otherwise never see. Those very platforms can also be used in unintended ways that hurt people. While most people probably don’t think that Google or Facebook is purposefully providing a venue that encourages hate speech among advertisers, it is fair to ask what they might or might not be doing to actively prevent it. In this instance, the risk of inaction is substantial, which is why we have seen swift action to make necessary changes to both systems. Not only may people come to see the platforms in a negative light, but longtime advertisers may also grow worried — and both of those outcomes carry negative financial repercussions.

Q: Going forward, what can Facebook do to prevent these kinds of incidents from occurring?

A: Facebook continues to learn how people behave, both good and bad, within the advertising platform it offers. It is this continued learning the company should draw on, expanding its emphasis not only to preventing incidents but also to anticipating behaviors as the platform matures. As marketers, we speak of empathy in understanding people’s needs by gaining their perspective and looking at the world through their lens. The same holds true here. The goal should not be to play catch-up, always one step behind how the platform might be ill-used; instead, it should be to learn from those perspectives so you can anticipate what is possible before it becomes an unintended reality.

Q: How can social-media users protect themselves against fake, misleading or hateful ads?

A: The best protection is to always remember there is a motivation behind each and every ad out there, be it on a social-media site, a search engine, a television channel or in a magazine. The goal of the ad is to get the user to take some sort of action, and that action may or may not be in their best interest. When something doesn’t seem quite right, don’t second-guess that concern. Look to other trusted sources to corroborate the ad or social-media post. Technology can also assist in this effort: YourAdChoices, for example, lets users control how sites use their personal information to target the advertising they see.