Sign1 by dailchambers via Flickr
Amidst media reports that advertisements on YouTube have appeared on videos promoting hate and violence, more than 250 companies have ceased paying for ads on the platform. The risk for charities goes beyond reputation, with poorly placed ads threatening to undermine their mission.
Warning: I am going to attempt to talk about the way in which advertising works online and in particular on YouTube. It is extremely complicated and I am not an expert. Having said that, part of the problem is that too few people in our sector are willing to broach the subject. It is for this reason that I am grasping this particular thorny issue. Please feel free to comment and correct any errors that you spot in my thinking.
On February 14th, the Wall Street Journal struck the first blow against YouTube’s advertising model. Accusing popular vlogger PewDiePie of repeated antisemitism, it succeeded in separating him from a lucrative venture with Disney. In many ways, it’s a perfect story for a ‘traditional media’ behemoth like the WSJ: a well-known celebrity, a global establishment brand, a scandal and an opportunity to discredit one of the ‘new media’ giants with which it now finds itself competing. A great story indeed (though one in which the WSJ has itself now been accused of hypocrisy and of lacking journalistic standards). The floodgates are now open.
In recent weeks the British media have generated several compelling stories by trawling videos of hate speech and simply naming the household brands found advertising alongside them. BT, Waitrose, McDonald’s, the Co-op Group, Mercedes-Benz, Transport for London, Channel 4, the BBC and even the British Government have pulled advertising after journalists found their names alongside unsavoury content.
Cancer Research are one of the names caught up in the furore. Other big names in the sector will surely follow. Charities Aid Foundation has recently pulled all ads on YouTube and through the Google Display Network until we can have absolute assurance that our name will not be associated with inappropriate content. In the future, we will be placing advertisements on a select few websites on the Display Network where we can have confidence in the nature of the content our name will appear next to.
Advertising has always been an ethically complicated subject for charities. The desire to share your organisation’s message, drive forward campaigns and reach potential donors must be balanced against the risks associated with appearing in a context which may not align naturally with an organisation’s charitable cause. However, in the internet age, and particularly in light of massive social media platforms, the task of balancing risk and opportunity has become labyrinthine.
Reputation is important for all organisations, but it is particularly crucial for charities. The motivation to give is as much emotional as it is rational, and trust can be damaged more easily than it can be earned. With trust in institutions falling in recent years, charities can ill afford to allow even the perception – regardless of the reality – that their attempts to elicit donations are undermining their charitable principles. But YouTube advertising risks more than reputation. When advertisements are placed automatically and without human moderation, there will always remain the possibility that adverts will appear alongside content that runs counter to the values of the advertiser. When this happens, charities will, even if only to a limited extent, be funding those propagating the very dark forces within society that they are trying to tackle. For a donor, few things could more effectively chill the ‘warm glow’ of giving.
Unpicking YouTube advertising
The first issue to tackle is that of YouTube’s content screening. Community guidelines and corporate policy prohibit hate speech (including specific definitions) as well as a wide range of illegal, graphic or harmful content. However, this process clearly isn’t foolproof, as the above scandal attests. One way to insulate YouTube from any content which breaches these rules would be to have all new videos screened by human moderators. However, given that around three hundred hours of video are uploaded every minute, such an outlay of labour would be unthinkable under YouTube’s economic model. Indeed, The Times reports Matt Brittin, Google’s head in Europe, as confirming (whilst apologising) that employees of Google (which owns YouTube) would only examine inappropriate content “through two lenses”: when videos were flagged by other users, or when they were detected using automated technology.
This is where things get particularly complicated. Such “automated technology” is not only at the heart of how videos get blocked through image recognition (mostly for nudity) or phrase tracking in titles, tags and comments – it is also fundamental to how content gets promoted to users, how revenue from advertising is awarded and how any given advert is placed next to content. To make things even more difficult, neither YouTube nor Google makes these algorithms public. As a result, under certain advertising arrangements, it can be difficult to predict where an advert will appear.
AdWords, Google’s advertising platform, offers a complex array of ad-targeting parameters, such as:
- demographic groups,
- ‘affinity audiences’ and ‘custom affinity audiences’ – allowing you to drill down to specific audience interests (e.g. an animal charity might target people who search for or visit sites about rehoming unwanted pets),
- in-market audiences (audiences who are researching products and actively considering buying a service or product like those you offer),
- placements (adverts on specific websites or pages)
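To make the layering concrete, the targeting options above can be sketched as a simple filter model. This is purely illustrative – none of the names below come from the AdWords API; they are hypothetical stand-ins for the kinds of criteria an advertiser stacks together.

```python
# Illustrative model of layered ad targeting (NOT the AdWords API).
# Each viewer is a dict; each targeting option is a predicate that
# narrows the set of viewers an ad is eligible to reach.

viewers = [
    {"age": 34, "interests": {"pet rehoming"}, "site": "petrescue.example"},
    {"age": 52, "interests": {"cars"},         "site": "news.example"},
    {"age": 29, "interests": {"pet rehoming"}, "site": "blog.example"},
]

def demographic(min_age, max_age):
    return lambda v: min_age <= v["age"] <= max_age

def affinity(topic):
    return lambda v: topic in v["interests"]

def placement(allowed_sites):
    return lambda v: v["site"] in allowed_sites

def eligible(viewers, criteria):
    """Return viewers matching every targeting criterion."""
    return [v for v in viewers if all(c(v) for c in criteria)]

# A hypothetical animal charity stacking all three layers:
audience = eligible(viewers, [
    demographic(18, 45),
    affinity("pet rehoming"),
    placement({"petrescue.example"}),
])
print(len(audience))  # → 1: only the first viewer passes all three filters
```

The point of the sketch is that each added layer shrinks the eligible audience – which is also why, as the small print below notes, loosening any layer can let an ad appear in places the advertiser never intended.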
In theory, these options allow the client to target a specific audience and avoid appearing beside inappropriate content. However, even when you use the placement option in AdWords to advertise on a specific site, page, YouTube channel or video within the Display Network or YouTube, Google’s small print warns that “your ad may still run in all eligible locations across the Display Network.”
This brings us to ‘remarketing’. Remarketing is a Google AdWords service which allows you to prompt people who have visited your website with targeted advertising as they continue to migrate to other websites on the Display Network. As unnerving as it may be for internet users to have adverts for a product stalk them around the internet, this is, quite understandably, a very attractive proposition for advertisers. However, in an environment where journalists are trying to generate stories that hinge on screenshots of famous brands advertising next to offensive content, remarketing comes with a degree of risk. A journalist can easily visit your website, immediately view some offensive content, and then claim that the ads they are presented with reflect poorly on your brand. The fact that such a story would give an entirely false impression of the relationship between advertiser and content is beside the point; the image is – until the public develop a more nuanced understanding of how online advertising works – damning.
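The mechanism can be sketched in miniature (again, a purely illustrative toy model, not Google’s implementation): a visit to the advertiser’s site adds a browser to a remarketing list, and any later page in the network – whatever its content – checks that list before serving the ad.

```python
# Toy remarketing model (illustrative only, not Google's implementation).
# A visit to the advertiser's site tags the browser; every subsequent
# page in the ad network then shows that advertiser's ad to tagged
# browsers, regardless of what content the page itself hosts.

remarketing_list = set()

def visit_advertiser_site(browser_id):
    """The advertiser's site drops a remarketing tag on the visitor."""
    remarketing_list.add(browser_id)

def serve_ad(browser_id, page_content):
    """Any network page: tagged browsers see the advertiser's ad,
    no matter what the page contains."""
    if browser_id in remarketing_list:
        return f"charity ad shown next to: {page_content}"
    return "generic ad"

visit_advertiser_site("browser-42")
print(serve_ad("browser-42", "offensive video"))  # charity ad follows the tagged browser
print(serve_ad("browser-99", "offensive video"))  # untagged browser sees a generic ad
```

This is why the journalist’s screenshot is so easy to manufacture: the ad follows the browser, not the content, so the pairing of brand and video says nothing about the advertiser’s choices.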
How do we react?
Many charities will, as CAF has for the time being, choose to suspend advertising on YouTube, both to protect their reputation and to avoid legitimising the very groups or behaviours in society they seek to counter. However, this may not be a sustainable or even ethically consistent position in the long run. YouTube – like Facebook or Snapchat – should not be seen merely as a media outlet in the same way as a newspaper. For the most part, users identify with channels and content hosted on YouTube rather than the platform itself. Its sheer scale means that to boycott YouTube, for now at least, would be like boycotting television advertising because you don’t want your product to be associated with a particular presenter on a niche Romanian talk show. The content does not reflect the platform; it reflects society, good, bad or indifferent. Users are ultimately curating the adverts that they are shown by the content that they choose to consume. Accountability is shared. That is not to say that YouTube isn’t going to have to improve its advertising controls, or that charities aren’t going to have to play a part in influencing this – they clearly are – but first of all, we are going to have to become more adept at, and more thoughtful when, using the available controls.
Fundamentally, we may need to confront the reality that our key audience might view content that we would prefer them not to. The existence of extreme and in some cases hateful content across the internet is a concern for society, but it is also, in some ways, an opportunity. As charities suspend their advertising on YouTube in their droves, others, sensing the need to challenge such content, may see value in proactively seeking out opportunities to place contrary messaging. For example, a charity that seeks to challenge homophobia may actively seek to place messages next to homophobic content, in spite of the fact that they may be contributing funds to the owner of the video, in the belief that the net impact will be positive. Some will feel the need to take responsibility for their audience, taking the opportunity to confront them with a brand that reflects their higher moral self, juxtaposed awkwardly with their choice of offensive content.
In the wake of the US Election, the way in which algorithms draw on user data to promote and share content has become a topic of mainstream discussion. This discussion, rather than simply posing us unwelcome questions, is a vital opportunity for charities to shape the digital world. The potential for algorithms to affect the future of charities has a scope far wider than advertising. Indeed, attempting to explore the potential ramifications of algorithms on our sector results in a bewilderingly stark set of predictions. Despite the barriers to understanding such issues, charities cannot be left out of answering – both by promoting good practice and by leading by example – the most important questions of our time.