Algorithm Is Gonna Get You: what the rise of algorithms means for philanthropy


Image by Flavio Takemoto via freeimages.com

Algorithms are beginning to have a radical impact on society. Whether you believe this is for better or worse, it is also going to change our very understanding of philanthropy and civil society.

 

Discussions of trends in technology are often seen by the media and the wider public as niche at best and irrelevant at worst – until suddenly they are not. One issue that has recently become much more prominent is the nature of the algorithms that underpin much decision making by artificial intelligence. This is in part due to allegations that they had a significant, even decisive, effect on the recent US elections, and in part due to claims that they are both a cause of an apparent spike in ‘fake news’ and a potential solution to that same problem.

 

Before we launch into an examination of these issues, it is worth taking a step back and trying to pin down exactly what we are talking about here. Algorithms are essentially sets of rules or sequences of instructions: they use known information to guide a sequence of actions. We do this all the time intuitively. For example, experience has taught me that, unless I am planning to make a brave sartorial statement, it makes sense to put underwear on before my trousers.

 

The fact that people use algorithms to gain a competitive advantage or to drive efficiency is by no means new. For a long time, number crunchers have analysed data to inform stock market strategy, the pricing of consumer goods, the cost of insurance and the efficacy of government policy choices and systems. However, rapid advances in computer processing power and the availability of huge amounts of data have enabled a far more widespread use of algorithms to identify trends and, crucially, to automate responses to those trends. In particular, the advent of ‘deep learning’ has led to algorithms which can effectively mimic many human decision-making processes, but at a speed and scale that humans are simply not capable of. They also operate (at least in theory) in ways that are free from the false or lazy assumptions and prejudices that we all carry around with us.

 

In this way, the combination of big data and algorithms that can harness its potential is having a positive effect on many areas of contemporary society: from improving the efficacy of Google searches to making far more accurate weather predictions. Philanthropy will not be isolated from this trend: in fact, it is likely to be particularly affected, given that its purpose is to identify the levers of positive societal change and to pull them. However, as this article will show, the rise of algorithms presents as many challenges to philanthropy as it does opportunities. One thing is for sure: those interested in philanthropy must respond to the trend.

 

Optimising the online donor experience

 

Online donation is on a trajectory to become the dominant method of giving, particularly when you bring mobile into the equation. Of course, some people may still prefer to make cash donations – and there is good, if slightly unnerving, evidence to suggest that some of our baser instincts lend themselves particularly to face-to-face fundraising. But the way we give online, the tools we use, the information we are shown and the way that online giving is integrated with social media will inevitably mean new opportunities for using machine learning to optimise the donor experience.

 

If you want to understand how algorithms are likely to affect philanthropy, all you need to do is ask yourself one simple question: where is the data? One obvious place to start is with online donation portals. For example, algorithms which use social benchmarking (such as those used by Amazon) to show customers products that other people with similar browsing habits have bought could be used to nudge donors towards more, and more diverse, giving. It is unlikely that we will have to wait long to see this kind of profiling being used by giving portals. JustGiving, a fundraising portal in the UK, announced back in 2015 that when it came to data collection it was taking “whatever we can get our hands on”, including terabytes’ worth of web logs detailing the subsequent pages visited by donors, and that it was paying for processing power in the cloud to analyse them. It seems inevitable that donors will end up being shown information about causes that is ever more tailored to their interests.
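
To make this concrete, here is a minimal sketch (in Python) of the kind of ‘donors like you also supported…’ logic a portal might use. The donor data, cause names and scoring rule are all invented for illustration – real systems operate at vastly greater scale and sophistication.

```python
# A minimal sketch of "people like you also gave to..." recommendations,
# using item-item co-occurrence over donation histories. All donors and
# causes are hypothetical toy data.
from collections import defaultdict
from itertools import combinations

# donor -> set of causes they have given to
donations = {
    "donor_a": {"clean_water", "malaria_nets", "literacy"},
    "donor_b": {"clean_water", "malaria_nets"},
    "donor_c": {"malaria_nets", "disaster_relief"},
}

# Count how often each pair of causes is supported by the same donor.
co_counts = defaultdict(int)
for causes in donations.values():
    for a, b in combinations(sorted(causes), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(donor, top_n=3):
    """Suggest causes that co-occur most with the donor's existing causes."""
    mine = donations[donor]
    scores = defaultdict(int)
    for cause in mine:
        for (a, b), n in co_counts.items():
            if a == cause and b not in mine:
                scores[b] += n
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("donor_b"))  # ['literacy', 'disaster_relief']
```

Even this toy version shows the essential move: the portal never asks what you care about, it infers it from what people with overlapping giving histories have done.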

 

The matching of donors to causes is potentially a boon for both donors and the organisations that they support. A world in which donors are automatically introduced to information about how they can tackle issues about which they care deeply without their feeling intruded upon or harassed sounds like a charitable utopia. However, as anyone who has read any science fiction will know, we should be careful what we wish for.

 

As stated above, algorithms are not prejudiced. Unfortunately, people are (and yes, that includes donors). Given that algorithms designed to show us content we may be interested in work by crunching data about our habits and the habits of others ‘like us’, they tend to reflect those prejudices in the decisions they make. As a result, people who, for example, abhor racism but still hold unconscious biases will not be shown content which might helpfully challenge those biases.

 

When it comes to the most advanced forms of machine learning, designed to tailor content that users are likely to respond to, the giants of social media reign supreme. As such, most people who work in fundraising and media roles for civil society organisations (CSOs) will be well aware that the success of their organisation’s social media profile is largely dependent on the way in which the algorithms which sit behind the platforms filter content for their audience.

 

Of course, giving people content that they are likely to appreciate is a pretty succinct summary of good business strategy, but it also raises some fairly serious issues for the future of charitable giving and the causes it supports. Take Facebook, for example, which, amongst other things, weights the content it shows based on what your friends find interesting. Given that humans are essentially fairly tribal creatures, we tend to connect strongly with people who share similar views (I am not going to get into the psychology of whether the relationship runs just as strongly in reverse), meaning that what our friends find interesting can be an effective proxy for establishing our own interests. However, when it comes to raising awareness amongst people who are not already well engaged, a system that is designed to reinforce group beliefs is ill-suited to challenging assumptions, promoting social change or reaching new donor audiences.
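
A toy illustration of why this matters: if a feed simply ranks posts by how much our friends have engaged with them, content about unfamiliar causes sinks regardless of merit. The scoring rule below is a hypothetical simplification, not Facebook’s actual model.

```python
# A minimal sketch of friend-weighted feed ranking. The posts and the
# single-signal scoring rule are invented to illustrate the mechanism.
posts = [
    {"id": 1, "topic": "familiar_cause",   "friend_engagements": 12},
    {"id": 2, "topic": "familiar_cause",   "friend_engagements": 9},
    {"id": 3, "topic": "unfamiliar_cause", "friend_engagements": 1},
]

def rank_feed(posts):
    # Posts your friends engaged with float to the top; content from
    # outside the group's shared interests sinks, however worthy.
    return sorted(posts, key=lambda p: p["friend_engagements"], reverse=True)

for post in rank_feed(posts):
    print(post["id"], post["topic"], post["friend_engagements"])
```

Nothing in the ranking function is malicious; the narrowing effect is simply what optimising for predicted engagement looks like from the inside.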

 


Sharing the same views. Image by Gary Scott via freeimages.com

The fact that social media algorithms are creating information “echo chambers” is a problem for philanthropy not merely because it makes the sector’s messaging and fundraising strategies more difficult, but because the failure to counter confirmation bias within groups and communities will likely lead to social problems that philanthropists and the organisations they support will subsequently have to address. In this way, algorithms may be creating a problem whilst simultaneously undermining our ability to solve it. Given that Facebook has announced plans to create its own in-house giving portal, allowing donors to connect with causes and make donations entirely within Facebook, this is an issue that CSOs and the philanthropy community need to get to grips with urgently.

 

Hedge fund philanthropy

 

Our understanding of the way that donors respond to information about the impact of their donations is not complete, and it may well be that some of our assumptions are false: some studies have actually shown that donors give less when confronted with impact data, even when it is very positive. However, what is not in question is the increasing demand amongst donors and institutional funders for ever more data on impact. This is perhaps best demonstrated by a UK report which found that funder demand was the driving force behind increasing impact measurement, with a mere 5 per cent of organisations indicating that service improvement was the main driver.

 


Chart taken from ‘Making an Impact’, a report by UK think-tank NPC

This demand amongst funders – and amongst individual donors in particular – is set to increase as a new generation of donors integrate their ideas and preferences into their philanthropy. The Millennial Impact report found that young people are more likely to associate themselves with causes rather than organisations, possibly suggesting less loyalty to specific CSOs and a greater willingness to see their money go to whichever organisation offers the greatest impact. Furthermore, a study carried out in the US by Fidelity has shown that millennials are more than twice as likely as baby boomers to be influenced by new technologies and alternative forms of giving when it comes to their philanthropy. Taken together, this loyalty to causes and willingness to be guided in their giving by technological advances – on top of a growing demand for data on impact – represent an aligning of the stars to create the perfect environment for algorithmic philanthropy.

 

The most likely early adopters of algorithmic philanthropy will be proponents of Effective Altruism (EA), a movement based on the work of the utilitarian moral philosopher Peter Singer which has found favour among a growing group of academics, practitioners and donors. It has proved particularly popular with many of the new breed of donor coming out of the Silicon Valley tech industry, such as Facebook co-founder Dustin Moskovitz. EA asks donors to take themselves out of the decision-making process and instead allow their giving to be driven entirely by evidence. Put simply, it is about doing the greatest amount of measurable good possible with the resources available and evaluating the outcome. For proponents of EA, then, the data should not merely guide how you pursue a charitable cause, but dictate the choice of cause itself.
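
In caricature, the EA calculus looks something like the sketch below: rank interventions by measurable good per pound and direct the money to the top of the list. The interventions and figures are invented placeholders, not real cost-effectiveness estimates.

```python
# A minimal sketch of evidence-driven cause selection: allocate to
# whichever intervention delivers the most measurable good per pound.
# All numbers here are made up for illustration.
interventions = {
    "bednets":        {"cost_per_unit": 4.0,   "outcome_per_unit": 0.9},
    "deworming":      {"cost_per_unit": 1.5,   "outcome_per_unit": 0.2},
    "cash_transfers": {"cost_per_unit": 100.0, "outcome_per_unit": 8.0},
}

def effectiveness(name):
    i = interventions[name]
    return i["outcome_per_unit"] / i["cost_per_unit"]  # good per pound

best = max(interventions, key=effectiveness)
print(best)  # the single most effective cause gets the money
```

Notice what has disappeared from this picture: the donor’s own passions play no part in the allocation at all.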

 

Given the growth of EA, it is easy to imagine philanthropists coming together, collecting publicly available data on the ‘market’ and creating a fund or endowment. This could prove very attractive, particularly as it would have hardly any running costs once established. Imagine a direct cash transfer programme that simply gives money to people who need it (such as GiveDirectly) using big data to assess which recipients were likely to add maximum community value or spend the money most wisely. This seems intuitively desirable, doesn’t it? It fits all of our assumptions about who deserves our help and what effectiveness is. However, the use of algorithms in other areas of society has shown that whilst handing decision making over to algorithms might reduce human error, it doesn’t remove the injustices of the society in which the algorithms are created. Indeed, in some cases, it can reinforce them.

 

The problem is that algorithms take data from the world as it is now. That world is one in which prejudice and injustice have resulted in a system in which some people are profoundly disadvantaged by societal instruments and pressures which are entirely out of their control. The effects of those forces are picked up by algorithms in the data and, where they have predictive power, used in decision making. The result can be to amplify existing social injustices.

 

In her book “Weapons of Math Destruction”, Cathy O’Neil describes how this is already happening. One particularly striking example O’Neil details is that of “recidivism models” in criminal sentencing. For years, there has been widespread concern in the US about inconsistencies in sentencing, whereby minorities often seem to receive more punitive treatment. As a result, a number of states have resorted to using algorithms to calculate the likelihood that those convicted will reoffend, with the aim of guiding sentencing with reason and challenging unconscious biases. Objectively, this seems fair, and it certainly feels efficient. With prison populations and costs spiralling out of control in the US, it makes sense to lock up those who are likely to reoffend and to minimise jail time for those who are not. However, closer analysis of the data that might best predict the likelihood of recidivism reveals that it is also a proxy for societal inequality. Alongside factors you might expect, such as the nature of prior convictions, the models include drug and alcohol use and previous police encounters. The former is strongly associated with poverty, and the latter has been shown to be affected by racial profiling by law enforcers. Furthermore, factors such as where you live (which can be a proxy for both poverty and race) and the criminal convictions of family members suggest a model that reinforces the social structures of inequality.

 

So in our example of a direct giving algorithm, it may well be that people of a certain ethnic group, location, gender, religion or age are more likely to make good use of any aid provided; but to deliver aid on that basis would seem merely to reinforce societal inequality. Similarly, were microfinance providers to profile farmers in this way to favour only those most likely to repay, this would no doubt advantage those facing only economic barriers, whilst – relatively speaking – further marginalising those facing multiple barriers to prosperity.
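
To see how this proxy problem arises mechanically, consider the stylised sketch below: a recipient-scoring model that is never shown a protected attribute can still learn one through a correlated feature such as location. The data, features and outcome labels are entirely invented to illustrate the mechanism.

```python
# A hypothetical sketch of proxy leakage: the model never sees ethnicity
# or religion, yet "district" encodes historical disadvantage, and the
# past "success" labels reflect that disadvantage. All data is invented.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Features: [household_income, district]. In this toy world, district 1
# is a historically marginalised area.
X = np.array([[30, 0], [32, 0], [28, 0], [12, 1], [10, 1], [14, 1]])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = historically judged a "good use" of aid

model = LogisticRegression().fit(X, y)

# Two applicants with identical income, differing only by district:
applicants = np.array([[20, 0], [20, 1]])
print(model.predict_proba(applicants)[:, 1])
# The district-1 applicant scores lower purely because of where they live.
```

The model is doing exactly what it was asked to do; the injustice enters through the history encoded in the training data, not through any bug in the code.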

 

The best place to see both the potential power and the dangers of algorithm-led decision making is the financial trading industry. When we think of trading, we tend to picture trading floors full of smartly dressed but sweaty-looking people (probably men) shouting into multiple phones, waving pieces of paper around and looking at giant boards full of inscrutable numbers while whispering to one another about how the weather is likely to affect the price of sugar. However, the reality of modern trading is quite different.

 

Trading these days mostly takes place in server farms and is largely automated. Money here is made not by cleverly predicting some market shift by taking on board various macroeconomic and market-specific variables, but by responding as fast as possible to price changes and trades logged by others. Trades are logged so quickly – this kind of trading is called high-frequency trading (HFT) – that it is impossible for any human to keep track of what is going on. When I say ‘as fast as possible’, I really do mean that. It is difficult to overstate the premium that is placed on the ability to respond fastest. One way to illustrate this point is the $2.8 billion paid by the first 200 companies in 2014 to use a new fibre optic cable spanning the 827 miles from the Chicago exchange to the New York exchange (in New Jersey). By travelling slightly straighter, this cable promised to cut milliseconds off the time it takes to deliver information between the two exchanges. Given the power of the algorithms being used in HFT, even thousandths of a second represent a significant advantage.
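
For a rough sense of the margins involved, here is a back-of-the-envelope calculation, assuming light travels through optical fibre at roughly two-thirds of its speed in a vacuum:

```python
# Rough, back-of-the-envelope latency arithmetic. Figures are
# approximations: light in fibre covers roughly 200 km per millisecond.
route_miles = 827
route_km = route_miles * 1.609            # ~1331 km
fibre_km_per_ms = 200                     # ~200,000 km/s

one_way_ms = route_km / fibre_km_per_ms
print(f"One-way travel time: {one_way_ms:.2f} ms")  # ~6.7 ms

# A straighter route that shaves even ~100 km off the path saves ~0.5 ms:
# an eternity for an algorithm that can react in microseconds.
```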

 


This is actually Facebook’s data centre. Image by mahmoud99725 via Flickr

This system, in which humans take a back seat in trading decisions, represents an extreme example of what the trend for integrating big data and computer-aided, rule-based decision making could look like for philanthropy. We have already outlined some of the potential applications of such a model, but taking HFT as an allegory – which I recognise is something of a stretch – we may also want to consider some of the issues that HFT has encountered. Indeed, given that some people are already calling for philanthropists to develop hedge-fund-style “quants” for philanthropy, the time to raise any potential issues is now.

 

Cast your eyes back to Friday 7 October 2016, a few months after Britain had voted 52 per cent to 48 per cent in favour of leaving the European Union. When traders in Europe arrived at work in the morning and turned on their computers, what they saw must have beggared belief. Overnight, sterling (the UK’s currency) had plummeted in value by a staggering 8 per cent. Even more remarkably, within half an hour sterling had almost entirely recovered. Those traders not on the phone to their IT departments trying to establish why their computers were malfunctioning began to scramble to find the cause of this bizarre event. The culprit? Most are pointing the finger at algorithms.

 

Months earlier, on 23 June, the British public had voted in a closely contested referendum to leave the European Union (EU). Traders around the world, understanding the ramifications of the ‘Brexit’ negotiations for the global financial market – and in particular the prospect of a so-called ‘Hard Brexit’, which would see Britain face tariffs in order to trade with the continent – had introduced code into their algorithms to monitor news sources for indications of likely approaches to diplomatic negotiations. While traders in Europe were fast asleep, an article was published on the Financial Times website with the title “Hollande demands tough Brexit negotiations”. Apparently, as some algorithms reacted by selling, other algorithms responded to those orders, prompting sterling to fall off a cliff. I say apparently because the sheer volume of trades on the global market, and the speed at which decisions are made, make it almost impossible to establish why an event like this happened – and, crucially, very difficult to predict or prevent such events in the future.
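
In caricature, the kind of news-triggered rule reportedly involved might look like the sketch below. The keywords, order logic and headline handling are hypothetical illustrations, not any real trading system:

```python
# A deliberately crude sketch of a news-triggered trading rule. Real
# systems are vastly more sophisticated; everything here is invented
# purely to show the shape of the mechanism.
from typing import Optional

BEARISH_KEYWORDS = {"tough brexit", "hard brexit", "tariffs"}

def react_to_headline(headline: str) -> Optional[str]:
    """Emit a sell order if the headline matches a bearish pattern."""
    text = headline.lower()
    if any(keyword in text for keyword in BEARISH_KEYWORDS):
        return "SELL GBP/USD"
    return None

print(react_to_headline("Hollande demands tough Brexit negotiations"))
# -> "SELL GBP/USD". The danger is the feedback loop: every similar
# algorithm sells at once, and the falling price itself then becomes a
# bearish signal that triggers yet more automated selling.
```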

 

A flawed human’s evaluation

 

I won’t pretend that I can approach this issue without prejudice (in the way that an algorithm might be able to), or that I can take on board all of the available information in coming to a conclusion. Indeed, given my vested interest in philanthropy advice and services as a CAF employee, information suggesting that algorithm-led philanthropy is somehow a terrible idea is likely to resonate with me thanks to my inevitable confirmation bias on the issue. However, despite my strong preference for a world in which my vocation still exists, I am objective enough to see where things are pointing: donors, regulators and even beneficiaries are demanding more information on all aspects of giving; that data is increasingly accessible electronically; huge cloud-based processing power is widely available; and algorithmic decision making is already prevalent in other spheres of life. All of this suggests the inevitability of a future in which some, most or nearly all philanthropy will be either influenced by, or entirely directed by, algorithms. Furthermore, I recognise the possible positive outcomes for philanthropy and those it benefits in a trend towards algorithmic data crunching. Though there are many, for me the principal benefits are:

 

  • The ability to use data to identify effective solutions – particularly those which have been undervalued. Solutions to problems are often unglamorous, but algorithms are only concerned with what works – unless we program them otherwise.
  • By responding to activity elsewhere in the philanthropy ‘market’, algorithms can avoid duplication or ensure that spending is complementary.
  • Remove false assumptions from decision making. We carry with us the lessons gleaned from all our experiences and regularly use these as a way to make intuitive decisions, rather than engaging in detailed thinking each time the question arises. This can result in false assumptions. Algorithms do not need to cut corners like this.
  • Reduce costs through disintermediation. A whole host of services that advise donors or manage their philanthropy could be automated at a fraction of the cost.
  • Counter the perception that philanthropy is politically motivated or that wealthy individuals are exerting power in their own interests. Philanthropists are coming under increasing scrutiny for the way they give away their wealth. Algorithms could provide a way to give selflessly.

Given this, I can do something that even the most powerful AIs might still struggle with: I can incorporate this knowledge into a moral framework and consider how we might mitigate the potential ethical, social and political unintended consequences of algorithmic philanthropy. Again, there are likely many more, but those which strike me are:

 

  • The data used by algorithms will necessarily be drawn from our unfair world, and as a result the actions they suggest might reflect, or even reinforce, the current structure of society. Algorithms are not prejudiced, but society is.
  • Data reflects the world as it is, and algorithms can only seek improvements within the current system rather than imagining a different one. Some of the most important philanthropic movements – the anti-apartheid movement or the campaign for same-sex marriage, for example – required imagining systemic change.
  • Algorithms can be so powerful and process so much data that humans cannot exercise proper oversight. This creates a dangerous gap in governance, in which we can neither understand and learn from errors nor predict future problems.
  • Many people are motivated to give because they have an emotional connection to a cause. Will people want to give if their actions are being dictated by an algorithm?

The friendly ghost in the machine

 

Given the inevitability that technological advances will come to dominate our lives in ways both negative and positive, it is possible to feel powerless. But we can do something. First, we need to be as cold and calculating as computers in sizing up our future robot overlords, recognising that they are neither good nor bad but merely reflections of the prejudices that we feed them (would a computer get that joke?). To that extent, we must be careful to identify and redress unhelpful prejudices in the data we give them.

 

Secondly, we should be empowered by the knowledge that civil society is ideally placed to do this, and has in fact been doing it in other spheres of life for generations. Thirdly, each of us, in our philanthropy – be it modest or grand – has the power to push the CSOs that we fund to ask questions of themselves and others. This could be as simple as enquiring into the way a charity we give to uses data, or as far-reaching as the approach of LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar, who donated $10m each to the Ethics and Governance of Artificial Intelligence Fund. Finally, we need to ensure that we evaluate the appropriateness, advantages and limitations of algorithms in guiding our opinions, our choices and, of course, our giving.

 

 

Adam Pickering

 
