Artificial Intelligence: Applications and Implications in the Charitable Giving Sector

In just the last several years, the term “artificial intelligence” has become part of our cultural lexicon, a fact indicative not only of the rate at which AI systems are developing, but also of the extent to which they have been marketed to the public. Research suggests AI will soon grow into a $50 billion industry, with the technology diffusing rapidly across all major economies; China alone accounted for 60% of total global investment in AI from 2013 through the first quarter of 2018. For nonprofits, the promises of AI are the same as for the commercial sector: lower costs, streamlined operations, better data collection, and—potentially at least—better end service.

Whether AI can deliver on those promises is up for debate. What is not in question is that the technology will affect nonprofits' institutional missions—including education, equality, and the governmental policies shaping the lives of people who have traditionally been denied a seat at the decision-making table. The potential for AI to increase societal good exists, but the window for nonprofits to advocate for people outside the high-tech and big-business bubbles will soon close. Understanding AI is the first step for nonprofits in deciding whether their relationship to the technology will be as users, influencers, or both.

AI has already proven useful to nonprofits. Some are applying it to hiring, fundraising, donor engagement, and social listening. Predictive AI can help program teams identify areas of need by degree, geography, and demography. Medical agencies were early adopters of tools for locating and prioritizing at-risk pregnancies, while educational groups have used the technology to match tutoring methods to students preparing for college. The same methods apply across the charitable sector, where AI usage will grow by an estimated 361% over the next two years, according to research by Salesforce.org.

AI is an encompassing term. One of the most widely used applications under that umbrella is the chatbot. There are different types, among them simple scripted chatbots tasked with answering common queries, slightly more complex service chatbots that decode user needs from keywords, and contextual chatbots that use machine learning. This last variety gradually acquires knowledge from both the operator and customers. Within the charitable sector, chatbots are making inroads into support services. For example, in 2017 the U.K. charity the Children's Society launched a chatbot on Facebook and tasked it with answering fundraising questions.
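
To make the distinction concrete, the sketch below shows roughly how a keyword-driven service chatbot of the second type works. The keywords and responses here are invented for illustration; a contextual bot would replace the fixed lookup with a trained language model, and a production bot would sit behind a messaging platform rather than a console loop.

```python
# Minimal sketch of a keyword-matching service chatbot. All intents and
# responses below are invented placeholders for illustration only.

RESPONSES = {
    "gift aid": "Gift Aid lets us claim an extra 25p for every £1 you donate.",
    "donate": "You can donate online, by phone, or by setting up a monthly gift.",
    "sponsorship": "We can send you a fundraising pack for sponsored events.",
}

FALLBACK = "Sorry, I don't know the answer to that. A member of our team will follow up."

def reply(message: str) -> str:
    """Return the first scripted answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    # Anything outside the preset keywords falls through to a generic fallback,
    # which is why such bots struggle with unanticipated questions.
    return FALLBACK

if __name__ == "__main__":
    print(reply("How does Gift Aid work?"))
    print(reply("Can my employer match my donation?"))  # lands on the fallback
```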

But there is a difference between people who interface with chatbots compulsorily, for example when calling a government agency, and those who do so optionally, such as when seeking information before donating to a charity. Even the biggest chatbot proponents admit that the technology does not provide anything close to a personal touch. The Children's Society chatbot was programmed to handle 50 questions; any query falling outside those presets would have gone unanswered.

Another subset of AI is the voice assistant. Voice assistants are not the same as chatbots, though both are conversational interfaces. Chatbots generally struggle with complex questions and lack an understanding of context; thanks to advances in natural language processing, voice assistants handle those challenges better. Their most popular use is in search, but they also work well for texting, shopping, and customer service. Last year the NSPCC (National Society for the Prevention of Cruelty to Children) became the first charity to adopt the technology, tasking an Amazon Alexa voice assistant with taking donations.

NSPCC's use of a voice assistant, while groundbreaking in the charitable sector, doesn't convey the full utility of AI. The technology can be applied to repetitive tasks and data input, but its true value lies in its ability to learn. A nonprofit could hypothetically ask an AI program to monitor fundraising calls and listen for biomarkers in the recipients' voices. Undetectable by human ears, these vocal biomarkers can indicate whether a person is more or less receptive to what they are hearing. The program could tell callers in real time whether receptive biomarkers are coming from the potential donor on the other end of the line, and it could coach callers on which approaches elicit those biomarkers, and thus more donations. While this AI application does not actually exist, the underlying technology does.
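
As a purely hypothetical sketch of how such a coaching loop might be wired together, the snippet below maps a handful of acoustic features to a receptiveness probability and turns it into a prompt for the caller. Every feature name, weight, and threshold is an invented placeholder; a real system would extract features from live call audio and learn the weights from labelled call outcomes.

```python
import math

# Hypothetical weights for a simple logistic model over invented acoustic features.
WEIGHTS = {"pitch_variability": 0.8, "speaking_rate": -0.3, "pause_ratio": -0.5}
BIAS = 0.1

def receptiveness(features: dict[str, float]) -> float:
    """Map acoustic features to a 0-1 'receptive' probability (logistic model)."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

def coach(features: dict[str, float]) -> str:
    """Turn the probability into a real-time prompt shown to the fundraising caller."""
    p = receptiveness(features)
    if p > 0.6:
        return f"Receptive ({p:.0%}): move toward the ask."
    return f"Not yet receptive ({p:.0%}): slow down and ask an open question."

# Simulated feature snapshots from two moments in a call.
print(coach({"pitch_variability": 0.9, "speaking_rate": 0.2, "pause_ratio": 0.1}))
print(coach({"pitch_variability": 0.1, "speaking_rate": 0.8, "pause_ratio": 0.7}))
```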

AI can also be applied to the opposite end of the funding pipeline, for example by analyzing factors that influence program outcomes, and sorting and classifying individual donee cases. It could help nonprofits gather, process, and present highly granular data related to program impacts. In an era when many donors are demanding more transparency about what exactly their gifts are accomplishing, AI could give organizations that use the technology a crucial fundraising edge over those that don't.
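
A minimal sketch of that case-classification idea, under invented assumptions: fit a simple model to past program cases, read its coefficients as a rough signal of which factors move outcomes, and score a new case for follow-up. The column names and figures below are placeholders, not real program data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented stand-in for a nonprofit's historical program records.
data = pd.DataFrame({
    "sessions_attended": [2, 8, 5, 9, 1, 7, 6, 3],
    "household_income":  [18, 32, 25, 40, 15, 30, 28, 20],   # in $1,000s
    "distance_km":       [12, 3, 6, 2, 15, 4, 5, 10],
    "completed_program": [0, 1, 1, 1, 0, 1, 1, 0],           # outcome of interest
})

X = data.drop(columns="completed_program")
y = data["completed_program"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Rough indication of which factors push outcomes up or down.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Score a new case so staff can prioritize follow-up.
new_case = pd.DataFrame([{"sessions_attended": 4, "household_income": 22, "distance_km": 8}])
print("estimated completion probability:", model.predict_proba(new_case)[0, 1].round(2))
```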

Artificial intelligence is largely about using intense computational power to generate useful probabilities. It is helping animal conservation nonprofits catch poachers by predicting where they go and using the data to map out more efficient park ranger routes; AI is being used by human rights nonprofits and banks to analyze financial data and trace money generated by human trafficking; and IBM's Watson AI system recently analyzed a woman's genetic information, compared it to 20 million oncology studies in a mere ten minutes, and diagnosed her unidentified illness as leukemia—something doctors had not accomplished after multiple examinations.

The power of AI systems to affect charitable giving is enormous. But nonprofits have a stake in AI for broader reasons. Numerous studies have now revealed that AI systems do not work with neutral data the way a sculptor molds base clay; the data itself is often biased. The result is that the very social inequities many nonprofits are determined to combat could grow more entrenched as AI proliferates. And just as a skyscraper's support pilings cannot be repaired without exorbitant expense, by the time foundational AI systems are fully in place and commercial sectors are built atop them, there will be little chance and less incentive to go back and fix the underlying problems.

Those problems are almost too numerous to cite. Among them: predictive criminal justice algorithms that rated first-time offenders of color as highly likely to re-offend while rating white career felons as safer; facial recognition software (often used by police) that returned a 1% false match rate for white men but 35% for dark-skinned women; automated bank lending decisions that replicated historical bias against low-income communities; an AI recruiting tool developed by Amazon that taught itself to favor male candidates over female ones; and a Microsoft chatbot that learned and publicly posted racist and sexist language.

All of these missteps can be repaired with programming tweaks, but the underlying issue is that the data—income, arrest records, postcodes, social affiliations—used to train computers is historical, and history is rife with human prejudice that computational systems then learn. Exacerbating the problem is a faction within the high-tech community that insists AI be color- and gender-neutral, even though this approach presupposes utopian data sets that in reality don't exist. The global research and advisory firm Gartner predicts that through 2022, approximately 85% of AI projects will deliver erroneous outcomes due to data bias, algorithmic bias, or bias within development teams.
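
The mechanics are easy to demonstrate on a toy example. In the synthetic sketch below (every record and field is invented), a classifier trained on historically skewed lending decisions reproduces that skew, treating two otherwise identical applicants differently purely because of postcode.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Synthetic past lending decisions: approvals were historically skewed against
# one postcode, even though postcode says nothing about ability to repay.
history = pd.DataFrame({
    "income":     [30, 32, 31, 29, 30, 33, 31, 28],
    "postcode_a": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = historically redlined area
    "approved":   [0, 0, 0, 1, 1, 1, 1, 1],   # past human decisions
})

model = DecisionTreeClassifier(random_state=0).fit(
    history[["income", "postcode_a"]], history["approved"]
)

# Two applicants with identical incomes, differing only by postcode.
applicants = pd.DataFrame({"income": [31, 31], "postcode_a": [1, 0]})
print(model.predict(applicants))  # the learned rule treats them differently
```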

This represents a grave problem for the charitable sector, a case of bailing water from a boat that might keep springing leaks. Essentially, nonprofits face AI questions on two fronts. Can the technology be used to improve the charitable sector's ability to do good work? And should the charitable sector help prevent the technology from creating more work to do? While the examples of AI bias mentioned above may be unintended consequences of bad or lazy practice, there are other issues associated with the technology that are anything but accidental. The most serious of these is that the global military sector's rush toward more efficient weapons is a primary driver of AI development.

Considering that AI systems will have an effect on everything from global GDP to political stability, it's no surprise that surveys reveal a substantial amount of concern around the issue. Up to 70% of the U.S. public are afraid of losing their jobs to automated systems. And most tellingly, a December 2018 poll by Elon University found that even 37% of “technology experts, scholars, corporate and public practitioners, and other leaders,” when asked to look ahead to the year 2030, doubted AI would have improved humanity's existence. The results show what sort of uncharted territory the nonprofit sector faces, and the seriousness of the decisions to be made.

In 2015 the issue was considered important enough that a group of AI experts, robot makers, programmers, and ethicists published an “Open Letter on Artificial Intelligence” stressing the need to develop the technology with its potential societal benefits foremost in mind. Since then, many other observers have sounded klaxons, cautioning that if the systems are built with speed and profit as their primary goals, with safety and ethics secondary, the collateral damage could overwhelm any benefits. For nonprofits the question isn't really whether they will use AI—they will. But the question of whether they will also be influencers remains unanswered.