
Podcast | AI in Fundraising: Balancing productivity, responsibility, and authenticity

Two of the hottest topics in philanthropy are the declining number of donors and the impact of artificial intelligence (AI). In this episode of Fundraising Today and the Go Beyond Fundraising podcast, we bring these issues together in a stirring conversation with Nathan Chappell, author of The Generosity Crisis.

As a founder of the Fundraising.AI collaborative, Chappell discusses the role of AI in the fundraising sector. He highlights the key elements that are necessary to ensure “responsible AI,” such as reducing bias, maintaining privacy and security, and adopting a sustainable approach. He also emphasizes the importance of embracing AI as a tool to improve both efficiency and precision. 

What’s more, Chappell points to the larger part AI has in addressing the “generosity crisis” by helping to reverse the drop in household participation in charitable giving. However, he warns about how vital it is for fundraisers to maintain authenticity and connection in the era of AI and cautions against its misuse as a mere transactional tool. 

Instead, Chappell calls for a balanced approach: fundraisers must harness AI for productivity but also invest extra time in fostering individual connections.

Want to hear more? Listen to the full episode:

Connect with Nathan Chappell

Read the blog post: Braving the Wild West of AI + Fundraising with Nathan Chappell

Get more Go Beyond Fundraising Podcasts

 

Transcription

Leah Davenport Fadling: Welcome everyone to another episode of Fundraising Today and another episode of Go Beyond Fundraising podcast. Today, I am joined by a gentleman that I’ve wanted to have on the show for quite a while now. So, it is my joy to (welcome) Nathan Chappell to the show today.

 

Nathan Chappell: Awesome, thanks Leah, it’s so great to be here. I was looking at the people that you’ve had on the show, and so many people I have just tremendous respect for. Dana Snyder, Soraya Alexander, Kishshana Palmer, Gabe Cooper. So, truly the honor is mine to be in that company. So, I can’t wait to learn from you.

 

Leah Davenport Fadling: Oh, I’m hugely flattered. It’s amazing to think back. We’ve been doing this show since 2018, had some amazing folks come through the door. Once again, really happy to have you with us today.

 

So, Nathan, you are the author of The Generosity Crisis. Before we get into our main topic today, which is AI, artificial intelligence in fundraising, I’d love to get more background about you and the impetus behind writing this book because I think it will play a really important role in informing some of the different things we talk about today when it comes to AI.

 

Nathan Chappell: Yeah, that’s great. It’s great to be here and truly it’s interesting because people ask, “How do you get into both AI and Generosity Crisis?” “What do they have in common with one another?” So, it’s great to unpack all that.

 

I feel so fortunate that the two areas I was passionate about throughout my entire career came together. I got into nonprofits, having been a technologist prior, 20-something years ago. Started and sold two tech companies. Ended up spending 20 years leading nonprofits, where I really applied more for-profit business practices in those nonprofits. And then started getting really deep into AI in 2017.

 

And then for a number of years, I was, as a nonprofit leader, concerned about the decline in household participation — in the numbers of individuals that were giving. So, I think just fortuitously: what’s the application of AI to help solve for this quote-unquote generosity crisis? Does the crisis exist? Is it only my organization that’s struggling with this idea of raising more money from fewer people?

 

It’s been an amazing 18 months or so since the book came out, just to hear the response and almost in a way creating a safe space to talk about what’s not going well in our sector, which I don’t think we were very comfortable doing. So, it’s been kind of a wild ride since the book came out. And I’ve been incredibly flattered to spend so much time with nonprofit leaders and volunteers and tech leaders around the areas that we can improve upon. That’s really what it’s about.

 

Leah Davenport Fadling: I want to explore this idea of a generosity crisis because I don’t think any of us would say that Americans or people are any less generous in their day-to-day lives. I don’t think they would perceive themselves to be such. But I do think that we’re seeing a shift in how people give with things like crowdfunding, peer-to-peer looking different, even rising ways of transferring funds from one person to another with digital platforms like Venmo and PayPal and Zelle. All of which are ways that are difficult to track according to the ways that we’ve usually measured generosity and giving to 501(c)(3)s in the U.S.

 

Nathan Chappell: Yeah, that’s exactly it. You hit the nail on the head. We even say in the book, “Perhaps there’s not a generosity crisis. Perhaps there’s just a redistribution of generosity in different forms.” We’ve never had so many different ways of expressing our generosity. There are some — this is probably the number one reason I don’t get invited to a lot of parties, because I’d say this isn’t really a fun topic to have at a dinner party. And a lot of people don’t invite friends that are going to be just doom and gloom the whole time.

 

I would say, on one side of the coin, yeah, there’s absolutely, undisputed, a decline in the number of households that give to traditional 501(c)3s. So, that 16% decline in the last 20 years, it’s not just the U.S. It’s Canada, the UK, Australia. Most developed nations have a very similar trajectory.

 

So, essentially, the average person is questioning the relevance of the nonprofit: are those nonprofits now a hindrance to generosity, or are they amplifying generosity? And a younger generation that is very used to growing up and questioning everything is really asking, what does the tax status even matter? Why does the 501(c)(3) designation even mean anything if I can make a difference either in the purchases I make, or I can go on GoFundMe and help a neighbor? What is the relevance of the nonprofit?

 

And so, the book is really focused on calling that question. Elon Musk, for example, says all his companies are philanthropy; if philanthropy is the love of mankind, he definitively will say that all his companies are philanthropy. So, the question, of course, is: if I buy a Tesla, am I a philanthropist? That’s on one end. But that doesn’t help the 10 million people even in the U.S. that work for nonprofits, or the 1.7 million nonprofits that are really struggling right now to figure out: when does this stop? At what point do we stop losing these donors?

 

And frankly, the book is a fairly stark wake-up call to say, “Look, if we continue on this path, giving to traditional nonprofits ends in 49 years, if we don’t screw it up even more than we’re screwing it up now.” But it’s really a call to this idea of radical connection, this idea of people giving to people. A lot of what you shared is that there’s more opportunities to give back and to pay it forward than there ever have been before. And I think those are things that we wrestle with in modern society.

 

Leah Davenport Fadling: Absolutely. And I think, in the same way that we think about how different ways of giving are disrupting some of these traditional ways to express generosity, artificial intelligence is doing a really similar thing in how we work. Because, of course, AI and algorithms have been behind the scenes for a long time — whether it’s cars or ovens or whatever it is — helping us work more efficiently and do things while making fewer decisions.

 

Generative AI came on the scene in a big way last year, or actually, it was 2022, with ChatGPT. So, a lot of people right now are asking questions around AI and their relevance in the workplace, how it’s going to change the way that we get things done. So, with that context in mind, I’d love to get some of your thoughts on the impact that artificial intelligence is making and will make on fundraising and nonprofits.

 

Nathan Chappell: Yeah, this is such an interesting convergence, right, because at the end of the day, where we’re thinking about what the incentives are and why AI is being used, almost unilaterally it’s around doing those things you mentioned, making your life easier. It’s around creating efficiency, and it’s creating efficiency through understanding you better.

 

And so, predictive AI, which has been around, frankly, since 1955 but really in the last 10-15 years, predictive AI has gotten really good at understanding, “What are you going to do today?” And that’s the area of creating precision around predictions. “What are you going to do today? Is this the moment to ask you to buy something or to just learn about something or whatever?”

 

And then, to your point, the plus is generative AI, which is around that personalization. So, now we can get that prediction with this tremendous amount of personalization at scale. But at the end of the day, all that is being used — and the private sector’s gotten really good at leveraging this technology — to really essentially build connections. And the ways that for-profit businesses are incentivized in the stock market are not just based on selling products or creating transactions, but they’re rather creating lifetime value. Not just selling you a pair of shoes but selling a pair of shoes to you and everyone in your family. Not just this time, but next time and the time after that. That’s how the stock market values for-profit entities now.

 

So, there’s a lot to say that while AI has gotten really good at understanding — and now with generative AI, pronouncing connection and building connection — that it calls into question that from a nonprofit perspective, well, what are you in the business of doing? And nonprofits are also in the business of building connections. But in the past, nonprofits were competing against nonprofits for charitable dollars. And the realization and, really, what one of the biggest takeaways from our book and probably 10,000 conversations in the last year is that nonprofits are not competing with nonprofits; they’re competing for connections. And they’re competing against organizations that have done this really well and really understand how to leverage precision and personalization at scale and that’s just a whole different playing field.

 

So first, I think it comes to the awareness that we're not competing for dollars, we’re competing for connection. What are the elements of connections? So, trust and authenticity. And how do you do that at scale? And how do you do it efficiently so that it’s not just, “Let’s raise money, a dollar from every person on the planet,” but “Let’s use AI to identify the very best people that aren’t just going to make a gift but that are going to stay with us for a long time and tell their neighbors and friends about what we’re doing and align with us in that really visceral way.”

 

Leah Davenport Fadling: I know it’s so important when we think about the private sector brands that have a very tight hold on us because of the incredible way that they personalize. And of course, Amazon comes to mind, social media comes to mind. I’m starting to plan a trip this next month, and Instagram is already delivering me videos on best things to do in that destination, and I’m seeing things in my feed that are personalizing and making me excited for that upcoming trip.

 

So, when it comes to nonprofits, I think about how valuable connection is. And I think a lot of fundraisers are asking themselves this question of, “What channels do I need to be in to bolster the connection that my constituents have with my nonprofit? How many different segments do I need to have?” I think that when it comes to artificial intelligence, people just have so many questions that they feel overwhelmed with where to even begin.

 

Nathan Chappell: Yeah, it’s such a good point. And I know there’s a lot of AI hype, and there’s a lot of AI FOMO, and our sector is usually resistant to change and is woefully behind in investing in things like charitable or philanthropic R&D. We just don’t do that. We wait until there’s other proven ways of doing things and that they’re really affordable, and that needs to change obviously.

 

I think, first and foremost, the important thing is understanding what an algorithm is. Essentially, the idea that every algorithm is based on capturing attention. Its measure of success is: did it get you to spend or swipe one more time? Like in your Instagram feed. It’s just overwhelming. So, when the algorithm learns (or the Instagram algorithm learns) you’re going somewhere, very quickly, just based on whether you clicked on something (heaven forbid), whether you swiped past it really quick, or whether you paused on it even for a millisecond, the algorithm is essentially — and these are not my words; Tristan Harris, who was in The Social Dilemma, talked about this several years ago — algorithms are essentially a race to the bottom of the brain stem.

 

They’re essentially wanting you to just pay attention for one more millisecond. And if you do, it rewards itself by enhancing its ability to create one more millisecond, one more millisecond, to the point where the average person scrolls 300 feet on their phone a day at this point, which is astounding to think about. And by next year, the average person in the U.S. will have over 5,000 algorithmic decisions made on their behalf. So, 5,000 a day compared to the number of people that you’re interacting with is usually 8-10 people, humans, a day.

 

So, if I’m a fundraiser, I think, first and foremost, it’s understanding that an algorithm is essentially trying to capture attention. And an algorithm in the nonprofit space is the same thing. It’s trying to capture attention, to get through the noise, so that people pay attention to you. I think there’s a lot of fear around algorithms in the nonprofit space, and fundraisers really are getting that idea: “Is it going to take my job?” And, “Of course it will never take my job, because robots don’t understand the heart of fundraising.”

 

The reality is, like we’ve seen with ChatGPT: ChatGPT has read not just every book on psychology ever written but also every book on philanthropic psychology, which there are fewer of. It understands the motivations of giving. So, rather than a fundraiser making a binary decision of “it’s good or bad,” there’s really no moving forward successfully without adopting and learning and enabling your ability as a fundraiser — to leverage the same tools that the private sector is using to capture attention — to use that in your organization. And it’s a scary time for a lot of people because that represents a lot of change. But in the AI world, we say that 70% of AI transformation has nothing to do with data or models. It has to do with the people that are using it.

 

So, we talk about human-centric AI, which I kind of subscribe to and kind of don’t. I feel like AI is just a tool that we want to learn to use so it serves us more efficiently and really creates efficiency and precision around us. But I don’t think many organizations and many fundraisers will have success in the next two to three years saying, “No, that’s bad, I’m not going to use it.” I think there’ll be a great digital divide in the near future: those that have leveraged this tool will move past their peers. They’re getting different, higher-level jobs faster because they’re learning how to essentially build that connection faster and better at scale.

 

Leah Davenport Fadling: I saw an ad campaign recently for Salesforce, and it featured Matthew McConaughey in this Western setting. And you can tell that the Western setting he’s in is a series of AI-generated images. And he says, “If AI is the Wild West, then is data the new gold?” And it really made me think.

 

Nathan Chappell: Yeah, what do you think about it? What was your impression overall?

 

Leah Davenport Fadling: Obviously, Salesforce is campaigning to be your database of record for containing all these touchpoints and ways that you are collecting information on your donors and constituents. It seemed like it was definitely trying to draw a line in the sand out there that, “If you want to be able to move forward in this new Wild West of AI, we’re the partner that’s going to help you get there quicker.”

 

Nathan Chappell: Yeah, no, it’s so funny because some of our coworkers were texting, like “Oh my gosh, did you see?” It’s like, we were the uncool kids, and now Matthew McConaughey is talking. It’s like, AI is cool, and AI’s had its day when you’ve got celebrities talking about it. And I absolutely believe in the statement that data’s the new gold. In fact, without data, AI just doesn’t work. It’s just nothing, it’s worthless.

 

It’s funny, but we also see Trevor Noah, who is doing a whole podcast series but also a lot of stuff with Microsoft. And Trevor Noah is really interested in understanding why humans do different things and why they don’t do certain things, so there’s this appreciation of AI and machine learning and all that. I think it’s a sign of the times. Seeing those commercials hit, the Super Bowl’s coming up, so who knows how many AI commercials there are going to be? And that’s maybe where some of the FOMO is in our sector because we’ve been pretty far behind in adoption and just put one little toe in the water, and now all of a sudden, there are literally 2 million OpenAI developers in the world. It’s only been out, to your point, since November 2022. So, there is a fear of missing out. And actually not just a fear, but between those that have vs. those that have not, there’s a big difference.

 

And so, I think this will be the year where there’s a lot of catchup. And I think by the end of the year, we’ll see that digital divide will be a massive chasm between those that are just, “I’m not going to do it, I’m stuck in my ways, I don’t want to learn something new,” vs. those that just jump in full-bore and really (elevate) their role in their organization.

 

Leah Davenport Fadling: I’d love to start painting a more concrete picture for those watching or listening about how we may expect to see AI impacting the way that they fundraise and the way that they market. One of the projects that you started recently is Fundraising AI. So, could you tell us a little bit more about that and some of the topics that you’re tackling in that project?

 

Nathan Chappell: Yeah, thanks for highlighting it. This is definitely a passion project. It’s one of those things that you stumble along, not realizing it was going to be all-consuming, and now I have another full-time non-paid job.

 

It actually goes back to 2017. We built our first algorithm while I was working at a cancer hospital, and it was this idea that we had 2 million patient visits a year. Only 2% of patients ever make a gift, normally, to a hospital. So, needle in a haystack kind of thing. And when we built our very first algorithm — it took us a year and a half, it was painful — we were getting a .6% response rate before we were using AI. And in our very first version, a super crude version, we were getting 2.1%. We were like, “Oh my gosh,” kind of mind-blowing. Like, this thing we worked on for a year and a half more than tripled our response rate. And it immediately struck me at the time, going back to 2017, that this was a really tremendous power for good, but it also had tremendous power for bad.

 

And while we look at — and we’ve seen examples of this in the private sector over time, where Twitter created an algorithm that was racist and ageist and Islamophobic and ableist, which they did and their stock price went down, it was not a good day for Twitter — that if the nonprofit did that, it would impact all nonprofits. It wouldn’t just be the one organization, especially if it was a large nonprofit, and they were using AI quote-unquote irresponsibly. That it would affect and essentially diminish trust at scale within the nonprofit sector. That people are like, “Oh you’re all doing this, and you’re all just manipulating.”

 

So, 2017, I really decided that we needed to come together with some people who felt the same way, that saw the power of good in AI but also were willing to be cautious about it and to be strategic. And so, fast-forward, what happened, obviously, we had about 20, 30 people involved. And the pandemic happened, and then obviously that put a damper on things because we were more worried about where we were getting toilet paper than whether or not AI was going rogue.

 

But then, fast-forward after the pandemic, it’s been this resurgence of, I think, the power of AI and now the democratized power of AI, where everyone has access to ChatGPT or Bard or Perplexity or Claude or whatever you like to use. And we see that there’s tremendous potential for good and also tremendous potential for bad.

 

And so, Fundraising AI was birthed out of, frankly, a bunch of people — and obviously Pursuant being part of that — who came together and thought, “This is beyond a marketing initiative. This is a business imperative. This is, frankly, a philanthropic imperative.” That if our industry did this wrong, we could diminish trust, and that 49 years to the generosity crisis is going to become 14 years. But if we do it right, what would be the potential to reverse the generosity crisis — to actually enable an organization to connect with people based on connection, not based on their wealth; based on lots of data, understanding, “Is this person raising their hand and saying, ‘Hey, pay attention to me, I want to learn more’?” Or to intervene when people are starting to fall off the radar.

 

Last year we had 6,000 people that participated in Fundraising AI, which Pursuant was a great sponsor of. This year we’re aiming for 10,000 people at the same summit. And I think until every article we read uses the word “responsible” either before or after, at that point, we’ll know we’ve done our job. But at this moment, when I audit lots of AI articles and podcasts on a weekly basis, if the word “responsible” is used, it’s usually the last paragraph, the last line of the last paragraph. So, we’ve still got a long way to go. Because at the end of the day again, nonprofits have a lot to lose.

 

Trust is very easily lost and hard to gain. And we don’t have a pair of Nike shoes to sell even though you don’t trust Nike — which, I’m not saying you don’t — that they still might have the shoes that fit your foot the best. And nonprofits have trust — and only have trust — which is why, I think, we have to hold ourselves to a higher level of responsibility.

 

Leah Davenport Fadling: I’d love to dig into that a little bit more. When you say the word “responsible,” what does that mean to you? When you talk about responsible AI?

 

Nathan Chappell: Yeah, and this is not novel. There are a few parts we’ll highlight that are novel but overall, there’s initiatives — realistically, there’s a few hundred initiatives around responsible AI in the world. And the U.S. is very far behind — purposefully — on responsible AI because we tend to allow a thousand flowers to blossom, and then when something bad happens, we’ll regulate, vs. the UK, which is much more the opposite, which is, “Let’s regulate so something bad doesn’t happen.”

 

But all said and done, organizations, governments, everything in between, have looked at, “What are the ways that AI can…how can we diminish bias?” That’s the most obvious one. Ethics around AI is a very big topic. Implicit bias, explicit bias, even understanding, “How do you build your models so you’re reducing the amount of bias? How do you reinforce or enrich your database so you’re reducing the amount of biased datasets?”

 

So, when we’re talking about irresponsible AI, it’s things like privacy, security, breaches in ethics – all things that would diminish trust. We talk about responsible AI, it’s not only an algorithm that is not doing those things, so that you found ways to reduce bias and in generative AI reduce hallucinations, things like that. But you also have people that monitor those systems on top of whatever AI system. So, we have AI that actually looks for bias, but we have humans that look at the AI that’s looking for bias. So, it’s essentially a two-step process so there’s checks and balances to make sure that the AI’s not going rogue and not just coming to its own weird conclusions about, in our case, what a donor is or what a non-donor looks like.

 

The key difference — well, there are all these hundreds of frameworks around the world — is that we felt that in the fundraising sector, because the single imperative around trust is so unique to the nonprofit sector — again, we don’t have any backstop, we don’t have anything to fill that void if we diminish trust — we really called into question any responsible AI framework that didn’t have things like transparency and explainability. And so, we created our own framework, and this came out of just one of the biggest, hardest things I’ve ever done in my career. We had 1,700 emails, and about 30 people around the world participated in this project. It took months to create, back and forth, debating all these different words.

 

And really the conclusion, the consensus was that the nonprofit sector should not allow for what would be considered a black box. So, the way AI works pretty much in every other corner of the planet is when an algorithm gets developed, it’s essentially put in a black box because a competitor wouldn’t want another competitor to know how they built their algorithm. It’s like it’s their secret sauce. And we said, “In our sector, we don’t believe there should ever be a black box.” How do you trust something that you can’t interrogate? How do you trust something that you can’t explain?

 

And so, our framework is a little bit different and harder for people to comply with because it has this greater emphasis on this idea of transparency, explainability. Outside of all that, all the same things — bias, privacy, security, all those are obviously super important — we also have sustainability. Because we work for and with NGOs and nonprofits around the world, understanding the environmental impact that AI has on earth is pretty significant. Server farms all over crunching data. We saw this a lot with crypto and how communities in Nigeria were going without power because someone built a crypto farm next door, and it was consuming all the village power. We also have sustainability as one of our 10 tenets of our responsible AI framework.

 

So, all said and done, it’s similar. It just doubles down on those areas that really incentivize or prioritize trust. The trust between a donor and a nonprofit organization.

 

Leah Davenport Fadling: Another element to trust and ethics that I think we mentioned when we were having a prep call was the ways that AI can impact jobs and worsen inequality. There’ve been a lot of layoffs in the tech sector that have been attributed partially to AI, partially also to over-hiring during the pandemic. Lots of different reasons.

 

But there’s already a lot of questions about societal impacts of AI and how it can widen inequalities. Before we get into how we can balance a human-centric approach with precision, I’d love to explore that just a little bit.

 

Nathan Chappell: Yeah, and I will say, a lot is still unknown. This advancement in technology, this leap, was something that even people that were building it were not prepared for. So, Sam Altman was with Congress, this was the middle of last year, with all the other tech leaders, so Bill Gates and Elon Musk and pretty much, I think, 15 of them were at the table. And someone was talking about predicting the future. And I think it was Sam Altman who stopped and was like, “You know how bad we are at predicting the future? If we were in the same room 18 months ago, no one would have predicted where we’re at today.” So, all that to say, whatever I say is probably totally wrong because the pace at which the technology is changing has now exceeded our ability to adapt, for sure, and even sometimes to understand.

 

But we are seeing first glimpses of AI essentially displacing people or elevating people. McKinsey does this a lot, where they’ll do control groups of College Student A next to College Student B. College Student A gets to use ChatGPT and everything else they want. College Student B gets to use the internet (how old school). And what happens every single time, whether they’re college students or white-collar workers, doesn’t matter: of course, who produces their work faster? The one that uses generative AI. Who produces work that’s more accurate? Also the person who uses generative AI. But also, who has greater satisfaction in their work? It’s always that person, because they off-loaded things that they didn’t want to do, and they feel good about having accuracy and better quality faster.

 

And so, we’re just now seeing the start of that. And I 100%, through and through, believe that AI will not replace fundraisers, but I believe that fundraisers who use AI will absolutely replace those that do not. And I just saw an example of this with someone who was an early adopter of AI. He’s a 20-year fundraiser for pediatric oncology. When we started our company, he was so inquisitive. “What is it and how does it work and how do I apply it to our work?” And he started working with our hospital — just a person who doubled down on this thirst for learning more and figuring it out. And it turns out they shortened their moves management cycle by 17%, so they were closing gifts 17% faster. And when I talked to him one day, I’m like, “How’s it going?” And he said, “I’ve never closed gifts faster and at a higher average dollar than any time in my career.” He just got the number one role at a very large institution, which means he actually bypassed several other people, essentially leap-frogged those individuals because of his early adoption.

 

We’re going to see a lot more of that, which will create a lot more tension, but hopefully also a lot more appetite for getting on the right side of the digital divide, because there’s not going to be real opportunity here without working smarter, not harder. I think we’ll look back a year or two from now at the landscape of what’s really been a surprise to white-collar workers. People didn’t see that coming. Journalists, analysts, consultants, doctors being augmented by AI; almost every area, across the board. Essentially, doing work faster and more accurately is the name of the game. So, we have to decide as a society: do we want more, or do we want better, and what is that balance going to be?

 

Leah Davenport Fadling: Speaking of balance, how can a fundraiser keep humans at the center of their fundraising efforts while at the same time leveraging AI to do more faster? Because of course, with trust being the most important factor, nonprofits have to walk that balance of keeping up with the times but also ensuring that their communications and their relationships remain human.

 

Nathan Chappell: When ChatGPT was released, it was November 30, 2022. I’ve always been an early adopter, and I actually pioneered technology and I have patents. But it was so formative, I sat on it for a week. And GPT was already out. I knew lots about GPT, but the interface that was created became accessible to everyone overnight. After about a week, my pent-up feelings about this came out, and it was really this idea that sounding authentic is not the same thing as being authentic.

 

And while this tool can create tremendous efficiency, boost performance, and be an awesome proofer and first drafter and all those things we know it to be (and by the way, we’re dealing with parlor tricks right now; it will be able to do much, much more than the average human, and better than the average human, very soon if not already), using generative AI to produce something that goes directly to a donor, with no human interaction, and passing it off as your own is like sending someone a Hallmark card and pretending that I cut the paper, drew the picture on the front, and came up with the little quip on the inside, so they think, “Oh, you did this for me,” when all you really did was sign your name.

 

I think this absolutely raises the question for our sector: How do you keep authenticity front and center? I will say, hallucination was the early objection: “Oh, we can’t use it because it hallucinates,” or “It doesn’t sound like me.” That’s not a forever thing. In just 12 months, it has gotten infinitely better, and this is exponential technology, so it will keep improving exponentially. It’s not that in five years generative AI will outperform a really good writer; it will be more like 18 months. It’s very fast. While we have these tools to create efficiency, and ideally we have more time because of that efficiency, there are two paths. If we take the path of, “Okay, I have more time, I’m going to send out more crap,” then we’re going to get more crap. And the generosity crisis is actually going to continue to be exacerbated by AI, because more is not the answer. People cannot consume more than they’re already consuming now.

 

But if we take the other path (and this is a really popular stance in healthcare) and say, “Okay, this AI gave us more time; we’re going to spend more of it human to human.” Because we can offload the things that were really arduous, that were taking our time, and that we were only marginally good at to begin with, and we can take that extra time and spend it one on one. I have an extra half hour in the day because of generative AI, and I’m going to start my morning by just calling people to say, “I’m thinking of you. How are you doing?” or “I just want to leave a voicemail to say you’re on my mind today.” How amazing would that be for our sector?

 

That’s where, I think, the pendulum has swung so far toward the transactional. When I started fundraising, it was pre-GuideStar; it was Rolodex cards, and it was literally me calling people and saying, “Hey, are you free for lunch?” or “I’d love to have coffee with you next week.” Now it’s, “Let’s just let technology automate all the stuff, and by the way, we can do so much more. Let’s just send out more.” More is not the answer here. So, I really hope that at this crossroads (we use the term “inflection point” too often) for our sector, we take this gift of time that we can generate through precision and personalization, and we just pour it into human-to-human connection. And I am super optimistic and excited, because that feels right for most people. “Less is more” feels good; it brings us back to center, back to why we got into the business to begin with. And we’re able to build off that hit of dopamine and serotonin that both we and the donor get by having those types of conversations, bringing back more of the human side of what fundraising is all about.

 

Leah Davenport Fadling: Yeah, an analogy that came to mind as you were speaking, Nathan, is the way that, within the last 15 or 20 years, hand-crafted goods and objects have really come to the forefront. We saw how we could mass-produce items with incredible efficiency at incredible economies of scale, but when you get something that’s just an assembly-line item, it’s the same Ikea desk that everybody else has.

 

If you have something in your house that was handcrafted and has been passed down in your family, the same desk your grandfather had in the 1930s or something like that, we treasure those objects, because they’re one of a kind and someone put a lot of time into them. And what’s exciting is that handcrafting things didn’t go away; it just became much more precious.

 

Nathan Chappell: Right. Yeah, I’m so with you on this idea that a premium has been placed on the personal. I used to do this a lot, and I stopped because we got so busy, and email was easier, and Slack was easier. On my desk, literally right now, I have cards, and I catch myself thinking, “Oh, I have to email so-and-so. What’s your email, or what’s your address?” But just writing a card and sending it to someone is now something people save, because you receive so few, so of course the premium goes up.

 

That’s where I see the greatest margin of opportunity for our sector right now. When people trust less externally (and this is borne out by the Edelman Trust Barometer), when they trust the government less, the media less, institutions less, they tend to move inward. As humans, we have to place our trust somewhere. Right now, we’re seeing that premium on the personal. People are drawing closer to their closest friends and their colleagues at work, a smaller circle of individuals in whom they place their trust.

 

If we leverage that, and we truly pour ourselves not into scale but into that personal touch, man, we’re going to knock it out of the park. Because I don’t see anyone at, I don’t know, Patagonia or Coca-Cola handwriting me a card one random Thursday to say, “I’m just thinking about you, and I want you to know how important you are to our organization.” But if I got that card from anyone today, it’s not going in the trash, I guarantee you. Maybe 15 or 20 years ago, you’d think, “Aw, I used to get a lot of those,” but now it’s going in a special box that I keep. I sent a card to someone the other day, and she said it made her feel good, so she put it as a placeholder in her cookbook.

 

You’re 100% right, and it’s so simple, right? We know what to do inherently. Generosity is a prosocial behavior; we’re meant to be generous. So, we just have to be more cognizant of that. The algorithms that are essentially trying to steal our attention: turn those off, put our blinders on, and use them appropriately. Give ourselves the freedom of time, and pour that time into personalization.

 

Leah Davenport Fadling: I think that is the perfect bow to place on this conversation.

 

Nathan Chappell: I’m just getting on my soapbox. No, I totally appreciate having the space for this. It’s a time of a lot of change, but it’s also a really exciting time, so thanks for allowing the conversation and for really talking through the things we should be challenged by right now.

 

Leah Davenport Fadling: So, Nathan, if people would like to learn more about you, your work, maybe get The Generosity Crisis or learn more about Fundraising.AI, where can they go to get all of that information?

 

Nathan Chappell: Yeah, the easiest way to find The Generosity Crisis is to go to generositycrisis.com, and frankly, there are a lot of free resources on there. If you want generosity conversation starters or a cheat sheet on how to build radical connections (real basic stuff that you probably know, but it helps facilitate conversation either at work or within your family), download that. And on Amazon, obviously, you can order the book or the audiobook.

 

But find me on LinkedIn. I tend to post there; it’s my favorite network for what we’re doing. That’s Nathan Chappell, C-H-A-P-P-E-L-L. And then, if you want to know more about Fundraising.AI, you can go to fundraising.ai, where you can see the framework for responsible AI and endorse it. It’s entirely free; it’s all a grassroots movement. There are also 54 hours of video content around responsible AI posted there for free, made possible by sponsors like Pursuant.

 

Leah Davenport Fadling: Thanks again, Nathan, this was a real treat. I’m really excited about where we landed with it.

 

Nathan Chappell: That’s great, Leah, thanks so much for having me.