
AI Adoption For Good: Your Questions Answered

Experts from WWF International, Amnesty International and The Kaleidoscope Project answer your burning questions about AI Adoption!

Phoebe Hayles
7 Min Read
Illustration of our four panelists under the text "You Asked, We Answered" with organization names discussing AI adoption.

TL;DR 

You asked, we answered! We gathered our panel of experts for our webinar “Has AI Adoption in Purpose-Led Organisations Reached Critical Mass?” (watch the full replay here!) to answer your questions about AI Adoption For Good.

To make things even better, the discussion was based on findings from our 2025 AI Adoption Report, including how 89% of purpose-led organisations are now on their AI journey!

Do you experience conflicts between AI data collection ethics and your organisation’s mission statement?

The short answer is yes! The missions of AI tools are often at odds with the missions of purpose-led organisations.

But there is a solution: the principles and values of the organisation always come first.

Purpose-led organisations need to decide where they can acknowledge the differing missions but still utilise AI tools, and where using AI is a ‘no-go’. For example, Meet Muchhala from WWF International described how AI can be great at predicting trends from data, but it cannot replicate the cultural nuance or trust needed when working with local communities. This means WWF does not rely on AI to replace the human relationships essential to its conservation work, though AI still supports other use cases.

Paul Smith from Amnesty International also talked us through a great analogy of AI being like a photocopier. If you keep photocopying an image over and over again, it will eventually turn completely white. That’s exactly what happens when you continuously feed AI its own data. It loses all fidelity!

Although there are conflicts, an organisation’s mission is what will guide decisions about where AI helps, where it harms, and when better data, process, or other tech wins.

How are people navigating getting leadership buy-in for AI literacy and innovation?

Our panel gave great examples based on their experiences, but they all agreed on one very important thing… showing the impact!

We are seeing a variety of attitudes and approaches from leadership teams. Some are leading by example, testing AI themselves, whereas others are more comfortable waiting to see how AI plays out on a wider scale.

Talking about AI is one thing, but it stays theoretical. Actually showing the work that has been done provides a starting point and a place for further discussion!

“Show and tell goes a long way”

Paul Smith, Chief Information Officer, Amnesty International

Also, when getting leadership buy-in, it’s important to address mistrust by showing how your organisation has already used AI. For example, Meet was able to demonstrate how WWF International already had decades of experience using AI to achieve their mission! This reduces the feeling of overwhelm and focuses on what AI really means for your organisation. It’s a long process, but it pays off!

“This stuff takes time!”

Puff Story, Co-Founder, 3 Sided Cube USA

Any impact measurement frameworks you’d recommend?

The most suitable framework will differ based on the type of project, the organisation, and the area the project focuses on, but our panel shared a few recommendations in the webinar.

How do you think about the ethics of using AI regarding the potential issues it poses for younger generations?

Although we should not disregard these potential negative effects, finding use cases in which AI actually helps with those issues is really important. For example, Addie Achan from The Kaleidoscope Project has worked with a climate organisation that uses AI for energy optimisation, with teams that use AI to upskill college students and give them direct advice, and even with a tool that can predict when you’re looking at inflammatory content to help with tech addiction!

“The more people are able to organise and express that these are real concerns will hopefully put pressure on people to [direct it to use cases where AI is helping].”

Addie Achan, Founder, The Kaleidoscope Project

Paul Smith also talked about looking through the lens of the digital divide. What can AI bring to the table for people who are otherwise without access? There’s actually real potential to close the digital divide by using AI for education, mentorship, and knowledge sharing.

However, there are some very significant risks that we have not solved yet, such as the potential loss of junior or entry level roles. As mentioned, the positive use cases do not mean we should disregard the potential negative effects.

We need to be pushing for solutions to these very real issues!

How to apply guardrails to a higher intelligence?

When we talk about creating AI policies or governance, there’s often a fear or question around ‘what is this going to restrict?’. Instead of restrictions, the focus should be on designing guardrails that foster innovation in responsible ways!

Compared with adopting other technologies, AI adoption benefits a lot from investing time upfront to define what is and isn’t okay for your organisation. You can then design around those boundaries to create a tool with a longer lifespan.

“Set guardrails early… stop chasing hype and start chasing impact”

Meet Muchhala, Strategic Innovation and AI Lead, WWF International

Actionable Advice for Adopting AI

We ended the webinar on a high, asking our panel for their ‘30 day move’, the one change they’d make (or recommend you make) in the next month!

Make an AI policy and build knowledge bases

Addie Achan, Founder at The Kaleidoscope Project, advises making an AI policy to help foster knowledge, or thinking about which knowledge bases would be helpful to your organisation. For example, developing a ‘strategic plans’ knowledge base to help guide outputs.

Share AI learnings with each other

As Paul Smith, Chief Information Officer at Amnesty International, said in the webinar ‘If I can learn about your 5 things, I don't have to do as much of them myself’. Sharing with each other across industries, expertise, and experience helps us understand any risks, successes, tangible outcomes, and lessons.

Start small and start safe

Meet Muchhala, Strategic Innovation and AI Lead at WWF International, suggests picking some low-risk use cases that can really help you solve your team’s day-to-day pain points. Stop chasing hype and start chasing impact!

Want to hear more about AI Adoption for Good?

We’ve got you covered with almost every format you could dream of!

And if you have any other questions, shout us a Holla! We love to chat 💚

Published on 6 February 2026, last updated on 6 February 2026