“Zero AI-washing, zero robot overlords, just eight hard-won learnings from the experts.”
Puff Story, 3 Sided Cube USA Co-Founder
You asked. Our panellists answered. Keep reading for your roadmap to AI success.
We recently ran a Good Tech Sesh called “AI For Good: How to Embed AI in Purpose-Led Tech.”
And it delivered exactly what it promised: no empty hype, no robo-overlords, just four practitioners swapping scars and successes:
Jeff Hampton – Co-founder, Swagatar (ex-Roblox / PS5)
Nicole Levitz – Senior Director, Digital Health, Planned Parenthood
Ibrahim El Haddad – Head of Information & Communication, UN OCHA
Nick Suplina – SVP Law & Policy, Everytown for Gun Safety
(Hosted by our very own Puff Story, robot avatar cameo and all).
Missed the live show? Crash the conversation on demand whenever it suits.
Need the practical goods right now? Dive in for eight hands-on how-tos from the panel.
1. Start where the pain is
Jeff Hampton (Swagatar, ex-Roblox) cut an 18-month code-migration slog down to six weeks:
“Whatever makes your team stub its toe, prototype there first.”
Small, high-friction wins generate immediate belief (and budget) for round two.
Cube Tip: Run a one-day “Toe-Stub Audit.” Everyone lists their most boring, repetitive task; highest-frequency pain gets the first AI spike.
2. Solve mission problems, not magpie cravings
UN OCHA’s news monitor now auto-summarises 3–4 hours of daily media trawling:
“Our AI came from operational necessity, not innovation for innovation’s sake.” Ibrahim El Haddad
Cube Tip: Before any build, write the “If we switch AI off tomorrow…” test. If impact nosedives, you’re solving a real problem.
3. Ship embarrassingly small pilots
Planned Parenthood’s first chatbot handled just two “awkward” teen questions:
“I wish we’d had more micro-tests upfront. Iterate consistently, not just dive into one big build.” Nicole Levitz
Cube Tip: Cap v0 at one user journey and two metrics. If it can’t ship in four weeks, slice again.
4. Codify accuracy, don’t just promise it
Planned Parenthood blocks any answer below 90% confidence.
Cube Tip: Hard-code your own red line (80? 95?) and pipe low-confidence queries straight to a human or a “Can’t answer yet” fallback.
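As a minimal sketch of what that red line can look like in code (the `route_answer` name, threshold, and fallback wording are ours for illustration, not Planned Parenthood’s actual implementation):

```python
CONFIDENCE_FLOOR = 0.90  # your red line: pick it deliberately, then hard-code it

def route_answer(answer, confidence):
    """Release the model's answer only when it clears the floor;
    otherwise fall back to a human or a safe default."""
    if confidence >= CONFIDENCE_FLOOR:
        return answer
    return "Can't answer yet - connecting you with a human."
```

The point is that the threshold lives in one obvious place, so changing the red line is a one-line edit, not an archaeology dig.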
5. Guard-rail for ethics and emotion
Jeff’s SPACE framework = Security, Privacy, Accessibility, AI Safety, Compliance, Education.
“Every response flows through those guard-rails; if emotion or accuracy slips, we hard-stop.”
Cube Tip: Add an “Abort & escalate” clause to your content style guide; teach the model and the team when silence is safer.
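A “hard-stop on any failed check” pipeline can be tiny. This is a toy stand-in, not Jeff’s actual SPACE implementation; the two checks and the `deliver` name are invented for illustration:

```python
# Each guard-rail returns True (pass) or False (hard-stop).
def passes_security(text):
    return "password" not in text.lower()

def passes_safety(text):
    return "harmful" not in text.lower()

GUARD_RAILS = [passes_security, passes_safety]

def deliver(response):
    """Run every response through the rails; abort on the first failure."""
    for check in GUARD_RAILS:
        if not check(response):
            return None  # abort & escalate to a human
    return response
```

Adding a new rail is appending one function to the list, which keeps the “every response flows through” guarantee honest as the list grows.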
6. Keep a human in the last 5 %
EveryShot.org automates the grunt work; two humans still sanity-check output:
“Generative AI gets us 95 % of the way; that final 5 % needs human eyes.” Nick Suplina
Cube Tip: Budget reviewer time in the sprint plan, not as “extra.” QA is a feature, not a phase.
7. Own your data or expect hallucinations
Public LLMs mix up Springfields; your vetted set won’t.
“If it’s spitting out the wrong town, feed it the right table; generic models won’t guess county-level truth.”
Cube Tip: Spin up a private vector store with only your blessed data. Five lines of code, endless sanity saved.
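To make the idea concrete, here is a toy sketch of a private store over only your vetted documents. A real build would use an embedding model plus a library such as Chroma or FAISS; the bag-of-words “embedding” and the `PrivateStore` class below are stand-ins we invented to keep the example self-contained:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real build would call an embedding model."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class PrivateStore:
    """Only vetted documents go in, so retrieval can't invent the wrong Springfield."""
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((text, embed(text)))

    def query(self, question):
        q = embed(question)
        return max(self.docs, key=lambda pair: cosine(q, pair[1]))[0]
```

Seed it with your blessed rows and `query` returns the closest vetted document rather than a generic model’s guess, which is the whole hallucination defence in miniature.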
8. Put end-users in the room now
UN OCHA rebuilt its tool three times until field workers co-designed it.
“Do more listening before coding: users in Figma, users in alpha, users in retro.”
Cube Tip: Run “Design Karaoke”: users narrate their screen while you build live. Instant feedback, zero guesswork.
Ready for more than bite-size tips?
AI adoption is never plug-and-play, and there is no single roadmap that suits every mission. That is exactly why we host these brutally honest Good Tech Seshes: to swap the messy realities as well as the magic moments. If the eight pointers above sparked ideas, frustrations, or a healthy dose of FOMO, grab the on-demand recording. You will hear every unfiltered question, every “we tried this and it bombed,” and the practical pivots that followed.
Catch the full session here, steal what works, and ping us with your new mistakes and eureka moments!
Published on 3 July 2025, last updated on 3 July 2025