Conversational Design: An Alexa Skill for Coffee Lovers
Every good idea starts with a coffee. I spent my week of Innovation Time exploring conversational design to see how we can make interacting with an Alexa Skill feel more human. Introducing Brew Guide, an Alexa Skill that talks you through brewing the perfect cup of coffee.
The problem many Alexa users face is that voice interactions can be clumsy and frustrating when they’re designed without the user in mind (we’re looking at you, Egg Timer and Fart Noises).
We’re starting to see design principles applied to Alexa Skill development, which has spawned a new discipline in the design world: conversational design.
Conversational designs are flexible and responsive: they need to handle variations in conversation, conditional collection of data, and switching context mid-conversation.
Amazon Developer's Alexa Skills Kit
What inspired the Brew Guide Alexa Skill?
All good ideas start with a good brew of coffee.
Personally, I’m a bit of a coffee fiend/nerd (the same can be said for a few members of our team). I love the variety of flavours in coffee and brewing it from the comfort of my own kitchen: the tactile process of grinding, stirring and pouring it in the right way, playing with all the variables to get a great flavour out of a bag of beans, and experimenting with different processes to brew the perfect cup of coffee.
Those variables are a double-edged sword, though. There is nothing worse than getting an expensive bag of fine Ethiopian Single Origin beans that promise dark chocolate, honey and cherry notes, and turning it into burnt dishwater by doing the wrong thing in the brew. I wanted to build upon our previous work in conversational design and see whether we could create an improved experience in brewing coffee.
The idea: a guide to brewing, tailored to an individual bag of coffee and brew method, delivered through an Alexa Skill. Following up a brew with a rating flow could also help tweak subsequent brew recipes and even recommend other beans to reflect the user’s own taste. This would give the Skill lasting benefit to the user and help coffee roasters match their products to customers’ individual tastes.
What does the Alexa Skill do?
To be guided through the process of brewing the perfect cup of coffee with an Aeropress, you first open the Skill with the invocation “Alexa, open Brew Guide”.
Next, select which recipe you’re brewing from a number of options (in this case, simplified to ‘one’, ‘two’ and ‘three’). The Skill will then confirm your selection and give you a description of the coffee you’re brewing.
The Skill will then read out everything you need to get started, from putting the kettle on to making sure your Aeropress and coffee mug are at hand, waiting for the next invocation before moving on to the next step.
Step-by-step instructions differ between brews, but include information on the optimal boiling-water temperature, how to avoid burning your coffee and the time it takes to brew in your Aeropress before it’s ready to be enjoyed.
Once all steps have been completed, the Skill will finish things off with a friendly goodbye and a reminder that users can customise their recipes with the invocation “Alexa, open Brew Guide and rate my brew”.
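The flow above — launch, recipe selection, then step-by-step instructions ending with a goodbye — can be sketched as a minimal request handler. This is an illustrative sketch, not the Skill’s actual code: the intent names (`SelectRecipeIntent`, `NextStepIntent`) and the recipe text are my own assumptions, and it handles the raw Alexa request/response JSON directly rather than using the Alexa Skills Kit SDK.

```python
# Sketch of an Alexa request handler for a step-by-step brew guide.
# Intent names and recipe steps are illustrative, not the real Skill's.

RECIPES = {
    "one": [
        "Put the kettle on and let the water cool slightly off the boil.",
        "Add one scoop of grounds to the Aeropress chamber.",
        "Pour, stir, and plunge when the brew time is up. Enjoy!",
    ],
}

def speech(text, end_session=False, attributes=None):
    """Build a plain-text Alexa response envelope."""
    return {
        "version": "1.0",
        "sessionAttributes": attributes or {},
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def handle(event, context=None):
    request = event["request"]
    attrs = event.get("session", {}).get("attributes", {})

    if request["type"] == "LaunchRequest":
        return speech("Welcome to Brew Guide. Which recipe: one, two or three?")

    intent = request["intent"]["name"]
    if intent == "SelectRecipeIntent":
        recipe = request["intent"]["slots"]["recipe"]["value"]
        attrs = {"recipe": recipe, "step": 0}
        return speech(f"Recipe {recipe} selected. Say next to begin.",
                      attributes=attrs)

    if intent == "NextStepIntent":
        steps = RECIPES[attrs["recipe"]]
        step = attrs["step"]
        if step >= len(steps) - 1:
            # Last step: read it out, say goodbye, end the session.
            return speech(steps[-1] + " That's everything. Goodbye!",
                          end_session=True)
        attrs["step"] = step + 1
        return speech(steps[step], attributes=attrs)
```

The session attributes carry the chosen recipe and current step between invocations, which is how the real Skill’s “waiting for the next invocation before moving on” behaviour works in Alexa’s session model.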
Conversational design in Alexa Skill development:
The complexity of designing for conversation comes down to handling every use case and every permutation of how a user can respond to questions and queries. What I really wanted to get out of my week was a proof of concept, so I pared back the scope to a ‘Happy Path’ flow: a choice of three coffee blends from our awesome friends over at Naked Coffee, a local coffee shop that shares our passion for empowering independent businesses, with the assumption that the user is brewing a black coffee with an Aeropress.
Making the Skill sound more human:
One of the main complaints about voice interactions is a cold, robotic feel. To make this something users are comfortable and at ease interacting with (especially important when they’ve just rolled out of bed), I modelled lines of dialogue on how our friendly local barista, Rob, chats to us, focusing on the terms used by baristas, not customers ordering coffee.
This proved extremely valuable in finding alternatives or variations of words, as well as the specific terminology used with the Aeropress, such as:
‘Doppio’ as an alternative way of saying ‘double’ in the context of espresso shots
The ‘plunger’ (top cylinder) and ‘chamber’ (bottom cylinder) parts of an Aeropress
Differences between ‘tones’, ‘flavours’, ‘roasts’ and ‘aromas’ in coffee brews
A number of different brands and machines associated with brewing coffee
I also got into the weeds of how all this was delivered, adding in-speech pauses with SSML (Speech Synthesis Markup Language) that mirror natural speech patterns, guided by Amazon’s recommended One-Breath Test.
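In SSML, these pauses are expressed with the `<break>` tag. A quick illustrative sketch — the dialogue line is invented, and inserting one fixed-length pause between clauses is my own simplification, not the One-Breath Test itself:

```python
def to_ssml(clauses, pause="700ms"):
    """Join dialogue clauses into an SSML <speak> document, inserting a
    <break> between each clause so the synthesised voice pauses to breathe."""
    body = f' <break time="{pause}"/> '.join(clauses)
    return f"<speak>{body}</speak>"

line = to_ssml([
    "Pop the plunger into the chamber,",
    "then give the grounds a gentle stir.",
])
```

The resulting string is what gets sent in the response’s `outputSpeech` with type `SSML`, instead of plain text.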
Coggle vs Storyline:
In previous projects, we mapped our conversation designs in a piece of mind-mapping software called Coggle. Whilst it’s good for early-stage thinking, this soon became a nightmare later in the project as things grew really complex. The conversational designs ended up as a huge, sprawling beast that was extremely difficult to follow and make sense of.
Some organisation was definitely in order, and so we welcomed the Storyline app (now known as Invocable), which is specifically designed to handle Alexa Skill designs. Storyline also allowed us to prototype the Skill directly from its user-friendly interface.
How can conversational design make Alexa more human?
Arguably the biggest challenge that voice assistants and technology like Alexa Skills face, at least when it comes to wide-scale adoption, is trust. Nowadays people are far more wary of how devices are being used to gather information, and in the case of Alexa, devices can be seen as ‘spying’ on people in their own homes.
I want to develop the project by implementing machine learning and artificial intelligence — more specifically, teaching the Alexa Skill to personalise conversations to individual users. This personalisation would make every experience unique, intuitive and personal, allowing me to use conversational design to create an experience that builds trust and meaningful relationships between Alexa and its users.
A few ways I could implement this: cutting out unnecessary steps or questions on a user-by-user basis, using the same terms and phrases as individual users, and even making user-specific recommendations based on their history of interacting with different Alexa Skills.
For me, humanising Alexa and making Skills more natural to interact with is the key to engaging and reaching more people with voice technology.
The future of the Brew Guide Alexa Skill?
Ratings, improvement and discovery:
This is a massive aspect of the concept: the loop that allows the Skill to be continuously improved from a user-experience perspective, helping people achieve better and better brews with an interface that feels more natural and human to them.
I’d see this working as a few follow-up questions after your brew about the overall taste, and (if the response was negative) going deeper to ask about the balance of different attributes (sweetness/bitterness/acidity), then modifying subsequent brews to draw out more or less of these depending on the feedback.
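A simple sketch of what that feedback loop could look like. The brew variables, feedback tags, and adjustment directions below are my own illustrative assumptions (loosely based on the rule of thumb that bitterness suggests over-extraction and sourness under-extraction), not the Skill’s actual logic:

```python
# Illustrative sketch: nudge brew variables based on rating feedback.
# Variables, tags and deltas are assumptions, not the real Skill's recipe model.

DEFAULT_BREW = {"water_temp_c": 85, "brew_time_s": 90}

ADJUSTMENTS = {
    # "Too bitter" often means over-extraction: cooler water, shorter steep.
    "too_bitter": {"water_temp_c": -3, "brew_time_s": -15},
    # "Too acidic/sour" often means under-extraction: hotter water, longer steep.
    "too_acidic": {"water_temp_c": +3, "brew_time_s": +15},
}

def adjust_recipe(brew, feedback):
    """Return a new brew dict with each variable nudged per the feedback tags;
    unknown tags are ignored and the original dict is left untouched."""
    brew = dict(brew)
    for tag in feedback:
        for var, delta in ADJUSTMENTS.get(tag, {}).items():
            brew[var] += delta
    return brew
```

Each rating flow would append tags like `"too_bitter"` to the user’s feedback, and the next time they open the Skill it would read out the adjusted recipe instead of the default.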
To ensure continuity in how coffee is brewed in a team of baristas, the Skill could provide step-by-step guides that help new team members replicate consistent quality across their team. Re-imagining the user-flow for this case would give the Skill another facet.
As the Skill got used to your brewing habits and knowledge, it could start to remove steps and dialogue, continuously streamlining itself. This would remove the feel of robotic repetitiveness and allow the user to hear only the steps they need through conversational design.
Published on November 29, 2018, last updated on November 29, 2018
We dedicate 15% of our team's time to exploring emerging technology and working on projects they're passionate about. So far we've developed a Jenga game in augmented reality, an app that mimics the human eye and an interactive map that tracks natural disasters in real-time.