Edinburgh UX Talk

Designing voice and natural language experiences

Speech and language technologies have been around for over 30 years and always seem to be on the brink of becoming mainstream. In reality, the main applications over the past couple of decades have been interactive voice response (IVR) systems in contact centres and dictation services.

However, in the last few years a number of advances have started to fundamentally change the landscape. Core speech recognition capability has improved dramatically with the application of deep neural networks and increases in computing power. This has brought recognition accuracy much closer to an acceptable level for everyday users, helping to drive adoption. The rise of the mobile device and fast data connections has made hosted speech capabilities available to all in the guise of Cortana, Siri and Google Now. Smaller screens encourage users to find other ways to interact and, crucially, people are beginning to enjoy interacting with natural speech dialogues.

In the coming years, speech and natural language interfaces are predicted to become much more prevalent. This will be driven by a number of factors. Firstly, users will become accustomed to and more reliant upon virtual personal assistants like Siri, Cortana and Facebook M, and speech and natural language interfaces are likely to become the most natural way of interacting with these assistants. Secondly, the rise of the Internet of Things and the connected home is creating a deluge of devices with limited or no visual interface. Speech is one of the most natural ways to interact with these devices - either directly or through an always-on, home-based personal assistant like Amazon's Echo. Thirdly, businesses are starting to see benefits in deploying enterprise virtual agents, like Nuance's Nina, to provide automated chat and self-service capabilities or to simplify access to complex information on the web or smartphone.

So what does this mean? As consumers increasingly demand lower-effort experiences when interacting with products and services, businesses will need to be ready. Building speech interfaces into hardware, or virtual assistants into a website, requires a new skillset. The businesses that get this right will thrive.

This workshop will look at the fundamentals of designing for speech and natural language interfaces and equip you to start thinking about how to interact with users in this space.

About the Speaker
Dan Whaley is a senior user experience consultant with Sabio Ltd, a systems integrator specialising in customer contact strategies and solutions. Dan has over ten years' industry experience designing across multiple platforms, specialising in voice and natural language applications. He has led design and consultancy engagements with numerous blue-chip clients in areas including financial services, utilities, retail, logistics and the public sector. He is a passionate believer in the user-centred approach. Dan also has a strong technical background in artificial intelligence and speech science and has delivered applications and consultancy in both areas.

Click here to RSVP.