An AI for every home

My hopes were high for some earth-shattering VR announcements from Google yesterday. As I watched the keynote on YouTube and sat through a series of announcements that, on the face of it, were rather underwhelming, I started to feel a little less hopeful.

Daydream – a hardware and software VR ecosystem

When the VR came, it was in the form of an announcement about a new name (‘Daydream’), reference specs for VR-ready handsets and headsets (‘coming this fall’) and a peek at the user interface, which looked interesting but fell somewhere south of what I can already get from the Samsung Gear VR. The inclusion of a Wiimote-style controller did catch my attention though, and my mind went immediately to the possibility of using it in physical therapy and stroke rehabilitation.

Headset and Controller – Reference model for ‘Daydream’ system

While the future of VR is, in my opinion, very bright and nausea-free, it was the remainder of the keynote that got my neurons firing with possibilities.

Earlier in the event, Google presented a rebranded messaging and video call pairing with Allo and Duo. Allo I could take or leave – its AI-enhanced auto-reply seems well set to address all my dog-breed-identifying woes. It does, however, highlight the path to a more conversational, human-orientated AI interface, all of which strengthens Google’s core offering of a smart, intuitive natural language interface (more on that later).

So THAT’S what breed it is *wipes brow*

Duo introduced something called ‘Knock Knock’ to video calling though, and this is where I saw some potential benefit. Essentially, it is a live video preview of the caller before you answer, presented as a way for the recipient to gauge the caller’s mood and learn a little more about them before picking up.

Me? I saw a way of pre-triaging a patient before the call begins, without the patient needing to speak or enter data. We’ve seen video footage being used to determine pulse, respiratory rate, and even emotional state in other AI systems – could ‘Knock Knock’ even screen for facial weakness in acute stroke? Perhaps you could even analyse changes to speech against previous calls to determine subtle early changes to voice that can happen during ischaemic events. Whether this is unique to Google Duo or whether you could integrate this into existing WebRTC clients is another matter – the ‘smooth transition’ much emphasised by the presenter is less important in this situation.
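To make the WebRTC point a little more concrete, here is a minimal sketch (in TypeScript, for a browser-based client) of how a short pre-answer preview clip might be captured using the standard getUserMedia and MediaRecorder APIs. The triage analysis itself is not implemented – sendClipForTriage is a purely hypothetical endpoint standing in for the pulse, respiratory-rate, or facial-asymmetry models speculated about above.

```typescript
// Sketch only: capture a short "preview" clip before the call is answered,
// so frames could later be handed to a triage model. getUserMedia and
// MediaRecorder are standard browser APIs; everything downstream of the
// returned Blob is left hypothetical.

async function capturePreAnswerPreview(durationMs = 3000): Promise<Blob> {
  // Request the caller's camera, as any WebRTC client would before a call
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });

  // Record a few seconds of video while the call is still "ringing"
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);

  const stopped = new Promise<void>((resolve) => {
    recorder.onstop = () => resolve();
  });

  recorder.start();
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  recorder.stop();
  await stopped;

  // Release the camera once the preview clip has been captured
  stream.getTracks().forEach((track) => track.stop());

  return new Blob(chunks, { type: "video/webm" });
}

// Hypothetical usage – the analysis service is named purely for illustration:
//   const clip = await capturePreAnswerPreview();
//   await sendClipForTriage(clip);
```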

Changes to Android OS (Version N – my money is on ‘Nougat’) were also discussed, but in truth the most interesting announcement of the whole event was Google Home. Having been nicely set up by a demonstration of the advances in Google Assistant’s natural language interface, the announcement positioned this device as the voice-activated front end of home automation. There has been a slowly growing roster of Internet-of-Things (IoT) devices in the consumer market in recent years, such as Nest and Hive thermostats, Philips Hue lights, and the like. Google Home will provide voice search, internet services, and voice control of IoT home devices. Of course, Amazon got there first with Echo, but Google does have 17 years of weapons-grade search engine experience behind it.

The demo video shows the range of ways in which it could get the model family with 2.4 kids and a science project ready for their day – lovely, but much less interesting than the massive potential for health and social care.

A natural language interface could be enormously helpful in meeting the health and care needs of the older population:

  • It could be used to easily coordinate carers and update estimated time of arrival, reducing anxiety.
  • Food could be ordered from online grocers, reducing the need to employ carers for the simplest of care tasks (and thereby reducing cost and easing demand).
  • Medication reminders could be given in a friendly and simple to follow way, which could then feed back to the patient’s electronic record.
  • Voice control of home devices would be a gateway to increased use of IoT-enabled lamps, heaters, and cooking equipment, which would improve accessibility and safety, especially when it comes to family supporting their more dependent members.
  • In the event of an emergency, Google Home could be used to summon help if the user was unable to get up after a fall, and act as a speakerphone to the emergency services.

The list of possibilities goes on and on, and multiplies with every connected service – something Google is very good at. Having been at NHS Hack Day this past weekend, I was struck by how little hacking appears to take place in the social care arena – perhaps later this year we could see events where Google Home (and Amazon Echo) are used to provide novel services at low cost?

Google Home can potentially help with something much more human though – the need for company. So many of my older patients live alone, with no-one visiting between carer appointments. This device opens the door to an easy, natural way of communicating with others, playing music, and listening to audiobooks and the radio, but also to interacting with someone that is always there, ready to talk, 24/7. It may start with traffic and weather reports, pre-canned jokes, and facts about astronomy from Wikipedia, but the ambition of Google and Amazon, powered by the exponential growth in the field of Artificial Intelligence, means that before long the AI in your home will not just be your butler and assistant, but also a friend and companion.

"No gang signs"
“No gang signs”
