Google and Stanford researchers developing voice recognition models for medical conversations

November 28, 2017

Google is working with researchers and physicians at Stanford University on Automatic Speech Recognition (ASR) models to transcribe medical conversations.

In a blog post, Katherine Chou, Product Manager, and Chung-Cheng Chiu, Software Engineer, both of the Google Brain Team, say that Electronic Health Record (EHR) documentation often takes up half of a doctor’s 11-hour workday, contributing to burnout.

Google cites recent research published in the Annals of Family Medicine showing that physician satisfaction and the quality and accuracy of medical charts improved when scribes took notes. Research by Google and Stanford has likewise shown that ASR can be used to transcribe medical conversations involving multiple speakers, though ASR use in medicine has so far been mostly confined to transcribing dictation from doctors.

In the previous study, the researchers used a Connectionist Temporal Classification (CTC) phoneme-based model, which achieved a word error rate of 20.1 percent, and a Listen, Attend and Spell (LAS) grapheme-based model, which achieved 18.3 percent.
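For context, word error rate is the standard accuracy metric for speech recognition: the minimum number of word substitutions, insertions, and deletions needed to turn the model’s transcript into the reference transcript, divided by the number of words in the reference. The following is a minimal Python sketch of that calculation for illustration only; it is not code from the Google/Stanford study, and the example sentences are invented.

```python
# Word error rate (WER) via word-level Levenshtein (edit) distance.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,       # deletion
                dp[i][j - 1] + 1,       # insertion
                dp[i - 1][j - 1] + sub, # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution in a five-word reference -> 20 percent WER.
print(word_error_rate("patient reports mild chest pain",
                      "patient reports mild chest pains"))
```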

The pilot will be conducted to “investigate what types of clinically relevant information can be extracted from medical conversations to assist physicians in reducing their interactions with the EHR.”


About Chris Burt

Chris Burt is a writer and contributor to Biometric Update. He has also written nonfiction about information technology, dramatic arts, sports culture, and fantasy basketball, as well as fiction about a doomed astronaut. He lives in Toronto. You can follow him on Twitter @AFakeChrisBurt.