Google and Stanford researchers developing speech recognition models for medical conversations

Google is working with researchers and physicians at Stanford University on Automatic Speech Recognition (ASR) models to transcribe medical conversations.

In a blog post, Katherine Chou, Product Manager, and Chung-Cheng Chiu, Software Engineer, of the Google Brain Team, say that Electronic Health Record (EHR) documentation often takes up half of a doctor’s 11-hour workday, contributing to burnout.

Google cites recent research published in the Annals of Family Medicine showing that physician satisfaction, along with the quality and accuracy of medical charts, improved when scribes took notes. Research by Google and Stanford has likewise shown that ASR can transcribe medical conversations involving multiple speakers, although ASR in medicine has so far been confined mostly to transcribing doctors’ dictations.

In the previous study, the researchers used a Connectionist Temporal Classification (CTC) phoneme-based model, which achieved a word error rate of 20.1 percent, and a Listen, Attend and Spell (LAS) grapheme-based model, which achieved 18.3 percent.
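Word error rate is the standard yardstick for ASR quality: the word-level edit distance (substitutions, insertions and deletions) between a reference transcript and the model’s output, divided by the number of words in the reference. The following minimal Python sketch of the metric is not drawn from the Google/Stanford paper, and the example transcripts are hypothetical:

```python
# Illustrative word error rate (WER) computation; a standard
# implementation, not code from the Google/Stanford study.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance (substitutions + insertions +
    deletions) divided by the number of words in the reference."""
    ref = reference.split()
    hyp = hypothesis.split()

    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a five-word reference gives 20% WER,
# roughly the error level the CTC model reported.
print(word_error_rate("patient reports mild chest pain",
                      "patient reports wild chest pain"))  # 0.2
```

By this measure, the LAS model’s 18.3 percent corresponds to roughly one word error for every five to six words spoken.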

A pilot study will be conducted to “investigate what types of clinically relevant information can be extracted from medical conversations to assist physicians in reducing their interactions with the EHR.”
