Earlier this month MedicalView reported on the study “Scalable and accurate deep learning with electronic health records”. In the study, researchers from Google, UC San Francisco, Stanford Medicine, and The University of Chicago Medicine showed that deep learning models outperformed traditional statistical models at predicting a range of clinical outcomes from a patient’s entire raw electronic health record (EHR).

“This new study is an example of deep learning applied to medical prediction tasks,” says Nigam Shah, MBBS, PhD, an associate professor at Stanford who was part of the research. “Here, neural networks were able to sift through troves of messy raw data and learn how to organize the data via variables that matter most in predicting health outcomes.”

The researchers used a deep learning technique that is “inspired by the brain’s neural networks, uses multiple layers (hence ‘deep’) of non-linear processing units (analogous to ‘neurons’) to teach itself how to understand data and then to classify the record or make predictions.”
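To make that description concrete, here is a minimal sketch of stacked non-linear layers in Python with NumPy. This is only an illustration of the general idea of a “deep” network (randomly initialized, untrained); it is not the study’s actual architecture, and the input features are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A non-linear processing unit (the "neuron" activation)
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes the final score into a probability between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical input: 8 numeric features derived from a patient record
x = rng.normal(size=8)

# Two hidden layers (the "deep" part) followed by an output layer;
# weights are random here, whereas a real model learns them from data
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
w3, b3 = rng.normal(size=16), 0.0

h1 = relu(W1 @ x + b1)     # layer 1: non-linear transform of the input
h2 = relu(W2 @ h1 + b2)    # layer 2: non-linear transform of layer 1
p = sigmoid(w3 @ h2 + b3)  # output: predicted probability of an outcome

print(float(p))
```

Each layer feeds the next, so the network can build progressively more abstract representations of the raw record; training would adjust the weights so the final probability tracks the outcome of interest.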

For Shah, this study shows that it is “possible to take messy electronic health record data — including unstructured clinical notes, errors in labels, and large numbers of input variables — from different institutions and pull the information together into a usable input from which actionable predictions about patient health can be made.”

The full interview with Nigam Shah was published in SCOPE, Stanford Medicine’s blog.