Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis.
These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they’re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.
They’ve developed a fast way for a neural network to process data and output not just a prediction but also the model’s confidence level, based on the quality of the available data. The advance could save lives, since deep learning is already being deployed in the real world today.
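The general idea behind this line of work, often called evidential regression, is that a single forward pass produces both a point prediction and an uncertainty estimate, with the uncertainty growing when the network has seen little supporting evidence. The sketch below is a minimal, illustrative PyTorch version of that idea, not the authors’ released code; the class name, layer sizes, and backbone features are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressionHead(nn.Module):
    """Illustrative head: maps features to the four parameters of a
    Normal-Inverse-Gamma distribution (gamma, nu, alpha, beta).
    The prediction is gamma; uncertainty grows when evidence (nu, alpha) is low."""
    def __init__(self, in_features: int):
        super().__init__()
        self.out = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.out(x).unbind(dim=-1)
        nu = F.softplus(log_nu)                # constrain nu > 0
        alpha = F.softplus(log_alpha) + 1.0    # constrain alpha > 1
        beta = F.softplus(log_beta)            # constrain beta > 0
        prediction = gamma
        # Epistemic uncertainty: variance of the predicted mean,
        # large when nu and alpha (the "evidence") are small.
        epistemic = beta / (nu * (alpha - 1.0))
        return prediction, epistemic

# Usage: features would come from any backbone network (hypothetical sizes)
head = EvidentialRegressionHead(in_features=64)
feats = torch.randn(8, 64)
pred, confidence = head(feats)  # one forward pass gives both outputs
```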
Related Posts
- Artificial intelligence has arrived in our everyday lives – from search engines to self-driving cars. This has to do with the enormous computing power that has become available in recent years. But new results from AI research now show that simpler, smaller neural networks can be used to solve certain…