
New Theory on Deep Learning: Information Bottleneck

IoT For All News Team

Last Updated: November 25, 2024

Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented a new theory explaining how deep learning works, called the “information bottleneck.”

The theory holds that deep learning works through an information bottleneck procedure that compresses noisy input data while preserving the information about what the data represent. A network reaches a threshold at which the data are compressed as much as possible without sacrificing its ability to label them and generalize.
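For readers who want the formal statement: in the original information bottleneck framework (introduced by Tishby, Pereira, and Bialek), one seeks a compressed representation T of the input X that remains as informative as possible about the label Y. A sketch of the standard objective, where beta is a trade-off parameter balancing compression against prediction:

```latex
% Information bottleneck objective: compress X into T (small I(X;T))
% while keeping T predictive of the label Y (large I(T;Y)).
\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)
```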

Tishby and his students also made the intriguing discovery that deep learning occurs in two phases: a short “fitting” phase and a longer “compression” phase. In the first phase, the network learns to label the training data accurately; in the second, it learns which input features are actually relevant to that classification, discarding the rest.
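The two phases show up when the mutual information quantities I(X;T) and I(T;Y) are tracked for each layer's activations T over the course of training: I(T;Y) rises during fitting, then I(X;T) falls during compression. Below is a minimal, illustrative sketch of one common estimation approach, discretizing activations into bins; the binning scheme and synthetic data are assumptions for illustration, not the researchers' exact experimental setup.

```python
import numpy as np

def mutual_information(a_ids, b_ids):
    """Plug-in estimate of I(A;B) in bits from paired discrete symbol IDs.

    Uses I(A;B) = H(A) + H(B) - H(A,B), with entropies estimated from
    empirical frequencies.
    """
    def entropy(ids):
        _, counts = np.unique(ids, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    pairs = np.stack([a_ids, b_ids], axis=1)
    _, joint_ids = np.unique(pairs, axis=0, return_inverse=True)
    return entropy(a_ids) + entropy(b_ids) - entropy(joint_ids)

def discretize(activations, n_bins=30):
    """Map each activation vector (one row) to a discrete symbol ID
    by binning every unit's value into uniform intervals."""
    edges = np.linspace(activations.min(), activations.max(), n_bins)
    binned = np.digitize(activations, edges)
    _, ids = np.unique(binned, axis=0, return_inverse=True)
    return ids

# Illustrative run with synthetic data standing in for a real network:
# x_ids identifies each training input, y_ids its label, and each
# "epoch" would normally supply that epoch's hidden-layer activations.
rng = np.random.default_rng(0)
x_ids = np.arange(200)                # 200 distinct inputs
y_ids = rng.integers(0, 2, size=200)  # binary labels
for epoch in range(3):
    activations = rng.normal(size=(200, 8))  # placeholder layer output
    t_ids = discretize(activations)
    print(f"epoch {epoch}: I(X;T) = {mutual_information(x_ids, t_ids):.2f} bits, "
          f"I(T;Y) = {mutual_information(t_ids, y_ids):.2f} bits")
```

In a real experiment, the activations would come from a network's hidden layers at successive epochs; plotting I(X;T) against I(T;Y) per layer and per epoch traces the “information plane” trajectories in which the fitting and compression phases become visible.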

This is one of many new and exciting discoveries in machine learning and deep learning, as researchers break ground in training machines to learn in more human- and animal-like ways.

Read the full article from Quanta Magazine here.
