Algorithms for Neural Networks by Google and Northwestern University

Jul 31, 2021 | Shaoni Ghosh


Observation

 

A notable observation has been made with respect to the success of the deep neural network (DNN), an artificial neural network (ANN) with multiple layers between the input and output layers. There are many different kinds of neural networks, but they share the same basic components: neurons, synapses, weights, biases and activation functions.

 

DNNs have greatly encouraged the machine learning community. Machine learning, a branch of AI, is the process of analysing data to drive analytical model building; it identifies patterns and supports decision-making without human intervention. The success of DNNs has inspired the community to focus on theoretical studies such as learning, optimization, and generalization.

 

 (Must Check: Introduction to Neural Networks and Deep Learning)

 

Teamwork

 

A team from Google Research and Northwestern University has carried out research in this field and established polynomial-time, sample-efficient algorithms for learning such networks.

 

According to Synced, the recent paper, ‘Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations’, presents such methodical and well-organized algorithms for learning depth-2 neural networks with ReLU activations.
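
To fix ideas, a depth-2 ReLU network of this kind can be written as follows (the notation here is illustrative, not quoted from the paper):

$$
f(x) \;=\; \sum_{i=1}^{m} a_i \,\sigma\!\left(w_i^{\top} x + b_i\right), \qquad \sigma(t) = \max(t, 0),
$$

where $x \in \mathbb{R}^d$ is the input, the $w_i$ are hidden-layer weight vectors, the $b_i$ are bias terms, and the $a_i$ are output-layer coefficients.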

 

The team approaches the supervised learning problem with inputs drawn from a standard Gaussian distribution. The tensors used here are constructed as label-weighted averages of a score function evaluated at each data point. By estimating and decomposing multiple higher-order tensors in this way, a good approximation of the network can be recovered.
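
The following is a minimal sketch of such a tensor construction, not the paper's implementation: the helper names (hermite3, moment_tensor) are hypothetical, and the use of the third Hermite tensor as the score function for Gaussian inputs is an assumption consistent with, but not quoted from, the paper.

```python
import numpy as np

def hermite3(x):
    """Third Hermite tensor He3(x) for a point x in R^d (a score-like
    function for x ~ N(0, I)): x (x) x (x) x minus the symmetric x (x) I terms."""
    d = x.shape[0]
    eye = np.eye(d)
    t = np.einsum("i,j,k->ijk", x, x, x)
    t -= np.einsum("i,jk->ijk", x, eye)
    t -= np.einsum("j,ik->ijk", x, eye)
    t -= np.einsum("k,ij->ijk", x, eye)
    return t

def moment_tensor(X, y):
    """Label-weighted average of the score tensor over the data:
    T = (1/n) * sum_i y_i * He3(x_i)."""
    n, d = X.shape
    T = np.zeros((d, d, d))
    for xi, yi in zip(X, y):
        T += yi * hermite3(xi)
    return T / n

# Toy usage: labels generated by a small depth-2 ReLU network on Gaussian inputs.
rng = np.random.default_rng(0)
d, n = 5, 20000
W = rng.standard_normal((3, d))            # hidden-layer weights (3 units)
X = rng.standard_normal((n, d))            # rows x_i ~ N(0, I_d)
y = np.maximum(X @ W.T, 0.0).sum(axis=1)   # y_i = sum_j ReLU(w_j . x_i)
T = moment_tensor(X, y)                    # decomposing T reveals the w_j directions
print(T.shape)                             # (5, 5, 5)
```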

 

Traditional approaches to learning networks with ReLU activations assumed the bias terms to be zero; the proposed algorithms, built on estimating multiple higher-order tensors, show that polynomial-time algorithms can be designed even when the bias terms in the ReLU units are present.
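
Heuristically, the reason biases can be accommodated is that, for Gaussian inputs, a label-weighted Hermite score average still collapses to a sum of rank-one terms, with the biases absorbed into the coefficients. A hedged sketch of this identity (assuming unit-norm weight vectors; the exact coefficients $\lambda_i$ are an assumption, not quoted from the paper):

$$
T_k \;=\; \mathbb{E}_{x \sim \mathcal{N}(0, I_d)}\!\left[\, y \cdot \mathrm{He}_k(x) \,\right] \;=\; \sum_{i=1}^{m} \lambda_i(a_i, b_i)\, w_i^{\otimes k},
$$

so decomposing $T_k$ can still expose the weight directions $w_i$ even when the $b_i$ are nonzero.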

 

The team employs a well-structured framework to analyse why these algorithms succeed, and demonstrates that the approach works under mild assumptions on the network. The team suggests that further research could provide a detailed analysis of networks of greater depth.

Tags #Deep Learning