Can LNNs beat out Transformers as the best AI in the world?
The field of natural-language processing (NLP) has long debated whether Transformer models can replace those built on the Long Short-Term Memory (LSTM) architecture. LSTMs were the preferred models for NLP tasks because they can capture long-term dependencies in sequences. Transformers, introduced more recently, gained popularity because of their superior performance on a variety of NLP benchmarks. While LSTMs handled tasks such as machine translation and sentiment analysis well, Transformer models have shown remarkable results on tasks involving context understanding and text generation.
Transformers possess a unique self-attention mechanism that lets them process all the words in a sequence simultaneously, capturing long-distance dependencies more effectively. A Transformer can therefore understand the context of a word based not only on nearby words but also on far-away ones. Transformers are particularly good at tasks such as question answering, document classification, and language generation. It is not yet clear whether Transformers will replace LSTMs for every NLP task, but there are increasing signs that they outperform LSTMs at complex language understanding and generation.
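The self-attention mechanism described above can be sketched in a few lines of NumPy. This is a minimal, illustrative version of scaled dot-product self-attention; the function name, matrix shapes, and random inputs are assumptions for the sake of the example, not any particular library's API.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence.

    x: (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_k) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # affinity of every token with every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v                               # each output mixes information from all positions

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                          # 5 tokens, 8-dim embeddings
w = [rng.normal(size=(8, 4)) for _ in range(3)]      # toy projection matrices
out = self_attention(x, *w)
print(out.shape)  # (5, 4)
```

Because the attention weights connect every position to every other position in one step, distant words influence each other as directly as adjacent ones, which is the property the article credits for the Transformer's long-range context handling.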
The human brain contains approximately 86 billion neurons. Building neural networks such as RNNs or CNNs to simulate the brain is difficult, since it is not feasible to scale a network up to that level. Collecting the huge amounts of labeled training data required is also a problem.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have devised a new way to address this problem: liquid neural networks, or LNNs. These are recurrent networks that run continuously, process data sequentially, remember past inputs, and adapt their behaviour based on new ones.
Daniela Rus, Director of CSAIL, recently demonstrated the use of LNNs to run a self-driving, autonomous vehicle. She showed how the researchers built a self-driving car that was trained to drive within a city using 100,000 artificial neurons. She pointed out that the cameras were picking up too much unnecessary noise in the model's attention maps.
With just 19 artificial neurons in an LNN, the attention map became clearer and more focused on the vehicle's path. This was accomplished by converting the neurons into decision trees, allowing the machine to decide on its own which path to take.
A reduced number of neurons may sound like less intelligence, but Rus also demonstrated how this technique can be leveraged in tasks such as embodied robotics. She claims that LNNs are causal (they understand cause and effect), something that has been described as missing from Transformer-based neural networks.
The human brain contains billions of neuronal connections, yet LNNs can achieve similar functionality with a far smaller number of artificial connections. This echoes Yann LeCun's idea of achieving canine- and feline-like intelligence before human-like intellect. Such compactness brings advantages in computational efficiency and scaling.
Researchers have shown that combining liquid neural networks with robotics holds great potential for improving the reliability of navigation systems. This innovation could have many applications, including improved search-and-rescue efforts, wildlife monitoring, and enhanced delivery services.
Ramin Hasani says LNNs are a great solution for increasingly crowded urban areas. Their compact size also reduces the training costs associated with large models.
How does this work exactly?
Hasani says that the inspiration behind these systems came from a 1 mm-long worm, the nematode C. elegans, whose nervous system is made up of only 302 neurons. He explained that even with only 302 neurons, the worms can perform tasks with surprisingly complex dynamics.
Hasani explains that typical machine-learning systems learn only during the training phase; after that, their parameters do not change. LNNs, by contrast, can keep learning after initial training. This is achieved through their “liquid” dynamics: like the brain's neurons and synapses, they adapt dynamically to new information.
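One way to see where this "liquid" behaviour comes from is to simulate a single liquid time-constant (LTC) cell, the building block behind these networks. The sketch below is a loose, assumed reading of the LTC idea from Hasani and colleagues, integrated with a simple Euler step; the function name, parameter shapes, and sine-wave input are illustrative choices, not an official implementation.

```python
import numpy as np

def ltc_step(x, inp, tau, A, w, b, dt=0.01):
    """One Euler step of a toy liquid time-constant (LTC) cell.

    The gate f depends on the current input, so the cell's effective
    time constant changes as new data arrives ("liquid" dynamics).
    x: (n,) hidden state; inp: (m,) input
    tau, A, b: (n,) cell parameters; w: (n, n + m) weights
    """
    z = w @ np.concatenate([x, inp]) + b
    f = 1.0 / (1.0 + np.exp(-z))            # input-dependent gate in (0, 1)
    dx = -(1.0 / tau + f) * x + f * A       # state decays and is driven toward A
    return x + dt * dx

rng = np.random.default_rng(0)
n, m = 4, 2
x = np.zeros(n)
params = dict(tau=np.ones(n), A=np.ones(n),
              w=rng.normal(scale=0.5, size=(n, n + m)), b=np.zeros(n))
for t in range(100):                        # drive the cell with a slow sine input
    x = ltc_step(x, np.array([np.sin(0.1 * t), 1.0]), **params)
print(x.shape)  # (4,)
```

Unlike a fixed feed-forward weight, the term `1.0 / tau + f` changes with every input, so the same cell responds with different time constants to different data, which is the adaptivity the article describes.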
For example, in an experiment performed on a drone, the drone tracked an object more accurately using liquid networks than it could with other deep neural networks. This is possible because there are fewer neurons: the drone can focus on its task without being distracted by the surrounding context.
One limitation of LNNs is that they are not aware of the surrounding environment and focus only on the direction they want to take, which makes it difficult to avoid colliding with objects. So far, the research has taken place on empty roads without obstacles. Hasani says this area is still being explored and improved.
LNNs or LLMs?
LNNs could open up a new field of AI research. LLMs rely on Transformers, and their capability scales with the number of parameters. Liquid networks instead have parameters that change based on the results of a nested set of differential equations, which means they can learn new tasks on their own without extensive retraining.
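The differential equations behind this behaviour can be written down compactly. The form below is a hedged paraphrase of the liquid time-constant formulation associated with Hasani and colleagues; the symbols are illustrative, not the paper's exact notation.

```latex
\frac{dx(t)}{dt} = -\left[\frac{1}{\tau} + f\big(x(t), I(t), \theta\big)\right] x(t)
                   + f\big(x(t), I(t), \theta\big)\, A
```

Here $x(t)$ is the hidden state, $I(t)$ the input, $\tau$ a base time constant, $A$ a bias-like parameter, and $f$ a bounded nonlinearity with parameters $\theta$. Because $f$ depends on the input, the effective time constant of each neuron varies with the incoming data, which is why the network's behaviour can keep adapting after training.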
LLMs such as GPT-3 and GPT-4 have already shown impressive language-generation capabilities, but their static nature, with parameters fixed once initial training is complete, limits them. LNNs, on the other hand, can adapt to new information and continue to learn, much like neurons and synapses in the brains of living organisms.
By integrating LNNs with LLMs, models could update their parameters continuously and adapt to evolving contexts and language patterns. Incorporating this dynamic aspect would let LLMs respond better to user preferences and changing needs. Since models built on these compact architectures will not hold a large amount of information, we cannot expect the intelligent behaviour hinted at in current LLMs, but exploring the networks further could open up more possibilities for LNNs.
Despite the challenges, researchers are working to optimize LNNs, exploring ways to reduce the number of neurons required for certain tasks. A lack of literature on LNNs and their implementation, applications, and benefits also makes it challenging to understand their potential and limitations.