Monday, October 2, 2023

4 deep thoughts on deep learning in 2022


We leave behind another year of exciting developments in deep learning, a branch of artificial intelligence (AI), full of remarkable progress and, of course, controversy. As we wrap up 2022 and gear up for 2023, here are some of the most notable and important trends in deep learning this year.

1. Size is still an important factor

A theme that has remained constant in deep learning over the past few years is the drive to create ever-larger neural networks. This drive has been enabled by the availability of compute resources and specialized AI hardware, large datasets, and the development of scale-friendly architectures such as the transformer model.

For now, companies are getting better results by scaling neural networks to larger sizes. In the past year, DeepMind released Gopher, a large language model (LLM) with 280 billion parameters; Google released the Pathways Language Model (PaLM) with 540 billion parameters and the Generalist Language Model (GLaM) with up to 1.2 trillion parameters; and Microsoft and Nvidia released Megatron-Turing NLG, a 530-billion-parameter LLM. Training models at this scale is out of reach for smaller players. Scale is especially interesting in LLMs, where models show promising results on a wider range of tasks and benchmarks as they grow larger.



It is worth noting, however, that some fundamental problems of deep learning remain unsolved, even in the largest models (more on this later).

2. Unsupervised learning continues to deliver

Many successful deep learning applications require training examples labeled by humans, an approach also called supervised learning. But most of the data available on the internet does not come with the clean labels that supervised learning requires. Data annotation is expensive and slow, creating a bottleneck. That’s why researchers have long sought advances in unsupervised learning, in which deep learning models are trained without human-annotated data.

Great progress has been made in this area in recent years, especially with LLMs, which are mostly trained on massive raw datasets collected from the internet. While LLMs continued to progress in 2022, we also saw other unsupervised learning techniques gain popularity.
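The core idea, sketched below as a toy illustration (the tiny corpus and bigram model here are hypothetical, not from the article, and nothing like a real LLM), is that the "labels" in self-supervised language modeling are simply the next tokens in raw text, so no human annotation is needed:

```python
# Toy sketch of self-supervised learning on raw text: the training
# "labels" are derived from the data itself (the next token), so no
# human annotation is required. A bigram count model stands in for an LLM.
from collections import Counter, defaultdict

raw_text = "the wind moves the tree and the tree moves"
tokens = raw_text.split()

# Build (context -> next-token) training pairs directly from raw text.
counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed token after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "tree" is the most frequent successor of "the"
```

A real LLM replaces the count table with a neural network and the toy corpus with terabytes of web text, but the self-supervised objective is the same: predict held-out parts of the raw data.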

For example, there were significant advances in text-to-image models this year. Models such as OpenAI’s DALL-E 2, Google’s Imagen, and Stability AI’s Stable Diffusion demonstrate the power of unsupervised learning. Unlike older text-to-image models, which required well-annotated pairs of images and descriptions, these models use large datasets of loosely captioned images that already exist on the internet. The sheer size of their training datasets (possible only because no manual labeling is required) and the variability of the captions enable these models to find a wide range of intricate patterns between textual and visual information. As a result, they are much more flexible in generating images for a variety of descriptions.

3. Multimodality has come a long way

Text-to-image generators have another interesting feature: they combine multiple data types in a single model. Being able to handle multiple modalities enables deep learning models to take on much more complex tasks.

Multimodality is very important to human and animal intelligence. For example, when you see a tree and hear the wind rustling in its branches, your brain can quickly associate the two. Likewise, when you see the word “tree,” you can quickly conjure the image of a tree, remember the smell of pine after a rain, or recall other experiences you’ve had before.

Clearly, multimodality plays an important role in making deep learning systems more flexible. This was perhaps best demonstrated by DeepMind’s Gato, a deep learning model trained on a variety of data types, including images, text, and proprioceptive data. Gato performed well at multiple tasks, including image captioning, interactive dialogue, controlling a robotic arm, and playing games. This stands in contrast to classic deep learning models, which are designed to perform a single task. Some researchers have gone so far as to suggest that systems like Gato point the way toward artificial general intelligence (AGI). Although many scientists disagree with this view, it is certain that multimodality has brought important achievements to deep learning.

4. Fundamental deep learning problems remain

Despite the impressive achievements of deep learning, some problems in this field remain unsolved. These include causality, compositionality, common sense, reasoning, planning, intuitive physics, and abstraction and analogy.

These are some of the mysteries of intelligence that scientists across different fields are still studying. Deep learning approaches based purely on scale and data have helped make incremental progress on some of these problems, but have failed to provide clear solutions.

For example, larger LLMs can maintain coherence and consistency over longer stretches of text. But they fail at tasks that require careful step-by-step reasoning and planning.

Likewise, text-to-image generators can create stunning graphics, but make basic mistakes when asked to draw images that require compositionality.

Various scientists are discussing and exploring these challenges, including some of the pioneers of deep learning. The most famous among them is Yann LeCun, Turing Award winner and inventor of convolutional neural networks (CNN), who recently wrote a long essay on the limits of LLMs that learn only from text. LeCun is working on a deep learning architecture that learns world models, which could address some of the field’s current challenges.

Deep learning has come a long way. But the more progress we make, the more we realize the challenges of creating truly intelligent systems. Next year is sure to be as exciting as this year.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our newsletters.


