This category focuses on the safety and ethical considerations of AI and deep learning, including fairness, transparency, and robustness.
Techniques built around attention mechanisms and transformer models in deep learning.
Techniques for processing and understanding audio data and speech.
Techniques for training models across many decentralized devices or servers holding local data samples, without exchanging the data samples themselves.
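The core loop of this setting can be sketched with federated averaging: clients take gradient steps on their private data and only the resulting weights travel to the server, which averages them. This is a minimal toy sketch (a 1-D linear model, and the names `local_step` and `fed_avg` are illustrative, not from any library):

```python
def local_step(w, data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on one client's data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(global_w, client_datasets, rounds=10):
    for _ in range(rounds):
        # Each client updates a copy of the global model on its private data.
        local_ws = [local_step(global_w, d) for d in client_datasets]
        # The server aggregates weights only; raw samples never leave clients.
        global_w = sum(local_ws) / len(local_ws)
    return global_w

# Two clients, each holding samples from y = 3x; their data is never pooled.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = fed_avg(0.0, clients, rounds=50)  # converges toward w = 3
```

Real systems (e.g. FedAvg as deployed in practice) add client sampling, multiple local epochs, and secure aggregation on top of this same weight-averaging skeleton.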
Techniques that aim to make accurate predictions with only a few examples of each class.
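One common few-shot recipe is a nearest-class-mean classifier in the spirit of prototypical networks: the handful of "support" examples per class are averaged into a prototype, and a query is labeled by its closest prototype. The sketch below assumes a trivial identity embedding on 2-D points for illustration:

```python
def prototype(points):
    """Average the few support examples of one class into a prototype."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(query, support):
    """support: dict mapping class name -> a few example vectors."""
    protos = {c: prototype(ps) for c, ps in support.items()}
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Label the query with the class of the nearest prototype.
    return min(protos, key=lambda c: sq_dist(query, protos[c]))

support = {"a": [(0.0, 0.0), (0.2, 0.1)], "b": [(1.0, 1.0), (0.9, 1.1)]}
label = classify((0.1, 0.2), support)  # -> "a"
```

In actual few-shot pipelines the identity embedding is replaced by a learned network, but the classification rule over prototypes is the same.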
This includes papers on generative models such as Generative Adversarial Networks, Variational Autoencoders, and related approaches.
Techniques for dealing with graph-structured data.
This category includes papers focused on techniques for processing and understanding images, such as convolutional neural networks, object detection, image segmentation, and image generation.
This category is about techniques to understand and explain the predictions of deep learning models.
These papers focus on large-scale models for understanding and generating text, like GPT-3, BERT, and other transformer-based models.
Techniques that aim to design models that can learn new tasks quickly with a minimal amount of data, often by learning the learning process itself.
Techniques for models that process and understand more than one type of input, such as images and text.
Techniques for understanding and generating human language.
These papers focus on methods for automatically discovering the best network architecture for a given task.
This category includes papers focused on how to improve the training process of deep learning models, such as new optimization algorithms, learning rate schedules, or initialization techniques.
Papers in this category focus on using deep learning for reinforcement learning tasks, where an agent learns to make decisions based on rewards it receives from the environment.
These papers focus on learning meaningful and useful representations of data.
Techniques where models are trained to predict some part of the input data, using this as a form of supervision.
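The "supervision from the input itself" idea can be made concrete by showing how training pairs are manufactured from unlabeled data: mask one position of a sequence and use the hidden token as the target, as in masked modeling. The helper name `make_masked_pairs` is illustrative, not a library function:

```python
MASK = "<mask>"

def make_masked_pairs(tokens):
    """Turn one unlabeled token sequence into (corrupted input, target) pairs."""
    pairs = []
    for i, target in enumerate(tokens):
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        pairs.append((masked, target))  # the hidden token is the label
    return pairs

pairs = make_masked_pairs(["the", "cat", "sat"])
# pairs[0] == (["<mask>", "cat", "sat"], "the")
```

No human labels are involved: the model trained on these pairs learns to reconstruct the input, and the representations it builds along the way are the real product.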
Techniques, such as RNNs, LSTMs, and GRUs, for dealing with data that has a temporal component.
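What makes these models temporal is a hidden state carried across time steps, so earlier inputs can influence later outputs. A minimal Elman-style recurrent cell, with toy scalar weights chosen purely for illustration:

```python
import math

def rnn(inputs, w_in=0.5, w_rec=0.8, h0=0.0):
    """Run a scalar recurrent cell over a sequence, returning all states."""
    h = h0
    states = []
    for x in inputs:
        # tanh keeps the state bounded, as in a standard Elman RNN.
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

states = rnn([1.0, 0.0, 0.0])
# Even with zero later inputs, the first input echoes through via w_rec * h.
```

LSTMs and GRUs replace this single tanh update with gated updates that control how much of the old state is kept, which is what lets them retain information over longer spans.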
Papers here focus on how to apply knowledge learned in one context to another context.
Papers in this category focus on techniques for learning from unlabeled data.