Document Classification

Document classification is the task of assigning one or more labels to a document from a predefined set of labels.
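
A minimal sketch of the single-label case with scikit-learn (TF-IDF features plus logistic regression); the data here is toy data for illustration only, and multi-label variants wrap the same pipeline in a one-vs-rest scheme.

```python
# Single-label document classification: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "stock prices fell sharply after the earnings call",
    "the team won the final in extra time",
    "new vaccine trial reports promising results",
    "quarterly revenue beat analyst forecasts",
]
labels = ["business", "sports", "health", "business"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(docs, labels)
print(clf.predict(["the team celebrates the final win"]))  # expected: ['sports']
```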

Benchmarks

These leaderboards are used to track progress in Document Classification.

Libraries

Use these libraries to find Document Classification models and implementations.

Most implemented papers

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
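
A minimal sketch of a single GAT attention head in PyTorch, written from the mechanism described in the paper rather than the authors' reference code: scores come from a learned vector applied to pairs of transformed node features, and a mask restricts the softmax to each node's neighbourhood.

```python
import torch
import torch.nn.functional as F

class GATHead(torch.nn.Module):
    """One head: alpha_ij = softmax_j(LeakyReLU(a^T [W h_i || W h_j]))."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = torch.nn.Linear(in_dim, out_dim, bias=False)
        self.a = torch.nn.Parameter(torch.randn(2 * out_dim) * 0.01)  # attention vector

    def forward(self, h, adj):
        # h: [N, in_dim] node features; adj: [N, N] adjacency with self-loops
        z = self.W(h)                                     # [N, out_dim]
        d = z.size(1)
        # e[i, j] = LeakyReLU(a[:d].z_i + a[d:].z_j), formed by broadcasting
        e = F.leaky_relu((z @ self.a[:d]).unsqueeze(1)
                         + (z @ self.a[d:]).unsqueeze(0), 0.2)
        e = e.masked_fill(adj == 0, float("-inf"))        # masked self-attention
        alpha = torch.softmax(e, dim=1)                   # weights over neighbours
        return alpha @ z                                  # [N, out_dim]

h, adj = torch.randn(5, 8), torch.eye(5) + (torch.rand(5, 5) > 0.5).float()
out = GATHead(8, 16)(h, adj)                              # -> [5, 16]
```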

Semi-Supervised Classification with Graph Convolutional Networks

tkipf/pygcn • 9 Sep 2016

We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs.
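
The propagation rule is compact enough to state directly. A dense-tensor sketch of one layer, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W); the reference implementation (tkipf/pygcn) uses sparse operations instead.

```python
import torch

def gcn_layer(H, A, W):
    # H: [N, F] node features, A: [N, N] adjacency, W: [F, F_out] weights
    A_hat = A + torch.eye(A.size(0))            # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)     # D^-1/2 as a vector
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]  # symmetric norm
    return torch.relu(A_norm @ H @ W)
```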

Revisiting Semi-Supervised Learning with Graph Embeddings

We present a semi-supervised learning framework based on graph embeddings.

On Calibration of Modern Neural Networks

gpleiss/temperature_scaling • ICML 2017

Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications.
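
The paper's method, temperature scaling, fits a single scalar T on a held-out set by minimizing the NLL of softmax(logits / T). A minimal sketch in the spirit of gpleiss/temperature_scaling (which optimizes T directly; optimizing log T, as here, simply keeps T positive):

```python
import torch

def fit_temperature(logits, labels, steps=100):
    # logits: [N, C] held-out logits; labels: [N] true class indices
    log_T = torch.zeros(1, requires_grad=True)           # T = exp(log_T) > 0
    opt = torch.optim.LBFGS([log_T], lr=0.1, max_iter=steps)

    def closure():
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_T.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_T.exp().item()   # divide test logits by T before softmax
```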

Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond

facebookresearch/LASER • TACL 2019

We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts.
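
A sketch of the zero-shot transfer this enables, assuming the community laserembeddings wrapper (its model files must be downloaded separately): because all languages share one embedding space, a classifier trained on English embeddings can be applied to German text without ever seeing German labels.

```python
from laserembeddings import Laser                 # pip install laserembeddings
from sklearn.linear_model import LogisticRegression

laser = Laser()

en_docs = ["the market rallied after the earnings report",
           "the striker scored twice in the final"]
clf = LogisticRegression().fit(
    laser.embed_sentences(en_docs, lang="en"), ["business", "sports"])

de_docs = ["der Torwart hielt den Elfmeter"]      # "the keeper saved the penalty"
print(clf.predict(laser.embed_sentences(de_docs, lang="de")))  # expected: ['sports']
```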

Improving Language Understanding by Generative Pre-Training

huggingface/transformers • Preprint 2018

We demonstrate that large gains on natural language understanding tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task.
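
A minimal sketch of the two-stage recipe using the listed repo (huggingface/transformers), assuming the "openai-gpt" GPT-1 checkpoint on the Hugging Face Hub: the generatively pre-trained weights are loaded, a freshly initialized classification head is attached, and the model is fine-tuned discriminatively.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("openai-gpt")
tok.pad_token = tok.unk_token          # GPT-1 has no pad token; reuse <unk>
model = AutoModelForSequenceClassification.from_pretrained(
    "openai-gpt", num_labels=2)        # new classification head on the pre-trained LM
model.config.pad_token_id = tok.pad_token_id

batch = tok(["a gripping, well-acted film", "dull and overlong"],
            padding=True, return_tensors="pt")
loss = model(**batch, labels=torch.tensor([1, 0])).loss
loss.backward()                        # an optimizer step would follow
```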

FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness

hazyresearch/flash-attention • 27 May 2022

FlashAttention is an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads and writes between GPU high-bandwidth memory and on-chip SRAM; the authors also extend it to block-sparse attention, yielding an approximate attention algorithm that is faster than any existing approximate attention method.
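
The core trick can be shown in plain PyTorch: process keys and values block by block with an online softmax, keeping a running row-max and normalizer so the full N x N score matrix is never materialized. The real kernel fuses this loop into on-chip SRAM tiles; this sketch shows only the arithmetic, not the IO behaviour.

```python
import torch

def flash_attention_sketch(q, k, v, block=64):
    # q, k, v: [N, d]; returns softmax(q k^T / sqrt(d)) v, computed blockwise
    scale = q.size(-1) ** -0.5
    m = torch.full((q.size(0), 1), float("-inf"))   # running row max
    l = torch.zeros(q.size(0), 1)                   # running softmax denominator
    o = torch.zeros_like(q)                         # running unnormalized output
    for j in range(0, k.size(0), block):
        s = (q @ k[j:j + block].T) * scale          # scores for one key block
        m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
        p = torch.exp(s - m_new)
        alpha = torch.exp(m - m_new)                # rescale earlier accumulators
        l = alpha * l + p.sum(dim=-1, keepdim=True)
        o = alpha * o + p @ v[j:j + block]
        m = m_new
    return o / l

q, k, v = (torch.randn(128, 32) for _ in range(3))
ref = torch.softmax((q @ k.T) / 32 ** 0.5, dim=-1) @ v
assert torch.allclose(flash_attention_sketch(q, k, v), ref, atol=1e-4)
```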

ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations

sinovation/ZEN • Findings of the Association for Computational Linguistics 2020

ZEN enhances a BERT-based Chinese text encoder with explicit n-gram representations. Moreover, the authors show that reasonable performance can be obtained when ZEN is trained on a small corpus, which is important for applying pre-training techniques to scenarios with limited data.

SPECTER: Document-level Representation Learning using Citation-informed Transformers

allenai/specter • ACL 2020

We propose SPECTER, a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph.
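
A sketch of SPECTER's training signal using the pretrained checkpoint on the Hugging Face Hub ("allenai/specter"): the [CLS] embedding of a query paper is pulled toward a paper it cites and pushed away from one it does not, via a triplet margin loss. The three example titles and the single-triplet setup are illustrative only; see allenai/specter for the full recipe (title plus abstract input, hard-negative mining).

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

def embed(text):
    inputs = tok(text, truncation=True, return_tensors="pt")
    return model(**inputs).last_hidden_state[:, 0]    # [CLS] embedding

query = embed("BERT: Pre-training of Deep Bidirectional Transformers")
pos = embed("Attention Is All You Need")              # cited by the query
neg = embed("A survey of coral reef ecology")         # not cited

# L = max(d(query, pos) - d(query, neg) + margin, 0)
loss = torch.nn.functional.triplet_margin_loss(query, pos, neg, margin=1.0)
loss.backward()
```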