17.05.2020       Issue 334 (11.05.2020 - 17.05.2020)       Interesting projects, tools, libraries

nlp - Datasets for Natural Language Processing in NumPy, Pandas, PyTorch and TensorFlow


Experimental feature:

Below is the text of the article at the link, so you can quickly judge whether the link is worth reading.

Please note that the text at the link may differ from the text shown here.


Datasets and evaluation metrics for natural language processing, in NumPy, Pandas, PyTorch and TensorFlow

🤗nlp is a lightweight and extensible library to easily share and access datasets and evaluation metrics for Natural Language Processing (NLP).

nlp has many interesting features (besides easy sharing of and access to datasets/metrics):

  • Built-in interoperability with NumPy, Pandas, PyTorch and TensorFlow 2
  • Lightweight and fast with a transparent and Pythonic API
  • Thrives on large datasets: nlp naturally frees you from RAM limitations; all datasets are memory-mapped on disk by default.
  • Smart caching: never wait for your data to be processed several times

nlp currently provides access to ~100 NLP datasets and ~10 evaluation metrics and is designed to let the community easily add and share new datasets and evaluation metrics.

nlp originated as a fork of the awesome TensorFlow Datasets, and the HuggingFace team wants to deeply thank the TensorFlow Datasets team for building this amazing library. More details on the differences between nlp and tfds can be found in the section Main differences between nlp and tfds.


From PyPI

nlp can be installed from PyPI and must be installed in a virtual environment (venv or conda, for instance).
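
The install command itself was dropped from this snapshot; assuming the package is published on PyPI under the name nlp, it would be:

pip install nlp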

From source

You can also install nlp from source:

git clone https://github.com/huggingface/nlp
cd nlp
pip install .

When you update the repository, you should upgrade the nlp installation and its dependencies as follows:
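
The upgrade commands were also dropped here; a plausible reconstruction, assuming the standard git-plus-pip workflow shown above, is:

git pull
pip install --upgrade .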

Using with PyTorch/TensorFlow/pandas

If you plan to use nlp with PyTorch (1.0+), TensorFlow (2.2+) or pandas, you should also install PyTorch, TensorFlow or pandas.
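
None of the three is a hard dependency of nlp, so each is installed separately with standard pip commands (these are generic commands, not taken from this README):

pip install torch       # PyTorch
pip install tensorflow  # TensorFlow 2
pip install pandas      # pandas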


Usage

nlp is made to be very simple to use. The main methods are:

  • nlp.list_datasets() to list the available datasets
  • nlp.load_dataset(dataset_name, **kwargs) to instantiate a dataset
  • nlp.list_metrics() to list the available metrics
  • nlp.load_metric(metric_name, **kwargs) to instantiate a metric

Here is a quick example:

import nlp

# Print all the available datasets
print([dataset.id for dataset in nlp.list_datasets()])

# Load a dataset and print the first example in the training set
squad_dataset = nlp.load_dataset('squad')
print(squad_dataset['train'][0])

# List all the available metrics
print([metric.id for metric in nlp.list_metrics()])

# Load a metric
squad_metric = nlp.load_metric('squad')
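
The quick example loads a metric but never calls it. A minimal sketch of computing a score follows; the compute() keyword arguments and the SQuAD prediction/reference dict formats are assumptions about the API, not taken from this README:

# Hypothetical metric usage: argument names and dict formats are assumed
predictions = [{'id': '1', 'prediction_text': 'Denver Broncos'}]
references = [{'id': '1', 'answers': {'text': ['Denver Broncos'], 'answer_start': [177]}}]
score = squad_metric.compute(predictions=predictions, references=references)
print(score)  # for identical strings, exact match and F1 are both 100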

The best introduction to nlp is to follow the tutorial in Google Colab (the Open In Colab badge in the repository README).

Main differences between nlp and tfds

If you are familiar with the great TensorFlow Datasets, here are the main differences between nlp and tfds:

  • the scripts in nlp are not provided within the library but are queried, downloaded/cached and dynamically loaded upon request
  • nlp also provides evaluation metrics in a similar fashion to the datasets, i.e. as dynamically installed scripts with a unified API. This gives access to the paired benchmark dataset and benchmark metric, for instance for benchmarks like SQuAD or GLUE.
  • the backend serialization of nlp is based on Apache Arrow/Parquet instead of TF Records and leverages Python dataclasses for info and features, with some diverging choices (we mostly don't do encoding and store the raw data as much as possible in the backend serialization cache)
  • the user-facing dataset object of nlp is not a tf.data.Dataset but a built-in, framework-agnostic dataset class with methods inspired by what we like in tf.data (such as a map() method; see the sketch below). It basically wraps a memory-mapped Arrow table cache.
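
A minimal sketch of that map() method, assuming load_dataset() accepts a split keyword and that map() merges the returned columns into each example (as in later versions of the library); the SQuAD column 'context' is part of its schema, and the new column name 'context_len' is illustrative:

import nlp

# Add a derived column with the framework-agnostic map() method
dataset = nlp.load_dataset('squad', split='train')
dataset = dataset.map(lambda example: {'context_len': len(example['context'])})
print(dataset[0]['context_len'])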


Similarly to TensorFlow Datasets, nlp is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have a license to use them. It is your responsibility to determine whether you have permission to use a dataset under its license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

If you're interested in learning more about responsible AI practices, including fairness, please see Google AI's Responsible AI Practices.
