Datasets and evaluation metrics for Natural Language Processing in NumPy, Pandas, PyTorch and TensorFlow
nlp is a lightweight and extensible library to easily share and access datasets and evaluation metrics for Natural Language Processing (NLP).
nlp has many interesting features (besides easy sharing of and access to datasets/metrics):
- Built-in interoperability with NumPy, Pandas, PyTorch and TensorFlow 2
- Lightweight and fast, with a transparent and Pythonic API
- Thrive on large datasets: nlp naturally frees the user from RAM limitations; all datasets are memory-mapped on disk by default (see the sketch just after this list)
- Smart caching: never wait for your data to be processed several times
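As a minimal sketch of the memory-mapping behavior (the dataset and config names below are illustrative picks from the available datasets), a large corpus can be opened and iterated without loading it into RAM:

```python
import nlp

# The dataset is downloaded once, converted to Arrow, and then memory-mapped:
# reading rows streams from the on-disk cache instead of filling RAM.
dataset = nlp.load_dataset('wikitext', 'wikitext-103-raw-v1', split='train')

print(dataset.num_rows)  # number of examples in the split
print(dataset[0])        # a single example, read lazily from disk
```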
nlp currently provides access to ~100 NLP datasets and ~10 evaluation metrics and is designed to let the community easily add and share new datasets and evaluation metrics.
nlp originated from a fork of the awesome TensorFlow Datasets, and the HuggingFace team wants to deeply thank the TensorFlow Datasets team for building this amazing library. More details on the differences between nlp and tfds can be found in the section Main differences between nlp and tfds below.
Installation

nlp can be installed from PyPI and has to be installed in a virtual environment (venv or conda, for instance):
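```bash
pip install nlp
```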
You can also install nlp from source:
```bash
git clone https://github.com/huggingface/nlp
cd nlp
pip install .
```
When you update the repository, you should upgrade the nlp installation and its dependencies as follows:
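```bash
# from inside your local clone of the repository
git pull
pip install --upgrade .
```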
Using with PyTorch/TensorFlow/pandas
If you plan to use nlp with PyTorch (1.0+), TensorFlow (2.2+) or pandas, you should also install PyTorch, TensorFlow or pandas.
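As a minimal sketch of the interoperability (the split slice and column names below are illustrative), the set_format() method controls what indexing a dataset returns:

```python
import nlp

# Load a small slice of SQuAD; split slicing such as 'train[:100]' is supported.
dataset = nlp.load_dataset('squad', split='train[:100]')

# Once a format is set, indexing returns data in that container type
# ('numpy', 'torch' or 'tensorflow'), restricted to the listed columns.
dataset.set_format(type='numpy', columns=['question', 'context'])
print(dataset[0])
```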
nlp is made to be very simple to use; the main methods are:
- `nlp.list_datasets()` to list the available datasets
- `nlp.load_dataset(dataset_name, **kwargs)` to instantiate a dataset
- `nlp.list_metrics()` to list the available metrics
- `nlp.load_metric(metric_name, **kwargs)` to instantiate a metric
Here is a quick example:
```python
import nlp

# Print all the available datasets
print([dataset.id for dataset in nlp.list_datasets()])

# Load a dataset and print the first examples in the training set
squad_dataset = nlp.load_dataset('squad')
print(squad_dataset['train'])

# List all the available metrics
print([metric.id for metric in nlp.list_metrics()])

# Load a metric
squad_metric = nlp.load_metric('squad')
```
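To sketch how a loaded metric is then used (the inputs below are dummy values in the SQuAD evaluation format, which the metric script itself defines), compute() takes predictions and references:

```python
# Dummy example in the SQuAD format: 'id' ties a prediction to its reference.
predictions = [{'id': 'example-0', 'prediction_text': 'Denver Broncos'}]
references = [{'id': 'example-0',
               'answers': {'text': ['Denver Broncos'], 'answer_start': [177]}}]

score = squad_metric.compute(predictions=predictions, references=references)
print(score)  # a dict of scores, e.g. exact match and F1 for SQuAD
```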
Main differences between nlp and tfds

If you are familiar with the great TensorFlow Datasets, here are the main differences between nlp and tfds:
- the scripts in nlp are not provided within the library but are queried, downloaded/cached and dynamically loaded upon request
- nlp also provides evaluation metrics in a similar fashion to the datasets, i.e. as dynamically installed scripts with a unified API. This gives access to the pair of a benchmark dataset and a benchmark metric, for instance for benchmarks like SQuAD or GLUE.
- the backend serialization of nlp is based on Apache Arrow/Parquet instead of TF Records and leverages Python dataclasses for info and features, with some diverging features (we mostly don't do encoding and store the raw data as much as possible in the backend serialization cache)
- the user-facing dataset object of nlp is not a `tf.data.Dataset` but a built-in, framework-agnostic dataset class with methods inspired by what we like in tf.data (such as a `map()` method, sketched below). It basically wraps a memory-mapped Arrow table cache.
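As a minimal sketch of that map() method (the column names below are SQuAD's), a processing function is applied once and its output cached on disk:

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation')

# The features describe the Arrow-backed schema of the underlying table.
print(dataset.features)

# map() applies the function to every example and caches the result in a new
# memory-mapped Arrow file, so processed data never has to fit in RAM.
dataset = dataset.map(lambda ex: {'question_lower': ex['question'].lower()})
print(dataset[0]['question_lower'])
```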
Disclaimers

Similarly to TensorFlow Datasets, nlp is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have a license to use them. It is your responsibility to determine whether you have permission to use a dataset under its license.
If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
If you're interested in learning more about responsible AI practices, including fairness, please see Google AI's Responsible AI Practices.