05.06.2018       Issue 233 (04.06.2018 - 10.06.2018)       Interesting projects, tools, libraries

SWEM - a TensorFlow implementation of "Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms"


Experimental feature:

Below is the text of the linked article, so you can quickly judge whether the link is worth reading.

Note that the text here and the text at the link may differ.


SWEM (Simple Word-Embedding-based Models)

This repository contains the source code necessary to reproduce the results presented in the paper "Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms":

This project is maintained by Dinghan Shen. Feel free to contact dinghan.shen@duke.edu for any relevant issues.
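The core idea of SWEM is to replace compositional encoders (CNNs, RNNs) with simple, parameter-free pooling over word embeddings. A minimal NumPy sketch of the average-, max-, and concatenated-pooling variants (function names are illustrative, not taken from this repository):

```python
import numpy as np

def swem_aver(emb):
    # emb: (seq_len, emb_size) word embeddings for one sentence
    return emb.mean(axis=0)

def swem_max(emb):
    # element-wise max over the sequence dimension
    return emb.max(axis=0)

def swem_concat(emb):
    # concatenate both poolings -> 2 * emb_size features
    return np.concatenate([swem_aver(emb), swem_max(emb)])

emb = np.random.randn(12, 300)   # 12 tokens, 300-dim embeddings
features = swem_concat(emb)
print(features.shape)            # (600,)
```

The pooled vector is then fed to a small classifier (e.g. an MLP); the pooling itself has no trainable parameters.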


  • CUDA, cudnn
  • Python 2.7
  • Tensorflow (version >1.0). We used tensorflow 1.5.
  • Run: pip install -r requirements.txt to install requirements


  • For convenience, we provide pre-processed versions for the following datasets: DBpedia, SNLI, Yahoo. Data are prepared in pickle format, and each .p file has the same fields in the same order:

    • train_text, val_text, test_text, train_label, val_label, test_label, dictionary(wordtoix), reverse dictionary(ixtoword)
  • These .p files can be downloaded from the links below. After downloading, you can put them into a data folder:
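A minimal sketch of reading one of these .p files, unpacking the eight fields in the order listed above (the file path and the tiny dummy payload are illustrative; here a stand-in pickle is written first so the snippet is self-contained):

```python
import os
import pickle
import tempfile

# Stand-in for a downloaded .p file: same 8-field layout as described above.
dummy = ([[1, 2]], [[3]], [[4]],        # train/val/test text (as word indices)
         [0], [1], [0],                 # train/val/test labels
         {'hello': 1}, {1: 'hello'})    # wordtoix, ixtoword
path = os.path.join(tempfile.gettempdir(), 'dbpedia.p')
with open(path, 'wb') as f:
    pickle.dump(dummy, f)

# Loading mirrors the documented field order.
with open(path, 'rb') as f:
    (train_text, val_text, test_text,
     train_label, val_label, test_label,
     wordtoix, ixtoword) = pickle.load(f)

print(wordtoix['hello'])   # 1
```

Since the repository targets Python 2.7, pickles produced there may need `pickle.load(f, encoding='latin1')` when opened under Python 3.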


  • Run: python eval_dbpedia_emb.py for ontology classification on the DBpedia dataset

  • Run: python eval_snli_emb.py for natural language inference on the SNLI dataset

  • Run: python eval_yahoo_emb.py for topic categorization on the Yahoo! Answer dataset

  • Options: hyperparameters can be adjusted by editing the option class in any of the above three files:

  • opt.emb_size: number of word embedding dimensions.
  • opt.drop_rate: the keep probability of the dropout layer.
  • opt.lr: learning rate.
  • opt.batch_size: batch size.
  • opt.H_dis: the dimension of the last hidden layer.
  • On a K80 GPU, training takes about 3 minutes per epoch and 5 epochs to converge for DBpedia, 50 seconds per epoch and 20 epochs for SNLI, and 4 minutes per epoch and 5 epochs for the Yahoo dataset.
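The hyperparameters above can be grouped in a simple options class; a sketch mirroring the listed fields (the class name and default values are illustrative, not the repository's actual defaults):

```python
class Options(object):
    """Illustrative hyperparameter container matching the fields above."""
    def __init__(self):
        self.emb_size = 300     # number of word embedding dimensions
        self.drop_rate = 0.8    # keep probability of the dropout layer
        self.lr = 1e-3          # learning rate
        self.batch_size = 128   # batch size
        self.H_dis = 300        # dimension of the last hidden layer

opt = Options()
opt.lr = 5e-4                   # tweak a field before training
print(opt.lr)                   # 0.0005
```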

Subspace Training & Intrinsic Dimension

To measure the intrinsic dimension of word-embedding-based text classification tasks, we compare SWEM and CNNs via subspace training in Section 5.1 of the paper.

Please follow the instructions in folder intrinsic_dimension to reproduce the results.
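Subspace training measures intrinsic dimension by optimizing only a small vector theta_d of d parameters inside a fixed random subspace of the full D-dimensional parameter space: theta = theta_0 + P theta_d, with theta_0 and the projection P frozen. A hedged NumPy sketch of this reparameterization (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 1000, 10                    # full parameter count vs. subspace dimension
theta0 = rng.standard_normal(D)    # frozen initial parameters
P = rng.standard_normal((D, d))    # fixed random D x d projection
P /= np.linalg.norm(P, axis=0)     # normalize columns

def params(theta_d):
    # Only theta_d (d values) is trained; theta0 and P stay fixed.
    return theta0 + P @ theta_d

theta_d = np.zeros(d)              # training starts at theta0
print(params(theta_d).shape)       # (1000,)
```

The intrinsic dimension is the smallest d at which the subspace-trained model reaches (a fixed fraction of) full-training performance.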


Please cite our ACL paper in your publications if it helps your research:

Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, Lawrence Carin. "Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms." ACL 2018.
