diff --git a/docs.ipynb b/docs.ipynb new file mode 100644 index 000000000..2ae25d065 --- /dev/null +++ b/docs.ipynb @@ -0,0 +1,1377 @@ +{ + "nbformat": 4, + "nbformat_minor": 0, + "metadata": { + "colab": { + "name": "docs.ipynb", + "provenance": [], + "collapsed_sections": [] + }, + "kernelspec": { + "name": "python3", + "display_name": "Python 3" + }, + "accelerator": "GPU" + }, + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "TQtprU1En0ZH", + "colab_type": "text" + }, + "source": [ + "## OpenSeq2Seq Documentation (changes are welcome!)\n", + "Link: https://nvidia.github.io/OpenSeq2Seq/html/index.html" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "P1UYjCSbRQ1l", + "colab_type": "text" + }, + "source": [ + "OpenSeq2Seq is a TensorFlow-based toolkit for sequence-to-sequence models:\n", + "\n", + " - machine translation (GNMT, Transformer, ConvS2S, …)\n", + " - speech recognition (DeepSpeech2, Wave2Letter, Jasper, …)\n", + " - speech commands (RN-50, Jasper)\n", + " - speech synthesis (Tacotron2, WaveNet, …)\n", + " - language model (LSTM, …)\n", + " - sentiment analysis (SST, IMDB, …)\n", + " - image classification (ResNet-50)\n", + "\n", + "Main features:\n", + "\n", + " - modular architecture that allows assembling new models from available components\n", + " - support for mixed-precision training that utilizes Tensor Cores in NVIDIA Volta/Turing GPUs\n", + " - fast Horovod-based distributed training supporting both multi-GPU and multi-node modes" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hz4_ATFYSqp8", + "colab_type": "text" + }, + "source": [ + "### General Installation" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "3uUyDA0sv1RZ", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Clone the repo and install the dependencies\n", + "!git clone https://github.com/NVIDIA/OpenSeq2Seq\n", + "%cd OpenSeq2Seq\n", + "!pip install -r requirements.txt" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "NrXlWxniv_5b", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Install TensorFlow GPU\n", + "!pip install tensorflow-gpu==1.15.0" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "LN4p_Ix1xbES", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Install the Baidu CTC Decoder\n", + "!scripts/install_decoders.sh" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "qaZnjqq2yFFr", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Test the installation (if you get an error here, just ignore it)\n", + "!python scripts/ctc_decoders_test.py" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "xWc1QPBXy2_B", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Build a custom native TF op for the CTC decoder w/ language model\n", + "# Install boost\n", + "!apt-get install libboost-all-dev\n", + "\n", + "# Build kenlm\n", + "!apt-get install cmake\n", + "!./scripts/install_kenlm.sh" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "B3L-5IVW07zs", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Validate the TensorFlow installation\n", + "!python -c \"import tensorflow as tf; print(tf.__version__)\"" + ], + "execution_count": 0, + "outputs": [] + },
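{ + "cell_type": "code", + "metadata": { + "id": "GpuCheck001", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Optionally, also check that TensorFlow actually sees the GPU (TF 1.x API; prints True if a GPU is available).\n", + "# This extra check is not part of the original setup steps, just a convenience.\n", + "!python -c \"import tensorflow as tf; print(tf.test.is_gpu_available())\"" + ], + "execution_count": 0, + "outputs": [] + },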
"colab_type": "code", + "colab": {} + }, + "source": [ + "# Download a language model for a CTC decoder\n", + "!./scripts/download_lm.sh" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "X64gnRZm2MEs", + "colab_type": "code", + "colab": {} + }, + "source": [ + " # Run speech2text example with enabled CTC beam search decoder and save the output to logs\n", + " !python run.py --config_file=example_configs/speech2text/ds2_toy_config.py --mode=train_eval --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "CwQYNoYHS8t7", + "colab_type": "text" + }, + "source": [ + "### Horovod installation" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "lqrksyM0F_IX", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# For multi-GPU and distributed training we install Horovod\n", + "!pip install mpi4py\n", + "!pip install horovod" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "Dkf6TrFc3w9i", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# To that everything is installed correctly\n", + "!bash scripts/run_all_tests.sh" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "fdbs7P17Fna7", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# When training with Horovod, use the following commands (update parameters as needed)\n", + "!mpiexec --allow-run-as-root -np python run.py --config_file=... --mode=train_eval --use_horovod=True --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "JM_JLFbRcopt", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Run inference to dump logits to a pickle file (update parameters as needed)\n", + "!python run.py --mode=infer --config=\"MODEL_CONFIG\" --logdir=\"MODEL_CHECKPOINT_DIR\" --num_gpus=1 --use_horovod=False --decoder_params/use_language_model=False --infer_output_file=model_output.pickle" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "nsY_2h-qhO7p", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Run beam search decoder (update parameters as needed)\n", + "!python scripts/decode.py --logits=model_output.pickle --labels=\"CSV_FILE\" --lm=\"LM_BINARY\" --vocab=\"ALPHABET_FILE\" --alpha=ALPHA --beta=BETA --beam_width=BEAM_WIDTH" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "BOr2HV_XTvrs", + "colab_type": "text" + }, + "source": [ + "### Multi-GPU and Distributed Training\n" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "rsgulcj4Tfb9", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# For multi-GPU training with native Distributed Tensorflow approach, \n", + "# you need to set use_horovod: False and num_gpus= in the configuration file. \n", + "\n", + "# To start training use run.py script (update parameters as needed):\n", + "!python run.py --config_file=... --mode=train_eval" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "0DHpq09jUQ59", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# To use Horovod you will need to set use_horovod: True in the config and use mpirun (update parameters as needed):\n", + "!mpiexec -np python run.py --config_file=... 
--mode=train_eval --use_horovod=True --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "g2Q0CvtcUgrK", + "colab_type": "text" + }, + "source": [ + " You can use Horovod both for multi-GPU and for multi-node training.\n", + " \n", + " Note: the num_gpus parameter will be ignored when use_horovod is set to True. In that case, the number of GPUs \n", + " to use is specified on the command line with mpirun arguments." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "CAIA9LdlU-3q", + "colab_type": "text" + }, + "source": [ + "### Mixed Precision Training" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "V8fH0bCSVGs-", + "colab_type": "text" + }, + "source": [ + "Enabling mixed precision with existing models in OpenSeq2Seq is simple: change the dtype parameter of model_params to “mixed”. You might need to enable loss scaling: either statically, by setting the loss_scale parameter inside model_params to the desired number, or dynamically, by setting the automatic_loss_scaling parameter to “Backoff” or “LogMax”:" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "C7Q_PpqgVLHU", + "colab_type": "text" + }, + "source": [ + " base_params = {\n", + " ...\n", + " \"dtype\": \"mixed\",\n", + " # enabling static or dynamic loss scaling might improve model convergence\n", + "\n", + " # \"loss_scale\": 10.0,\n", + " # \"automatic_loss_scaling\": \"Backoff\",\n", + " ...\n", + " }" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Eqb9BhMUVn4i", + "colab_type": "text" + }, + "source": [ + "### Optimizers (LARC and NovoGrad)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nQLn4jQdVppe", + "colab_type": "text" + }, + "source": [ + "The key idea of LARC is to adjust the learning rate (LR) for each layer in such a way that the magnitude of the weight updates stays small compared to the norm of the weights." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "K6qVY1nJWFNT", + "colab_type": "text" + }, + "source": [ + "To use LARC you should add the following lines to the model configuration:\n", + "\n", + " \"larc_params\": {\n", + " \"larc_eta\": 0.002,\n", + " }" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "4jEfJmo_WwrL", + "colab_type": "text" + }, + "source": [ + "NovoGrad is a first-order SGD-based algorithm, which computes second moments per layer instead of per weight as in Adam. Compared to Adam, NovoGrad takes less memory, and we find it to be more numerically stable." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "5foBMw6yXZSm", + "colab_type": "text" + }, + "source": [ + "To use NovoGrad you should turn off the standard regularization and add the following lines to the model configuration:\n", + "\n", + " \"optimizer\": NovoGrad,\n", + " \"optimizer_params\": {\n", + " \"beta1\": 0.95,\n", + " \"beta2\": 0.98,\n", + " \"epsilon\": 1e-08,\n", + " \"weight_decay\": 0.001,\n", + " }," + ] + },
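{ + "cell_type": "markdown", + "metadata": { + "id": "NovoGradSketch", + "colab_type": "text" + }, + "source": [ + "To make the per-layer idea concrete, here is a minimal NumPy sketch of a NovoGrad-style update for a single layer, using the hyperparameters above (an illustration of the published update rule, not the toolkit’s implementation; assume m = 0 and v = ||g||^2 at the first step):\n", + "\n", + "    import numpy as np\n", + "\n", + "    def novograd_step(w, g, m, v, lr, beta1=0.95, beta2=0.98, eps=1e-08, wd=0.001):\n", + "        # the second moment v is a per-layer scalar ||g||^2 estimate, not per-weight as in Adam\n", + "        v = beta2 * v + (1 - beta2) * np.sum(g * g)\n", + "        # normalize the gradient by the layer-wise moment and add decoupled weight decay\n", + "        m = beta1 * m + g / (np.sqrt(v) + eps) + wd * w\n", + "        return w - lr * m, m, v" + ] + },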
{ + "cell_type": "markdown", + "metadata": { + "id": "IMHbOf8OXi6r", + "colab_type": "text" + }, + "source": [ + "### Speech Recognition" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "2aDYZ4dmX9eO", + "colab_type": "text" + }, + "source": [ + "Automatic speech recognition (ASR) systems can be built using a number of approaches depending on the input data type, intermediate representation, model type and output post-processing. OpenSeq2Seq is currently focused on end-to-end CTC-based models (like the original DeepSpeech model). These models are called end-to-end because they are trained directly on pairs of speech samples and transcripts, without any additional alignment information. CTC allows the model to find an alignment between audio and text.\n", + "\n", + "The training pipeline consists of the following blocks:\n", + "\n", + " 1) audio preprocessing (feature extraction): signal normalization, windowing, (log) spectrogram (or mel scale spectrogram, or MFCC)\n", + " 2) neural acoustic model (which predicts a probability distribution P_t(c) over vocabulary characters c at each time step t, given the input features)\n", + " 3) CTC loss function\n", + "\n", + "The inference pipeline differs only in block #3:\n", + "\n", + " a decoder (which transforms the probability distributions into an actual transcript)\n", + "\n", + "We support different options for each of these steps. The recommended pipeline, which gives the best accuracy (the lowest WER), is the following:\n", + "\n", + " 1) mel scale log spectrograms for audio features (using the librosa backend)\n", + " 2) Jasper as the neural acoustic model\n", + " 3) Baidu’s CTC beam search decoder with N-gram language model rescoring" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "A6SXCdOvY_3v", + "colab_type": "text" + }, + "source": [ + "#### Decoders\n", + "\n", + "In order to get words out of a trained model one needs to use a decoder, which converts a probability distribution over characters into text. There are two types of decoders that are usually employed with CTC-based models: a greedy decoder and a beam search decoder with language model re-scoring.\n", + "\n", + " A greedy decoder outputs the most probable character at each time step. It is very fast and it can produce transcripts that \n", + " are very close to the original pronunciation. But it may introduce many small misspelling errors. Due to the nature of \n", + " the WER metric, even a single character error makes the whole word incorrect.\n", + "\n", + " A beam search decoder with language model re-scoring allows checking many possible decodings (beams) at once, \n", + " assigning a higher score to more probable N-grams according to a given language model.\n", + " The language model helps to correct misspelling errors. The downside is that it is significantly slower than a greedy decoder." + ] + },
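{ + "cell_type": "markdown", + "metadata": { + "id": "GreedyCTCSketch", + "colab_type": "text" + }, + "source": [ + "For intuition, here is a minimal NumPy sketch of greedy CTC decoding (an illustration, not the toolkit’s decoder; it assumes logits of shape [time, characters] with the blank symbol as the last index):\n", + "\n", + "    import numpy as np\n", + "\n", + "    def ctc_greedy_decode(logits, alphabet):\n", + "        blank = len(alphabet)  # the blank symbol is assumed to be the last index\n", + "        best = np.argmax(logits, axis=1)  # most probable symbol per time step\n", + "        out, prev = [], blank\n", + "        for idx in best:\n", + "            # collapse repeated symbols, then drop CTC blanks\n", + "            if idx != prev and idx != blank:\n", + "                out.append(alphabet[idx])\n", + "            prev = idx\n", + "        return ''.join(out)" + ] + },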
\n", + " In order to use it, please:\n", + " - make sure that \"decoder_params\" section has 'infer_logits_to_pickle': True line and \n", + " that \"dataset_files\" field of \"infer_params\" section contains a target CSV file\n", + " - run inference (to dump logits to a pickle file)\n", + " - run beam search decoder (with specific ALPHA, BETA and BEAM_WIDTH hyperparameters)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "14cRbFgVb4uq", + "colab_type": "text" + }, + "source": [ + "Now let’s consider a relatively lightweight version of DeepSpeech2 based model for English speech recognition on LibriSpeech dataset. Download and preprocess LibriSpeech dataset:" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "uhocVs_9lHZj", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# First, make the LibriSpeech directory\n", + "!mkdir -p data\n", + "!mkdir data/librispeech" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "Z-Xx97VpidgX", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Download the dataset (this will take a lot of time)\n", + "!apt-get -y install sox libsox-dev\n", + "!pip install sox\n", + "\n", + "!python scripts/import_librivox.py data/librispeech " + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "phQta0K3q6KF", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Everything should be setup to train the model\n", + "!python run.py --config_file=example_configs/speech2text/ds2_small_1gpu.py --mode=train_eval" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "n-0xHigyrPGt", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Build your own language model\n", + "!export LS_DIR=/data/speech/LibriSpeech/\n", + "!python scripts/build_lm.py --n 5 $LS_DIR/librivox-train-clean-100.csv $LS_DIR/librivox-train-clean-360.csv librivox-train-other-500.csv" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tdMTNyIfc2c-", + "colab_type": "text" + }, + "source": [ + "### Speech Synthesis" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "1_fzYy7wBWDr", + "colab_type": "text" + }, + "source": [ + "The current Tacotron 2 implementation supports the LJSpeech dataset and the MAILABS dataset. \n", + " \n", + " For more details about the model including hyperparameters and tips, see Tacotron-2. \n", + " The current WaveNet implementation only supports LJSpeech.\n", + "\n", + " First, you need to download and extract the dataset into a directory of your choice. The extracted file should consist \n", + " of a metadata.csv file and a directory of wav files. metadata.csv lists all the wav filename and their corresponding transcripts \n", + " delimited by the ‘|’ character." 
+ ] + }, + { + "cell_type": "code", + "metadata": { + "id": "iPQGEQjy4eTL", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# To start training Tacotron\n", + "# If your GPU does not have enough memory, reduce the batch_size_per_gpu parameter.\n", + "!python run.py --config_file=example_configs/text2speech/tacotron_float.py --mode=train" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "w0XaVEsC5ZQl", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# To start training WaveNet\n", + "# If your GPU does not have enough memory, reduce the batch_size_per_gpu parameter.\n", + "!python run.py --config_file=example_configs/text2speech/wavenet_float.py --mode=train" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "squ5IxIAAt4E", + "colab_type": "text" + }, + "source": [ + "Once training is done (this can take a while on a single GPU), you can run inference. \n", + "To do so, first create a csv file named test.csv in the same location as train.csv, with lines \n", + "in the following format:\n", + "\n", + " UNUSED | UNUSED | This is an example sentence that I want to generate.\n", + "\n", + "You can put as many lines inside the csv as you want. \n", + "The model will produce one audio sample per line and save the audio samples inside your log_dir. \n", + "Lastly, run:" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "moQtHfvUA0i4", + "colab_type": "code", + "colab": {} + }, + "source": [ + "!python run.py --config_file=example_configs/text2speech/tacotron_float.py --mode=infer --infer_output_file=unused" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YJ_o3JvyBDuw", + "colab_type": "text" + }, + "source": [ + "For WaveNet, only interactive infer is supported. First, replace the contents of the first cell of the interactive inference notebook with the contents of tacotron_save_spec.py. This will save the spectrogram generated \n", + "by Tacotron as a numpy array in spec.npy. \n", + "\n", + "Next, replace the contents of the first cell with the contents of wavenet_naive_infer.py \n", + "and re-run the notebook. The generated audio will be saved to result/sample_step0_infer.wav every 1000 steps. \n", + "Note that this will take some time." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Fa7LKP9oCrEe", + "colab_type": "text" + }, + "source": [ + " This model extends Tacotron 2 with Global Style Tokens (see also the GST paper). \n", + " We differ from the published paper in that we use Tacotron 2 from OpenSeq2Seq as opposed to Tacotron." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "iw4nAcCaAV-1", + "colab_type": "text" + }, + "source": [ + "Training Instructions:\n", + "\n", + " 1) Extract the dataset to a directory\n", + " 2) Change data_root inside tacotron_gst_combine_csv.py to point to where the dataset was extracted.\n", + " 3) Run tacotron_gst_combine_csv.py inside the scripts directory.
\n", + " The script will merge all the metadata csv files into one large train csv file.\n", + " 4) Change line 15 of tacotron_gst.py such dataset_location points to where the dataset was extracted\n", + " 5) Train the model by running:" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "eBh1nZhgAfSC", + "colab_type": "code", + "colab": {} + }, + "source": [ + "!python run.py --config_file=example_configs/text2speech/tacotron_gst.py --mode=train" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "IrCqWLQMEVWk", + "colab_type": "text" + }, + "source": [ + "Inference is similar to Tacotron infer, except tacotron-gst additionally requires a style wav inside the infer csv. train.csv should contains lines with lines in the following format: " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nxw-3eE9C_pt", + "colab_type": "text" + }, + "source": [ + " path/to/style.wav | UNUSED | This is an example sentence that I want to generate. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "HulKjrTPdsgr", + "colab_type": "text" + }, + "source": [ + "### Machine Translation" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ZAowtm-8DT-X", + "colab_type": "text" + }, + "source": [ + "Next let’s build a small English-German translation model. This model should train in a reasonable time on a single GPU." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "fhufo_4I_7C-", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Download (this will take some time)\n", + "!scripts/get_en_de.sh" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "24JGykyrDyUx", + "colab_type": "text" + }, + "source": [ + " This script will download English-German training data from WMT, clean it, and tokenize using Google’s Sentencepiece library. \n", + " By default, the vocabulary size we use is 32,768 for both English and German.\n", + "\n", + " To train a small English-German model, change data_root inside en-de-nmt-small.py to the WMT data location and adjust \n", + " num_gpus to train on more than one GPU (if available). " + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "yYU0fQHSDkkg", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Start training\n", + "!python run.py --config_file=example_configs/text2text/en-de-nmt-small.py --mode=train_eval" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "yQmL6U7ZF5o0", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Once training is done (this can take a while on a single GPU), you can run inference:\n", + "!python run.py --config_file=example_configs/text2text/en-de-nmt-small.py --mode=infer --infer_output_file=raw.txt --num_gpus=1" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "vl-ht_jaGPXX", + "colab_type": "text" + }, + "source": [ + " Note that the model output is tokenized. In our case it will output BPE segments instead of words." 
+ ] + }, + { + "cell_type": "code", + "metadata": { + "id": "2rR1j5CNGJpS", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# The next step is to detokenize\n", + "!python tokenizer_wrapper.py --mode=detokenize --model_prefix=.../Data/wmt16_de_en/m_common --decoded_output=result.txt --text_input=raw.txt" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "ie_XAbGcGpbg", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# We measure BLEU scores using the SacreBLEU package (see “A Call for Clarity in Reporting BLEU Scores”). \n", + "# Run SacreBLEU on the detokenized data:\n", + "!cat result.txt | sacrebleu -t wmt14 -l en-de > result.txt.BLEU" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "BaxWPgVzHJ3X", + "colab_type": "text" + }, + "source": [ + "All models have been trained with a specific version of the tokenizer, so the first step is to copy m_common.model and m_common.vocab to the current folder.\n", + "\n", + "To translate your English text source.txt to German you should:" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "CuPFvtWbHBnF", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# 1. tokenize source.txt into source.tok:\n", + "!python tokenizer_wrapper.py --mode=encode --model_prefix=m_common --text_input=source.txt --tokenized_output=source.tok --vocab_size=32768" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "O3zgOONvIvEo", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# 2. modify the model config.py" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JqlGIoOGH7pW", + "colab_type": "text" + }, + "source": [ + " base_params = {\n", + " \"use_horovod\": False,\n", + " \"num_gpus\": 1,\n", + " ...\n", + " \"logdir\": \"checkpoint/model\",\n", + " }\n", + " ...\n", + " infer_params = {\n", + " \"batch_size_per_gpu\": 256,\n", + " \"data_layer\": ParallelTextDataLayer,\n", + " \"data_layer_params\": {\n", + " \"src_vocab_file\": \"m_common.vocab\",\n", + " \"tgt_vocab_file\": \"m_common.vocab\",\n", + " \"source_file\": \"source.tok\",\n", + " \"target_file\": \"source.tok\", # this line will be ignored\n", + " \"delimiter\": \" \",\n", + " \"shuffle\": False,\n", + " \"repeat\": False,\n", + " \"max_length\": 1024,\n", + " },\n", + " }\n", + " ..." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "oLJgOhgBHy_r", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# 3. translate source.tok into output.tok:\n", + "!python run.py --config_file=config.py --mode=infer --logdir=checkpoint/model --infer_output_file=output.tok --num_gpus=1" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "DNT3OOXVJHGn", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# 4. detokenize output.tok:\n", + "!python tokenizer_wrapper.py --mode=detokenize --model_prefix=m_common --text_input=output.tok --decoded_output=output.txt" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "WcIq4lBcLxen", + "colab_type": "text" + }, + "source": [ + "The Transformer model is based solely on attention mechanisms, without any recurrent or convolutional layers. \n", + "\n", + "A common source and target vocabulary is used to share the input/output embeddings. Tokenization of input and output is done with SentencePiece.
\n", + "\n", + "It is very good for neural machine translation tasks and base configuration achieves SacreBLEU of 26.4 on WMT 2014 English-to-German translation task ( checkpoint ) while big model gets around 27.5 ( checkpoint )." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "9JLAbAwzKZw4", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# This model is based on Google Transformer which was introduced in Attention is all you need by A. Vaswani, etal.\n", + "\n", + "# Here is an example command of how to train such model on a 4-GPU machine:\n", + "!mpirun --allow-run-as-root --mca orte_base_help_aggregate 0 -mca btl ^openib -np 4 -H localhost:4 -bind-to none --map-by slot -x LD_LIBRARY_PATH python run.py --config_file=example_configs/text2text/en-de/transformer-bp-fp32.py --mode=train" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "XV5X_8cvLMcU", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Then run inference like this\n", + "!python run.py --config_file=example_configs/text2text/en-de/transformer-bp-fp32.py --mode=infer --infer_output_file=raw_fp32.txt --num_gpus=1 --use_horovod=False" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "j7KBYWeaLSwX", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# De-tokenize output\n", + "!python tokenizer_wrapper.py --mode=detokenize --model_prefix=wmt16_de_en/m_common --decoded_output=fp32.txt --text_input=raw_fp32.txt" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "Vf8TnzBHLfuI", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# And compute BLEU score\n", + "!cat fp32.txt | sacrebleu -t wmt14 -l en-de > fp32.BLEU" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "wktkVXTHLp0F", + "colab_type": "text" + }, + "source": [ + " You should get around 26.4 after 300K iterations for the base model." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "TAjnHpb6efjf", + "colab_type": "text" + }, + "source": [ + "### Language Model" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_5EClL3ONpWt", + "colab_type": "text" + }, + "source": [ + "The WkiText-103 dataset, developed by Salesforce, contains over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. It has 267,340 unique tokens that appear at least 3 times in the dataset. Since it has full-length Wikipedia articles, the dataset is well-suited for tasks that can benefit of long term dependencies, such as language modeling.\n", + "\n", + "You can download the datasets here , extract them to the location of your choice. The dataset should contain of 3 files for train, validation, and test. 
Don’t forget to update the data_root parameter in your config file to point to the location of your dataset.\n", + "\n", + "Next let’s create a simple LSTM language model by defining a config file for it or using one of the config files defined in example_configs/lstmlm.\n", + "\n", + " 1) change data_root to point to the directory containing the raw dataset used to train your \n", + " language model, for example, your WikiText dataset downloaded above.\n", + " 2) change processed_data_folder to point to the location where you want to store the processed dataset.\n", + " If the dataset has been pre-processed before, the data layer can just load the data from this location.\n", + " 3) update other hyperparameters such as the number of layers, number of hidden units, cell type, \n", + " loss function, learning rate, optimizer, etc. to meet your needs.\n", + " 4) choose dtype to be \"mixed\" if you want to use mixed-precision training, \n", + " or tf.float32 to train only in FP32." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "SeJOek-0NqUh", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# For example, your config file is lstm-wkt103-mixed.py. To train without Horovod, \n", + "# update use_horovod to False in the config file and run:\n", + "!python run.py --config_file=example_configs/lstmlm/lstm-wkt103-mixed.py --mode=train_eval --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "bCIS511gPUUS", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# When training with Horovod, use the following command:\n", + "!mpiexec --allow-run-as-root -np NUM_GPUS python run.py --config_file=example_configs/lstmlm/lstm-wkt103-mixed.py --mode=train_eval --use_horovod=True --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "rpX2u5jxPgYo", + "colab_type": "text" + }, + "source": [ + "Some things to keep in mind:\n", + "\n", + " 1) Don’t forget to update num_gpus to the number of GPUs you want to use.\n", + " 2) If the vocabulary is large (the word-level vocabulary for WikiText-103 is 267,000+), you might want to use \n", + " BasicSampledSequenceLoss, which uses sampled softmax, instead of BasicSequenceLoss, which uses full softmax.\n", + " 3) If your GPUs still run out of memory, reduce the batch_size_per_gpu parameter." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "ZoxS4ZhRPnza", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Even if your training is done using sampled softmax, evaluation and text generation will always\n", + "# be done using full softmax. Running in the mode eval will evaluate your model on the evaluation set:\n", + "!python run.py --config_file=example_configs/lstmlm/lstm-wkt103-mixed.py --mode=eval --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "n8rYmWOnQVdP", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Running in the mode infer will generate text from the seed tokens, defined in the config file under the parameter name seed_tokens; seed tokens should be separated by spaces.
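\n", + "# Example of a hypothetical config entry: \"seed_tokens\": \"the history of\"\n",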
\n", + "# [TODO: make seed_tokens take a list of strings instead]:\n", + "!python run.py --config_file=example_configs/lstmlm/lstm-wkt103-mixed.py --mode=infer --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "i7uvx97FRIix", + "colab_type": "text" + }, + "source": [ + "### Sentiment Analysis" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YGZaQJsye3cw", + "colab_type": "text" + }, + "source": [ + "The model we use for sentiment analysis is the same one we use for the LSTM language model, except that the last output dimension is the number of sentiment classes instead of the vocabulary size. This sameness allows the sentiment analysis model to use the model pretrained on the language model for this task. You can choose to train the sentiment analysis task from scratch, or from the pretrained language model.\n", + "\n", + "In this model, each source sentence is run through the LSTM cells. The last hidden state at the end of the sequence is then passed into the output projection layer before softmax is performed to get the predicted sentiment. If the parameter use_cell_state is set to True, the last cell state at the end of the sequence is concatenated to the last hidden state.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "bxZnjnHYfGRf", + "colab_type": "text" + }, + "source": [ + " The IMDB Dataset contains 50,000 labeled samples of much longer length. The median length is 205 tokens. \n", + " Half of them are deemed positive and the other half negative. The train set, which contains of 25,000 samples, is \n", + " separated into a train set of 24, 000 samples and a validation set of 1,000 samples. The dalay layer used to process \n", + " this dataset is called SSTDataLayer. The dataset can be downloaded here ." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YqmxeEuefo0-", + "colab_type": "text" + }, + "source": [ + "Next let’s create a simple LSTM language model by defining a config file for it or using one of the config files defined in example_configs/transfer.\n", + "\n", + " - if you want to use a pretrained language model specify the location of the pretrained language model \n", + " using the parameter load_model.\n", + "\n", + " - change data_root to point to the directory containing the raw dataset used to train your language model, for example, \n", + " the IMDB dataset downloaded above.\n", + "\n", + " - change processed_data_folder to point to the location where you want to store the processed dataset. \n", + " If the dataset has been pre-procesed before, the data layer can just load the data from this location.\n", + "\n", + " - update other hyper parameters such as number of layers, number of hidden units, cell type, loss function, learning rate, \n", + " optimizer, etc. to meet your needs.\n", + "\n", + " - choose dtype to be \"mixed\" if you want to use mixed-precision training, or tf.float32 to train only in FP32." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "0Zb8nZvQQcOT", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# For example, your config file is lstm-wkt103-mixed.py. 
\n", + "# To train without Horovod, update use_horovod to False in the config file and run:\n", + "!python run.py --config_file=example_configs/transfer/imdb-wkt2.py --mode=train_eval --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "JG1cD59zhAKA", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# When training with Horovod, use the following command:\n", + "!mpiexec --allow-run-as-root -np python run.py --config_file=example_configs/transfer/imdb-wkt2.py --mode=train_eval --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "xHFoJcfzhTTW", + "colab_type": "text" + }, + "source": [ + "Some things to keep in mind:\n", + "\n", + " - Don’t forget to update num_gpus to the number of GPUs you want to use.\n", + " - If your GPUs run out of memory, reduce the batch_size_per_gpu parameter." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "_y1HbPtHhGL0", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Running in the mode eval will evaluate your model on the evaluation set:\n", + "!python run.py --config_file=example_configs/transfer/imdb-wkt2.py --mode=eval --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "SY-0tahkh2cG", + "colab_type": "code", + "colab": {} + }, + "source": [ + "# Running in the mode infer will evaluate your model on the test set:\n", + "!python run.py --config_file=example_configs/transfer/imdb-wkt2.py --mode=test --enable_logs" + ], + "execution_count": 0, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "xOiVFxwjiLQQ", + "colab_type": "text" + }, + "source": [ + " The performance of the model is reported on accuracy and F1 scores." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ryPKXydkiS8A", + "colab_type": "text" + }, + "source": [ + "### Image Classification" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "M7Azzm41iawC", + "colab_type": "text" + }, + "source": [ + "Our ResNet-50 v2 model is a mixed precison replica of TensorFlow ResNet-50 , which corresponds to the model defined in the paper Identity Mappings in Deep Residual Networks by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, Jul 2016.\n", + "\n", + "This model was trained with different optimizers to state-of-the art accuracy for ResNet-50 model. Our best model reached top-1=77.63%, top-5=93.73 accuracy for Imagenet classification task." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "lkFiiVscifpA", + "colab_type": "text" + }, + "source": [ + " You will need to download the ImageNet dataset and convert it to TFRecord format as described in\n", + " `TensorFlow ResNet