This repository was archived by the owner on Jul 7, 2023. It is now read-only.

Commit 6c4ef81 (parent 792f314)

docs: formatting fix

1 file changed: 3 additions, 3 deletions

docs/index.md
@@ -69,7 +69,7 @@ For language modeling, we have these data-sets in T2T:
 * LM1B (a billion-word corpus): `--problems=languagemodel_lm1b32k` for
   subword-level modeling and `--problems=languagemodel_lm1b_characters`
   for character-level modeling.
-
+
 We suggest to start with `--model=transformer` on this task and use
 `--hparams_set=transformer_small` for PTB and
 `--hparams_set=transformer_base` for LM1B.
@@ -95,7 +95,7 @@ For speech-to-text, we have these data-sets in T2T:
 For summarizing longer text into shorter one we have these data-sets:
 * CNN/DailyMail articles summarized into a few sentences:
   `--problems=summarize_cnn_dailymail32k`
-
+
 We suggest to use `--model=transformer` and
 `--hparams_set=transformer_prepend` for this task.
 This yields good ROUGE scores.
@@ -118,5 +118,5 @@ For all translation problems, we suggest to try the Transformer model:
 this should reach a BLEU score of about 28 on the English-German data-set,
 which is close to state-of-the art. If training on a single GPU, try the
 `--hparams_set=transformer_base_single_gpu` setting. For very good results
-or larger data-sets (e.g., for English-French)m, try the big model
+or larger data-sets (e.g., for English-French), try the big model
 with `--hparams_set=transformer_big`.
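The `--problems`, `--model`, and `--hparams_set` flags touched by this diff are combined into full `t2t-datagen` / `t2t-trainer` invocations. As a hedged sketch for the LM1B task mentioned above (the directory paths and the step count are illustrative, not from the commit, and flag spellings follow T2T releases of this era, where `t2t-datagen` took a singular `--problem`):

```shell
# Generate the LM1B data referenced in the diff above.
# $HOME/t2t_data and /tmp/t2t_tmp are placeholder paths.
t2t-datagen \
  --data_dir=$HOME/t2t_data \
  --tmp_dir=/tmp/t2t_tmp \
  --problem=languagemodel_lm1b32k

# Train with the model/hparams pairing the docs suggest for LM1B.
# --train_steps=250000 is an illustrative value.
t2t-trainer \
  --data_dir=$HOME/t2t_data \
  --problems=languagemodel_lm1b32k \
  --model=transformer \
  --hparams_set=transformer_base \
  --output_dir=$HOME/t2t_train/lm1b \
  --train_steps=250000
```

For the PTB task the docs pair the same `--model=transformer` with `--hparams_set=transformer_small` instead.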
