This repository was archived by the owner on Jul 7, 2023. It is now read-only.
abstract away the input and output data types so that **models** may deal with
@@ -211,7 +227,7 @@ inference. Users can easily switch between problems, models, and hyperparameter
 sets by using the `--model`, `--problems`, and `--hparams_set` flags. Specific
 hyperparameters can be overridden with the `--hparams` flag. `--schedule` and
 related flags control local and distributed training/evaluation
-([distributed training documentation](https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/docs/distributed_training.md)).
+([distributed training documentation](https://github.com/tensorflow/tensor2tensor/tree/master/docs/distributed_training.md)).
 
 ---
 
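To illustrate the flags described above, a typical `t2t-trainer` invocation might look like the following sketch. The specific model, problem, hyperparameter-set names, and paths are illustrative examples, not taken from this diff:

```shell
# Hedged sketch of a t2t-trainer run; the data/output paths and the
# transformer / translate_ende_wmt32k / transformer_base values are
# illustrative placeholders.
t2t-trainer \
  --data_dir="$HOME/t2t_data" \
  --output_dir="$HOME/t2t_train" \
  --model=transformer \
  --problems=translate_ende_wmt32k \
  --hparams_set=transformer_base \
  --hparams='batch_size=1024' \
  --schedule=train_and_evaluate
```

Here `--hparams` overrides a single hyperparameter from the named set, and `--schedule` selects the training/evaluation loop, as the surrounding text describes.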
@@ -222,7 +238,7 @@ enables easily adding new ones and easily swapping amongst them by command-line
 flag. You can add your own components without editing the T2T codebase by
 specifying the `--t2t_usr_dir` flag in `t2t-trainer`.
 
-You can currently do so for models, hyperparameter sets, and modalities. Please
+You can do so for models, hyperparameter sets, modalities, and problems. Please
 do submit a pull request if your component might be useful to others.
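A minimal sketch of how the `--t2t_usr_dir` mechanism is typically used (the directory layout and component names here are hypothetical, assumed for illustration): the flag points at a Python package whose `__init__.py` imports the modules that register your custom components, and the registered name is then available to the other flags:

```shell
# Hypothetical user directory for custom components; t2t-trainer imports
# my_usr_dir/__init__.py, which in turn imports the registering modules:
#   my_usr_dir/
#     __init__.py    # e.g. contains "from . import my_model"
#     my_model.py    # defines and registers a custom model
t2t-trainer \
  --t2t_usr_dir=./my_usr_dir \
  --model=my_model \
  --problems=translate_ende_wmt32k \
  --hparams_set=transformer_base
```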