
Commit 2872bd0

cbockman authored and rsepassi committed
minor spelling fix (#663)
* minor spelling fix
* Update common_layers.py
* Update common_layers.py
* Update transformer.py
1 parent: 1687993 · commit: 2872bd0

2 files changed: +4 −4 lines changed


tensor2tensor/layers/common_layers.py

Lines changed: 3 additions & 3 deletions
@@ -630,7 +630,7 @@ def layer_preprocess(layer_input, hparams):
 
   See layer_prepostprocess() for details.
 
-  A hyperparemeters object is passed for convenience. The hyperparameters
+  A hyperparameters object is passed for convenience. The hyperparameters
   that may be used are:
 
     layer_preprocess_sequence
@@ -666,7 +666,7 @@ def layer_postprocess(layer_input, layer_output, hparams):
 
   See layer_prepostprocess() for details.
 
-  A hyperparemeters object is passed for convenience. The hyperparameters
+  A hyperparameters object is passed for convenience. The hyperparameters
   that may be used are:
 
     layer_postprocess_sequence
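Both docstrings describe a hyperparameters object driving a preprocess/postprocess sequence. As a rough illustration of that mechanism (a minimal sketch, not the tensor2tensor implementation; the function name, the op letters "a"/"n"/"d", and the simplified layer norm are assumptions), a sequence-driven pre/postprocess step could look like this:

import tensorflow as tf

def layer_prepostprocess_sketch(previous_value, x, sequence, dropout_rate):
  """Apply ops named in `sequence` in order: 'a' adds the residual,
  'n' applies a simplified layer norm, 'd' applies dropout (sketch only)."""
  for op in sequence:
    if op == "a":
      x += previous_value  # residual connection
    elif op == "n":
      mean, variance = tf.nn.moments(x, axes=[-1], keep_dims=True)
      x = (x - mean) * tf.rsqrt(variance + 1e-6)  # normalize last axis
    elif op == "d":
      x = tf.nn.dropout(x, keep_prob=1.0 - dropout_rate)
  return x

Under this reading, layer_preprocess would run the sequence with no previous value (so "a" is unavailable), while layer_postprocess passes the layer input as previous_value; hparams supplies the sequence strings and dropout rate named in the docstrings.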
@@ -1289,7 +1289,7 @@ def relu_density_logit(x, reduce_dims):
   Useful for histograms.
 
   Args:
-    x: a Tensor, typilcally the output of tf.relu
+    x: a Tensor, typically the output of tf.relu
     reduce_dims: a list of dimensions
 
   Returns:
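For context, relu_density_logit takes a ReLU output and a list of dimensions and returns a histogram-friendly quantity. One plausible reading (a hedged sketch; the helper name and the epsilon smoothing are assumptions, not the repo's code) is the logit of the fraction of positive activations:

import tensorflow as tf

def relu_density_logit_sketch(x, reduce_dims, epsilon=1e-10):
  """Logit of the fraction of positive entries of x, reduced over reduce_dims."""
  frac = tf.reduce_mean(tf.cast(x > 0.0, tf.float32), axis=reduce_dims)
  return tf.log(frac + epsilon) - tf.log(1.0 - frac + epsilon)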

tensor2tensor/models/transformer.py

Lines changed: 1 addition & 1 deletion
@@ -704,7 +704,7 @@ def transformer_encoder(encoder_input,
             common_layers.layer_preprocess(x, hparams), hparams, pad_remover,
             conv_padding="SAME", nonpadding_mask=nonpadding)
         x = common_layers.layer_postprocess(x, y, hparams)
-  # if normalization is done in layer_preprocess, then it shuold also be done
+  # if normalization is done in layer_preprocess, then it should also be done
   # on the output, since the output can grow very large, being the sum of
   # a whole stack of unnormalized layer outputs.
   return common_layers.layer_preprocess(x, hparams)
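The corrected comment captures a real property of pre-norm residual stacks: each layer adds an unnormalized output onto the running sum, so the final return passes through layer_preprocess once more to normalize it. A minimal sketch of that pattern, using the two functions touched by this commit (the wrapper name and the sublayer_fn argument are illustrative, not repo code):

import tensorflow as tf
from tensor2tensor.layers import common_layers

def encoder_stack_sketch(x, num_layers, sublayer_fn, hparams):
  """Illustrative pre-norm residual stack built on layer_pre/postprocess."""
  for layer in range(num_layers):
    with tf.variable_scope("layer_%d" % layer):
      # normalize the sublayer input ("pre-norm")
      y = sublayer_fn(common_layers.layer_preprocess(x, hparams))
      # residual add (plus dropout) on the sublayer output
      x = common_layers.layer_postprocess(x, y, hparams)
  # x is now a sum of unnormalized layer outputs; normalize it once more,
  # as the comment fixed above explains.
  return common_layers.layer_preprocess(x, hparams)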
