
Commit d3f0bd3

Merge pull request #259 from pengchzn/master (Transformer MHA Chinese Translation)
Refine Chinese translation
2 parents: 391fa39 + e03dbc1

10 files changed: +149 additions, -149 deletions

translate_cache/__init__.zh.json

Lines changed: 2 additions & 2 deletions
@@ -32,15 +32,15 @@
 "<p><span translate=no>_^_0_^_</span></p>\n": "<p><span translate=no>_^_0_^_</span></p>\n",
 "<p>Solving games with incomplete information such as poker with CFR.</p>\n": "<p>\u4f7f\u7528 CFR \u89e3\u51b3\u8bf8\u5982\u6251\u514b\u7b49\u4e0d\u5b8c\u5168\u4fe1\u606f\u6e38\u620f</p>\n",
 "<p>This is a collection of simple PyTorch implementations of neural networks and related algorithms. <a href=\"https://github.com/labmlai/annotated_deep_learning_paper_implementations\">These implementations</a> are documented with explanations, and the <a href=\"index.html\">website</a> renders these as side-by-side formatted notes. We believe these would help you understand these algorithms better.</p>\n": "<p>\u8fd9\u662f\u4e00\u4e2a\u7528 PyTorch \u5b9e\u73b0\u5404\u79cd\u795e\u7ecf\u7f51\u7edc\u548c\u76f8\u5173\u7b97\u6cd5\u7684\u96c6\u5408\u3002\u6bcf\u4e2a\u7b97\u6cd5\u7684<a href=\"https://github.com/labmlai/annotated_deep_learning_paper_implementations\">\u4ee3\u7801\u5b9e\u73b0</a>\u90fd\u6709\u8be6\u7ec6\u7684\u89e3\u91ca\u8bf4\u660e\uff0c\u4e14\u5728<a href=\"index.html\">\u7f51\u7ad9</a>\u4e0a\u4e0e\u4ee3\u7801\u9010\u884c\u5bf9\u5e94\u3002\u6211\u4eec\u76f8\u4fe1\uff0c\u8fd9\u4e9b\u5185\u5bb9\u5c06\u5e2e\u52a9\u60a8\u66f4\u597d\u5730\u7406\u89e3\u8fd9\u4e9b\u7b97\u6cd5\u3002</p>\n",
-"<p>We are actively maintaining this repo and adding new implementations. <a href=\"https://twitter.com/labmlai\"><span translate=no>_^_0_^_</span></a> for updates.</p>\n": "<p>\u6211\u4eec\u6b63\u5728\u79ef\u6781\u7ef4\u62a4\u8fd9\u4e2a\u4ed3\u5e93\u5e76\u6dfb\u52a0\u65b0\u7684\u4ee3\u7801\u5b9e\u73b0<a href=\"https://twitter.com/labmlai\"><span translate=no>_^_0_^_</span></a>\u4ee5\u83b7\u53d6\u66f4\u65b0\u3002</p>\n",
+"<p>We are actively maintaining this repo and adding new implementations. <a href=\"https://twitter.com/labmlai\"><span translate=no>_^_0_^_</span></a> for updates.</p>\n": "<p>\u6211\u4eec\u6b63\u5728\u79ef\u6781\u7ef4\u62a4\u8fd9\u4e2a\u4ed3\u5e93\u5e76\u6dfb\u52a0\u65b0\u7684\u4ee3\u7801\u5b9e\u73b0\u3002<a href=\"https://twitter.com/labmlai\"><span translate=no>_^_0_^_</span></a>\u4ee5\u83b7\u53d6\u66f4\u65b0\u3002</p>\n",
 "<span translate=no>_^_0_^_</span>": "<span translate=no>_^_0_^_</span>",
 "<ul><li><a href=\"activations/fta/index.html\">Fuzzy Tiling Activations</a></li></ul>\n": "<ul><li><a href=\"activations/fta/index.html\">\u6a21\u7cca\u5e73\u94fa\u6fc0\u6d3b\u51fd\u6570</a></li></ul>\n",
 "<ul><li><a href=\"adaptive_computation/ponder_net/index.html\">PonderNet</a></li></ul>\n": "<ul><li><a href=\"adaptive_computation/ponder_net/index.html\">PonderNet</a></li></ul>\n",
 "<ul><li><a href=\"cfr/kuhn/index.html\">Kuhn Poker</a></li></ul>\n": "<ul><li><a href=\"cfr/kuhn/index.html\">\u5e93\u6069\u6251\u514b</a></li></ul>\n",
 "<ul><li><a href=\"diffusion/ddpm/index.html\">Denoising Diffusion Probabilistic Models (DDPM)</a> </li>\n<li><a href=\"diffusion/stable_diffusion/sampler/ddim.html\">Denoising Diffusion Implicit Models (DDIM)</a> </li>\n<li><a href=\"diffusion/stable_diffusion/latent_diffusion.html\">Latent Diffusion Models</a> </li>\n<li><a href=\"diffusion/stable_diffusion/index.html\">Stable Diffusion</a></li></ul>\n": "<ul><li><a href=\"diffusion/ddpm/index.html\">\u53bb\u566a\u6269\u6563\u6982\u7387\u6a21\u578b (DDPM)</a></li>\n<li><a href=\"diffusion/stable_diffusion/sampler/ddim.html\">\u53bb\u566a\u6269\u6563\u9690\u5f0f\u6a21\u578b (DDIM)</a></li>\n<li><a href=\"diffusion/stable_diffusion/latent_diffusion.html\">\u6f5c\u5728\u6269\u6563\u6a21\u578b</a></li>\n<li><a href=\"diffusion/stable_diffusion/index.html\">Stable Diffusion</a></li></ul>\n",
 "<ul><li><a href=\"gan/original/index.html\">Original GAN</a> </li>\n<li><a href=\"gan/dcgan/index.html\">GAN with deep convolutional network</a> </li>\n<li><a href=\"gan/cycle_gan/index.html\">Cycle GAN</a> </li>\n<li><a href=\"gan/wasserstein/index.html\">Wasserstein GAN</a> </li>\n<li><a href=\"gan/wasserstein/gradient_penalty/index.html\">Wasserstein GAN with Gradient Penalty</a> </li>\n<li><a href=\"gan/stylegan/index.html\">StyleGAN 2</a></li></ul>\n": "<ul><li><a href=\"gan/original/index.html\">\u539f\u59cb GAN</a></li>\n<li><a href=\"gan/dcgan/index.html\">\u4f7f\u7528\u6df1\u5ea6\u5377\u79ef\u7f51\u7edc\u7684 GAN</a></li>\n<li><a href=\"gan/cycle_gan/index.html\">\u5faa\u73af GAN</a></li>\n<li><a href=\"gan/wasserstein/index.html\">Wasserstein GAN</a></li>\n<li><a href=\"gan/wasserstein/gradient_penalty/index.html\">\u5177\u6709\u68af\u5ea6\u60e9\u7f5a\u7684 Wasserstein GAN</a></li>\n<li><a href=\"gan/stylegan/index.html\">StyleGan 2</a></li></ul>\n",
 "<ul><li><a href=\"graphs/gat/index.html\">Graph Attention Networks (GAT)</a> </li>\n<li><a href=\"graphs/gatv2/index.html\">Graph Attention Networks v2 (GATv2)</a></li></ul>\n": "<ul><li><a href=\"graphs/gat/index.html\">\u56fe\u6ce8\u610f\u529b\u7f51\u7edc (GAT)</a></li>\n<li><a href=\"graphs/gatv2/index.html\">\u56fe\u6ce8\u610f\u529b\u7f51\u7edc v2 (GATv2)</a></li></ul>\n",
-"<ul><li><a href=\"neox/samples/generate.html\">Generate on a 48GB GPU</a> </li>\n<li><a href=\"neox/samples/finetune.html\">Finetune on two 48GB GPUs</a> </li>\n<li><a href=\"neox/utils/llm_int8.html\">LLM.int8()</a></li></ul>\n": "<li><a href=\"neox/samples/generate.html\">\u5728\u4e00\u5757 48GB GPU \u4e0a\u751f\u6210</a></li> <ul>\n<li><a href=\"neox/samples/finetune.html\">\u5728\u4e24\u5757 48GB GPU \u4e0a\u5fae\u8c03</a></li>\n<li><a href=\"neox/utils/llm_int8.html\">llm.int8 ()</a></li></ul>\n",
+"<ul><li><a href=\"neox/samples/generate.html\">Generate on a 48GB GPU</a> </li>\n<li><a href=\"neox/samples/finetune.html\">Finetune on two 48GB GPUs</a> </li>\n<li><a href=\"neox/utils/llm_int8.html\">LLM.int8()</a></li></ul>\n": "<ul><li><a href=\"neox/samples/generate.html\">\u5728\u4e00\u5757 48GB GPU \u4e0a\u751f\u6210</a></li> \n<li><a href=\"neox/samples/finetune.html\">\u5728\u4e24\u5757 48GB GPU \u4e0a\u5fae\u8c03</a></li>\n<li><a href=\"neox/utils/llm_int8.html\">llm.int8 ()</a></li></ul>\n",
 "<ul><li><a href=\"normalization/batch_norm/index.html\">Batch Normalization</a> </li>\n<li><a href=\"normalization/layer_norm/index.html\">Layer Normalization</a> </li>\n<li><a href=\"normalization/instance_norm/index.html\">Instance Normalization</a> </li>\n<li><a href=\"normalization/group_norm/index.html\">Group Normalization</a> </li>\n<li><a href=\"normalization/weight_standardization/index.html\">Weight Standardization</a> </li>\n<li><a href=\"normalization/batch_channel_norm/index.html\">Batch-Channel Normalization</a> </li>\n<li><a href=\"normalization/deep_norm/index.html\">DeepNorm</a></li></ul>\n": "<ul><li><a href=\"normalization/batch_norm/index.html\">\u6279\u91cf\u5f52\u4e00\u5316</a></li>\n<li><a href=\"normalization/layer_norm/index.html\">\u5c42\u5f52\u4e00\u5316</a></li>\n<li><a href=\"normalization/instance_norm/index.html\">\u5b9e\u4f8b\u5f52\u4e00\u5316</a></li>\n<li><a href=\"normalization/group_norm/index.html\">\u7ec4\u5f52\u4e00\u5316</a></li>\n<li><a href=\"normalization/weight_standardization/index.html\">\u6743\u91cd\u6807\u51c6\u5316</a></li>\n<li><a href=\"normalization/batch_channel_norm/index.html\">\u6279-\u901a\u9053\u5f52\u4e00\u5316</a></li>\n<li><a href=\"normalization/deep_norm/index.html\">DeepNorm</a></li></ul>\n",
 "<ul><li><a href=\"optimizers/adam.html\">Adam</a> </li>\n<li><a href=\"optimizers/amsgrad.html\">AMSGrad</a> </li>\n<li><a href=\"optimizers/adam_warmup.html\">Adam Optimizer with warmup</a> </li>\n<li><a href=\"optimizers/noam.html\">Noam Optimizer</a> </li>\n<li><a href=\"optimizers/radam.html\">Rectified Adam Optimizer</a> </li>\n<li><a href=\"optimizers/ada_belief.html\">AdaBelief Optimizer</a> </li>\n<li><a href=\"optimizers/sophia.html\">Sophia-G Optimizer</a></li></ul>\n": "<ul><li><a href=\"optimizers/adam.html\">Adam \u4f18\u5316\u5668</a></li>\n<li><a href=\"optimizers/amsgrad.html\">AMSGrad \u4f18\u5316\u5668</a></li>\n<li><a href=\"optimizers/adam_warmup.html\">\u5177\u6709\u9884\u70ed\u7684 Adam \u4f18\u5316\u5668</a></li>\n<li><a href=\"optimizers/noam.html\">Noam \u4f18\u5316\u5668</a></li>\n<li><a href=\"optimizers/radam.html\">RAdam \u4f18\u5316\u5668</a></li>\n<li><a href=\"optimizers/ada_belief.html\">AdaBelief \u4f18\u5316\u5668</a></li>\n<li><a href=\"optimizers/sophia.html\">Sophia-G Optimizer</a></li></ul>\n",
 "<ul><li><a href=\"rl/ppo/index.html\">Proximal Policy Optimization</a> with <a href=\"rl/ppo/gae.html\">Generalized Advantage Estimation</a> </li>\n<li><a href=\"rl/dqn/index.html\">Deep Q Networks</a> with with <a href=\"rl/dqn/model.html\">Dueling Network</a>, <a href=\"rl/dqn/replay_buffer.html\">Prioritized Replay</a> and Double Q Network.</li></ul>\n": "<ul><li><a href=\"rl/ppo/index.html\">\u8fd1\u7aef\u7b56\u7565\u4f18\u5316</a>\u4e0e<a href=\"rl/ppo/gae.html\">\u5e7f\u4e49\u4f18\u52bf\u4f30\u8ba1</a></li>\n<li>\u5177\u6709<a href=\"rl/dqn/model.html\">\u5bf9\u6297\u7f51\u7edc</a>\u3001<a href=\"rl/dqn/replay_buffer.html\">\u4f18\u5148\u56de\u653e </a>\u548c\u53cc Q \u7f51\u7edc\u7684<a href=\"rl/dqn/index.html\">\u6df1\u5ea6 Q \u7f51\u7edc</a></li></ul>\n",

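The two changes in this hunk illustrate what a translation-cache refinement must preserve: the `<span translate=no>_^_N_^_</span>` placeholders and the HTML tag structure of the source string (the old neox entry had its `<li>` before the `<ul>`). Below is a minimal sketch, not part of the repository's tooling, of how such invariants could be checked; the file layout (a JSON mapping of source HTML snippets to translated HTML snippets) is assumed from what this diff shows.

```python
# Hypothetical sanity check for a translate_cache JSON file (assumption: the
# file maps source HTML strings to translated HTML strings, as in this diff).
import json
import re
from html.parser import HTMLParser

# Placeholders like _^_0_^_ are protected from translation and must survive.
PLACEHOLDER = re.compile(r"_\^_\d+_\^_")


class TagSequence(HTMLParser):
    """Records the order of start/end tags so two HTML snippets can be compared."""

    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(("start", tag))

    def handle_endtag(self, tag):
        self.tags.append(("end", tag))


def tag_sequence(html):
    parser = TagSequence()
    parser.feed(html)
    return parser.tags


def iter_pairs(obj):
    # The diff only shows "source html": "translated html" string pairs, so
    # assume a (possibly nested) dict of strings and skip anything else.
    if isinstance(obj, dict):
        for key, value in obj.items():
            if isinstance(key, str) and isinstance(value, str):
                yield key, value
            else:
                yield from iter_pairs(value)


def check_cache(path):
    with open(path, encoding="utf-8") as f:
        cache = json.load(f)
    problems = []
    for source, translation in iter_pairs(cache):
        # The no-translate placeholders must appear unchanged in the translation.
        if sorted(PLACEHOLDER.findall(source)) != sorted(PLACEHOLDER.findall(translation)):
            problems.append(f"placeholder mismatch: {source[:60]!r}")
        # The translation should reuse the source's tag structure verbatim,
        # e.g. <ul><li>...</li></ul> stays <ul><li>...</li></ul>.
        if tag_sequence(source) != tag_sequence(translation):
            problems.append(f"tag structure changed: {source[:60]!r}")
    return problems


if __name__ == "__main__":
    for problem in check_cache("translate_cache/__init__.zh.json"):
        print(problem)
```

Run against the pre-merge file, a check like this would flag the old neox entry, whose translated list started with `<li>` instead of `<ul><li>`; the other change here (adding a Chinese full stop before the Twitter link) leaves both invariants intact.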