Commit 97708a4

Fix more typos (#190)
1 parent 12712d0 commit 97708a4

5 files changed: 39 additions, 15 deletions


.github/actions/spelling/allow.txt

Lines changed: 26 additions & 2 deletions
@@ -16,6 +16,7 @@ CXI
 Ceph
 Containerfile
 DNS
+Dockerfiles
 EDF
 EDFs
 EDFs
@@ -57,11 +58,9 @@ MFA
 MLP
 MNDO
 MPICH
-MPS
 MeteoSwiss
 NAMD
 NICs
-NVIDIA
 NVMe
 OTP
 OTPs
@@ -94,6 +93,8 @@ XDG
 aarch
 aarch64
 acl
+autodetection
+baremetal
 biomolecular
 bristen
 bytecode
@@ -104,31 +105,53 @@ concretizer
 containerised
 cpe
 cscs
+cuda
 customised
+dcomex
 diagonalisation
+dockerhub
+dotenv
 eiger
+epyc
 filesystems
+fontawesome
+gitlab
+gpu
 groundstate
 ijulia
 inodes
 iopsstor
+jfrog
 lexer
 libfabric
 miniconda
 mpi
+mps
 multitenancy
+netrc
 nsight
+numa
+nvidia
+octicons
+oom
 podman
+preinstalled
 prgenv
+prioritisation
 prioritised
 proactively
+pyfirecrest
 pytorch
 quickstart
+rocm
+runtime
+runtimes
 santis
 sbatch
 screenshot
 slurm
 smartphone
+sphericart
 squashfs
 srun
 ssh
@@ -140,6 +163,7 @@ subtables
 supercomputing
 superlu
 sysadmin
+tarball
 tcl
 tcsh
 testuser

.github/workflows/spelling.yaml

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@ jobs:
 only_check_changed_files: 1
 post_comment: 1
 use_magic_file: 1
-warnings: bad-regex,binary-file,deprecated-feature,large-file,limited-references,no-newline-at-eof,noisy-file,non-alpha-in-dictionary,token-is-substring,unexpected-line-ending,whitespace-in-dictionary,minified-file,unsupported-configuration,no-files-to-check
+warnings: bad-regex,binary-file,deprecated-feature,large-file,limited-references,no-newline-at-eof,noisy-file,token-is-substring,unexpected-line-ending,whitespace-in-dictionary,minified-file,unsupported-configuration,no-files-to-check
 use_sarif: ${{ (!github.event.pull_request || (github.event.pull_request.head.repo.full_name == github.repository)) && 1 }}
 extra_dictionary_limit: 20
 extra_dictionaries:

docs/running/slurm.md

Lines changed: 5 additions & 5 deletions
@@ -19,7 +19,7 @@ Refer to the [Quick Start User Guide](https://slurm.schedmd.com/quickstart.html)
 
 - :fontawesome-solid-mountain-sun: __Node sharing__
 
-Guides on how to effectively use all resouces on nodes by running more than one job per node.
+Guides on how to effectively use all resources on nodes by running more than one job per node.
 
 [:octicons-arrow-right-24: Node sharing][ref-slurm-sharing]
 
@@ -68,7 +68,7 @@ $ sbatch --account=g123 ./job.sh
 !!! note
 The flags `--account` and `-Cmc` that were required on the old [Eiger][ref-cluster-eiger] cluster are no longer required.
 
-## Prioritization and scheduling
+## Prioritisation and scheduling
 
 Job priorities are determined based on each project's resource usage relative to its quarterly allocation, as well as in comparison to other projects.
 An aging factor is also applied to each job in the queue to ensure fairness over time.
@@ -219,7 +219,7 @@ The build generates the following executables:
 
 1. Test GPU affinity: note how all 4 ranks see the same 4 GPUs.
 
-2. Test GPU affinity: note how the `--gpus-per-task=1` parameter assings a unique GPU to each rank.
+2. Test GPU affinity: note how the `--gpus-per-task=1` parameter assigns a unique GPU to each rank.
 
 !!! info "Quick affinity checks"
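The `--gpus-per-task` behaviour described in this hunk can be checked with a pair of one-line job steps. This sketch assumes a Slurm cluster and an affinity test binary named `affinity.gpu`; the actual binary name is not given in this hunk, so substitute whatever the build produces:

```shell
# All ranks share the node's full GPU set: each of the 4 ranks reports the same 4 GPUs.
srun -N1 -n4 ./affinity.gpu

# With one GPU per task, each rank is assigned a distinct GPU.
srun -N1 -n4 --gpus-per-task=1 ./affinity.gpu
```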

@@ -491,7 +491,7 @@ rank 7 @ nid002199: thread 0 -> cores [112:127]
 In the above examples all threads on each -- we are effectively allowing the OS to schedule the threads on the available set of cores as it sees fit.
 This often gives the best performance, however sometimes it is beneficial to bind threads to explicit cores.
 
-The OpenMP threading runtime provides additional options for controlling the pinning of threads to the cores assinged to each MPI rank.
+The OpenMP threading runtime provides additional options for controlling the pinning of threads to the cores assigned to each MPI rank.
 
 Use the `--omp` flag with `affinity.mpi` to get more detailed information about OpenMP thread affinity.
 For example, four MPI ranks on one node with four cores and four OpenMP threads:
@@ -580,7 +580,7 @@ The approach is to:
 1. first allocate all the resources on each node to the job;
 2. then subdivide those resources at each invocation of srun.
 
-If Slurm believes that a request for resources (cores, gpus, memory) overlaps with what another step has already allocated, it will defer the execution until the resources are relinquished.
+If Slurm believes that a request for resources (cores, GPUs, memory) overlaps with what another step has already allocated, it will defer the execution until the resources are relinquished.
 This must be avoided.
 
 First ensure that *all* resources are allocated to the whole job with the following preamble:
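The preamble itself lies outside this hunk. A minimal sketch of the two-step pattern the hunk describes, with illustrative (not authoritative) flag values; `--overlap` lets concurrent steps draw from the job's allocation instead of blocking each other:

```shell
#!/bin/bash
#SBATCH --nodes=2          # illustrative: allocate whole nodes to the job
#SBATCH --exclusive        # claim all resources on each node up front

# Subdivide the allocation per step; without --overlap, Slurm may defer a
# step whose resource request overlaps one already granted to another step.
srun --overlap -n 2 --cpus-per-task=32 ./app_a &
srun --overlap -n 2 --cpus-per-task=32 ./app_b &
wait
```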

docs/services/cicd.md

Lines changed: 6 additions & 6 deletions
@@ -718,7 +718,7 @@ Private projects will always get as notification a link to the CSCS pipeline ove
 To view the CSCS pipeline overview for a public project and restart / cancel jobs, follow these steps:
 
 * Copy the web link of the CSCS CI status of your project and remove the from the link the `type=gitlab`.
-* Alternativily, assemble the link yourself, it has the form `https://cicd-ext-mw.cscs.ch/ci/pipeline/results/<repository_id>/<project_id>/<pipeline_nb>` (the IDs can be found on the Gitlab page of your mirror project).
+* Alternatively, assemble the link yourself, it has the form `https://cicd-ext-mw.cscs.ch/ci/pipeline/results/<repository_id>/<project_id>/<pipeline_nb>` (the IDs can be found on the Gitlab page of your mirror project).
 * Click on `Login to restart jobs` at the bottom right and login with your CSCS credentials
 * Click `Cancel running` or `Restart jobs` or cancel individual jobs (button next to job's name)
 * Everybody that has at least *Manager* access can restart / cancel jobs (access level is managed on the CI setup page in the Admin section)
@@ -783,7 +783,7 @@ This is the clone URL of the registered project, i.e. this is not the clone URL
 ### `ARCH`
 value: `x86_64` or `aarch64`
 
-This is the architecture of the runner. It is either an ARM64 machine, i.e. `aarch64`, or a traditinal `x86_64` machine.
+This is the architecture of the runner. It is either an ARM64 machine, i.e. `aarch64`, or a traditional `x86_64` machine.
 
 
 ## Runners reference
@@ -819,7 +819,7 @@ Accepted variables are documented at [Slurm's srun man page](https://slurm.sched
 
 !!! Warning "SLURM_TIMELIMIT"
 Special attention should go the variable `SLURM_TIMELIMIT`, which sets the maximum time of your Slurm job.
-You will be billed the nodehours that your CI jobs are spending on the cluster, i.e. you want to set the `SLURM_TIMELIMIT` to the maximum time that you expect the job to run.
+You will be billed the node hours that your CI jobs are spending on the cluster, i.e. you want to set the `SLURM_TIMELIMIT` to the maximum time that you expect the job to run.
 You should also pay attention to wrap the value in quotes, because the gitlab-runner interprets the time differently than Slurm, when it is not wrapped in quotes, i.e. This is correct:
 ```
 SLURM_TIMELIMIT: "00:30:00"
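The quoting advice in this hunk exists because YAML 1.1 parsers commonly read colon-separated numbers as sexagesimal integers, so an unquoted time limit may not survive the trip through the gitlab-runner. A sketch of both forms (the runner name is taken from elsewhere in this diff; the rest is illustrative):

```yaml
my job:
  extends: .container-builder-cscs-gh200
  variables:
    SLURM_TIMELIMIT: "00:30:00"    # quoted: reaches Slurm as the string 00:30:00
    # SLURM_TIMELIMIT: 00:30:00    # unquoted: a YAML 1.1 parser may read this as the integer 1800
```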
@@ -867,7 +867,7 @@ The value must be a valid JSON array, where each entry is a string.
 
 It is almost always correct to wrap the full value in single-quotes.
 
-It is also possible to define the argument's values as an entry in `variables`, and then reference in `DOCKER_BUILD_ARGS` only the variables that you want to expose to the build process, i.e. sth like this:
+It is also possible to define the argument's values as an entry in `variables`, and then reference in `DOCKER_BUILD_ARGS` only the variables that you want to expose to the build process, i.e. something like this:
 ```yaml
 my job:
 extends: .container-builder-cscs-gh200
@@ -987,7 +987,7 @@ This tag is mandatory.
 ##### `GIT_STRATEGY`
 Optional variable, default is `none`
 
-This is a [default Gitlab variable](https://docs.gitlab.com/ee/ci/runners/configure_runners.html#git-strategy), but mentioned here explicitly, because very often you do not need to clone the repository sourcecode when you run your containerized application.
+This is a [default Gitlab variable](https://docs.gitlab.com/ee/ci/runners/configure_runners.html#git-strategy), but mentioned here explicitly, because very often you do not need to clone the repository source code when you run your containerized application.
 
 The default is `none`, and you must explicitly set it to `fetch` or `clone` to fetch the source code by the runner.
 

@@ -1323,7 +1323,7 @@ The easiest way to use the FirecREST scheduler of ReFrame is to use the configur
 In case you want to run ReFrame for a system that is not already available in this directory, please open a ticket to the Service Desk and we will add it or help you update one of the existing ones.
 
 Something you should be aware of when running with this scheduler is that ReFrame will not have direct access to the filesystem of the cluster so the stage directory will need to be kept in sync through FirecREST.
-It is recommended to try to clean the stage directory whenever possible with the [postrun_cmds](https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.postrun_cmds) and [postbuild_cmds](https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.postbuild_cmds) and to avoid [autodetection of the processor](https://reframe-hpc.readthedocs.io/en/stable/config_reference.html#config.systems.partitions.processor) in each run.
+It is recommended to try to clean the stage directory whenever possible with the [`postrun_cmds`](https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.postrun_cmds) and [`postbuild_cmds`](https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.postbuild_cmds) and to avoid [autodetection of the processor](https://reframe-hpc.readthedocs.io/en/stable/config_reference.html#config.systems.partitions.processor) in each run.
 Normally ReFrame stores these files in `~/.reframe/topology/{system}-{part}/processor.json`, but you get a "clean" runner every time.
 You could either add them in the configuration files or store the files in the first run and copy them to the right directory before ReFrame runs.

docs/software/container-engine/run.md

Lines changed: 1 addition & 1 deletion
@@ -205,7 +205,7 @@ Directories outside a container can be *mounted* inside a container so that the
 !!! note
 The source (before `:`) should be present on the cluster: the destination (after `:`) doesn't have to be inside the container.
 
-See [the EDF reference][ref-ce-edf-reference] for the full specifiction of the `mounts` EDF entry.
+See [the EDF reference][ref-ce-edf-reference] for the full specification of the `mounts` EDF entry.
 
 
 [](){#ref-ce-run-mounting-squashfs}
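For context, a `mounts` entry in an EDF (a TOML file) looks roughly like this; the image reference and paths are invented placeholders, and the EDF reference linked above remains the authoritative syntax:

```toml
image = "nvcr.io#nvidia/pytorch:24.01-py3"           # illustrative image reference
mounts = ["/capstor/scratch/cscs/<user>:/scratch"]   # "host path : container path"
```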
