Commit fe42d84

Fixup doc
1 parent 496bfe0 commit fe42d84

File tree

1 file changed: +16 −16 lines changed

docs/src/manual.md

Lines changed: 16 additions & 16 deletions
@@ -29,7 +29,7 @@ BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Time (median): 1.453 ns ┊ GC (median): 0.00%
 Time (mean ± σ): 1.462 ns ± 0.566 ns ┊ GC (mean ± σ): 0.00% ± 0.00%

-
+
 ▂▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▁▁▃
 1.44 ns Histogram: frequency by time 1.46 ns (top 1%)

@@ -50,7 +50,7 @@ BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Time (median): 1.453 ns ┊ GC (median): 0.00%
 Time (mean ± σ): 1.456 ns ± 0.056 ns ┊ GC (mean ± σ): 0.00% ± 0.00%

-
+
 ▂▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▁▁▃
 1.44 ns Histogram: frequency by time 1.46 ns (top 1%)

@@ -86,7 +86,7 @@ You can pass the following keyword arguments to `@benchmark`, `@benchmarkable`,
 - `time_tolerance`: The noise tolerance for the benchmark's time estimate, as a percentage. This is utilized after benchmark execution, when analyzing results. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.time_tolerance = 0.05`.
 - `memory_tolerance`: The noise tolerance for the benchmark's memory estimate, as a percentage. This is utilized after benchmark execution, when analyzing results. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.memory_tolerance = 0.01`.

-The following keyword arguments relate to [Running custom benchmarks] are experimental and subject to change, see [Running custom benchmarks] for furthe details.:
+The following keyword arguments are experimental and subject to change; see [Running custom benchmarks](@ref) for further details:

 - `run_customizable_func_only`: If `true`, only the customizable benchmark is run. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.run_customizable_func_only = false`.
 - `enable_customizable_func`: If `:ALL`, the customizable benchmark runs on every sample; if `:LAST`, it runs only on the last sample; if `:FALSE`, it is never run. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.enable_customizable_func = :FALSE`.
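
These keywords are passed inline after the benchmarked expression. A minimal sketch of the tolerance keywords, assuming a placeholder `sum`/`rand` workload (not taken from this commit):

```julia
using BenchmarkTools

# Per-benchmark noise tolerances, applied when results are analyzed.
# The same keywords are accepted by `@benchmark`, `@benchmarkable`, and `run`.
b = @benchmarkable sum(x) setup=(x = rand(1000)) time_tolerance=0.10 memory_tolerance=0.05
t = run(b)

# The experimental keywords above are passed the same way, e.g.
# `enable_customizable_func=:LAST`; they are subject to change.
```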
@@ -125,7 +125,7 @@ BenchmarkTools.Trial: 10000 samples with 10 evaluations.
 Time (median): 1.363 μs ┊ GC (median): 0.00%
 Time (mean ± σ): 1.786 μs ± 4.612 μs ┊ GC (mean ± σ): 9.58% ± 3.70%

-  ▄▆██▇▇▆▄▃▂▁ ▁▁▂▂▂▂▂▂▂▁▂▁
+ ▄▆██▇▇▆▄▃▂▁ ▁▁▂▂▂▂▂▂▂▁▂▁
 ████████████████▆▆▇▅▆▇▆▆▆▇▆▇▆▆▅▄▄▄▅▃▄▇██████████████▇▇▇▇▆▆▇▆▆▅▅▅▅
 1.15 μs Histogram: log(frequency) by time 3.8 μs (top 1%)

@@ -139,7 +139,7 @@ BenchmarkTools.Trial: 10000 samples with 963 evaluations.
 Time (median): 84.497 ns ┊ GC (median): 0.00%
 Time (mean ± σ): 85.125 ns ± 5.262 ns ┊ GC (mean ± σ): 0.00% ± 0.00%

-
+
 █▅▇▅▄███▇▇▆▆▆▄▄▅▅▄▄▅▄▄▅▄▄▄▄▁▃▄▁▁▃▃▃▄▃▁▃▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▃▃▁▁▁▃▁▁▁▁▆
 84.5 ns Histogram: log(frequency) by time 109 ns (top 1%)

@@ -158,7 +158,7 @@ BenchmarkTools.Trial: 10000 samples with 54 evaluations.
 Time (median): 1.073 μs ┊ GC (median): 0.00%
 Time (mean ± σ): 1.296 μs ± 2.004 μs ┊ GC (mean ± σ): 14.31% ± 8.76%

-  ▃█▆
+ ▃█▆
 ▂▂▄▆███▇▄▄▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▁▂▂▁▁▁▁▁▂▁▁▁▁▂▂▁▁▁▁▂▁▁▁▁▁▁▂▂▂▂▂▂▂▂▂▂
 889 ns Histogram: frequency by time 2.92 μs (top 1%)

@@ -237,7 +237,7 @@ BenchmarkTools.Trial: 819 samples with 1 evaluations.
 Time (median): 6.019 ms ┊ GC (median): 0.00%
 Time (mean ± σ): 6.029 ms ± 46.222 μs ┊ GC (mean ± σ): 0.00% ± 0.00%

-  ▃▂▂▄█▄▂▃
+ ▃▂▂▄█▄▂▃
 ▂▃▃▄▆▅████████▇▆▆▅▄▄▄▅▆▄▃▄▅▄▃▂▃▃▃▂▂▃▁▂▂▂▁▂▂▂▂▂▂▁▁▁▁▂▂▁▁▁▂▂▁▁▂▁▁▂
 5.98 ms Histogram: frequency by time 6.18 ms (top 1%)

@@ -292,7 +292,7 @@ BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Time (median): 3.176 ns ┊ GC (median): 0.00%
 Time (mean ± σ): 3.262 ns ± 0.882 ns ┊ GC (mean ± σ): 0.00% ± 0.00%

-
+
 █▁▂▁▁▁▂▁▂▁▂▁▁▂▁▁▂▂▂▂▂▂▁▁▂▁▁▂▁▁▁▂▂▁▁▁▂▁▂▂▁▂▁▁▂▂▂▁▂▂▂▂▂▂▂▂▂▂▂▁▂▂▁▂
 3.18 ns Histogram: frequency by time 4.78 ns (top 1%)

@@ -301,7 +301,7 @@ BenchmarkTools.Trial: 10000 samples with 1000 evaluations.

 The key point here is that these two benchmarks measure different things, even though their code is similar. In the first example, Julia was able to optimize away `view(a, 1:2, 1:2)` because it could prove that the value wasn't being returned and `a` wasn't being mutated. In the second example, the optimization is not performed because `view(a, 1:2, 1:2)` is a return value of the benchmark expression.

-BenchmarkTools will faithfully report the performance of the exact code that you provide to it, including any compiler optimizations that might happen to elide the code completely. It's up to you to design benchmarks which actually exercise the code you intend to exercise.
+BenchmarkTools will faithfully report the performance of the exact code that you provide to it, including any compiler optimizations that might happen to elide the code completely. It's up to you to design benchmarks which actually exercise the code you intend to exercise.

 A common place Julia's optimizer may cause a benchmark to not measure what a user thought it was measuring is simple operations where all values are known at compile time. Suppose you wanted to measure the time it takes to add together two integers:
 ```julia
@@ -312,7 +312,7 @@ julia> @btime $a + $b
   0.024 ns (0 allocations: 0 bytes)
 3
 ```
-in this case julia was able to use the properties of `+(::Int, ::Int)` to know that it could safely replace `$a + $b` with `3` at compile time. We can stop the optimizer from doing this by referencing and dereferencing the interpolated variables
+In this case Julia was able to use the properties of `+(::Int, ::Int)` to know that it could safely replace `$a + $b` with `3` at compile time. We can stop the optimizer from doing this by referencing and dereferencing the interpolated variables:
 ```julia
 julia> @btime $(Ref(a))[] + $(Ref(b))[]
   1.277 ns (0 allocations: 0 bytes)
@@ -341,7 +341,7 @@ BenchmarkTools.Trial: 10000 samples with 1 evaluations.
 Time (median): 30.818 μs ┊ GC (median): 0.00%
 Time (mean ± σ): 31.777 μs ± 25.161 μs ┊ GC (mean ± σ): 1.31% ± 1.63%

-  ▂▃▅▆█▇▇▆▆▄▄▃▁▁
+ ▂▃▅▆█▇▇▆▆▄▄▃▁▁
 ▁▁▁▁▁▁▂▃▄▆████████████████▆▆▅▅▄▄▃▃▃▂▂▂▂▂▂▁▂▁▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
 26.5 μs Histogram: frequency by time 41.3 μs (top 1%)

@@ -378,35 +378,35 @@ As you can see from the above, a couple of different timing estimates are pretty

 ```julia
 julia> minimum(t)
-BenchmarkTools.TrialEstimate:
+BenchmarkTools.TrialEstimate:
   time: 26.549 μs
   gctime: 0.000 ns (0.00%)
   memory: 16.36 KiB
   allocs: 19

 julia> maximum(t)
-BenchmarkTools.TrialEstimate:
+BenchmarkTools.TrialEstimate:
   time: 1.503 ms
   gctime: 1.401 ms (93.21%)
   memory: 16.36 KiB
   allocs: 19

 julia> median(t)
-BenchmarkTools.TrialEstimate:
+BenchmarkTools.TrialEstimate:
   time: 30.818 μs
   gctime: 0.000 ns (0.00%)
   memory: 16.36 KiB
   allocs: 19

 julia> mean(t)
-BenchmarkTools.TrialEstimate:
+BenchmarkTools.TrialEstimate:
   time: 31.777 μs
   gctime: 415.686 ns (1.31%)
   memory: 16.36 KiB
   allocs: 19

 julia> std(t)
-BenchmarkTools.TrialEstimate:
+BenchmarkTools.TrialEstimate:
   time: 25.161 μs
   gctime: 23.999 μs (95.38%)
   memory: 16.36 KiB
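
Downstream, these point estimates are what comparisons typically consume; a hedged sketch using the `judge` and `ratio` helpers (the `sum`/`rand` workload is a placeholder, not from this commit):

```julia
using BenchmarkTools

t1 = run(@benchmarkable sum(x) setup=(x = rand(1000)))
t2 = run(@benchmarkable sum(x) setup=(x = rand(1000)))

# `ratio` compares two estimates; `judge` classifies that ratio as an
# improvement, regression, or invariant using the tolerances set earlier.
r = ratio(minimum(t1), minimum(t2))
j = judge(minimum(t1), minimum(t2))
```

The minimum is commonly preferred as the point estimate here, since timing noise is strictly additive and only inflates the other statistics.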
