@@ -86,7 +86,7 @@ You can pass the following keyword arguments to `@benchmark`, `@benchmarkable`,
- `time_tolerance`: The noise tolerance for the benchmark's time estimate, as a percentage. This is utilized after benchmark execution, when analyzing results. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.time_tolerance = 0.05`.
- `memory_tolerance`: The noise tolerance for the benchmark's memory estimate, as a percentage. This is utilized after benchmark execution, when analyzing results. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.memory_tolerance = 0.01`.

-The following keyword arguments relate to [Running custom benchmarks]are experimental and subject to change, see [Running custom benchmarks] for furthe details.:
+The following keyword arguments are experimental and subject to change; see [Running custom benchmarks](@ref) for further details:

- `run_customizable_func_only`: If `true`, only the customizable benchmark is run. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.run_customizable_func_only = false`.
- `enable_customizable_func`: If `:ALL`, the customizable benchmark runs on every sample; if `:LAST`, it runs only on the last sample; if `:FALSE`, it is never run. Defaults to `BenchmarkTools.DEFAULT_PARAMETERS.enable_customizable_func = :FALSE`.
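
By way of illustration, here is a minimal sketch of how such keyword arguments can be passed in practice (the benchmarked expression and the tolerance values are arbitrary choices, not taken from the surrounding text):

```julia
using BenchmarkTools

x = rand(1000)

# Tolerances can be supplied directly to the benchmarking macros...
@benchmark sum($x) time_tolerance=0.10 memory_tolerance=0.05

# ...passed to `run` for an already-defined benchmark...
b = @benchmarkable sum($x)
run(b; time_tolerance = 0.10)

# ...or changed globally through the default parameters.
BenchmarkTools.DEFAULT_PARAMETERS.time_tolerance = 0.10
```
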
@@ -125,7 +125,7 @@ BenchmarkTools.Trial: 10000 samples with 10 evaluations.
3.18 ns Histogram: frequency by time 4.78 ns (top 1%)
@@ -301,7 +301,7 @@ BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
The key point here is that these two benchmarks measure different things, even though their code is similar. In the first example, Julia was able to optimize away `view(a, 1:2, 1:2)` because it could prove that the value wasn't being returned and `a` wasn't being mutated. In the second example, the optimization is not performed because `view(a, 1:2, 1:2)` is a return value of the benchmark expression.
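
For reference, the two benchmarks being contrasted look roughly like the following (a sketch, not the exact code from this section; the 3×3 array in `setup` is an assumption):

```julia
using BenchmarkTools

# First benchmark: the view's result is discarded, so the compiler may elide it.
@benchmark (view(a, 1:2, 1:2); nothing) setup=(a = rand(3, 3))

# Second benchmark: the view is the return value, so it cannot be optimized away.
@benchmark view(a, 1:2, 1:2) setup=(a = rand(3, 3))
```
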
-BenchmarkTools will faithfully report the performance of the exact code that you provide to it, including any compiler optimizations that might happen to elide the code completely. It's up to you to design benchmarks which actually exercise the code you intend to exercise.
+BenchmarkTools will faithfully report the performance of the exact code that you provide to it, including any compiler optimizations that might happen to elide the code completely. It's up to you to design benchmarks which actually exercise the code you intend to exercise.

A common case where Julia's optimizer may cause a benchmark to measure something other than what the user intended is simple operations where all values are known at compile time. Suppose you wanted to measure the time it takes to add together two integers:
@@ -312,7 +312,7 @@ julia> @btime $a + $b
```julia
julia> a = 1; b = 2;

julia> @btime $a + $b
  0.024 ns (0 allocations: 0 bytes)
3
```

-in this case julia was able to use the properties of `+(::Int, ::Int)` to know that it could safely replace `$a + $b` with `3` at compile time. We can stop the optimizer from doing this by referencing and dereferencing the interpolated variables
+In this case, Julia was able to use the properties of `+(::Int, ::Int)` to know that it could safely replace `$a + $b` with `3` at compile time. We can stop the optimizer from doing this by referencing and dereferencing the interpolated variables:

```julia
julia> @btime $(Ref(a))[] + $(Ref(b))[]
  1.277 ns (0 allocations: 0 bytes)
3
```
@@ -341,7 +341,7 @@ BenchmarkTools.Trial: 10000 samples with 1 evaluations.