
Improvements to DynamicPPLBenchmarks #346

Merged: 49 commits, Mar 12, 2025
Commits (49 total; changes shown from 32 commits)
57b5d47
bigboy update to benchmarks
torfjelde Aug 2, 2021
e7c0a76
Merge branch 'master' into tor/benchmark-update
torfjelde Aug 19, 2021
60ec2c8
Merge branch 'master' into tor/benchmark-update
torfjelde Sep 8, 2021
eb1b83c
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 6, 2021
d8afa71
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 6, 2021
5bb48d2
make models return random variables as NamedTuple as it can be useful…
torfjelde Dec 2, 2021
02484cf
add benchmarking of evaluation with SimpleVarInfo with NamedTuple
torfjelde Dec 2, 2021
5c59769
added some information about the execution environment
torfjelde Dec 3, 2021
f1f1381
added judgementtable_single
torfjelde Dec 3, 2021
a48553a
added benchmarking of SimpleVarInfo, if present
torfjelde Dec 3, 2021
f2dc062
Merge branch 'master' into tor/benchmark-update
torfjelde Dec 3, 2021
fa675de
added ComponentArrays benchmarking for SimpleVarInfo
torfjelde Dec 5, 2021
3962da2
Merge branch 'master' into tor/benchmark-update
yebai Aug 29, 2022
53dc571
Merge branch 'master' into tor/benchmark-update
yebai Nov 2, 2022
f5705d5
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 7, 2022
7f569f7
formatting
torfjelde Nov 7, 2022
4a06150
Merge branch 'master' into tor/benchmark-update
yebai Feb 2, 2023
a1cc6bf
Apply suggestions from code review
yebai Feb 2, 2023
3e7e200
Update benchmarks/benchmarks.jmd
yebai Feb 2, 2023
c867ae8
Merge branch 'master' into tor/benchmark-update
yebai Jul 4, 2023
96f120b
merged main into this one
shravanngoswamii Dec 19, 2024
0460b64
Benchmarking CI
shravanngoswamii Dec 19, 2024
a8541b5
Julia script for benchmarking on top of current setup
shravanngoswamii Feb 1, 2025
0291c2f
keep old results for reference
shravanngoswamii Feb 1, 2025
6f255d1
Merge branch 'master' of https://github.com/TuringLang/DynamicPPL.jl …
shravanngoswamii Feb 1, 2025
3b5e448
updated benchmarking setup
shravanngoswamii Feb 20, 2025
1e61025
Merge branch 'master' of https://github.com/TuringLang/DynamicPPL.jl …
shravanngoswamii Feb 20, 2025
640aa45
applied suggested changes
shravanngoswamii Feb 27, 2025
3bdbe40
Merge branch 'master' of https://github.com/TuringLang/DynamicPPL.jl …
shravanngoswamii Feb 27, 2025
d8fd05c
updated benchmarks/README.md
shravanngoswamii Feb 27, 2025
c34e489
setup benchmarking CI
shravanngoswamii Feb 27, 2025
1d1b11e
Merge remote-tracking branch 'origin/main' into tor/benchmark-update
mhauru Mar 3, 2025
ad4175a
Update benchmark models (#826)
mhauru Mar 3, 2025
d39a9d6
Merge branch 'main' into tor/benchmark-update
mhauru Mar 6, 2025
f765b40
Make benchmarks not depend on TuringBenchmarking.jl, and run `]dev ..…
mhauru Mar 6, 2025
00296bd
Benchmarking.yml: now comments raw markdown table enclosed in triple …
shravanngoswamii Mar 8, 2025
9a64f32
Benchmarking.yml: now includes the SHA of the DynamicPPL commit in Be…
shravanngoswamii Mar 8, 2025
5c35238
Benchmark more with Mooncake
mhauru Mar 7, 2025
923105e
Add model dimension to benchmark table
mhauru Mar 10, 2025
2f15b72
Add info print
mhauru Mar 10, 2025
ee39e26
Fix type instability in benchmark model
mhauru Mar 10, 2025
6ce0a4f
Remove done TODO note
mhauru Mar 10, 2025
c95d298
Merge branch 'main' into tor/benchmark-update
mhauru Mar 10, 2025
2161352
Apply suggestions from code review
mhauru Mar 11, 2025
70ff1a9
Fix table formatting bug
mhauru Mar 11, 2025
b847542
Simplify benchmark suite code
mhauru Mar 11, 2025
4a15940
Use StableRNG
mhauru Mar 11, 2025
5e6cab0
Merge branch 'main' into tor/benchmark-update
shravanngoswamii Mar 12, 2025
dae1be7
Merge branch 'main' of https://github.com/TuringLang/DynamicPPL.jl in…
shravanngoswamii Mar 12, 2025
66 changes: 66 additions & 0 deletions .github/workflows/Benchmarking.yml
@@ -0,0 +1,66 @@
name: Benchmarking

on:
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}

      - name: Set up Julia
        uses: julia-actions/setup-julia@v2
        with:
          version: '1'

      - name: Install Dependencies
        run: julia --project=benchmarks/ -e 'using Pkg; Pkg.instantiate()'

      - name: Run Benchmarks
        id: run_benchmarks
        run: |
          # Capture version info into a variable, print it, and set it as an env var for later steps
          version_info=$(julia -e 'using InteractiveUtils; versioninfo()')
          echo "$version_info"
          echo "VERSION_INFO<<EOF" >> $GITHUB_ENV
          echo "$version_info" >> $GITHUB_ENV
          echo "EOF" >> $GITHUB_ENV

          # Capture benchmark output into a variable
          echo "Running Benchmarks..."
          benchmark_output=$(julia --project=benchmarks benchmarks/benchmarks.jl)

          # Print benchmark results directly to the workflow log
          echo "Benchmark Results:"
          echo "$benchmark_output"

          # Set the benchmark output as an env var for later steps
          echo "BENCHMARK_OUTPUT<<EOF" >> $GITHUB_ENV
          echo "$benchmark_output" >> $GITHUB_ENV
          echo "EOF" >> $GITHUB_ENV

      - name: Find Existing Comment
        uses: peter-evans/find-comment@v3
        id: find_comment
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-author: github-actions[bot]

      - name: Post Benchmark Results as PR Comment
        uses: peter-evans/create-or-update-comment@v4
        with:
          issue-number: ${{ github.event.pull_request.number }}
          body: |
            ## Computer Information
            ```
            ${{ env.VERSION_INFO }}
            ```
            ## Benchmark Report
            ${{ env.BENCHMARK_OUTPUT }}
          comment-id: ${{ steps.find_comment.outputs.comment-id }}
          edit-mode: replace
7 changes: 2 additions & 5 deletions benchmarks/Project.toml
@@ -4,10 +4,7 @@ version = "0.1.0"

[deps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
DiffUtils = "8294860b-85a6-42f8-8c35-d911f667b5f6"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
DynamicPPL = "366bfd00-2699-11ea-058f-f148b4cae6d8"
LibGit2 = "76f85450-5226-5b5a-8eaa-529ad045b433"
Markdown = "d6f4376e-aef5-505a-96c1-9c027394607a"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
Weave = "44d3d7a6-8a23-5bf8-98c5-b353f8df5ec9"
PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
TuringBenchmarking = "0db1332d-5c25-4deb-809f-459bc696f94f"
28 changes: 3 additions & 25 deletions benchmarks/README.md
@@ -1,27 +1,5 @@
To run the benchmarks, simply do:
To run the benchmarks, simply do this from the root directory of the repository:

```sh
julia --project -e 'using DynamicPPLBenchmarks; weave_benchmarks();'
```

```julia
julia> @doc weave_benchmarks
weave_benchmarks(input="benchmarks.jmd"; kwargs...)

Weave benchmarks present in benchmarks.jmd into a single file.

Keyword arguments
≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡

• benchmarkbody: JMD-file to be rendered for each model.

• include_commit_id=false: specify whether to include commit-id in the default name.

• name: the name of directory in results/ to use as output directory.

• name_old=nothing: if specified, comparisons of the current run vs. the run pointed to by name_old will be included in the generated document.

• include_typed_code=false: if true, output of code_typed for the evaluator of the model will be included in the weaved document.

• Rest of the passed kwargs will be passed on to Weave.weave.
```
julia --project=benchmarks benchmarks/benchmarks.jl
```
49 changes: 0 additions & 49 deletions benchmarks/benchmark_body.jmd

This file was deleted.

65 changes: 65 additions & 0 deletions benchmarks/benchmarks.jl
@@ -0,0 +1,65 @@
using DynamicPPL: @model
using DynamicPPLBenchmarks: make_suite
using BenchmarkTools: median, run
using Distributions: Normal, Beta, Bernoulli
using PrettyTables: pretty_table, PrettyTables

# Define models
@model function demo1(x)
    m ~ Normal()
    x ~ Normal(m, 1)
    return (m=m, x=x)
end

@model function demo2(y)
    p ~ Beta(1, 1)
    N = length(y)
    for n in 1:N
        y[n] ~ Bernoulli(p)
    end
    return (; p)
end

demo1_data = randn()
demo2_data = rand(Bool, 10)

# Create model instances with the data
demo1_instance = demo1(demo1_data)
demo2_instance = demo2(demo2_data)

# Specify the combinations to test:
# (Model Name, model instance, VarInfo choice, AD backend)
chosen_combinations = [
    ("Demo1", demo1_instance, :typed, :forwarddiff),
    ("Demo1", demo1_instance, :simple_namedtuple, :zygote),
    ("Demo2", demo2_instance, :untyped, :reversediff),
    ("Demo2", demo2_instance, :simple_dict, :forwarddiff),
]

results_table = Tuple{String,String,String,Float64,Float64}[]

for (model_name, model, varinfo_choice, adbackend) in chosen_combinations
    suite = make_suite(model, varinfo_choice, adbackend)
    results = run(suite)

    eval_time = median(results["AD_Benchmarking"]["evaluation"]["standard"]).time

    grad_group = results["AD_Benchmarking"]["gradient"]
    if isempty(grad_group)
        ad_eval_time = NaN
    else
        grad_backend_key = first(keys(grad_group))
        ad_eval_time = median(grad_group[grad_backend_key]["standard"]).time
    end

    push!(
        results_table,
        (model_name, string(adbackend), string(varinfo_choice), eval_time, ad_eval_time),
    )
end

table_matrix = hcat(Iterators.map(collect, zip(results_table...))...)
header = [
    "Model", "AD Backend", "VarInfo Type", "Evaluation Time (ns)", "AD Eval Time (ns)"
]
pretty_table(table_matrix; header=header, tf=PrettyTables.tf_markdown)
130 changes: 0 additions & 130 deletions benchmarks/benchmarks.jmd

This file was deleted.
