With the recent additions of new dashboards like TorchAO, vLLM, and CacheBench, there are some ugly hard-coded parts in the code base that will become harder to maintain as we onboard new benchmarks in the future. The key issue is that, although they share the same code base, different dashboards have slightly different requirements on what to show. For example, we want to show the `is_dynamic` field on CacheBench.
This is an open-ended issue to document some thoughts I have on how to improve this. We could:
- Figure out a generic and intuitive way to display how a benchmark is set up on the dashboard. Note that this can include lots of information about parameters, devices, dependencies, etc. Although this information is recorded in the database, it doesn't generally have a coherent structure (to give devs a degree of flexibility there).
- Enforcing a structure on how a benchmark is set up is hard, because there are lots of use cases out there with widely different requirements, e.g. mobile benchmarks.
- Refactor the code base into better reusable components.
- Customize the dashboards using config files.
- More ideas?
The goal is to make it easier to add new dashboards and/or reconfigure existing ones.
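To make the config-file idea above concrete, here is a minimal sketch of what a per-dashboard config could look like. All names here (`DashboardConfig`, `extraFields`, `pickers`, `visibleColumns`) are hypothetical and do not exist in torchci today; the point is that dashboard-specific needs like CacheBench's `is_dynamic` column become data instead of hard-coded branches.

```typescript
// Hypothetical per-dashboard config sketch; none of these names are real torchci APIs.
interface DashboardConfig {
  repoName: string; // e.g. "pytorch/pytorch"
  benchmarkName: string; // display name on HUD
  // Extra metadata fields this dashboard wants to surface,
  // e.g. CacheBench wants `is_dynamic` shown.
  extraFields: string[];
  // Which dropdown pickers to render, in order.
  pickers: ("device" | "dtype" | "mode" | "backend")[];
}

const cacheBenchConfig: DashboardConfig = {
  repoName: "pytorch/pytorch",
  benchmarkName: "TorchCache Benchmark",
  extraFields: ["is_dynamic"],
  pickers: ["device", "dtype"],
};

// A generic page component could then derive its table columns from
// config instead of branching per dashboard.
function visibleColumns(config: DashboardConfig): string[] {
  return ["value", ...config.extraFields];
}
```

Onboarding a new benchmark would then mostly mean adding one config object rather than touching shared components.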
PR: #6337
Issue: #6323
# Description
Put llms-benchmark-specific files under component/benchmark/llms
# Overview
This PR simply moves everything specific to LLMs under
component/benchmark/llms.
It is the first step toward a customized component UI for LLMs.
New & modified files:
- Create LlmsPage as the main page component; pages/benchmark/llms.tsx no longer holds complicated logic and should mainly be used for API routing.
- Move LlmsReport into its own component instead of nesting it in LlmsPage.
- Rename components with the prefix "Llms", since we have similarly named components such as GraphPanel.
- Move convertToCompilerPerformanceData into lib/complier/compierUtils.
# Demo
https://torchci-9eem5f1do-fbopensource.vercel.app/benchmark/llms?repoName=pytorch%2Fpytorch&benchmarkName=TorchCache+Benchmark
Issue: #6323
- Use useReducer to handle the benchmark dashboard props.
- Restructure the LlmsPage, move picker logic into UI components, and render the dropdown lists dynamically.
- Pass a props object instead of individual params to LlmsReport for easier maintenance.
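The useReducer approach above can be sketched as a plain reducer over the dashboard props. The field names and action shapes here are illustrative, not the actual LlmsPage implementation; the key point is that every dropdown picker dispatches one `SET_FIELD` action instead of each param having its own setter.

```typescript
// Hypothetical sketch of a reducer for the benchmark dashboard props;
// names are illustrative, not the real LlmsPage state shape.
interface DashboardProps {
  repoName: string;
  benchmarkName: string;
  device: string;
  dtype: string;
}

type Action =
  | { type: "SET_FIELD"; field: keyof DashboardProps; value: string }
  | { type: "RESET"; initial: DashboardProps };

function propsReducer(state: DashboardProps, action: Action): DashboardProps {
  switch (action.type) {
    case "SET_FIELD":
      // One handler covers every dropdown picker.
      return { ...state, [action.field]: action.value };
    case "RESET":
      return action.initial;
  }
}

// In the page component this would be wired up with React's useReducer:
//   const [props, dispatch] = useReducer(propsReducer, initialProps);
// and each picker's onChange would dispatch:
//   dispatch({ type: "SET_FIELD", field: "device", value: newDevice });
```

Rendering the pickers from a list of field names then lets the dropdowns be generated dynamically, as the PR describes.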
Demo:
https://torchci-4sjlqbjzx-fbopensource.vercel.app/benchmark/llms?repoName=pytorch%2Fexecutorch
Next steps:
- Keep cleaning up the rest of the components.
- Introduce repo-specific configuration logic.
With the exception of the PT2 inductor dashboard, all benchmark dashboards on HUD are based on:
cc @yangw-dev