
Refactor benchmark dashboard codebase for new usecases #6323


Open
huydhn opened this issue Feb 22, 2025 · 0 comments
huydhn commented Feb 22, 2025

With the exception of the PT2 inductor dashboard, all benchmark dashboards on HUD are based on:

With the recent additions of new dashboards like TorchAO, vLLM, and CacheBench, there are some ugly hard-coded parts in the codebase that will become harder to maintain as we try to onboard new benchmarks in the future. The key issue is that, although they share the same codebase, different dashboards have slightly different requirements on what to show. For example, we want to show the is_dynamic field on CacheBench.

This is an open-ended issue to document some thoughts I have on how to improve this. We could:

  • Figure out a generic and intuitive way to display information on the dashboard about how a benchmark is set up. Note that this can include lots of information about parameters, devices, dependencies, etc. Although this information is recorded in the database, it generally doesn't have a coherent structure (to give devs a degree of flexibility there)
    • Enforcing a structure on how a benchmark is set up is hard because there are lots of use cases out there with widely different requirements, e.g. mobile benchmarks
  • Refactor the code base into better reusable components
  • Customize the dashboards using config files
  • More ideas?

The goal is to make it easier to add new dashboards and/or reconfigure existing ones.
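The config-file idea could look roughly like the sketch below. This is a minimal illustration, not the actual torchci schema: the `DashboardConfig` shape, the field names, and the shared column list are all hypothetical. The point is that a per-dashboard config declares what extra fields to surface (e.g. `is_dynamic` for CacheBench) instead of hard-coding dashboard-specific branches in shared components.

```typescript
// Hypothetical per-dashboard config; field names are illustrative,
// not the real torchci schema.
interface DashboardConfig {
  name: string;
  // Extra benchmark fields this dashboard opts into showing,
  // e.g. is_dynamic for CacheBench.
  extraFields: string[];
}

const CACHEBENCH_CONFIG: DashboardConfig = {
  name: "CacheBench",
  extraFields: ["is_dynamic"],
};

// Columns to render for a dashboard: the shared set plus whatever the
// config opts into, replacing per-dashboard hard-coding in the UI.
function columnsFor(config: DashboardConfig): string[] {
  const shared = ["model", "device", "metric", "value"];
  return [...shared, ...config.extraFields];
}
```

Onboarding a new benchmark would then mean adding a config entry rather than touching the shared components.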

cc @yangw-dev

@huydhn huydhn moved this to Cold Storage in PyTorch OSS Dev Infra Feb 22, 2025
@yangw-dev yangw-dev self-assigned this Feb 26, 2025
@yangw-dev yangw-dev moved this from Cold Storage to In Progress in PyTorch OSS Dev Infra Feb 26, 2025
yangw-dev added a commit that referenced this issue Feb 27, 2025
#6337)

issue: #6323
# Description
Put llms-benchmark-specific files under component/benchmark/llms
# Overview
This PR simply moves everything specific to LLMs under
component/benchmark/llms,
a first step toward a customized component UI for LLMs.

New & modified files:
- create LlmsPage as the main page component; there is no complicated logic in
pages/benchmark/llms.tsx, which should mainly be used for API routing
  - move LlmsReport to its own component instead of nesting it in LlmsPage.
- rename components with the 'Llms' prefix, since we have similarly named
components such as GraphPanel.
  - move convertToCompilerPerformanceData into lib/complier/compierUtils
 
# Demo

https://torchci-9eem5f1do-fbopensource.vercel.app/benchmark/llms?repoName=pytorch%2Fpytorch&benchmarkName=TorchCache+Benchmark
yangw-dev added a commit that referenced this issue Mar 3, 2025
Issue: #6323

- Use useReducer to handle the benchmark dashboard props
- Restructure the LLMsPage, move picker logic to a UI component, and render
the dropdown list dynamically.
- Pass a props object instead of each individual param to llmsReport for easier maintenance

Demo:
https://torchci-4sjlqbjzx-fbopensource.vercel.app/benchmark/llms?repoName=pytorch%2Fexecutorch


Next steps:
- keep cleaning up the rest of the components
- introduce repo-specific configuration logic
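The reducer approach this commit describes can be sketched as a pure function, testable without React; the state fields and action names here are hypothetical, not the actual torchci code. A reducer like this is what gets passed to React's useReducer so the page dispatches actions instead of threading many individual setState props.

```typescript
// Hypothetical dashboard props state; field names are illustrative.
interface DashboardProps {
  repoName: string;
  benchmarkName: string;
  startTime: string;
  stopTime: string;
}

type DashboardAction =
  | { type: "UPDATE_FIELD"; field: keyof DashboardProps; value: string }
  | { type: "UPDATE_ALL"; payload: DashboardProps };

// Pure reducer: returns a new state object, never mutates the old one,
// so it can back React's useReducer hook directly.
function dashboardReducer(
  state: DashboardProps,
  action: DashboardAction
): DashboardProps {
  switch (action.type) {
    case "UPDATE_FIELD":
      return { ...state, [action.field]: action.value };
    case "UPDATE_ALL":
      return { ...action.payload };
  }
}
```

Keeping the reducer pure also makes the "pass props instead of each param" change natural: LlmsReport can receive the whole `DashboardProps` object in one go.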
Labels
None yet
Projects
Status: In Progress

2 participants