
Conversation

@quic-calvnguy (Contributor)

Description

  • Remove PerThreadContext (only used for power config id management)
  • Create ManagedHtpPowerConfigId to manage destruction of the id
  • Create only one HTP power config id per session (previously one per thread)

Motivation and Context

A single session can be used for execution on multiple threads, and there is a fixed maximum number of HTP power config ids that can exist at any given time. With enough sessions and enough threads, that maximum can easily be reached (see ticket).

Additionally, all power configurations are available on a per-session basis, so there is no reason to have more than one power config id per session.

PerThreadContext was removed because its only remaining use was to hold the power config ids and destroy them on thread termination; with a single id owned by the session, PerThreadContext is no longer needed.
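
The PR excerpt shown here does not include the ManagedHtpPowerConfigId class itself. Below is a minimal sketch of what such an RAII wrapper might look like, assuming hypothetical CreateHtpPowerConfigId/DestroyHtpPowerConfigId methods on QnnBackendManager (the backend manager type referenced by qnn_backend_manager_ in the diff); the names, signatures, and ownership details are illustrative only and may differ from the actual change.

```cpp
// Minimal sketch only; the actual ManagedHtpPowerConfigId added by this PR may differ.
// Assumes QnnBackendManager is visible and exposes a create/destroy pair for HTP power
// config ids returning an ORT Status (method names below are illustrative, not from the diff).
#include <cstdint>

class ManagedHtpPowerConfigId {
 public:
  explicit ManagedHtpPowerConfigId(QnnBackendManager* backend_manager)
      : backend_manager_(backend_manager) {
    // Acquire exactly one HTP power config id for the lifetime of the owning session.
    valid_ = backend_manager_->CreateHtpPowerConfigId(power_config_id_).IsOK();
  }

  ~ManagedHtpPowerConfigId() {
    // Release the id when the session is destroyed, rather than on thread termination.
    if (valid_) {
      (void)backend_manager_->DestroyHtpPowerConfigId(power_config_id_);
    }
  }

  // Non-copyable: the id has single-owner semantics.
  ManagedHtpPowerConfigId(const ManagedHtpPowerConfigId&) = delete;
  ManagedHtpPowerConfigId& operator=(const ManagedHtpPowerConfigId&) = delete;

  bool IsValid() const { return valid_; }
  uint32_t Get() const { return power_config_id_; }

 private:
  QnnBackendManager* backend_manager_ = nullptr;
  uint32_t power_config_id_ = 0;
  bool valid_ = false;
};
```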

The diff hunk the review comment below is attached to (the HTP power config call site in the QNN EP):

    if (IsHtpPowerConfigIdValid()) {
      if (qnn::HtpPerformanceMode::kHtpDefault != htp_performance_mode) {
  -     ORT_RETURN_IF_ERROR(qnn_backend_manager_->SetHtpPowerConfig(GetPerThreadContext().GetHtpPowerConfigId(),
  +     ORT_RETURN_IF_ERROR(qnn_backend_manager_->SetHtpPowerConfig(GetHtpPowerConfigId(),
@adrianlizarraga (Contributor) commented on Nov 14, 2025:

When a session is used across multiple threads, is it possible that they can interfere with each other here (and in other places where the EP calls qnn_backend_manager_->Set*PowerConfig)?

The mutex is only locked during the call to GetHtpPowerConfigId(), but there is no synchronization here.
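
To make the concern concrete: two Run() calls on the same session, each requesting a different htp_performance_mode, could invoke SetHtpPowerConfig concurrently on the same power config id, and the interleaving with other Set*PowerConfig calls would be unpredictable. One possible mitigation, shown only as a sketch and not as what this PR does, is to hold a per-session mutex (a hypothetical htp_power_config_mutex_) across the whole configure call rather than only inside GetHtpPowerConfigId():

```cpp
// Sketch of one possible mitigation; not code from this PR.
// Assumes the session/EP owns a std::mutex named htp_power_config_mutex_ (hypothetical)
// guarding both the id lookup and the backend call. Requires <mutex>.
{
  std::lock_guard<std::mutex> lock(htp_power_config_mutex_);
  if (IsHtpPowerConfigIdValid() &&
      qnn::HtpPerformanceMode::kHtpDefault != htp_performance_mode) {
    // Concurrent Run() calls serialize here instead of interleaving Set*PowerConfig calls.
    ORT_RETURN_IF_ERROR(qnn_backend_manager_->SetHtpPowerConfig(GetHtpPowerConfigId(),
                                                                htp_performance_mode));
  }
}
```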
