Add more threading configs #9076


Closed · wants to merge 1 commit

Conversation


@flying-sheep flying-sheep commented May 13, 2025

Big question: is this even correct? Dask’s docs mention the three env variables mentioned here, but why “1”?

What does threads_per_worker in LocalCluster mean?

Ideally I’d start my workers, run 1 Python thread in each of them, and configure each of them so all these parallelization engines use a certain number of threads.
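The layout I mean could be sketched like this (a hedged sketch: the env-variable names are the standard ones honoured by OpenMP, MKL, and OpenBLAS; the value 4 is an arbitrary example, and the `LocalCluster` parameters mentioned in the comment are `dask.distributed`’s real `n_workers`/`threads_per_worker` arguments):

```python
# Sketch of the desired layout: each worker runs one Python thread, and the
# native libraries it calls are told how many threads they may use.
import os

native_threads = 4  # arbitrary example value
for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS", "OPENBLAS_NUM_THREADS"):
    os.environ[var] = str(native_threads)

# With dask.distributed one would then start e.g.
#   LocalCluster(n_workers=8, threads_per_worker=1)
# so Dask-level parallelism comes from processes, not Python threads.
print(os.environ["OMP_NUM_THREADS"])  # → 4
```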

Closes #9075

  • Tests added / passed
  • Passes pre-commit run --all-files

No tests, since you also don’t test OMP_NUM_THREADS and the other documented env variables.

Contributor

Unit Test Results

See test report for an extended history of previous test failures. This is useful for diagnosing flaky tests.

    27 files ±0 · 27 suites ±0 · 11h 23m 43s ⏱️ +15m 23s

     4 113 tests ±0:  3 995 ✅ −5    111 💤 ±0   6 ❌ +4   1 🔥 +1
    51 570 runs  +1: 49 278 ✅ −5  2 285 💤 +1   6 ❌ +4   1 🔥 +1

For more details on these failures and errors, see this check.

Results for commit ce38c35. ± Comparison against base commit 39f91e3.

Member

@jacobtomlinson jacobtomlinson left a comment


What does threads_per_worker in LocalCluster mean?

Each worker is a separate process. Each worker process can have multiple threads. In use cases where the task releases the GIL it can be beneficial to use many threads, because the inter-thread communication cost is lower. In cases where the tasks aren't thread-safe or don't release the GIL it can be better to have 1 thread with many processes.

By default we use a balanced profile where we look at how many CPU cores you have and then create a few processes, each with a few threads. The product of processes × threads equals your CPU core count. This doesn't favour any workflow type in particular.

We set these variables to 1 because we assume Dask will be running one task per thread/process, and the product of that configuration will be the total CPU cores in your system. So if you have 12 cores, Dask will run 12 tasks in parallel. We don't want the libraries those tasks call, like NumPy, trying to use more than 1 core each, otherwise we will have too much CPU contention.
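The contention arithmetic above, made concrete (the numbers are illustrative; the worker/thread split shown is an example, not Dask's actual heuristic):

```python
# Contention arithmetic for a 12-core machine under the balanced default.
cores = 12
n_workers, threads_per_worker = 4, 3      # example split; product == cores
concurrent_tasks = n_workers * threads_per_worker
assert concurrent_tasks == cores          # Dask runs 12 tasks in parallel

# If each task's library also used every core (the usual library default
# when OMP_NUM_THREADS etc. are unset), threads massively outnumber cores:
library_threads_per_task = cores
total_threads = concurrent_tasks * library_threads_per_task
print(total_threads)  # → 144 threads competing for 12 cores
# With the env variables set to 1, total_threads == concurrent_tasks == 12.
```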

Ultimately these things come down to tuning for your specific workflows. You tweak these numbers when you are debugging or trying to squeeze more performance out. If you are finding a benefit from setting the variables you mention here for your workflow that's great. I'm not sure I see the value in setting this default for all Dask users though unless there is a clear problem that many people are running into.

@fjetter
Member

fjetter commented May 14, 2025

We've also seen a bit of fallout from these settings for users who intentionally run Dask with one thread per worker, assuming that the library underneath parallelizes, only to realize that this isn't happening by default because of these settings. Because of this, I'm generally not enthusiastic about them. While some users may benefit, it is a surprising side effect when running things inside a Dask worker.

(I'm not blocking, just adding context)

@flying-sheep
Author

Each worker process can have multiple threads

So that means Python threads that don’t actually have any benefit apart from I/O?
I guess that might help in I/O-bound operations, but I’d rather let my native code multithread; it’s better at it than Python.

@jacobtomlinson
Member

I’d rather let my native code multithread

Absolutely! In this case you want to set the number of threads and processes per machine to 1 and then let your native code leverage all the cores.
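That layout, sketched in stdlib code (the `LocalCluster` arguments referenced in the comment are real `dask.distributed` parameters; the env-variable names are the standard OpenMP/MKL/OpenBLAS ones):

```python
# Inverse of the balanced default: Dask supplies no parallelism of its own
# (one process, one thread per machine, i.e. something like
# LocalCluster(n_workers=1, threads_per_worker=1) in dask.distributed),
# and the native libraries are allowed to use every core.
import os

cores = os.cpu_count() or 1
for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS", "OPENBLAS_NUM_THREADS"):
    os.environ[var] = str(cores)  # native code gets all the cores
```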

This isn't something we should be setting as the default for all users though. So I'm going to close this out.

@flying-sheep
Author

That’s not what I meant. Setting it to 1 is vastly preferable to having it implicitly be n_cores, for exactly the same reasons you have the other env variables set.

If I run a numba function in map_blocks without this setting on a 256-core machine with threads_per_worker=4, I have 64 workers, each starting 256 numba threads.
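The numbers in that scenario, spelled out (this assumes numba's documented default of sizing its thread pool to the CPU count when NUMBA_NUM_THREADS is unset):

```python
# Oversubscription arithmetic for the scenario above.
cores = 256
threads_per_worker = 4
n_workers = cores // threads_per_worker        # 64 worker processes
numba_threads_per_worker = cores               # numba's default pool size
total_numba_threads = n_workers * numba_threads_per_worker
print(total_numba_threads)  # → 16384 numba threads on a 256-core machine
```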

I think this should be re-opened, but maybe I missed something. I don’t have my head wrapped around this fully, but I’m getting there.

Development

Successfully merging this pull request may close these issues:

  • Support NUMBA_NUM_THREADS env variable