spatial_autocorr uses all the cores #957
For a quick fix, you should be able to set the environment variable NUMBA_NUM_THREADS.
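A minimal sketch of the quick fix suggested above. The important constraint is that NUMBA_NUM_THREADS is read when numba is first imported, so it has to be set beforehand; the squidpy call is illustrative (`adata` is a placeholder for your AnnData object):

```python
import os

# NUMBA_NUM_THREADS must be set before numba is imported for the first time;
# changing it after numba has initialised its thread pool has no effect.
os.environ["NUMBA_NUM_THREADS"] = "8"

# Illustrative only -- squidpy imports numba, so it must come after the line above:
# import squidpy
# squidpy.gr.spatial_autocorr(adata, mode="moran", n_jobs=8)
```

Setting the variable in the shell before launching Python (`NUMBA_NUM_THREADS=8 python script.py`) achieves the same thing and avoids any import-order pitfalls.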
Hi @GloriaLiu28, can you check whether this still occurs with the most recent version of scanpy? @flying-sheep I noticed we use Geary's statistic in this.
It will not be parallelized when the current thread's name starts with a particular prefix. joblib seems to use this: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.ThreadPool, so scanpy's workaround will probably not work there; we should augment it so it does. I also found the way joblib adapts to manually reduced thread numbers, which seems like a nice starting point for us: https://github.com/joblib/joblib/blob/ed0806a497268005ad7dad30f79e1d563927d7c6/joblib/_parallel_backends.py#L65
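The augmented check described above could be sketched as follows. This is a hypothetical illustration, not scanpy's actual code: the function name `in_worker_thread` and the exact prefixes are assumptions. `"Thread-"` matches workers spawned by `multiprocessing.pool.ThreadPool` (which joblib's threading backend uses), and `"ThreadPoolExecutor"` matches `concurrent.futures` workers:

```python
import threading
from multiprocessing.pool import ThreadPool

def in_worker_thread() -> bool:
    """Guess whether we are running inside a thread-pool worker,
    in which case inner numba parallelisation should be disabled."""
    name = threading.current_thread().name
    # Assumed prefixes: ThreadPool workers are named "Thread-N",
    # concurrent.futures workers "ThreadPoolExecutor-N_M".
    return name.startswith(("Thread-", "ThreadPoolExecutor"))

if __name__ == "__main__":
    print(in_worker_thread())                # main thread: False
    with ThreadPool(1) as pool:
        print(pool.apply(in_worker_thread))  # pool worker: True
```

Checking thread names is inherently fragile (user code can rename threads), which is why inspecting how joblib itself tracks reduced thread counts may be the more robust starting point.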
@selmanozleyen Ensure that numba only uses one core here going forward. |
Hi,
when I run squidpy.gr.spatial_autocorr to calculate Moran's I, it uses all 112 cores of my server, even though n_jobs was set to n_jobs=8. How can I limit the number of cores it uses? I found that old issues about co-occurrence also discussed this. Could you please also add a numba_parallel=False option to the squidpy.gr.spatial_autocorr function?