Avoiding OpenAI Token Rate Limit #1


Closed
bsimon11 opened this issue May 8, 2025 · 1 comment

@bsimon11

bsimon11 commented May 8, 2025

Hello, thank you for the exciting work! Do you have any advice for avoiding the OpenAI token rate limit, which is set at 30,000 tokens per minute for gpt-4o? I run into this error consistently when using STAgent. Thanks!

@linzw14
Collaborator

linzw14 commented May 10, 2025

Hi, thanks for using STAgent, and we highly appreciate your feedback!

In our case, we use an organization-level (university) OpenAI API account (Tier 5), which provides a generous rate limit of 30,000,000 tokens per minute. If you're affiliated with a university, consider reaching out to your institution’s IT or research computing department — they may already have dedicated OpenAI API access.

Alternatively, you can upgrade to Tier 2 (see OpenAI usage tiers). Tier 2 offers 450,000 tokens per minute for gpt-4o, which is typically sufficient for most academic or research use cases.

By contrast, Tier 1 users are limited to 30,000 tokens per minute, which can be a bottleneck when performing spatial omics analysis, especially with large numbers of high-resolution images. Unfortunately, there isn’t a straightforward workaround for this limitation.
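
If you need to stay on Tier 1 for now, one partial mitigation is to retry requests with exponential backoff whenever the API returns a rate-limit error. This does not raise the 30,000 TPM cap, but it can keep a long-running analysis from failing outright. Below is a minimal sketch assuming the standard `openai` Python client (v1.x); the function name and parameters are illustrative, not part of STAgent:

```python
# Minimal sketch: retry a chat completion with exponential backoff on rate-limit errors.
# This does not increase the token-per-minute quota; it only waits out the limit window.
import random
import time

from openai import OpenAI, RateLimitError

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def chat_with_backoff(messages, model="gpt-4o", max_retries=6):
    """Call the chat completions endpoint, sleeping progressively longer on 429s."""
    delay = 5.0  # seconds; doubles after each rate-limit error
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Add jitter so parallel workers do not retry in lockstep.
            time.sleep(delay * (1 + random.random()))
            delay *= 2


# Example usage:
# resp = chat_with_backoff([{"role": "user", "content": "Summarize this result."}])
# print(resp.choices[0].message.content)
```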

Hope this helps clarify your options!
