We are currently encountering an issue noted in #82. Specifically, we are working with two large dataframes: one with 1992 rows and 3000 columns, and another with 3000 rows and 400 columns. Our Large Language Model (LLM) is producing errors indicating that we have exceeded the maximum number of tokens it can process.
Here is the specific error message we are receiving:
BadRequestError: Error code: 400 - {'error': {'message': 'litellm.BadRequestError:
OpenAIException - max_tokens must be at least 1, got -3022.. Received Model
Group=pixtral-large-2411-local\nAvailable Model Group Fallbacks=None', 'type': None,
'param': None, 'code': '400'}}
The size of these dataframes appears to be causing the LLM to reach its token limit, resulting in processing failures.
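To see why a frame this size overflows the context window, a rough back-of-the-envelope estimate helps. The sketch below is an assumption-laden heuristic (roughly 4 characters per token, which is a common rule of thumb; the actual tokenizer for pixtral-large-2411 may differ), using a stand-in frame of the same shape:

```python
import numpy as np
import pandas as pd

# Stand-in frame with the same shape as our first dataframe
# (1992 rows x 3000 columns); values are placeholders.
df = pd.DataFrame(np.ones((1992, 3000)))

# Heuristic: serialized size in characters divided by ~4 chars/token.
# This is only an order-of-magnitude estimate, not the model's real count.
approx_tokens = len(df.to_csv(index=False)) // 4
print(f"approximate tokens: {approx_tokens:,}")
```

Even this crude estimate lands in the millions of tokens, far beyond any current model's context window, which is consistent with the negative `max_tokens` value in the error above.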
Is there a way to address this issue within the pandas-ai framework? Specifically, we are interested in a solution that splits the dataframes into smaller chunks or limits the columns that are sent to the model. Any guidance or recommendations on how to handle this effectively would be greatly appreciated.
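In case it helps frame the question, here is a minimal sketch of the two workarounds we have in mind, done in plain pandas as a preprocessing step before the frame is handed to pandas-ai. All names here (`relevant`, `chunk_rows`, the chunk size of 500) are our own placeholders, not part of any pandas-ai API:

```python
import numpy as np
import pandas as pd

# Hypothetical large frame standing in for our real 1992 x 3000 data.
df = pd.DataFrame(
    np.random.rand(1992, 3000),
    columns=[f"col_{i}" for i in range(3000)],
)

# 1) Column pruning: keep only the columns a given query actually needs
#    before constructing the pandas-ai dataframe wrapper.
relevant = [f"col_{i}" for i in range(10)]  # placeholder selection
df_small = df[relevant]

# 2) Row chunking: split the frame into fixed-size row blocks so each
#    chunk stays within the model's context budget.
def chunk_rows(frame: pd.DataFrame, chunk_size: int):
    for start in range(0, len(frame), chunk_size):
        yield frame.iloc[start:start + chunk_size]

chunks = list(chunk_rows(df_small, 500))
print(len(chunks), chunks[0].shape)  # 4 chunks, first is (500, 10)
```

The open question for us is whether pandas-ai can do something like this internally, or whether we should always pre-filter before creating the dataframe object.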
Thank you