
bug: Crash when trying to use 2 parquet datasets with different types (LargeList and Sequence) #551


Open
HarikrishnanBalagopal opened this issue May 12, 2025 · 2 comments


HarikrishnanBalagopal commented May 12, 2025

Describe the bug

There are at least two variants of Parquet-backed datasets:
one where the list columns are typed as LargeList and one where they are typed as Sequence.
(Note: in general there can also be a mix, with some columns being Sequence and others LargeList.)

Trying to use both types together leads to a crash:

ValueError: The features can't be aligned because the key input_ids of features {'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'labels': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None) (expected either LargeList(dtype=Value(dtype='int32', id=None), id=None) or Value("null").

This is also an issue when trying to mix a pretokenized dataset with online data processing of a JSONL dataset.
All of our code produces Sequence columns by default. So if the pretokenized dataset is a parquet with LargeList (e.g. from storage systems like Lakehouse), then the same crash is triggered.

Platform

Please provide details about the environment you are using, including the following:

  • Interpreter version: Python 3.13.1
  • Library version: main branch commit 8821791f3485a639ab1f08314a6edb182e54f108

Sample Code

Steps to reproduce:

  1. Download data.zip (attached) and uncompress it to get the two types of parquet files.

  2. Load the datasets individually

>>> from datasets import load_dataset
>>> dlarge = load_dataset('parquet', data_files=['chat_dataset_jsonl_processed_largelist.parquet'])
Generating train split: 1000 examples [00:00, 88440.78 examples/s]
>>> dseq = load_dataset('parquet', data_files=['chat_dataset_jsonl_processed_sequence.parquet'])
Generating train split: 1000 examples [00:00, 264908.99 examples/s]
  3. Then either try to concatenate the datasets (https://huggingface.co/docs/datasets/v3.6.0/en/process#concatenate)
>>> from datasets import concatenate_datasets
>>> concatenate_datasets([dlarge['train'], dseq['train']])

or try to interleave the datasets https://huggingface.co/docs/datasets/v3.6.0/en/process#interleave

>>> from datasets import interleave_datasets
>>> interleave_datasets([dlarge['train'], dseq['train']])

Expected behavior

It should not crash. The library should intelligently look at the column types and cast the output of the data processing to a common type (usually this means casting the Sequence output of the data handlers into LargeList).

Option 1: Casting

AFTER the data handlers are done, and BEFORE the call to concatenate or interleave the datasets, we should check the column types and cast all of them to the same type (e.g. LargeList).

>>> dataset = dataset.cast_column('input_ids', LargeList(feature=Value(dtype='int32', id=None))) 
Casting the dataset: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 24377.69 examples/s]
DatasetDict({
    train: Dataset({
        features: ['input_ids', 'labels', 'attention_mask'],
        num_rows: 1000
    })
})
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['input_ids', 'labels', 'attention_mask'],
        num_rows: 1000
    })
})
>>> dataset['train'].features
{'input_ids': LargeList(dtype=Value(dtype='int32', id=None), id=None), 'labels': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)}



>>> import datasets
>>> datasets.__version__
'3.5.0'

Option 2: Loading the datasets together does the casting automatically

When loading them together, the order is important!

When the LargeList file is given first, you get LargeList:

>>> dboth = load_dataset('parquet', data_files=['chat_dataset_jsonl_processed_largelist.parquet', 'chat_dataset_jsonl_processed_sequence.parquet'])
>>> dboth
DatasetDict({
    train: Dataset({
        features: ['input_ids', 'labels', 'attention_mask'],
        num_rows: 2000
    })
})
>>> dboth['train'].features
{'input_ids': LargeList(dtype=Value(dtype='int32', id=None), id=None), 'labels': LargeList(dtype=Value(dtype='int64', id=None), id=None), 'attention_mask': LargeList(dtype=Value(dtype='int32', id=None), id=None)}

When the Sequence file is given first, you get Sequence:

>>> dbothotherorder = load_dataset('parquet', data_files=['chat_dataset_jsonl_processed_sequence.parquet', 'chat_dataset_jsonl_processed_largelist.parquet'])
Generating train split: 2000 examples [00:00, 249750.15 examples/s]
>>> dbothotherorder
DatasetDict({
    train: Dataset({
        features: ['input_ids', 'labels', 'attention_mask'],
        num_rows: 2000
    })
})
>>> dbothotherorder['train'].features
{'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'labels': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)}

Observed behavior

ValueError: The features can't be aligned because the key input_ids of features {'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'labels': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None) (expected either LargeList(dtype=Value(dtype='int32', id=None), id=None) or Value("null").

Additional context

Ran into this while trying to mix pretokenized replay-buffer data with use-case-specific JSONL chat datasets.

Example chat dataset in JSONL format (used to create the parquet files):
chatdata.zip

Sequence parquet file created using the offline processing script:

python3 scripts/offline_data_processing.py \
    --data_config_path data_config.yaml \
    --output_dir myoutput1 \
    --model_name_or_path ibm-granite/granite-3.1-8b-base \
    --add_special_tokens "<|start_of_role|>" "<|end_of_role|>" "<|tool_call|>"

The data_config.yaml:

# -----------------------------------------
# Data config docs: https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/docs/advanced-data-preprocessing.md
dataprocessor:
  type: default
  streaming: false
  # granite 3.1 8b instruct chat template
  # https://huggingface.co/ibm-granite/granite-3.1-8b-instruct/blob/main/tokenizer_config.json#L188
  chat_template: "{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content'] %}\n    {%- set loop_messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = \"Knowledge Cutoff Date: April 2024.\nToday's Date: \" + strftime_now('%B %d, %Y') + \".\nYou are Granite, developed by IBM.\" %}\n    {%- if tools and documents %}\n        {%- set system_message = system_message + \" You are a helpful AI assistant with access to the following tools. When a tool is required to answer the user's query, respond with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.\n\nWrite the response to the user's input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data.\" %}\n    {%- elif tools %}\n        {%- set system_message = system_message + \" You are a helpful AI assistant with access to the following tools. When a tool is required to answer the user's query, respond with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.\" %}\n    {%- elif documents %}\n        {%- set system_message = system_message + \" Write the response to the user's input by strictly aligning with the facts in the provided documents. 
If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data.\" %}\n    {%- else %}\n        {%- set system_message = system_message + \" You are a helpful AI assistant.\" %}    \n    {%- endif %}\n    {%- if 'citations' in controls and documents %}\n        {%- set system_message = system_message + '\n\nIn your response, use the symbols <co> and </co> to indicate when a fact comes from a document in the search result, e.g <co>0</co> for a fact from document 0. Afterwards, list all the citations with their corresponding documents in an ordered list.' %}\n    {%- endif %}\n    {%- if 'hallucinations' in controls and documents %}\n        {%- set system_message = system_message + '\n\nFinally, after the response is written, include a numbered list of sentences from the response that are potentially hallucinated and not based in the documents.' %}\n    {%- endif %}\n    {%- set loop_messages = messages %}\n{%- endif %}\n{{- '<|start_of_role|>system<|end_of_role|>' + system_message + '<|end_of_text|>\n' }}\n{%- if tools %}\n    {{- '<|start_of_role|>tools<|end_of_role|>' }}\n    {{- tools | tojson(indent=4) }}\n    {{- '<|end_of_text|>\n' }}\n{%- endif %}\n{%- if documents %}\n    {{- '<|start_of_role|>documents<|end_of_role|>' }}\n    {%- for document in documents %}\n        {{- 'Document ' + loop.index0 | string + '\n' }}\n        {{- document['text'] }}\n        {%- if not loop.last %}\n            {{- '\n\n'}}\n        {%- endif%}\n    {%- endfor %}\n    {{- '<|end_of_text|>\n' }}\n{%- endif %}\n{%- for message in loop_messages %}\n    {{- '<|start_of_role|>' + message['role'] + '<|end_of_role|>' + message['content'] + '<|end_of_text|>\n' }}\n    {%- if loop.last and add_generation_prompt %}\n        {{- '<|start_of_role|>assistant' }}\n            {%- if controls %}\n                {{- ' ' + controls | tojson()}}\n            {%- endif %}\n        {{- 
'<|end_of_role|>' }}\n    {%- endif %}\n{%- endfor %}"
datasets:
- name: tuning_data
  data_paths:
  - './chat_dataset.jsonl'
  data_handlers:
  - name: tokenize_and_apply_chat_template_with_masking
    arguments:
      remove_columns: all
      batched: false
      fn_kwargs:
        conversation_column: "messages"
@HarikrishnanBalagopal HarikrishnanBalagopal changed the title Crash when trying to use 2 parquet datasets with different types (LargeList and Sequence) bug: Crash when trying to use 2 parquet datasets with different types (LargeList and Sequence) May 12, 2025
@dushyantbehl (Collaborator) commented:

Fix will be here - #494

@dushyantbehl (Collaborator) commented:

@HarikrishnanBalagopal can we close out the issue now? Have you tested this?
