Add _apply_fn_to_data in AOBaseClass #2349

drisspg opened this issue Jun 10, 2025 · 0 comments · May be fixed by #2365
Labels
good first issue (Good for newcomers), topic: new feature (Use this tag if this PR adds a new feature)

Comments

drisspg (Contributor) commented Jun 10, 2025

Summary

This pattern is very common and can be implemented generically.

The only time this would need to be overridden is when we need to spoof the actual size, which is uncommon; NJT is the only case I can think of.

from typing import Callable

def _apply_fn_to_data(self, fn: Callable):
    """Applies fn to all tensor components stored on this class."""
    # __tensor_flatten__ returns the names of the inner tensor attributes
    # plus any non-tensor context needed to reconstruct the subclass.
    tensor_names, ctx = self.__tensor_flatten__()

    # Apply the function to each tensor component
    new_tensors = {}
    for name in tensor_names:
        new_tensors[name] = fn(getattr(self, name))

    return self.__class__.__tensor_unflatten__(
        new_tensors,
        ctx,
        None,  # outer_size parameter
        None,  # outer_stride parameter
    )
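
For illustration, here is a minimal sketch of how a subclass picks up this helper generically. `FakeQuantizedTensor` and its `qdata`/`scale` fields are hypothetical, and the flatten/unflatten protocol is stubbed out in plain Python (rather than on a real torch.Tensor subclass) so the generic helper can be exercised on its own:

```python
from typing import Callable


class FakeQuantizedTensor:
    """Hypothetical stand-in for an AOBaseTensor subclass; the real
    protocol lives on torch.Tensor subclasses, stubbed here in plain
    Python so the generic helper can run without torch."""

    def __init__(self, qdata, scale):
        self.qdata = qdata
        self.scale = scale

    def __tensor_flatten__(self):
        # Names of the tensor attributes plus non-tensor context.
        return ["qdata", "scale"], {"dtype": "int8"}

    @classmethod
    def __tensor_unflatten__(cls, tensors, ctx, outer_size, outer_stride):
        return cls(tensors["qdata"], tensors["scale"])

    def _apply_fn_to_data(self, fn: Callable):
        """Generic helper from this issue: apply fn to every component."""
        tensor_names, ctx = self.__tensor_flatten__()
        new_tensors = {name: fn(getattr(self, name)) for name in tensor_names}
        return self.__class__.__tensor_unflatten__(new_tensors, ctx, None, None)


t = FakeQuantizedTensor([1, 2, 3], [0.5])
doubled = t._apply_fn_to_data(lambda x: [v * 2 for v in x])
print(doubled.qdata, doubled.scale)  # [2, 4, 6] [1.0]
```

Because the helper only relies on the flatten/unflatten protocol, any subclass that implements those two methods gets `_apply_fn_to_data` for free, which is the point of hoisting it into the base class.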
drisspg added the good first issue and topic: new feature labels on Jun 10, 2025
krikera added a commit to krikera/ao that referenced this issue Jun 12, 2025
   - Implements generic pattern for applying functions to tensor components
   - Uses __tensor_flatten__ and __tensor_unflatten__ pattern
   - Fixes pytorch#2349