🚀 Feature
Discussion: Should the forward of LightningLite's `_LiteModule` wrapper convert the outputs back to default precision?
Motivation
The original motivation was to have a precision-agnostic module wrapper so that the user does not need to convert outputs when switching between precision backends. On the other hand, it takes control away from the user.
Pitch
A) Keep as is (convert output to default type)
B) Convert it back to the type the input had (here, input refers to the input of the wrapper, NOT the inner module)
C) Do nothing. Return the output exactly as the inner module's forward returned it
Additional context
Raised in #14792 (comment)
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA deep learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.