This repository was archived by the owner on Aug 22, 2024. It is now read-only.

Compare the GPU memory requirements between TorchSharp-phi and Microsoft.ML.OnnxRuntimeGenAI #13

Open
GeorgeS2019 opened this issue Jun 25, 2024 · 3 comments

Comments

@GeorgeS2019

No description provided.

@LittleLittleCloud
Owner

@GeorgeS2019 thanks for the feedback. We are working on migrating this project to Microsoft.ML.GenAI.Phi, and afterwards we will add a GPU memory comparison between this package and the Phi model in the onnxruntime package.

@GeorgeS2019
Author

I have problems using the GPU with onnxruntime, onnxruntime-training, and GenAI.

The requirements are not clear, and the examples provided are not up to date.

All of the examples focus only on the CPU, which makes GPU usage unclear.
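For context on the GPU setup being asked about: running the GenAI package on an NVIDIA GPU generally requires referencing its CUDA-specific NuGet package (rather than the CPU one) together with a model exported for the CUDA execution provider. A minimal sketch of the project-file change is below; the package name follows the onnxruntime-genai NuGet naming convention, and the version number is illustrative only, not taken from this thread.

```xml
<!-- Sketch of a .csproj fragment; assumes the CUDA variant of the
     GenAI package and a model folder built for the cuda execution
     provider. Replace the version with the current release. -->
<ItemGroup>
  <PackageReference Include="Microsoft.ML.OnnxRuntimeGenAI.Cuda"
                    Version="0.3.0" />
</ItemGroup>
```

The same application code is then pointed at a CUDA-targeted model directory; mixing the CPU package with a CUDA model (or vice versa) is a common source of the unclear errors described above.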

@LittleLittleCloud
Owner

For GenAI, are you referring to this sample? If that's the case, would you mind sharing the error log you have? I'm happy to help out.
