wasi_nn_openvino.c: remove the tensor layout adjustment logic #4308
Open · yamt wants to merge 1 commit into bytecodealliance:main from yamt:nn4
Conversation
the logic in question seems like an attempt to work around some application bugs. my wild guess is that it was for classification-example, cf. bytecodealliance/wasmtime#10867.

because the wasi-nn api doesn't provide a way for users to express their intended layout, it seems impossible to "fix" it here without risking breaking other applications.

btw, i suspect the condition `if (wasi_nn_tensor->dimensions->size == 4 || ov_dims[1] == 3)` in the code in question was meant to be `if (wasi_nn_tensor->dimensions->size == 4 && ov_dims[1] == 3)`. this commit fixes the problem by removing the logic entirely.

after this change, the classification-example mentioned above produces inference results similar to wasmtime's. (i used the fixed version of tensor.bgr mentioned in bytecodealliance/wasmtime#10867.)

wasm-micro-runtime with this commit:
```
Found results, sorted top 5: [InferenceResult(963, 0.71130574), InferenceResult(762, 0.07070772), InferenceResult(909, 0.036355656), InferenceResult(926, 0.015456188), InferenceResult(567, 0.015344019)]
```

wasmtime:
```
Found results, sorted top 5: [InferenceResult(963, 0.7113058), InferenceResult(762, 0.070707425), InferenceResult(909, 0.036355816), InferenceResult(926, 0.01545616), InferenceResult(567, 0.015344027)]
```

wasm-micro-runtime without this commit:
```
Found results, sorted top 5: [InferenceResult(505, 0.097339325), InferenceResult(459, 0.053065795), InferenceResult(849, 0.03836019), InferenceResult(582, 0.036462624), InferenceResult(725, 0.030435428)]
```
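for concreteness, here is a small hypothetical sketch of the two variants of the condition quoted above; only the two conditions come from the PR description, everything else (function names, the `size_t` shape type, the example shape) is illustrative and not the actual wasi_nn_openvino.c code. when the condition fires, the removed logic would reinterpret the caller's data as NHWC and transpose it to NCHW, so a misfire silently scrambles a tensor that was already in the layout the model expects:

```c
/*
 * Hypothetical sketch of the layout-adjustment heuristic discussed above.
 * Only the two conditions come from the PR description; everything else
 * (names, types, the example shape) is illustrative.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* the condition as written in the removed code */
static bool
adjust_as_written(size_t dims_size, const size_t ov_dims[4])
{
    return dims_size == 4 || ov_dims[1] == 3;
}

/* the condition the description suspects was intended */
static bool
adjust_as_probably_intended(size_t dims_size, const size_t ov_dims[4])
{
    return dims_size == 4 && ov_dims[1] == 3;
}

int
main(void)
{
    /* a 4-D tensor already in NCHW with one channel, e.g. a grayscale
     * input [1, 1, 224, 224]: no adjustment should happen here */
    size_t ov_dims[4] = { 1, 1, 224, 224 };

    /* "||" fires on every 4-D tensor and would wrongly transpose this one;
     * "&&" skips it, but still guesses for 3-channel NCHW inputs */
    printf("as written:  %d\n", adjust_as_written(4, ov_dims));           /* 1 */
    printf("as intended: %d\n", adjust_as_probably_intended(4, ov_dims)); /* 0 */
    return 0;
}
```

either way, without a layout field in the wasi-nn api the runtime can only guess, which is why this commit removes the adjustment instead of flipping `||` to `&&`.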
It will lead to two failures regarding openvino-mobilenet-image and openvino-mobilenet-raw in the WasmEdge-WASINN-examples repository, where WAMR is used to verify the functionality.
yamt added a commit to yamt/WasmEdge-WASINN-examples that referenced this pull request on Jun 5, 2025:
tested with:
* wasmtime
* wasm-micro-runtime with bytecodealliance/wasm-micro-runtime#4308

openvino-mobilenet-raw has the same issue. it can be fixed by transposing the tensor file as suggested in https://github.com/bytecodealliance/wasmtime/issues/10867#issuecomment-2927998250. i guess these tests have the same origin as wasmtime's classification-example and share the same bug. see bytecodealliance/wasmtime#10867 for details of the bug.
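as a rough illustration of "transposing the tensor file", the following standalone sketch rewrites a raw float32 tensor from HWC to CHW element order. the 224x224x3 shape is an assumption about the mobilenet input fixtures, not something stated in this PR; adjust the constants for other models.

```c
/*
 * Standalone sketch: rewrite a raw float32 tensor file from HWC to CHW
 * element order.  The 224x224x3 shape is an assumption about the
 * mobilenet input fixtures; adjust the constants for other models.
 */
#include <stdio.h>
#include <stdlib.h>

#define H 224
#define W 224
#define C 3

int
main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <hwc-input> <chw-output>\n", argv[0]);
        return 1;
    }

    size_t nelems = (size_t)H * W * C;
    float *src = malloc(nelems * sizeof(float));
    float *dst = malloc(nelems * sizeof(float));
    if (src == NULL || dst == NULL) {
        perror("malloc");
        return 1;
    }

    FILE *in = fopen(argv[1], "rb");
    if (in == NULL) {
        perror(argv[1]);
        return 1;
    }
    if (fread(src, sizeof(float), nelems, in) != nelems) {
        fprintf(stderr, "%s: short read\n", argv[1]);
        return 1;
    }
    fclose(in);

    /* dst[c][h][w] = src[h][w][c] */
    for (size_t h = 0; h < H; h++)
        for (size_t w = 0; w < W; w++)
            for (size_t c = 0; c < C; c++)
                dst[(c * H + h) * W + w] = src[(h * W + w) * C + c];

    FILE *out = fopen(argv[2], "wb");
    if (out == NULL) {
        perror(argv[2]);
        return 1;
    }
    if (fwrite(dst, sizeof(float), nelems, out) != nelems) {
        fprintf(stderr, "%s: short write\n", argv[2]);
        return 1;
    }
    fclose(out);
    free(src);
    free(dst);
    return 0;
}
```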
hydai pushed a commit to second-state/WasmEdge-WASINN-examples that referenced this pull request on Jun 5, 2025:
tested with:
* wasmtime
* wasm-micro-runtime with bytecodealliance/wasm-micro-runtime#4308

openvino-mobilenet-raw has the same issue. it can be fixed by transposing the tensor file as suggested in bytecodealliance/wasmtime#10867 (comment). i guess these tests have the same origin as wasmtime's classification-example and share the same bug. see bytecodealliance/wasmtime#10867 for details of the bug.
yamt added a commit to yamt/wasm-micro-runtime that referenced this pull request on Jun 9, 2025:
the sample app was tested with:
* wasmtime
* iwasm with bytecodealliance#4308

currently only contains wasi-nn. maybe it makes sense to add lib-socket things as well. cf. bytecodealliance#4288