patterns: Add json support for glb files #412
This makes it possible to separately display the different buffer views, accessors and images (and even visualize them). Unfortunately the data within the JSON sometimes gets corrupted, which is why the pattern parses the JSON multiple times in some places.
I've uploaded this as a draft because it parses the JSON multiple times, and preferably this shouldn't be necessary. I'm also not sure which formatting it should use: newer files seem to use UpperCamelCase for types and snake_case for fields, while the original part currently uses the C++ style with the … It also fails the unit test as the …
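For context, a .glb file is a small fixed header followed by length-prefixed chunks, and the first chunk holds the JSON document that the buffer views and accessors index into. A rough hexpat sketch of that container (identifiers here are illustrative and may not match the ones actually used in patterns/gltf.hexpat):

```hexpat
// Illustrative GLB container layout per the glTF 2.0 spec.
struct gltf_header_t {
    char magic[4]; // "glTF"
    u32 version;   // 2 for glTF 2.0
    u32 length;    // total file length in bytes
};

struct gltf_chunk_t {
    u32 length;      // chunk payload length
    u32 type;        // 0x4E4F534A ("JSON") or 0x004E4942 ("BIN")
    u8 data[length]; // JSON document or binary buffer
};
```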
I am confused by this statement. Json doesn't get parsed multiple times. The pattern you wrote is the one that repeats the operation 3 times, and you are right, it is not needed, so remove the extra copies.
Since you are extending an existing pattern, the proper thing to do is to use the format it was written in originally, so that the original code remains as unchanged as possible while making the pattern easier to read for everybody.
Use the same check the evaluator uses to see if it is running with all the plugins, and if it isn't, don't use the json type. That's ok for unit tests since we don't test the json part anyway.
@@ -53,7 +56,10 @@ enum gltf_chunk_type_t : u32 {
struct gltf_chunk_t {
    u32 length; /**< Length of this chunk. */
    gltf_chunk_type_t type [[format("gltf_format")]]; /**< Type of the chunk. JSON or BIN expected. */
    u8 string[length]; /**< The chunk data. */
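The [[format("gltf_format")]] attribute above refers to a format function defined elsewhere in the pattern file. As a hedged sketch of what such a function could look like (the real one in patterns/gltf.hexpat may differ), it just maps the chunk-type magic to a readable label:

```hexpat
// Illustrative format function for the chunk type field.
fn gltf_format(gltf_chunk_type_t type) {
    if (type == gltf_chunk_type_t::JSON)
        return "JSON";
    else if (type == gltf_chunk_type_t::BIN)
        return "BIN";
    return "Unknown chunk type";
};
```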
This is one way you could pass unit tests.
#ifndef __IMHEX__
u8 data[length];
#endif
#ifdef __IMHEX__
match (type) {
(gltf_chunk_type_t::JSON): hex::type::Json<length> json;
(gltf_chunk_type_t::BIN): u8 data[length];
} /**< The chunk data. */
#endif
You would need to also do all other parts that fail the unit tests.
patterns/gltf.hexpat (outdated)
ComponentType component_type = Json.accessors[accessor_index].componentType [[export]];
match (Json.accessors[accessor_index].type, component_type) {
    ("SCALAR", ComponentType::BYTE): StrideType<Scalar<s8>, byte_stride> content[count_elements] @ view_offset + Offset;
You want to avoid having to list all cases for each data type and each component type. There should be one case for SCALAR, one for VEC2, and so on, with no sub-matches: just one line per type. Instead of using the type as an argument, pass the enumeration so you can match the built-in types at the lowest level possible. That way you only have to match them once, instead of once for scalar, once for vec2, and so on. Note that you don't need to change the StrideType struct templates, only the ones for scalar, vec2, ...
Start by creating a new templated type for ComponentType (I would rename the ComponentType enum to ComponentTypes) that does the matching on the template argument. Then use the newly created type to populate scalar, vec2, vec3, ...
I think the code will become cleaner and easier to read and work with.
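For reference, a StrideType in the spirit this comment assumes could look roughly like the following sketch (not the template actually in the pattern; it assumes Stride is at least sizeof(T)): it reads one element, then pads out to the accessor's byteStride so consecutive array elements land the right distance apart.

```hexpat
// Hypothetical stride-aware wrapper: one element plus padding up to Stride.
struct StrideType<T, auto Stride> {
    T value;
    // Skip whatever the stride leaves after the element itself.
    padding[Stride - sizeof(value)];
} [[sealed]];
```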
Do you mean something like this in the different types?
struct Scalar<auto ComponentType, auto Stride> {
match (ComponentType) {
(ComponentTypes::BYTE): StrideType<s8, Stride> scalar;
(ComponentTypes::UNSIGNED_BYTE): StrideType<u8, Stride> scalar;
(ComponentTypes::SHORT): StrideType<s16, Stride> scalar;
(ComponentTypes::UNSIGNED_SHORT): StrideType<u16, Stride> scalar;
(ComponentTypes::UNSIGNED_INT): StrideType<u32, Stride> scalar;
(ComponentTypes::FLOAT): StrideType<float, Stride> scalar;
}
};
That would also not be optimal, as it still includes matches for each type. You can create only one match to test for the component_type enum value:
struct ComponentType<auto component_type> {
match (component_type) {
(ComponentTypes::BYTE) : s8 value;
(ComponentTypes::UNSIGNED_BYTE) : u8 value;
(ComponentTypes::SHORT) : s16 value;
(ComponentTypes::UNSIGNED_SHORT) : u16 value;
(ComponentTypes::UNSIGNED_INT) : u32 value;
(ComponentTypes::FLOAT) : float value;
}
};
Then you can define Scalar and the rest like this
struct Scalar<auto component_type> {
ComponentType<component_type> x;
} [[static, sealed]];
with formatters to avoid the extra layers added. This creates the same data types but uses only one match for the value types (int, short, ...) and one match for the data types (scalar, vector2, ...).
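As a sketch of the formatter idea mentioned above (assuming std::format from the standard library; the include path and the exact display string are assumptions, not taken from the pattern file), a format function can surface the inner value so the wrapper layer stays invisible in the UI:

```hexpat
#include <std/io.pat>

// Hypothetical formatter that prints the wrapped value directly,
// hiding the intermediate ComponentType layer in the pattern view.
fn format_scalar(ref auto scalar) {
    return std::format("({})", scalar.x.value);
};

struct Scalar<auto component_type> {
    ComponentType<component_type> x;
} [[static, sealed, format("format_scalar")]];
```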
After going through the motions of adding a meshes struct to the group formed by images, buffer views and accessors, I realized that I was simply recreating something that already exists in the pattern. Meshes can be accessed from the first json chunk, so adding them again is simply not necessary. Naturally, I started to wonder what the purpose was of creating variables for buffer views and accessors, since the ones in the json object already exist and already follow the strict format specifications. At first I thought they were simply created to facilitate access to the binary data, and my confusion now stems from the realization that we can access the binary data equally well, or even better, using the variable that already exists and is read as the first gltf chunk.
Can you maybe explain the last part of your comment a little bit?
What do you mean by "access the binary data"? Access it to generate pattern data? Or access it to visualize it?
In order to visualize data, and by that I mean to use the visualize attribute on a pattern holding the data, you must first generate pattern data. Right now the binary part of the file is converted into a pattern that is an array of bytes. Yes, it is a pattern, but no, it isn't what the format specifies as being stored there.
Removes the duplicate definition of `component_type_t` and also removes the need to pass the `component_type` to `stride_type_t`.
Okay, in that case it's up to you what you want to do. My use case was to determine which parts of the binary data contain what data, and this is accomplished by the pattern. My two reasons for marking it as a draft (duplicate jsons and naming) are no longer present, which is why I marked it ready for review.
I have tested your pattern on files other than the unit test file and found it very slow. It is true that the reason for this is mostly one of the changes I proposed. I wasn't considering the effect on evaluation time, but moving the match for types to the last moment makes it have to check for matches at every value. To make up for my mistake I am attaching a patch that puts the code back to how it would have been without that change, so you can just copy-paste it. Even with the patch the code is still very slow for what it does. In one case it goes from 20 seconds to 2 minutes to display images, buffer views and accessors. While it may be worth it for images, looking at buffers of bytes doesn't really say much about the data being stored. Some of the data are indices, some are vertices and so on, which can be seen in the 3d visualizer model code I have added to the gltf pattern to visualize the stored models.