It depends on how they store the models. If they're using normal mapping (which they probably are), they will need to store the following for each vertex in the file:
Position (x, y, z), normal (x, y, z), texture coordinates (u, v), tangent (x, y, z), bitangent (x, y, z). Assuming a custom binary format with 32-bit (4-byte) floats, that's 56 bytes per vertex. The Sponza model, which is commonly used for testing, has around 1.9 million vertices: in our hypothetical format that's at least 106.4 MB for the vertices. But we also have to store the indices, which are an optimisation to avoid repeating shared vertices. Sponza has 3.9 million triangles, and at 3 32-bit integers per triangle that's an additional 46.8 MB. So even with this naive format, which should be extremely fast to load, a game with a lot of models ends up with 3D model data as a not-insignificant contributor to file size.
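A minimal sketch of that arithmetic, assuming a plain interleaved binary layout; the `Vertex` struct, its field names, and the vertex/triangle counts are illustrative, not any particular engine's actual format:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical interleaved vertex layout for a normal-mapped mesh.
struct Vertex {
    float position[3];   // x, y, z
    float normal[3];     // x, y, z
    float uv[2];         // texture coordinates u, v
    float tangent[3];    // x, y, z
    float bitangent[3];  // x, y, z
};                       // 14 floats * 4 bytes = 56 bytes

int main() {
    const std::uint64_t vertexCount   = 1'900'000;  // approx. Sponza vertices
    const std::uint64_t triangleCount = 3'900'000;  // approx. Sponza triangles

    const std::uint64_t vertexBytes = vertexCount * sizeof(Vertex);
    const std::uint64_t indexBytes  = triangleCount * 3 * sizeof(std::uint32_t);

    std::printf("per-vertex size: %zu bytes\n", sizeof(Vertex));
    std::printf("vertex data:     %.1f MB\n", vertexBytes / 1e6);
    std::printf("index data:      %.1f MB\n", indexBytes / 1e6);
    std::printf("total:           %.1f MB\n", (vertexBytes + indexBytes) / 1e6);
    return 0;
}
```

Running this prints 56 bytes per vertex, roughly 106.4 MB of vertex data and 46.8 MB of index data, matching the figures above.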
No, but raw audio and high res 3d models are
3d models consist of images right? The coordinates for the image don't take up much?
Yes but also more polygons for more detailed models. Which… is more space but it can’t be that much lol idk tho