
A key difference in the 2022 iteration of vr.pbs.org compared to 2021 is that we are delivering an experience that feels more authentic to the notion of a film festival: all films play at the same time, on a beautiful new film screen asset provided by the Design Team. Loading the new asset into the experience and creating as many instances as there were films (27) was uncomplicated, but we noticed that performance suffered once all instances of the film screen were operational.
The first thing we did was reach for the nearest tool and log the new mesh in the browser console. One thing that stood out while inspecting the output was the large number of geometries composing the mesh, despite its fairly simple appearance.

Using the Babylon.js Inspector's Statistics tab, we confirmed that any time a film screen was in view, the number of active indices and faces was significantly greater. Rendering performance generally has an inverse relationship with the volume of active geometry being rendered, and on my machine, the frame rate dipped any time more than one screen was in the viewport.
We knew that reducing the geometry in the model had a good chance of relieving the performance pressure, but we weren't confident that this was something that could be achieved with code in Babylon.js at runtime without unintended effects. We imported the model into a blank project in Blender, a FOSS 3D toolset, for further investigation.

Being able to manipulate the model and understand its composite pieces was useful. Turning on the Statistics viewport overlay while enabling and disabling different nodes in the model helped us to better understand both visually and quantitatively where the costs of rendering the model lay.

We traversed the node tree of the model until we reached an outlier. Over half of the geometry of the model appeared to be in the lettering.
At this point we knew where to reduce the geometric complexity. English letters and Arabic numerals don't have to be particularly complex shapes, and at a glance these certainly weren't, but a closer look revealed lots of detail that could be removed without changing the fundamental shape of each character.
The problem, stated plainly: How do we reduce the complexity of the model's text while maintaining design intent, without getting our hands too dirty?
We already had the model imported into Blender and had confirmed that the problem was primarily in the characters at the top of the film screen. Examining a little deeper, we saw that all of the characters were undifferentiated in one mesh, so we first wanted to separate it into individual meshes, one per character, to make our workspace cleaner and give ourselves greater control.
1. Align the viewport to the global Y axis by clicking the green Y on the navigation gizmo.

2. Change the view type to wireframe so that any selections carry through to the vertices and edges on the other side (keyboard shortcut: Z, 4).
3. Select all of a character's geometry by clicking and dragging, then separate by selection (keyboard shortcut: P, ↵).

4. Repeat for each character.
In the interest of saving some time, we only repeated this process for curved characters, omitting characters like L and I.
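This manual pass can itself be scripted. Here is a minimal sketch using bpy, assuming the combined lettering mesh is the active object and each character is its own disconnected island of geometry (characters that touch would still need the manual approach above):

```python
import bpy

# Assumes the combined lettering mesh is the active object, in Object Mode.
bpy.ops.object.mode_set(mode='EDIT')
# Split every disconnected island (each character) into its own object.
bpy.ops.mesh.separate(type='LOOSE')
bpy.ops.object.mode_set(mode='OBJECT')
```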
With our character meshes separated, we were ready to try reducing the complexity.
Decimation recalls a notorious Roman military practice, but it can also mean retaining every Nth sample of a signal. Neither definition is particularly appropriate, but whatever the etymology, in 3D this means using an algorithm to make a mesh's geometry less dense.
In Blender, a user can add Modifiers to objects to manipulate their geometry nondestructively, and one of those is the Decimate modifier.

We added the modifier to a new character mesh and, using the default Collapse algorithm, decreased the Ratio parameter while keeping an eye on the character in the main viewport to make sure no visible detail was lost. The character looked the same, and its face count was reduced dramatically.
We repeated the process for the rest of the characters (a more efficient decimation method is covered in the scripting section below) and applied each character's modifiers one by one to bake the geometry reductions into the model. Sure enough, performance penalties when keeping multiple film screens in view became much rarer.
At this point, we were functionally done. But what if we had to do this at a greater scale? It would be incredibly tedious to do this for a hundred characters. We earmarked this as a workflow that could take advantage of Blender's Python API to minimize repetition.
Blender offers a rich Python API, bpy, that exposes most of Blender's functionality and is perfect for automating repetitive tasks. Instead of adding modifiers, adjusting their parameters, and applying them one at a time, we can leverage bpy to automate the process described in the previous section.
1. Select the character meshes to be decimated and add them to a new collection called 'to-decimate'.
2. In the Scripting view, run a script that adds a Decimate modifier to every mesh in the collection (first sketch after this list).
3. Take a look at the objects to be decimated in the main viewport. If an object looks off, adjust its decimation ratio individually.
4. Once all objects look acceptable, apply all modifiers in that collection destructively (second sketch after this list).
5. Export the file.
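For step 2, here is a minimal sketch of what that script might look like, assuming Blender 2.9+ and the 'to-decimate' collection created in step 1 (the starting ratio is an assumption to tune, not a prescription):

```python
import bpy

RATIO = 0.1  # starting point; tune per character in step 3 as needed

# Add a Decimate modifier (Collapse algorithm) to every mesh in the collection.
for obj in bpy.data.collections['to-decimate'].objects:
    if obj.type == 'MESH':
        mod = obj.modifiers.new(name='Decimate', type='DECIMATE')
        mod.decimate_type = 'COLLAPSE'
        mod.ratio = RATIO
```

And for step 4, a sketch that bakes the reductions in by applying every modifier on every mesh in the collection:

```python
import bpy

for obj in bpy.data.collections['to-decimate'].objects:
    if obj.type != 'MESH':
        continue
    # modifier_apply operates on the active object, so activate each mesh first.
    bpy.context.view_layer.objects.active = obj
    for mod in list(obj.modifiers):
        bpy.ops.object.modifier_apply(modifier=mod.name)
```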
Unoptimized assets produce poor performance in 3D experiences on the web, just as they do on any other platform. In this year's Film Fest iteration, we learned more practical lessons about building 3D web apps with a small team and established a common workflow for optimizing 3D assets whose geometry becomes a performance concern at scale.
3D will continue to become more prevalent on the web. You might find 3D models described alongside other HTML elements in W3C standard markup sooner than you think. Teams can find value in methods for eliminating performance bottlenecks and automating common workflows.
Speaking of automation, Blender can run headlessly. Imagine a web API where a browser-based requestor can send a model file along with a few exposed parameters. The file could be glTF, USDZ, or OBJ. If the job and its parameters are reasonable, the API returns a 200 and a job ID, then redirects the browser to a page generated for that job ID. When the job is done, the page links to the output file. In a world where more content on the web is delivered in the form of 3D models, a service like this may make sense for media organizations.
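As a rough sketch of the building block such a service could wrap: Blender's standard binary accepts a --background flag, and arguments after -- are passed through to a Python script. The script name and arguments below are hypothetical placeholders:

```python
import subprocess

# Hypothetical job runner step: invoke Blender headlessly to decimate a
# submitted model. 'decimate_job.py' and its arguments are placeholders.
subprocess.run(
    [
        'blender', '--background',
        '--python', 'decimate_job.py',
        '--', 'input.glb', '0.1',
    ],
    check=True,
)
```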