Post-Mortem

Colleido is a 3D animation project designed as a proof of concept for Diversity, Equity, and Inclusion (DEI) workshops. It leverages DEP (Dual Eye Point) technology and individualized audio tracks to present the same story from multiple perspectives, immersing viewers in how people from different backgrounds may experience the same event differently. The animation follows a two-and-a-half-minute narrative featuring four university students attending a team meeting. Each character represents a distinct background or challenge: ADHD, introversion, first-generation status, or international student experience. Through customized plotlines, visual effects, and environmental details, the story reflects each character's internal world and the specific obstacles they face in group collaboration. The final renders are four separate animations that play over the same timeline, plus a neutral version of the story. By presenting these parallel experiences, the project aims to raise awareness of unconscious bias and promote empathy through visual storytelling.

Our team includes Dennis Sun, Sharon Liu, Charlotte Ai, Jesse Xu, Eliana Huang, and Frida Chen, with support from instructors John Dessler and Ruth Comley, project consultant Ricardo Washington, and subject matter expert Ayana Ledford, Associate Dean for Community Impact and Educational Outreach at Carnegie Mellon University.

During production, one of the most successful aspects of the project was the establishment of a strong and consistent art direction. From the beginning, we focused on defining a cohesive art style that could unify all visual elements under a clear thematic vision. The environment assets were designed and textured to match this vision, and when paired with our lighting setup and materials, they produced a rich, grounded atmosphere. Our character models were likewise thoughtfully crafted to complement the world they inhabit: the color palette, costume design, and silhouettes all aligned with the overall mood and tone, strengthening their integration within the environment. Special attention was paid to how characters interacted with VFX and shaders, ensuring that effects such as the international student's 'words', the flying notes, and the glitching painting reinforced the mood without breaking visual harmony.

We also made effective use of tools that accelerated our animation production, allowing us to complete the ambitious scope we had planned. The project involved building three distinct environments: a bus, a hallway, and a classroom. To manage this workload within the given timeline, we integrated AI-generated models for roughly 30% of the environment assets, which greatly supported environment creation. For character development, we used MetaHuman to quickly create four unique characters, significantly reducing the time needed for early modeling. Since our team did not include a dedicated animator, we relied on motion capture to animate all four characters. This approach allowed us to complete a large volume of character animation efficiently and stay on schedule.

Integrating MetaHumans into Unreal Engine came with technical challenges. We encountered a persistent issue in which clothing would detach from the body or explode during animation playback. After investigation, we traced this to mismatched LOD (Level of Detail) settings between the body and clothing meshes. Unreal requires the clothing and body to share consistent LODs; when LODs were missing for the clothing, animations became unstable. A temporary fix was to lock all characters to LOD 0, ensuring consistency across meshes. While effective for cinematic rendering, this approach is not viable for real-time applications.
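For reference, the LOD 0 lock can be scripted rather than set by hand on each mesh. The sketch below is a minimal Unreal editor Python pass over the open level, assuming the Python Editor Script Plugin is enabled; it is illustrative only, and the exact actor-iteration API can vary between engine versions.

```python
# Illustrative sketch: lock every skinned mesh in the open level to LOD 0 so
# body and clothing meshes always share the same LOD during cinematic renders.
import unreal

for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    # SkinnedMeshComponent covers the MetaHuman body, face, and clothing meshes.
    for comp in actor.get_components_by_class(unreal.SkinnedMeshComponent):
        # set_forced_lod(1) forces LOD index 0; a value of 0 restores automatic LOD selection.
        comp.set_forced_lod(1)
        unreal.log("Forced LOD 0 on {}/{}".format(actor.get_name(), comp.get_name()))
```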

The clothing was designed in Marvelous Designer, which produces highly detailed garments with many unmerged seams. These complex seam structures made skinning in Maya difficult, leading to potential holes or tearing during animation and physics simulation. Zippers and densely packed geometry further complicated the import into Unreal. A future improvement would be to merge more seams where possible, or to bake cloth simulations into Alembic caches for greater fidelity, an approach that is particularly effective in non-interactive, animation-based projects like this one.
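As an illustration of the Alembic route, a minimal Maya Python sketch of the export step might look like the following. The plugin and flags are standard AbcExport usage, but the node name, frame range, and output path are placeholders rather than values from our scenes.

```python
# Illustrative sketch: bake a simulated garment to an Alembic cache so Unreal
# imports the cloth as vertex animation instead of relying on skinning.
import maya.cmds as cmds

cmds.loadPlugin("AbcExport", quiet=True)

start, end = 1, 150                       # shot frame range (placeholder)
cloth_root = "|char01_grp|jacket_geo"     # full DAG path to the simulated mesh (placeholder)
out_path = "D:/caches/char01_jacket.abc"  # destination cache file (placeholder)

job = (
    "-frameRange {} {} ".format(start, end)
    + "-uvWrite -worldSpace -dataFormat ogawa "
    + "-root {} -file {}".format(cloth_root, out_path)
)
cmds.AbcExport(j=job)
```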

Facial animation also posed challenges. We explored both ARKit Live Link and MetaHuman Animator. While ARKit provided real-time previewing, it delivered less accurate results after retargeting and presented issues with syncing the head and body. We ultimately used MetaHuman Animator, which offered more precise facial capture and integrated audio recording. However, successful capture depended on keeping actors close to the camera and facing forward; deviations led to data loss or distorted expressions. A recurring issue was that the head animation did not align with the body, which we often resolved by changing the additive animation settings to mesh space. Occasionally, switching to mesh space caused facial animations to disappear entirely, requiring a rebake of the performance.
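For anyone scripting that fix in bulk, the sketch below shows how the additive setting might be switched on the captured facial animation assets through editor Python. It is only a sketch under assumptions: the asset paths are placeholders, and the property and enum names are our reading of the Python-exposed Additive Settings, which may differ between engine versions.

```python
# Illustrative sketch: switch captured facial AnimSequences to mesh-space additive.
# Asset paths are placeholders; property/enum names are assumptions.
import unreal

face_anim_paths = [
    "/Game/MetaHumans/Student01/Face_Capture_Shot010",  # placeholder asset path
]

for path in face_anim_paths:
    anim = unreal.load_asset(path)
    if not isinstance(anim, unreal.AnimSequence):
        continue
    # "Additive Anim Type" under Additive Settings; mesh space keeps the head
    # aligned with the body animation instead of offsetting in local space.
    anim.set_editor_property(
        "additive_anim_type",
        unreal.AdditiveAnimationType.AAT_ROTATION_OFFSET_MESH_SPACE,
    )
    unreal.EditorAssetLibrary.save_loaded_asset(anim)
```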

Overall, while MetaHuman tools and Marvelous Designer allowed us to rapidly prototype and visualize our characters, they introduced technical bottlenecks that required creative and sometimes time-consuming workarounds. Better planning around asset compatibility, LOD management, and facial animation recording setups will be essential in future projects using similar pipelines.

The scope of this project was ambitious, involving five renders and four distinct characters, which made production complex. One of the challenges we faced was managing the integration of all of these elements through cinematography. The merging of perspectives happened too late in the animation process, which added pressure and limited flexibility in the final stages of production. Developing small, finished demos sooner would have helped us identify challenges earlier and streamline the merging of visuals and audio tracks across the different character perspectives.

If we had completed our narrative earlier, we could have scheduled the motion capture session sooner and finished the animation earlier; since cinematography depends on the animation, this would have given us more time to integrate it. While we were waiting on the animation, we could also have spent that time researching cinematography rather than trying to work on it directly. We should have set a smaller project scope at the beginning and designed the overall pipeline before diving into individual tasks. Additionally, we need to better understand task dependencies and plan backward from critical deadlines.

Through the course of this project, we gained valuable insight into the complexities of managing a multi-perspective narrative within a 3D animation framework. We learned the importance of integrating cinematographic elements earlier in the production process to avoid bottlenecks and preserve creative flexibility.