Category: Weekly Blogs

  • Week 14 (04/25/2025) – Wrapping Up, Code Freeze and UI Pass

    This is the final week of the semester. The first and foremost task is to set up a time for a code freeze, so that our workaholic team members won’t keep pushing updates until the semester ends.

    Remaining work includes sample demo scenes, starter scenes, and tester scenes.

  • Week 13 (04/18/2025) – Jam 2 & Orbbec!!

    This week in brief: the CAVERN Jam, pipes, and important guests.

    Jam: 8 worlds, 13 jammers.

    • Mike and Bryan
    • Grace
    • Jing and Jose
    • Yuhuai
    • Mia and Jinyi
    • Skye and Enn
    • Josh
    • Winnie and Terri

    Showcase: around 23 people came, including Ling, Yingjie, James, Eva, John Balash, Anthony, Jesse Schell, Nina, Michael, Victor, and more.

    Orbbec Release

    Documentation Revision

    A sound problem with the surround speakers was found. (We will review the bug report and run an analysis next week.)

  • Week 12 (04/11/2025) – Soft Opening & Body Tracking Integration

    This Thursday was Soft Opening day, where faculty rotate around projects to learn about what’s almost done! For our project, we decided to showcase Sample Scene 2.0 in the CAVERN, and walk faculty through our toolkit in our project room.

    Sample scene

    We fixed all the bugs from Playtest Day, and the following showcases a full run-through of the sample scene interactions.

    The goal of the sample scene is to evoke a sense of immersion through stereoscopic rendering and surround sound, demo the interaction building blocks we created, and prove the performance of our camera through various technical art features.

    Interactions

    There are 4 states of the creature, each showcasing different aspects of the sample scene.

    • State 1: Starting from a tree hole, a small creature emerges and invites players to explore the space.
    Tech Art features

    Toolkit feature walkthrough

    We walked faculty through how our toolkit works, along with the documentation that supports it. We especially spent time describing and showcasing how our UI makes it intuitive for users to add a CAVERN setup, add Vive Trackers, add interaction building blocks, auto-configure 7.1 surround sound, use the live preview gizmo, and use the debug keys.

    The following is how the final UI looks at the end of the semester.

    Feedback from faculty

    The good news is that faculty felt we were exceeding expectations, and they really liked that we’re doing a second jam (we set a record by having documentation and one game jam done by Halves, and now we’re doing a second one!).

    In addition, they encouraged us to update our toolkit and documentation frequently. They stressed the need for tutorials that help users get started, cover common ways things can go wrong, and explain how to troubleshoot. These are all currently in progress, so we’re happy to find that we are indeed on the right track.

    Orbbec Body Tracking

    Meanwhile, since Orbbec Femto Bolt tracking support is still under development, we decided to adapt the open-source Azure Kinect body tracking sample into an Orbbec version that will work in the CAVERN.

    There were many obstacles to overcome:

    • The cameras have individual IDs, yet they can only be detected after entering play mode, manually checking the IDs, and inserting them into the code. We have to simplify this process like we did for the Vive Trackers.
    • If there are no cameras plugged into the computer, the project will crash. For a toolkit, we need to prevent this from happening (see the sketch after this list).
    • The center Orbbec camera is flipped upside down due to an installation error. We can’t simply ignore or physically fix this without breaking past projects; instead, our toolkit should handle any orientation of the camera.
    • There are 3 cameras in total, and we need to find a way to sync them up, or choose which ones are tracking.
    2 Femto Bolts.
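
    As a taste of the second fix, here is a minimal sketch (not our final code) of guarding against the no-camera crash, using the Azure Kinect C# API that Orbbec’s wrapper mirrors:

        using Microsoft.Azure.Kinect.Sensor;
        using UnityEngine;

        public static class SafeDeviceOpen
        {
            // Returns an opened device, or null when nothing is plugged in,
            // instead of letting the whole project crash.
            public static Device TryOpenFirst()
            {
                if (Device.GetInstalledCount() == 0)
                {
                    Debug.LogWarning("No depth camera detected; body tracking disabled.");
                    return null;
                }
                return Device.Open(0); // open the first connected camera
            }
        }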

    We are tackling these one by one; stay tuned for updates in the coming weeks. But first, next Monday is CAVERN Jam 2.0, and getting everything ready for it is our other main priority.

  • Week 11 (04/04/2025) – New Features Development & Jam 2 Preparation

    Playtesting ended last week. This week, we resumed the toolkit feature development that had been paused while we prepared for playtesting at both South Fayette and the ETC. Our main focus was starting Orbbec body tracking integration, continuing RenderGraph work, and wrapping up keyboard shortcuts. We will also give an overview of the small but crucial improvements we made during the hectic testing and demoing schedule of the previous weeks, in the hopes of informing a more comprehensive toolkit feature plan.

    New Features

    CAVERN Preview as a Gizmo

    Because the original CAVERN previewer is a mesh generated by the camera, errors show up when baking lighting details into the scene to optimize performance. (This happened with Alex’s Flesh Wall, made during the first CAVERN jam.) That is why we migrated the previewer to a gizmo!

    Live preview when editing the scene
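
    Gizmos are editor-only callbacks, so they never touch the baked scene. A minimal sketch of the idea (dimensions and names here are hypothetical, not the actual Spelunx previewer):

        using UnityEngine;

        public class CavernPreviewGizmo : MonoBehaviour
        {
            public float radius = 3f;        // hypothetical screen radius (meters)
            public float height = 2.4f;      // hypothetical screen height
            public float arcDegrees = 270f;  // hypothetical horizontal coverage
            public int segments = 64;

            // Gizmos are never baked, built, or rendered in play builds.
            private void OnDrawGizmos()
            {
                Gizmos.color = Color.cyan;
                float start = -arcDegrees * 0.5f;
                for (int i = 0; i < segments; i++)
                {
                    Vector3 a = PointAt(start + arcDegrees * i / segments);
                    Vector3 b = PointAt(start + arcDegrees * (i + 1) / segments);
                    Gizmos.DrawLine(a, b);                                              // bottom rim
                    Gizmos.DrawLine(a + Vector3.up * height, b + Vector3.up * height);  // top rim
                    Gizmos.DrawLine(a, a + Vector3.up * height);                        // vertical rib
                }
            }

            private Vector3 PointAt(float degrees)
            {
                float rad = degrees * Mathf.Deg2Rad;
                return transform.position + new Vector3(Mathf.Sin(rad), 0f, Mathf.Cos(rad)) * radius;
            }
        }
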
    Debug Keys

    People have requested hotkeys so they can “toggle stereo and mono views”, “change to head tracking with a button click”, and so on. We made that happen: now, when you press “h” during play mode, you’ll see the available debug keys as well as frame rates and tracker statuses.

    Debug keys that show up after pressing “h”, as in “help”.
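
    Under the hood, an overlay like this boils down to a few lines (a sketch; the actual bindings and stats differ, and the “M”/“T” keys here are hypothetical):

        using UnityEngine;

        public class DebugOverlay : MonoBehaviour
        {
            private bool visible;

            private void Update()
            {
                // "h" as in "help" toggles the overlay during play mode.
                if (Input.GetKeyDown(KeyCode.H)) visible = !visible;
            }

            private void OnGUI()
            {
                if (!visible) return;
                GUILayout.Label($"FPS: {1f / Time.unscaledDeltaTime:F0}");
                GUILayout.Label("M: toggle mono/stereo    T: toggle head tracking"); // hypothetical bindings
            }
        }
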
    Fullscreen On Play

    Fullscreen On Play is a really useful tool when you are testing on the CAVERN directly in the Unity Editor without building: it lets you see the true view of the scene without being interrupted or obscured by the Unity Editor GUI.

    However, at the start of the semester, the asset we used was a paid asset from the Unity Asset Store, and it repeatedly caused crashes when exiting play mode on the CAVERN computer.

    The Fullscreen On Play option on our toolkit.

    Fortunately, Ezra, our technical consultant, was able to create one for our purposes, so now we have a Fullscreen On Play feature that developers can toggle on and off during development.

    What the screen looks like after entering fullscreen.
    RenderGraph

    RenderGraph is Unity’s new system for customizable render pipelines. Since we are using Unity’s default camera and blitting onto the screen via our own rendering code, we defined our own Scriptable Render Pass, called the Cavern Render Pass, and registered it within the RenderGraph.

    The Cavern Render Pass we added into the RenderGraph.
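
    In skeleton form, a RenderGraph pass looks roughly like this (a heavily simplified sketch of Unity 6’s URP API, not our actual pass):

        using UnityEngine.Rendering;
        using UnityEngine.Rendering.RenderGraphModule;
        using UnityEngine.Rendering.Universal;

        public class CavernRenderPass : ScriptableRenderPass
        {
            private class PassData { }

            public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
            {
                // Fetch URP's per-frame resources, e.g. the camera's color target.
                var resources = frameData.Get<UniversalResourceData>();

                using (var builder = renderGraph.AddRasterRenderPass<PassData>("Cavern Render Pass", out PassData data))
                {
                    builder.SetRenderAttachment(resources.activeColorTexture, 0);
                    builder.SetRenderFunc((PassData d, RasterGraphContext context) =>
                    {
                        // Our cubemap-sampling blit onto the curved screen would be issued here.
                    });
                }
            }
        }
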
    Canvas UI

    We discovered there are two ways of thinking about the CAVERN: as a screen or as a window. Depending on which view you’re using, your user interface needs change. Because of this, our toolkit supports three methods of creating in-game UI.

    Flat world-space UI is best when treating the CAVERN as a window, since it ignores the curvature of the CAVERN. On the other hand, while it can be rotated to face the center, it will always look like a flat plane.

    Round world space UI fixes this issue by wrapping a 2D plane around the CAVERN, so everything faces you. This is best when treating the CAVERN as a screen. Both of these options support object occlusion and proper stereoscopic 3D.

    Our third option, screen space UI, always appears on top of everything and doesn’t fully support 3D, but avoids the overhead of our camera system. This is best when creating 2D experiences, or trying to display videos.

    Editor UI

    What says “finished and polished toolkit” better than an intuitive and beautiful UI? To achieve that, we switched to UI Toolkit for our editors, which allows us to customize the design through the UI Builder as well as through code (UXML and USS, Unity’s equivalents of HTML and CSS). We are also working on icons to add that final polished feel!

    Creating a more intuitive and pretty interface!
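
    As a flavor of what this looks like in code, here is a minimal sketch of a UI Toolkit custom inspector (the CavernSetup component and the UXML asset are hypothetical stand-ins):

        using UnityEditor;
        using UnityEngine;
        using UnityEngine.UIElements;

        public class CavernSetup : MonoBehaviour { } // stand-in for a real toolkit component

        [CustomEditor(typeof(CavernSetup))]
        public class CavernSetupEditor : Editor
        {
            [SerializeField] private VisualTreeAsset uxml; // layout authored visually in UI Builder

            public override VisualElement CreateInspectorGUI()
            {
                var root = new VisualElement();
                if (uxml != null) uxml.CloneTree(root); // instantiate the UXML hierarchy, styled by USS
                return root;
            }
        }
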
    Orbbec Femto Bolts Body Tracking

    Finally, we are getting to another crucial tracker built into the CAVERN system. At the start of the semester, because many other projects at the ETC were also using Femto Bolts, we couldn’t access or test with one. That is why we started with the Azure Kinect DK, the predecessor of the Femto Bolt and the device upon which Orbbec built their body tracking sample wrapper.

    Jam 2 Preparations

    We have finalized our budget for the jam. From last time’s experience, we’ve learned that providing a meal for jammers is both an incentive for people to join and a deserved reward for everyone spending their time playtesting our toolkit.

    This time, instead of just ETC people, we hope to invite jammers from a broad range of groups, with official outreach to high schools and the greater university community. This will not only show more people our project, but also diversify the experience levels of our testers and give us insights from different kinds of users:

    • ETC people who have been on a CAVERN project
    • Main campus game developers
    • People with no experience with CAVERN

    This week, we created the poster and sign-up sheets, and reached out to people. Next week, we will be ready for new jammers and new worlds. Stay tuned!

    CAVERN Jam 2.0 Poster
    CAVERN Jam 2.0 signup sheet.
  • Week 10 (03/28/2025) – Playtesting All Week & Sample Scene V2

    It was a hectic week! Starting with the AASA (American Association of School Administrators) showcase at ETC on Monday, then Digital Promise on Tuesday, small showcases for guests at the ETC, and preparations for the big Playtest Day on Saturday, this week was shaped by demoing and playtesting!

    Digital Promise

    Digital Promise was a rocky day, but we managed to work around all the unexpected problems and tried our best to still create a fruitful experience for all the visitors at South Fayette. We got up at 6am in order to reach South Fayette by 7:30 and tech-check everything for the conference starting at 8am. However, an unexpected power outage the night before had broken the remote desktop software connecting their computer to the CAVERN computer. In addition, some time in the previous week, a fourth monitor had been plugged into the CAVERN computer, which broke the warping software’s configuration.

    Being more familiar with the software side of the platform, our team spent considerable time trying to fix these issues before Steve, the CAVERN builder, was able to run a custom script that fixed everything after the conference ended. In the meantime, while Digital Promise was still happening, we adapted our showcase agenda to start from outside the CAVERN, giving an overview of the platform and talking about our project before showing our partially mis-warped sample scene. Fortunately, the detailed oral walkthrough, aided by Matt, South Fayette’s innovation director and the person who brought CAVERN into the school, together with the sample scene, which despite the incorrect warping still felt immersive, filled audiences’ minds with wonder and inspiration.

    In particular, we met many groups of educators who provided us invaluable insights into how the space can be used.

    • Multiple K-12 groups mentioned having an Ancient Rome city as part of a History class.
    • A person from Juilliard was particularly interested in having CAVERN as a tool to help musicians and performers overcome stage fright.

    In sum, although met with challenges, Digital Promise ended with us learning a lot, not only about fixing things on the hardware side, but also about future application potential on the software side!

    Sample Scene V2

    In preparation for the Playtest Day on Saturday, we decided to update our sample scene so that it serves the following newly identified needs:

    • More scenery at varied ranges to showcase depth, especially since stereoscopic rendering is a huge feature of CAVERN.
    • Vive Tracker interactions that can inspire future creators to build interactive experiences.
    • Post-processing effects, shaders, and particle systems to showcase the rendering abilities.

    Therefore, we brainstormed a new scene, where the little creatures made earlier in the semester but not present in the first sample scene are now the main characters. In the new sample scene, players will follow the little creature that spawned from a hole in a tree to explore the space. After that, they will in turn guide the little creature to meet with a big creature in the middle of the scene, to trigger a heartwarming hug event!

    Finite State Machine

    All of us are strong advocates of modular and extensible code (which is why we pitched the project in the first place). Therefore, to support a smooth game development experience, Terri created a custom finite state machine system for Winnie to build the gameplay events with, which may also be included in future releases of the toolkit. Before Playtest Day, the new sample scene with interactions was finally complete.
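
    A custom FSM of this kind can be surprisingly small. Here is a minimal sketch of the pattern (hypothetical API; the toolkit’s actual FSM likely differs). A creature could then declare something like enum CreatureState { InTree, Exploring, Guiding, Hugging } and wire animations into Configure.

        using System;
        using System.Collections.Generic;

        public class StateMachine<TState> where TState : Enum
        {
            private readonly Dictionary<TState, Action> onEnter = new Dictionary<TState, Action>();
            private readonly Dictionary<TState, Action> onUpdate = new Dictionary<TState, Action>();

            public TState Current { get; private set; }

            public void Configure(TState state, Action enter = null, Action update = null)
            {
                if (enter != null) onEnter[state] = enter;
                if (update != null) onUpdate[state] = update;
            }

            public void TransitionTo(TState next)
            {
                Current = next;
                if (onEnter.TryGetValue(next, out var enter)) enter();
            }

            public void Tick() // call from a MonoBehaviour's Update()
            {
                if (onUpdate.TryGetValue(Current, out var update)) update();
            }
        }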

    Playtest Plans

    After meeting with our advisor Drew and consultant Mike, we decided to split our playtesters into two groups: those with Unity experience would be brought to our project room to follow a simple tutorial, play with our toolkit, and see their work built to the CAVERN immediately. The other group, those who simply wanted to experience the space, would be shown our sample scene and other demo projects to learn more about the space.

    Because each group would only have 20 minutes to playtest, and we also wanted to build up to the CAVERN jam happening two weeks later, we created a form to collect contacts of people interested in coming back to try more features, or even potentially creating an experience themselves.

    Playtest Day

    In the end, out of the 6 groups that came to playtest our project, 3 groups tested our toolkit, and 3 groups experienced a full range of experiences built on the CAVERN.

    20 minutes is not enough!! We learned immensely from every person, and we hoped to talk more. Here is the summarized feedback.

    Group 1 – Sample Scene Testers (No Unity Experience)
    • Found the scene visually calming and immersive, with aesthetic comparisons to indie games and interactive art museums.
    • Desired more interactivity, such as drawing, tactile flower interactions, and objects reacting to user proximity.
    • Flesh wall was initially creepy but became comforting once the eyes tracked users, making it feel alive and intelligent.
    • Noted potential dizziness and suggested limiting time in the experience or designing scenes for therapeutic use.
    • Recommended for younger audiences or mindfulness use cases, but not for older individuals like parents.
    Group 2 – Sample Scene Testers (Low Unity Experience)
    • Felt the space was immersive and fantastical, though movement felt disorienting without proper head tracking.
    • Enjoyed the visual environment but found interaction lacking; wanted more reactive elements, characters, and ways to “touch” the experience.
    • Compared it favorably to museum-style installations like TeamLab and saw potential for similar public exhibitions.
    • The frogs were liked visually but considered too static; testers wanted more behavioral realism or playful engagement.
    Group 3 – Toolkit Testers (Strong CS Background, Low Unity Experience)
    • Quickly understood and enjoyed the toolkit despite minimal Unity experience, helped by clear tutorials and peer observation.
    • Successfully completed the Vive tracker activity and showed strong enthusiasm for learning and experimentation.
    • Valued immersive audio and visuals, though some visuals (like the eyes) were startling but memorable.
    • Interested in returning to build more, and would recommend the platform to friends in both programming and art.
    Group 4 – Toolkit Testers (Faculty, Game Dev + Strong Unity Experience)
    • Deviated from the standard playtest to explore the toolkit deeply, offering critical technical feedback.
    • Emphasized better onboarding through strong naming conventions, scene modularity, and error messages rather than heavy tutorials.
    • Suggested improved documentation and alignment with Unity 6.1 tools like the Project Analyser.
    • Recommended offering feature-specific starter scenes (e.g., stereoscopic, tracking) for better developer usability.
    Group 5A – Toolkit Testers (Students, Game Dev Exp but Not Unity – Programmers)
    • Found the development process surprisingly fast and accessible, especially the “flesh wall” creation.
    • Requested a broader set of creative assets to build more personalized or complex content.
    • Responded positively to technical tools like tracking systems and enjoyed experimenting with interaction design.
    Group 5B – Toolkit Testers (Students, Game Dev Exp but Not Unity – Scene Builders)
    • Focused on interactive and social potential—suggested mini-games, competitive challenges, and co-op mechanics using tracked real-world movement.
    • Pitched ideas like “Overcooked”-style mechanics, and emphasized using the large physical space creatively.
    • Interested in peaceful and artistic applications as well, such as cherry blossom-inspired environments.
    • Identified varied target audiences: mom (for relaxation), online friends, spouse, and younger sibling (for games).
    Group 6 – Sample Scene Testers (University Game Dev Faculty)
    • Impressed by spatial use and visual immersion, particularly when obstructions didn’t interfere with the illusion.
    • Loved peaceful experiences but also wanted deeper interactivity—suggested games like basketball, sorting, and time-based challenges using full-body movement.
    • Saw potential for both meditative and social or competitive use cases.
    • Would share the experience with their mom, spouse, and friends for its mix of therapeutic and playful potential.

    Next Steps

    This week started out slightly bumpy due to the unexpected issues at South Fayette. However, we wrapped up a week full of learning, both from our own experiences and from our users on Playtest Day. Now we are excitedly marching towards a new week of refinement to put into our Softs presentation, as well as preparing for CAVERN Jam 2.0 the week after Softs.

  • Week 9 (03/21/2025) – Attending GDC and Preparing for Digital Promise Conference

    This week will be a short blog, as most of our team flew to San Francisco to attend GDC (Game Developers Conference) 2025, where we also showcased Spelunx at the ETC booth. Meanwhile, our producer Yingjie stayed in Pittsburgh and coordinated with South Fayette in preparation for next week’s Digital Promise conference there.

    GDC

    Since CAVERN is huge and cannot be brought to GDC to showcase like experiences on other platforms, we focused on showing the toolkit introduction video that we submitted to SIGGRAPH, with our extended abstract summary on the side as a handout for people even more interested in the technical details. Overall, we gauged the interest of experience design professionals, as well as educators across all levels.

    Digital Promise Preparation

    Digital Promise, in partnership with the American Association of School Administrators (AASA), will be happening at South Fayette on Tuesday. It is an event that will bring educators and school administrators from across the United States to Pittsburgh to learn about the innovations happening in the school, and CAVERN will be one of the core showcases South Fayette will present on that day.

    In addition to the Tuesday event, ETC’s Outreach and Engagement Networking Coordinator, Anthony, informed us that around 20 AASA visitors are scheduled to visit the ETC the day before Digital Promise to learn about our department as well as our partnership with South Fayette — Spelunx will again be a core part of the tour!

    Roadmap of Learning

    Earlier this semester, as part of our collaboration with the Game Development course at South Fayette, we sent over a Unity + CAVERN Roadmap of Learning, which we spent time refining during spring break. It serves as a base that the teacher, Stacey Barth, will extend into a complete curriculum to share with future schools that install a CAVERN system.

    This week, while other teammates were away at GDC, Yingjie walked Stacey through the roadmap, as the students at South Fayette are also eager to build simple experiences on CAVERN that they can also showcase during Digital Promise. In the process, a package bug was caught and fixed.

    Next Steps

    While GDC slightly slowed development this week, we are well prepared for the next one, which might as well be called the week of demoing and playtesting: it contains not only Digital Promise, but also ETC’s official Playtesting Day at the end of the week.

  • Week 8 (03/14/2025) – Conference Preparation and Interaction Building Blocks

    This week, we hit the ground running. Coming back from break, we had no time to ease in—within hours, we had discovered a perfectly fitting conference opportunity and had only two days to put together a full submission. At the same time, we needed to process faculty feedback from halves, refining our long-term plans and thinking beyond just delivering the toolkit. With South Fayette’s March 25 showcase fast approaching, we also had to push an update to ensure everything was fully functional on their CAVERN setup.


    24 hours for a SIGGRAPH submission

    On Monday afternoon, we stumbled upon SIGGRAPH’s Spatial Storytelling track, a session that doesn’t just showcase immersive experiences, but emphasizes how they are built. This was exactly what we had been working on for months, but we had only two days before the submission deadline. After a quick discussion, we committed: we were doing this.

    What followed was a full-on sprint to put together a polished, professional submission in an incredibly short timeframe. We wrote a detailed extended abstract, articulating the technical innovations of Spelunx—from our stereoscopic rendering to our motion-tracking and spatial audio integration. We refined a shorter synopsis to capture our work succinctly, balancing technical depth with accessibility. And finally, we produced a video with a full voiceover and subtitles, demonstrating Spelunx in action and clearly explaining its impact.

    Despite the chaotic timeline, we didn’t cut corners. Faculty provided feedback on our content, ensuring our explanations were both technically precise and engaging for a broader audience. Whether or not we get accepted, the process itself was a huge moment of consolidation—forcing us to step back and clearly define what Spelunx is, what problems it solves, and how it fits into the larger immersive technology landscape.


    Planning for the second half of the semester

    With halves presentations behind us, we had the chance to sit down to reflect and assess where we stood. The response from faculty was overwhelmingly positive—faculty not only understood our project, but saw it as something that was well-structured, impactful, and exceeding expectations.

    This clarity of purpose was a major milestone. One of the challenges of a technical project is effectively communicating its value, especially to an audience that includes designers, educators, and developers with different levels of familiarity with the technology. Faculty encouraged us to keep refining how we frame Spelunx for different user groups—whether it’s an educator looking for an easy way to onboard students, or a developer wanting to extend the toolkit’s functionality.

    Beyond immediate validation, this feedback pushed us to think beyond the semester. The question wasn’t just, “What do we need to finish?” but “How does this toolkit remain useful after we leave?” Maintaining Spelunx as a living, growing platform meant reinforcing documentation, modularity, and ease of access. This led to deeper discussions about long-term sustainability—who maintains the toolkit, where it will be hosted, and how future teams can continue building on it.


    Pushing the Next Update: Getting South Fayette Ready

    By the end of the week, we shifted focus to toolkit updates, ensuring South Fayette could fully utilize Spelunx for their upcoming March 25 Digital Promise conference. Their setup differs from ours in multiple ways, meaning that toolkit functionality needed to be flexible and adaptable to different environments.

    Perhaps the most important refinement was ensuring Spelunx could support multiple CAVERN configurations. South Fayette’s CAVERN is 22 feet in diameter, larger than the ETC’s setup. This meant that scaling the CAVERN space, motion tracking zones, and spatialized interactions dynamically had to be part of the toolkit. The update we pushed this week introduced a profile system that allows Spelunx to adapt to different CAVERN sizes, making it far more flexible for future applications.
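
    In Unity terms, a dimensions profile like this maps naturally to a ScriptableObject. A minimal sketch (field names are hypothetical, based on the setup differences described in these posts):

        using UnityEngine;

        [CreateAssetMenu(menuName = "Spelunx/Cavern Profile")]
        public class CavernProfile : ScriptableObject
        {
            public float radiusMeters;        // e.g. larger for South Fayette's 22 ft diameter
            public float heightMeters;        // screen height
            public float bottomOffsetMeters;  // how far the screen sits above the floor
            public float arcDegrees;          // horizontal coverage of the curved screen
        }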

    Each visit to South Fayette is a reminder that development environments are never one-size-fits-all. The better we can design Spelunx to handle different setups with minimal friction, the more accessible it becomes for a wider audience.


    Looking Ahead: From Expansion to Refinement

    With SIGGRAPH behind us and faculty confident in our direction, we’re now entering a new phase of development. We have more features to add, more documentation to write, and more user testing to run, all of which will be showcased and demonstrated in various demo opportunities and CAVERN Jam 2.

  • Week 7 (02/28/2025) – Halves Presentation and Teachers’ Showcase at South Fayette

    It is now Week 7, and it is time for Halves presentations! In addition, we hosted an official half-semester showcase within the ETC and demonstrated CAVERN’s capabilities to K-12 educators at South Fayette, showing how CAVERN can be used across different contexts and how our toolkit is a great enabler for them. And of course, our team went to a celebratory brunch at the end of the week to close out this half-semester before heading into spring break.

    Halves Preparations

    Because so much of CAVERN development is technical work that happens under the hood, it is crucial to communicate our impact thoughtfully, especially in a way that even non-technical audiences can clearly understand. To do so, we decided to open the presentation by highlighting the success of the CAVERN Jam, followed by our problem-solving process and the results of our work on rendering, Vive Trackers, editor workflow, and the sample scene, with emphasis on our comprehensive documentation along the way.

    Mathematical Derivation Document for the Camera

    Since our goal is to have future developers build upon our toolkit, our camera rendering solution has to be well documented, including our thought processes, so people can solve new problems within our system. Thus the “Math Documentation”, as our team calls it, was born.

    This document starts off by discussing the curved screen and breaking down the possible rendering solutions. It briefly explains the original 32-camera rendering solution and its tradeoffs, and then proposes our single-camera cubemap solution alongside the linear algebra derivations. At the end of the document, we arrive at the solution for head tracking, and explain why it became trivial once we had chosen to duplicate the cubemaps and treat the head position as the point from which the screen is split.

    As much of the CAVERN is better experienced in person, on the day after the presentation we invited everyone in the department to our official half-semester showcase, where we demonstrated what we had presented during Halves, as well as the wonderful worlds built during last week’s CAVERN jam.

    On Friday, we went to South Fayette for a teachers’ showcase, where we introduced CAVERN to K-12 STEAM teachers (more on this below).

    And, of course, we ended the week with a celebratory brunch, marking an exciting half-semester of progress!


    Documenting the Camera – A Mathematical Guide for Future Developers

    One of the most significant additions this week was a formal mathematical documentation of the CAVERN camera system.

    Since CAVERN uses stereoscopic projection on a curved screen, traditional rendering approaches don’t work out-of-the-box. While we had successfully developed a single-camera rendering pipeline to replace previous inefficient multi-camera solutions, we realized that future developers would struggle to modify or expand upon our work without a clear mathematical breakdown.

    To address this, we documented:

    • How projection from a single camera to a curved screen is achieved.
    • The transformations involved in mapping the 3D scene onto CAVERN’s display.
    • How developers can modify camera parameters if the CAVERN setup changes.

    Toolkit Usage Diagrams – Bridging the Gap for New Users

    In addition to the camera documentation, we also created diagrams and structured guides to make our toolkit more accessible for non-programmers.

    Since Spelunx is intended for a range of users—from experienced Unity developers to high school students exploring immersive media for the first time, we needed to ensure that our documentation was clear, visual, and easy to follow.

    By refining these materials before Halves, we ensured that we were not just delivering a working toolkit, but also providing the resources needed to make it usable and expandable.


    Halves Presentation and CAVERN Showcase

    On Wednesday, we presented our progress to faculty, peers, and members of the broader ETC community. The response was overwhelmingly positive—people were excited to see how Spelunx was making CAVERN development more accessible, and many were interested in experimenting with the toolkit themselves.

    However, while slides and videos were useful for explaining our process, CAVERN is a space that must be experienced firsthand to be fully appreciated. For this reason, we extended an open invitation to faculty and students to visit the ETC CAVERN Showcase on Thursday, where they could:

    • Experience our sample scene in full stereoscopic 3D.
    • Try out interactions from CAVERN Jam projects.
    • See how different depth cues, motion, and sound work in an immersive space.

    Key Feedback from the Showcase

    As attendees explored the space, we gathered valuable insights into how people perceive and engage with CAVERN environments:

    • The 3D effect was highly convincing, making the screen “disappear.” This reinforced that our sample scene’s depth and spatial design were effective.
    • Horizon alignment felt slightly off in some scenes. This is something we will refine in upcoming iterations.
    • People were drawn to more dynamic, reactive interactions. Suggestions included having objects respond to player presence, using subtle movements to enhance immersion.
    • Ambience and atmosphere were strong, but directional sound could be showcased better. Now that surround sound is properly configured, we plan to incorporate more layered audio interactions in future updates.

    South Fayette Visit – Introducing CAVERN to Educators

    On Friday, we visited South Fayette High School for the second time—this time, not just to engage with students, but to introduce CAVERN to K-12 STEAM teachers.

    Bringing CAVERN to the Classroom

    Our goal was to demonstrate how immersive environments can be integrated into education and to help teachers understand the process of creating interactive experiences in CAVERN.

    During the session, we showcased:

    • The fundamentals of CAVERN as an interactive space.
    • How students can use Spelunx to quickly develop and test ideas.
    • Examples from CAVERN Jam that illustrated creative interaction design.

    The response was enthusiastic—many teachers saw potential applications in storytelling, science visualization, and interactive learning.

    Hands-On Debugging and Support for Students

    After the demo, we worked closely with Stacey and her students to provide technical guidance on working with CAVERN.

    • We walked Stacey through the full process of importing Unity packages, setting up scenes, and configuring CAVERN’s display.
    • We debugged a Blender-to-Unity 6 issue, ensuring that students could properly import 3D models into their projects.

    This session reinforced that beyond just providing a toolkit, our role is also about empowering future creators—ensuring that educators and students feel confident using these tools independently.


    Celebrating Our Half-Semester Milestone

    After an intense week of presenting, testing, and refining, we took a well-deserved break with a celebratory brunch in Shadyside. It was a moment to appreciate how far we had come—from our initial pitch to a fully functional toolkit, a successful game jam, and multiple real-world demos.

    But this was just the halfway point. Looking ahead, we are preparing to:

    • Refine interactions and dynamic responsiveness based on showcase feedback.
    • Continue working with South Fayette to ensure successful student projects.
    • Explore advanced features, including potential support for additional tracking methods beyond Vive Trackers.

    Week 7 was about sharing our work with the world—now, we move forward with clear next steps and renewed energy.

  • Week 6 (02/21/2025) – CAVERN Jam!

    This week was the long-anticipated CAVERN Jam, where our toolkit met its first set of users! In the two-day event, 8 participants brought 6 different worlds, each exploring different interactions and immersion methods in this space. While technical difficulties did emerge, the overall feedback was clear: Spelunx made developing for CAVERN dramatically easier, and people are excited to make more experiences here.


    Six Unique Worlds

    Kicking off on Monday and showcasing on Tuesday, within a tight 24-hour timeframe, 8 participants (including our own team members, other ETC students, and even faculty and staff) created 6 worlds and drew over 20 attendees to the showcase. Here we’ll break down what we learned from each experience.

    Alex Hall’s Flesh Wall

    Alex, a multimedia artist in our cohort, created an unsettling experience inside a wall made of flesh, where giant eyeballs stare at the person wearing the tracker, creating an eerie sense of being constantly watched. Along with loud, scary surround sounds screaming into the space, the experience was terrifying and deeply immersive, and everyone gasped in excitement and fear when it first started. (This has been verified in every demo we’ve done since for guests visiting the ETC.)

    Two things we’ve learned:

    1. Tracking works for everyone except the person wearing the tracker. While this is a known design constraint stemming from our rendering code, when a person wears the tracker on their head, from their perspective the eyeballs do not stare directly at them, but at a space above and behind them. However, everyone else in the space perceives that person as being watched correctly. This opens up discussion for multiplayer experience design.
    2. Distortion of the scene is minimal when art assets are placed extremely close to the screen. From our sample scene, we knew that as people walk around the space, there will be some degree of distortion due to our rendering solution. However, this did not happen in Alex’s Flesh Wall, because of how close the art assets were placed to the screen.

    Alex is an artist, and the only code needed for this to work was the built-in LookAt method that makes the eyes follow the Vive Trackers. She spent a total of 10 hours, most of it recording sound effects and voice lines, to create the experience. In fact, the CAVERN setup took only 20 minutes! This is a huge achievement for our toolkit.
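
    For reference, that one line of logic looks like this (a sketch; the tracker transform is whatever the toolkit exposes):

        using UnityEngine;

        public class EyeFollow : MonoBehaviour
        {
            public Transform tracker; // the Vive Tracker's transform, exposed by the toolkit

            private void Update()
            {
                transform.LookAt(tracker); // rotate the eyeball to face the tracked person
            }
        }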

    Josh’s Frogs Choir

    One of the Spelunx team members, Josh, created a musical interaction where four Vive Trackers trigger singing frogs, with volume and pitch changing based on the player’s proximity to the screen. The design naturally encouraged different play styles: players could place a tracker in one frog’s zone and leave it singing, or multiple people could move around dynamically to shift the composition. A dead zone is carefully placed at the center, enabling another interaction where one person stays there and triggers the frogs like a conductor.

    This piece demonstrated CAVERN’s multiplayer potential using physical space, a major difference between the CAVERN and other virtual reality systems. With all the sound and art assets, this experience was created in under 4 hours.
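
    The proximity-to-sound mapping can be as simple as this sketch (distances and ranges are hypothetical tuning values):

        using UnityEngine;

        [RequireComponent(typeof(AudioSource))]
        public class FrogVoice : MonoBehaviour
        {
            public Transform tracker;      // the player's Vive Tracker
            public float maxDistance = 6f; // hypothetical: beyond this, the frog is silent

            private AudioSource voice;

            private void Awake()
            {
                voice = GetComponent<AudioSource>();
            }

            private void Update()
            {
                float closeness = 1f - Mathf.Clamp01(Vector3.Distance(tracker.position, transform.position) / maxDistance);
                voice.volume = closeness;              // louder as the tracker approaches
                voice.pitch = 0.8f + 0.4f * closeness; // and slightly higher pitched
            }
        }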

    Jing’s Bubble Game

    Jing, one of the programmer students at ETC, created a bubble interaction game where players use Vive Trackers to repel and pop bubbles that fly towards them. While simple, the interaction was surprisingly engaging: people found joy in physically reaching out to the bubbles, and even competed over who popped the most. In addition, because the bubbles were transparent, players could see both the real world and the virtual environment. It blended the physical and digital seamlessly.

    As a programmer, Jing incorporated our sample scene as the base environment art assets, and focused on the coding side of development!

    Terri’s Head-Tracked Anime Girl

    Terri, Spelunx’s own rendering and graphics programmer, created a technical proof-of-concept demo of head tracking. The scene featured a dancing anime girl that the person wearing the head-tracked Vive Tracker could see from different angles. This was a huge achievement: it is a novel rendering solution with a simple implementation that doesn’t need to move the whole world to get the same effect (which is what previous teams did, severely affecting performance). This head tracking gold spike demonstrated that CAVERN is capable of enabling more complex interactions.

    Winnie’s Little Match Girl Immersive Storytelling

    Inspired by a previous mixed-reality project from the Visual Story course, Winnie’s world combined surround audio with visual storytelling to create an immersive space. The experience began with players lighting a candle using a Vive Tracker while a voice circulated the space in an eerie tone. After the candle is lit and the voice lines have played through, the scene transitions into a dreamlike world with a giant whale flying around, and the Vive Trackers turn into bubbles.

    Though technical issues prevented some elements from functioning (the Vive Trackers were set too low, so candles couldn’t be lit, and the computer was set to stereo instead of surround sound), the core environmental design remained effective. Even without full interactivity, attendees found themselves immersed in the scale and atmosphere of the scene, demonstrating that CAVERN’s visuals and beautiful music alone hold powerful potential.

    Grace & Selena’s Femto Bolt Tracking

    Grace and Selena are part of Anamnesis, another CAVERN project team, focused on live-action interactive filmmaking using the Orbbec Femto Bolts. They experimented with Orbbec cameras instead of Vive Trackers for motion tracking. However, because our toolkit currently only supports Vive Trackers, they faced integration challenges. Still, this project shed light on the difficulties we might face when we move on to integrating the Femto Bolts.

    Mike & Bryan’s Space Game

    Our consultant Mike Christel and ETC Senior Research Programmer Bryan Maher collaborated on a project featuring colorful balls shooting at you the way a fireball might fly towards you in a space fight. While the project was halted due to busy schedules, they gave us invaluable feedback on their initial development process, along with documentation suggestions such as a tutorial on binding a Vive Tracker and removing the manual tarball installation.


    Reflections and Refinements

    CAVERN Jam concluded with overwhelming praise, not just from the jammers but also from those who came to the showcase. Everyone was inspired to create more experiences on CAVERN, and that is exactly our goal.

    Throughout the jam, our team was able to push bug fixes within the hour, thanks to our choice to host the package through UPM instead of GitHub.

    On the development side, the feedback we received shows that confidence levels for developing on CAVERN went up from 3 out of 5 to 4 out of 5, and participants cited the documentation as well as the CAVERN previewer as highlights that accelerated their development and testing.

    On the other hand, there are areas identified for improvements:

    • Manual tarball installation was cumbersome. We did not have enough time to build automatic installation into the package before the jam, but we have it configured now.
    • Error messages caused by SteamVR settings not being present on a developer’s computer should be suppressed to avoid confusion, since those errors disappear on the CAVERN computer and the game runs normally on developers’ own machines.
    • We should allow more time for participants to test in the CAVERN, so errors such as incorrect audio configuration will not be present in the showcase.

    More updates

    Sound configuration 5.1 to 7.1

    After fixing the computer configuration itself, we also realized that the 5.1 setting does not align well with the actual physical speaker setup. Instead, using 7.1 while disabling the two surround speakers and the center speaker comes closer to our intended quadraphonic setup.
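
    Programmatically, switching Unity’s output to 7.1 is only a few lines (a sketch using Unity’s AudioSettings API; silencing the unused channels is handled separately, as described above):

        using UnityEngine;

        public static class SurroundSetup
        {
            public static void Use7Point1()
            {
                AudioConfiguration config = AudioSettings.GetConfiguration();
                config.speakerMode = AudioSpeakerMode.Mode7point1;
                AudioSettings.Reset(config); // reinitializes audio output with the new mode
            }
        }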

    Sample scene refinements

    To refine the sample scene before Halves, we:

    • Added a butterfly animation to better demonstrate movement across the curved screen and into the space.
    • Changed the terrain and skybox assets from gray-box materials to the ones we made.
    • Tested shaders by adding a water shader to the river in the scene.

    Next Steps

    CAVERN Jam validated the strengths of Spelunx while highlighting key areas for refinement. With this, we are ready to present our half-semester achievements to the community and faculty in next week’s Halves presentation. Stay tuned for more updates.

  • Week 5 (02/14/2025) – Preparing for CAVERN Jam, Vive Trackers, and User Research

    With CAVERN Jam scheduled for Monday of Week 6, this week was dedicated to preparing for our first major user test with real external developers. Because people might interact with the toolkit in ways we never anticipated, our focus this week was to clearly define the aspects we want users to test, and to make those parts of the toolkit as refined and stable as possible.

    In the meantime, we had a breakthrough in Vive Tracker integration, finally solving a long-standing issue that had frustrated all previous CAVERN teams. We also continued researching past ETC projects: we tested the performance of our camera with a high-fidelity scene from last semester’s Hycave, and interviewed more former developers to gain insights into common challenges in CAVERN development.


    Planning CAVERN Jam

    We want our participants to focus on the current CAVERN affordances and build upon them. Spelunx provides support for stereoscopic rendering, easy setup of the CAVERN space including surround sound, a preview mesh tool to see how the world renders in the Unity editor, basic Vive Tracker integration, and a sample scene. These are our half-semester breakthroughs, and we should ensure participants can explore them without venturing into challenges beyond the current capabilities. We then designed the jam theme to be:

    “Build a CAVERN experience using Spelunx that contains one single interaction.”

    To support participants, we need to prepare:

    • Documentation: tutorials for understanding the CAVERN space, setting up for development, and toolkit tutorials.
    • Tech: Making sure our package can be successfully installed using Unity’s Package Manager (UPM).
    • User Research: Pre-survey and post-survey to understand the development journey of participants from various backgrounds.
    • Logistics: a Discord server for live support and announcements, a Perforce folder for version control and submission, sign-ups for testing time slots in the space, and boba to incentivize engagement.
    Cavern Jam Poster
    Jam rules on the kick-off slides
    Toolkit tutorial we provided for CAVERN jam.

    Finally a Vive Tracker Solution

    One of the biggest remaining technical hurdles before CAVERN Jam was ensuring that Vive Trackers worked reliably, so we could test this critical input system. Previous teams had struggled with complicated setup and manual configuration that was not extendable beyond their individual projects. As a toolkit project, we aim to generalize this and make it as hassle-free as possible.

    The first thing to determine was which package to integrate. Over the past three weeks, amid crashes and bulky solutions, we tried and switched between 5 different packages before finally settling on one. Here is the list of packages we considered and their trade-offs:

    • OpenXR + OpenVR + SteamVR – the one most widely used by past teams, yet it relies on outdated dependencies (especially since we are on Unity 6) and includes many VR-headset-specific components that we won’t use but that would unreasonably increase our package size.
    • OpenXR + OpenVR + Vive Tracker Package – this Vive Tracker package fits our exact case of using Vive Trackers without a VR headset. However, while it worked for some time, it eventually led to constant Unity editor crashes, so we had to change our solution.
    • OpenXR + Vive Tracker Profile – it only worked with a headset.
    • OpenXR + OpenVR + VIU (Vive Input Utility) + SteamVR – partially worked, but it was extremely bulky. We eventually reverse engineered this solution to work with the second one, using its tracking implementation to eliminate the crashes of the Vive Tracker Package; that is the combination we are using now.
    • Libsurvive – requires redoing the entire CAVERN setup, so we did not try it. (Thankfully.)

    The final result was a fully functional, crash-free Vive Tracker integration that CAVERN Jam participants could add with a single click.

    Adding a Vive Tracker to a scene in one click.

    Improved camera performance on Hycave’s past scene

    To validate our work, we also tested Spelunx’s camera rendering solution against a past CAVERN project, Hycave. They had discarded a scene with procedurally generated grass due to its extremely low frame rate. Using our new camera, the frame rate tripled from around 20 to 60 FPS.

    left – old camera. right – Spelunx camera.

    Camera 3.0 – Head Tracking

    Since the Cavern is a physical space, players are encouraged to move about in it. The player’s position can be tracked using accessories such as the VIVE Tracker. An example would be a game where a creepy set of eyes follows a player as they move about.

    However, since the camera is currently assumed to be in the center of the screen when calculating P, this creates the issue where the perspective of the rendered image is incorrect when the player moves away from the center.

    In the case of the above game example, the eyes might look like they are looking at the player when they are standing in the center of the space, but appear to be looking past the player when they walk around.

    The fix is relatively simple. Rather than splitting the screen into quadrants from the center, we split it from the head.

    Splitting the screen into quadrants from the head instead of center.
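
    In code terms, the only change is where the splitting origin sits. A sketch in CAVERN-local coordinates (names and angle convention are ours, purely illustrative):

        using UnityEngine;

        public static class HeadQuadrant
        {
            // Azimuth of a screen point as seen from the head, in degrees (0 = north, 90 = east).
            // Passing the space's center instead of the head position recovers the old behavior.
            public static float AzimuthFromHead(Vector3 screenPoint, Vector3 headPosition)
            {
                Vector3 dir = screenPoint - headPosition;
                return Mathf.Atan2(dir.x, dir.z) * Mathf.Rad2Deg;
            }
        }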

    Other Updates

    Outside of CAVERN Jam preparation and technical testing, we made several key refinements:

    • We met with Carl Rosendahl, a faculty member from ETC’s Silicon Valley location, for advice on how to communicate Spelunx’s impact more effectively (especially better than at Quarters). He suggested that rather than simply listing the features we are working on, we should emphasize our problem-solving process as a narrative, and quote our users’ words to show our impact.
    • We continued to interview past teams, and many of them cited a lack of documentation as their biggest roadblock, saying it was hard to share their knowledge with future groups. This confirmed that our emphasis on documentation addresses a crucial user need.
    • We also finalized and sent the structured learning roadmap to Stacey at South Fayette, ensuring that students can onboard gradually without being overwhelmed.

    Next Steps – Running CAVERN Jam and Analyzing Results

    With all CAVERN Jam preparations complete and a working Vive Tracker integration in hand, stay tuned for the Week 6 update, where we break down our findings from CAVERN Jam!

  • Week 4 (02/07/2025) – South Fayette Visit, Toolkit 1.0, and Team Bonding

    This week, we had our first visit to South Fayette High School, where we met the students who will be using our toolkit. We also reached a major milestone with Toolkit 1.0, bringing together stereoscopic rendering and our beautiful botanical garden sample scene into a fully testable and demo-able package. To wrap up the week, we took a well-earned break for High Tea team bonding, celebrating our progress before diving into the next phase of development.

    Camera 2.0 – Achieving Stereoscopic Rendering (For Real This Time)

    Top view of Cavern rendering with two cubemaps.

    Last week, we had partial success with stereoscopic rendering. To recap the issue: as the screen angle approaches 90°, the stereoscopic effect weakens because the effective IPD between the eyes approaches zero. Moreover, beyond 90°, the view from the left eye is now to the right of the right eye’s, so the depth perception of objects is reversed.

    To solve this, we used four cubemaps, each offset in one cardinal direction. Then, splitting the screen into four quadrants (one per cardinal direction), the left and right eyes each select a different cubemap to sample from in every quadrant.

    When facing northwards, the left and right eyes sample from the west and east cubemaps respectively.
    When facing southwards, the left and right eyes sample from the east and west cubemaps respectively.
    When facing eastwards, the left and right eyes sample from the north and south cubemaps respectively.
    When facing westwards, the left and right eyes sample from the south and north cubemaps respectively.

    Stereoscopic rendering now works! However, a drawback of this method is that it creates a vertical seam along the lines where the quadrants meet. Despite that, during playtesting, the majority of our users, who spent an average of 15 minutes in the Cavern, did not notice it unless it was specifically pointed out to them. Sometimes the best technical solution is not the most theoretically correct one, but the one that works most effectively without consuming all of our effort and time!

    Vertical seam where quadrants connect.
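
    The quadrant-to-cubemap selection can be sketched as follows (CPU-side pseudocode of the shader logic; the angle convention is ours, purely illustrative):

        using UnityEngine;

        public static class EyeCubemapSelection
        {
            public enum Cube { North, South, East, West }

            // 'azimuth' is the horizontal angle of the screen point from the center,
            // in degrees (0 = north, 90 = east).
            public static (Cube left, Cube right) Select(float azimuth)
            {
                float a = Mathf.Repeat(azimuth, 360f);
                if (a >= 315f || a < 45f) return (Cube.West, Cube.East);  // facing north
                if (a < 135f) return (Cube.North, Cube.South);            // facing east
                if (a < 225f) return (Cube.East, Cube.West);              // facing south
                return (Cube.South, Cube.North);                          // facing west
            }
        }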

    South Fayette Visit

    Learning from Future CAVERN Developers

    On Wednesday, we traveled to South Fayette High School to introduce students to CAVERN and understand how they engage with interactive development. We wanted to understand what drew them to self-select into this Building Virtual Worlds course, what kinds of experiences they are hoping to build, and what support they need to get started.

    Students shared a mix of interests, with some drawn to programming and game mechanics, while others were excited about art, world design, and immersive storytelling. When asked what they would build if they could make anything, students imagined overgrown ruins, apocalyptic worlds, and turn-based RPGs with rich visuals.

    Collaborating on a Shared Lesson Roadmap

    One challenge students faced was figuring out where to start. They weren’t sure how to break development into manageable steps, and using the toolkit through UPM (Unity Package Manager) was unfamiliar and difficult. Stacey, their teacher, suggested that if we could provide a structured learning roadmap, she could incorporate it into their lesson plan. In return, we could incorporate their lesson plans into our toolkit for future teachers and developers.

    Working with a Different CAVERN

    South Fayette’s CAVERN setup was different from ours at ETC, which led to unexpected technical hurdles when testing our package:

    • Dimensions – South Fayette’s CAVERN screen sits 4 inches above the ground, as opposed to starting directly from the floor like at ETC. The radius and height are also larger. We already have dimension settings in our toolkit, but have yet to determine the actual numbers to assist their development.
    • Display Setup – Unlike our driver-level mirroring, theirs uses VNC remoting, so more documentation on working with different mirroring options is needed.
    • Speaker Configuration – Their four speakers are mounted above the space instead of facing inward, and they use a stereo setup instead of a quad arrangement. This affects spatial sound and will require another visit to figure out how to adjust our toolkit.

    Toolkit 1.0

    We also released the first stable version of our toolkit! We integrated what each of us worked on in the previous weeks into a Unity package.

    What is in the Package

    • Stereoscopic Rendering – Our optimized single-camera rendering system is now fully functional, with monoscopic mode as a toggle-button option.
    • Sample Scene – A botanical garden with giant flowers and a swing that showcases 3D immersion, along with spatial sound examples, such as a 2D ambience, and a circulating cricket 3D sound.
    • CAVERN Tools Panel in Unity – A simple UI that allows users to add the CAVERN camera setup with one click.
    • CAVERN Previewer – A Unity tool that lets developers preview their scene at the correct size and curvature before testing in CAVERN. Saves a lot of time during development.

    What is still in Progress

    • Vive Tracker Integration – Not included in this release due to ongoing stability issues, which we plan to fix in the next iteration.

    Team Bonding: A Steampunk High Tea Adventure

    After an intense few weeks of development, we also had our official team bonding! After voting between karaoke, axe throwing, and more, we settled on High Tea at the Inn on Negley. To immerse ourselves in the scene even more, we decided to dress up in a Steampunk theme (look at the photo!). We had a lot of fun, not only developing, but also as a team.

    Spelunx Team Bonding Photo

    Other Updates

    While the South Fayette visit and Toolkit 1.0 were the big highlights, there were plenty of other developments this week:

    • Interaction scripts: we implemented a mirroring effect and a shy creature (backing away from the tracker) interaction script for non-programmer CAVERN developers to use directly as assets; see the sketch after this list. They will be featured in our sample scene in the future as well.
    • Fullscreen integration: we moved fullscreen-on-play into our package, so developers can test in the CAVERN without having to build the project every time.
    • CAVERN Preview: constantly improving on feedback, we added a CAVERN screen renderer; when you enter play mode, you can see what your 3D world will look like on the CAVERN screen, right at your desk. This helped our artists a lot in visualizing where to place assets in the scene.
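
    For example, the shy-creature behavior mentioned above boils down to a few lines (a sketch with hypothetical tuning values, not the shipped asset):

        using UnityEngine;

        public class ShyCreature : MonoBehaviour
        {
            public Transform tracker;          // a Vive Tracker's transform
            public float comfortRadius = 1.5f; // hypothetical: how close is "too close"
            public float backSpeed = 0.5f;     // hypothetical retreat speed

            private void Update()
            {
                Vector3 away = transform.position - tracker.position;
                if (away.magnitude < comfortRadius)
                    transform.position += away.normalized * backSpeed * Time.deltaTime; // retreat
            }
        }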

    Next Steps for Week 5

    With Toolkit 1.0 in place, our next priority is to test rapidly with developers to understand how we can improve. To that end, we decided to host a CAVERN Jam in Week 6. In Week 5, we will polish all of our current technology: cleaning up the rendering code style, actually fixing Vive Tracker integration, building a more modular sample scene, creating onboarding documentation, and planning the user testing process.

    Stay tuned for Week 5 updates!

  • Week 3 (01/31/2025) – Quarters, Stereoscopic Camera, and Vive Tracker Troubles


    Welcome back! This week, we had our Quarter walkarounds, where faculty provided feedback to help us refine our approach to interactions and toolkit usability. A follow-up discussion with Brenda Harger gave us deeper insights into narrative design and engagement strategies in CAVERN’s environment. On the technical side, we made a major breakthrough in stereoscopic rendering, but Vive tracker integration remained highly unstable. Finally, we also planned our toolkit 1.0 for the visit to South Fayette that will happen next week.


    Refining How We Communicate Our Project During Quarters

    This week, we had our first major round of faculty feedback through Quarters walkarounds. During these sessions, faculty rotated between project teams, offering guidance and helping us evaluate our initial direction. This was also the first time we formally presented our project since last semester’s pitch, which meant reassessing how we communicate our goals.

    We discovered that faculty had differing expectations for our toolkit. Some envisioned a fully no-code, drag-and-drop system, while we had always planned for a toolkit that still requires coding for non-CAVERN-specific interactions. This raised an important question: How do we define accessibility in our toolkit? Our approach assumes that by designing for high school students as a less technical user base, we will also enable ETC graduate students, regardless of coding experience, to create impactful experiences in CAVERN.

    Another key realization was that the term “novice” can mean many different things—a user could be new to programming, game development, or CAVERN itself. Faculty feedback helped us recognize that we need to clearly define our target audience and ensure that our documentation and onboarding process supports different levels of experience.

    During Quarters, we focused on gathering feedback on these top-priority questions.

    Exploring Non-Tech Multiplayer Interaction Techniques with Brenda

    After Quarters, we met with Brenda Harger, the ETC professor who teaches Improvisational Acting, and explored in person in the CAVERN how users engage with the space and how interaction design could be made more intuitive.

    Just as interactive storytelling children’s games like Going on a Bear Hunt are fun and engaging even without any technology or props, Brenda encouraged us to consider using the wide 20-foot play area CAVERN provides as an opportunity for multiplayer social experiences. Clapping, stomping, and following movements are all simple interactions that go beyond the digital screen or complex controls, yet are perfect for the space.

    In addition, CAVERN’s curved wall offers potential for creating moments of surprise – objects can appear from behind, wrap around players’ periphery, or a sound cue can guide attention subtly without requiring direct instructions. Minimizing explicit verbal guidance and allowing players to naturally discover mechanics can make interactions feel more immersive and intuitive.

    Sometimes, simple environmental cues and physical actions outside of tech solutions can be just as compelling as complex mechanics. This conversation helped us rethink how to blend physical actions with digital interactions to create a seamless, intuitive experience inside CAVERN.


    Camera 1.0 – Achieving Stereoscopic Rendering (Somewhat)

    With the success of last week’s monoscopic camera, this week was the time to start bringing the world into 3D by exploring stereoscopic rendering.

    Stereoscopic rendering allows us to achieve the “popping out” effect that we see when watching a 3D movie. To render a stereoscopic view on a flat screen, we render the scene twice, each time with a slight offset for each eye, and overlay the images on top of one another. When the player puts on 3D glasses, the glasses filter the images so that each eye only sees one of them, and the brain combines the two to perceive depth.

    The offset is known as the interpupillary distance (IPD): the distance between our eyes. On average, adult humans have an IPD of 63 mm. In the case of the CAVERN, the output for each eye is vertically stacked in the output render buffer, and specialized software overlays them when projecting onto the screen.

    Left eye view is stacked on top of the right eye view.
    Overlaying both images on top of one another.

    We can also approximate the effect using multiple cubemaps for stereoscopic rendering.

    Top view of Cavern rendering with two cubemaps.

    Finally, at 11pm on Friday, our stereoscopic camera 1.0 was created and tested with two first-time users of the CAVERN, and it garnered a great response – they were both able to see a cube floating in mid-air, popping out in front of the screen!

    However, we noticed that as the screen angle approaches 90°, the stereoscopic effect weakens, because the effective IPD between the eyes approaches zero. Going beyond 90°, the view from the left eye ends up to the right of the right eye’s, so the depth perception of objects is reversed. While the solution generally works, it’s not perfect yet. To quote our exceptionally hardworking Terri, who had already done all this in a single week: “Oh well, I know what to work on next week then!”
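
    To make the falloff concrete: with the eye baseline fixed along one axis, only the component of the IPD perpendicular to the viewing direction produces horizontal disparity. A back-of-the-envelope sketch (our own illustration, not toolkit code):

    ```csharp
    using UnityEngine;

    // Why the stereo effect fades near 90°: the effective baseline between
    // the eyes shrinks with the cosine of the screen angle.
    public static class StereoFalloff
    {
        public const float IpdMeters = 0.063f; // average adult IPD

        // screenAngleDeg: 0 = straight ahead, 90 = directly to the viewer's side.
        public static float EffectiveBaseline(float screenAngleDeg)
        {
            // cos(0°) = 1 → full IPD; cos(90°) = 0 → no disparity;
            // beyond 90° the sign flips, so depth is perceived inverted.
            return IpdMeters * Mathf.Cos(screenAngleDeg * Mathf.Deg2Rad);
        }
    }
    ```

    For example, EffectiveBaseline(0f) returns the full 63 mm, EffectiveBaseline(90f) returns 0, and EffectiveBaseline(120f) returns a negative value, which is exactly the reversed-depth case described above.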


    Debugging Vive Trackers

    On the other hand, Vive Tracker integration was met with huge obstacles. While last week we successfully integrated it into Unity via the unity-openvr-tracking package, this week it ran into unknown bugs that led to endless crashes of the Unity editor – and sometimes the entire computer – upon exiting play mode.

    On initial inspection, we pinpointed the crash to an asynchronous system function in OpenVR still being called even after exiting play mode. We tried setting breakpoints and commenting out different lines of code, but all in vain.

    Miraculously, on Friday, right when we were debating whether or not to include the buggy version in next week’s demo at South Fayette and had pushed a temporary version to the main branch on GitHub, it suddenly started working as intended! We will continue investigating the problem, but for now, a working version is available!
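
    We have not confirmed the root cause, but the direction we are investigating is making sure OpenVR is shut down exactly once, before the scene is torn down, so no asynchronous call can fire after play mode exits. A speculative sketch, assuming the Valve.VR C# bindings that ship with OpenVR are available:

    ```csharp
    using UnityEngine;
    using Valve.VR; // OpenVR C# bindings

    // Speculative sketch, not a confirmed fix: initialize OpenVR without an
    // HMD and guarantee a single Shutdown() before the object is destroyed.
    public class OpenVRLifecycle : MonoBehaviour
    {
        static bool initialized;

        void Awake()
        {
            var error = EVRInitError.None;
            // VRApplication_Other: we only want tracking, not a VR scene.
            OpenVR.Init(ref error, EVRApplicationType.VRApplication_Other);
            initialized = (error == EVRInitError.None);
        }

        void OnDestroy()
        {
            if (initialized)
            {
                // Any OpenVR call after this point is invalid, so this must
                // run before anything else polls tracking asynchronously.
                OpenVR.Shutdown();
                initialized = false;
            }
        }
    }
    ```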


    Other Updates

    Beyond the core technical breakthroughs and design discussions, we also made progress in other areas:

    • Sound Sample Scene: Winnie built a test environment for spatial audio, including 2D background music and a spatialized sound effect circling the camera (a minimal sketch of the latter follows this list), which will be integrated into our initial sample scene.
    • Art Development Continues: Mia and Ling continued finalizing models for our sample scene, ensuring they are optimized for CAVERN’s projection system. The environment is slowly and steadily taking shape.
    • Planning for the South Fayette Visit: Our producers scheduled the first visit to South Fayette, and we started outlining what we want to showcase and how to structure our interactions for student engagement.
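
    For the curious, the circling sound boils down to a fully 3D AudioSource orbiting the listener. A minimal sketch with illustrative parameters:

    ```csharp
    using UnityEngine;

    // Sketch of a spatialized sound orbiting the listener, so Unity pans it
    // across the physical speakers as it moves.
    [RequireComponent(typeof(AudioSource))]
    public class CirclingSound : MonoBehaviour
    {
        public Transform listener;        // typically the CAVERN camera
        public float orbitSpeedDeg = 30f; // degrees per second
        public float radius = 3f;

        void Start()
        {
            if (listener == null) return;

            var source = GetComponent<AudioSource>();
            source.spatialBlend = 1f; // 1 = fully 3D
            source.loop = true;
            source.Play();

            transform.position = listener.position + Vector3.forward * radius;
        }

        void Update()
        {
            if (listener == null) return;

            // Orbit around the listener on the horizontal plane.
            transform.RotateAround(listener.position, Vector3.up, orbitSpeedDeg * Time.deltaTime);
        }
    }
    ```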

    Next Steps for Week 4

    Since we are visiting South Fayette next week, our main priority is integrating our freshly built camera, our initial input solution, and the sample scene’s art and sound assets into our toolkit package.

    Week 3 was all about refining our approach, tackling major technical challenges, and rethinking how users engage with CAVERN. With our first real playtest approaching, Week 4 will be a critical milestone in seeing how our work translates into actual user interactions.

    Stay tuned for Week 4 updates!

  • Week 2 (01/24/2025) – Learning the Tools and Gold Spike


    Welcome back to our Week 2 blog post! This week, our focus was on learning and experimenting with the tools needed for CAVERN, laying the foundation for development moving forward. Since many of the features we aim to implement require a deep understanding of existing systems, this week was all about researching, testing, and iterating on our ideas before committing to long-term solutions. In addition, we set up our GitHub repository for our package, and onboarded everyone (including artists) to use git commands.


    Rendering – Understanding CAVERN’s Projection System

    Rendering in CAVERN presents a unique challenge due to its curved display system, requiring a fundamentally different approach from traditional game rendering. Terri dedicated much of this week to learning Unity’s ShaderLab and HLSL, as well as understanding the updates to URP in Unity 6. With major changes introduced in this version, existing documentation is limited, making reverse engineering and experimentation essential in finding a viable solution.

    Notes on rendering solutions on the original API

    Previously, the old camera system relied on a multi-camera setup, using over 30 cameras per eye aligned in a circle, and combined their outputs into a final texture that was projected onto the screen.

    While it did work, it had many drawbacks.

    1. Firstly, it was a complicated system that relied on a complex hierarchy of GameObjects and cameras to function, making it difficult to work with.
    2. Secondly, the camera could not be rotated in the Unity editor, and transformations had to be done via code.
    3. Lastly, having so many separate cameras meant the render pipeline had to run many times, once per camera, and the outputs then had to be stitched together to form the final image. This resulted in a heavy rendering performance penalty and severely limited the density of objects developers could place in their scenes.

    While there was a single-camera version in the old system, it was still a work in progress and incomplete, and it was built on Unity’s Built-In Render Pipeline (BIRP), which is deprecated in Unity 6.

    Alongside this research, Terri began working on reverse engineering the old single-camera. This process was initially broken down into two steps:

    • Render the world view into a cubemap and convert it into an equirectangular texture.
    • Sample from the equirectangular texture in a custom shader and project it onto the CAVERN display.

    However, as the old single-camera system was a work in progress, the projection of the world onto the CAVERN display still resulted in cropping errors and warping.

    Taking inspiration from part of the old single-camera, rather than converting the cubemap into an equirectangular texture, the new approach samples directly from the cubemap. This proved to be a viable option, as it is relatively trivial to calculate the direction of a point on the physical screen from the center of the CAVERN.

    This diagram explains how the cubemap is sampled.
    The photo shows a monoscopic view of a flat floor projected in the CAVERN using the new camera. Despite the curved screen, the floor is projected correctly as a flat plane.
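
    To illustrate the core idea in code, here is a simplified sketch of the math. The parameter names are ours for illustration; the real sampling happens in a shader:

    ```csharp
    using UnityEngine;

    // Sketch of the direct cubemap-sampling idea: for a point on the
    // cylindrical screen, compute its direction from the CAVERN's center.
    // That normalized direction is the cubemap lookup vector.
    public static class CavernProjectionMath
    {
        // u: 0..1 across the screen's arc, v: 0..1 from bottom to top.
        public static Vector3 ScreenUVToDirection(
            float u, float v,
            float arcDegrees,   // angular span of the screen
            float radius,       // cylinder radius in meters
            float bottomY,      // height of the screen's bottom edge
            float topY)         // height of the screen's top edge
        {
            // Horizontal angle of this column of pixels, centered on "forward".
            float angle = (u - 0.5f) * arcDegrees * Mathf.Deg2Rad;

            // Point on the physical screen, relative to the CAVERN's center.
            var point = new Vector3(
                Mathf.Sin(angle) * radius,
                Mathf.Lerp(bottomY, topY, v),
                Mathf.Cos(angle) * radius);

            // The normalized direction is all a cubemap sample needs.
            return point.normalized;
        }
    }
    ```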

    With this new approach, not only was the system simplified from over 30 cameras to just 1, but rendering performance also improved significantly, as we now only need to sample the output of a single camera rendering into a cubemap. From our tests, the performance increase ranged from 100% to 200%, depending on the output resolution.


    Input – Exploring Vive Tracker Integration

    Being a non-traditional immersive platform, creators making experiences for the CAVERN tend to gravitate towards non-traditional, immersive controls. In the past, teams have used the Orbbec Femto Bolt body trackers and the HTC Vive position trackers. These both allow creators to get information about where people and objects are located within the CAVERN. The existing CAVERN API had no built-in input functionality, so each team in the past had to figure it out on their own.

    Our first goal is to integrate Vive Trackers, as their use cases came to mind most naturally. Previously, teams ran into many issues with the trackers not working or crashing, and they had to hardcode device IDs into their code, making it difficult to switch to a different Vive Tracker if one stopped working. Our initial goal was to simplify this process.

    Difficulties

    • Vive Trackers were originally designed to be used with a virtual reality headset (head-mounted display, or HMD), so using them in the CAVERN without one emerged as our main challenge. This usage is unsupported, undocumented, and relatively unknown, so research and lots of debugging were required. The current best practice others have found is to install SteamVR on the PC with a “null driver” that pretends to be a headset (the commonly reported settings are sketched after this list).
    • Getting the position, rotation, and pin input data into Unity. Past teams used OpenXR + OpenVR + SteamVR to solve this, but because SteamVR contains a lot of VR-specific code irrelevant to CAVERN, crashes often, and causes issues with Unity 6, we decided to use another package on GitHub called unity-openvr-tracking instead.
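
    For reference, the commonly reported null-driver recipe edits two SteamVR settings files. The paths and keys below come from community write-ups rather than official documentation, so treat them as a starting point:

    ```
    # In Steam/config/steamvr.vrsettings, under the "steamvr" section:
    "requireHmd": false,
    "forcedDriver": "null",
    "activateMultipleDrivers": true

    # In SteamVR/drivers/null/resources/settings/default.vrsettings,
    # under the "driver_null" section:
    "enable": true
    ```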

    Editor Tools – Prototyping a User-Friendly Unity Workflow

    Beyond the core technology, we also need to ensure that our toolkit is user-friendly and accessible. This week, Yingjie focused on prototyping a custom Unity Editor panel that will eventually house our CAVERN development tools.

    As a proof of concept, she developed an early prototype that allows users to click a button and change an object’s color to green—a simple but crucial first step toward a fully functional toolkit UI. This work provided insights into how we can integrate CAVERN-specific settings into Unity’s Editor workflow, making it easier for developers to set up and modify their projects without extensive manual adjustments. Additionally, Yingjie explored Unity’s package system, which will be important when we distribute the toolkit for future use.
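
    A sketch in the spirit of that proof of concept (the class and menu names are ours for illustration, not the actual toolkit code):

    ```csharp
    using UnityEngine;
    using UnityEditor;

    // Illustrative sketch of the prototype: an Editor window with a button
    // that turns the currently selected object's material green.
    public class CavernToolsWindow : EditorWindow
    {
        [MenuItem("Window/CAVERN Tools")]
        static void Open() => GetWindow<CavernToolsWindow>("CAVERN Tools");

        void OnGUI()
        {
            if (GUILayout.Button("Turn Selection Green"))
            {
                var go = Selection.activeGameObject;
                if (go != null && go.TryGetComponent<Renderer>(out var rend) && rend.sharedMaterial != null)
                {
                    // Record the change so it is undoable, then tint the material.
                    Undo.RecordObject(rend.sharedMaterial, "Turn Selection Green");
                    rend.sharedMaterial.color = Color.green;
                }
            }
        }
    }
    ```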

    Initial toolkit structure that was showcased in quarters in the following week.

    Art & Scene Experimentation – Understanding How CAVERN Renders Visuals

    Visual design for CAVERN is unique, given its curved projection and stereoscopic rendering. Ling & Mia spent the week experimenting with how different types of assets look when rendered inside CAVERN.

    To understand how models behave in CAVERN, Ling tested a swing asset intended for our sample scene, discovering that because of CAVERN’s rendering resolution, diagonal edges appear pixelated. Additionally, lower-poly models appear smoother, as the fine details of high-fidelity models tend to disappear and become harder to identify, so high fidelity is not a priority when making assets for the CAVERN. These findings will help optimize future models and textures, and inform how the rendering technology can be improved.

    Meanwhile, Mia focused on particle effects, exploring how they behave in a projection-based environment. She also worked on the initial logo concepts, creating four draft versions that will be refined in the coming weeks.


    Audio – Researching Spatial & Directional Sound Solutions

    Sound is a crucial part of immersion but is often overlooked until late in development. With CAVERN’s 4.1 speaker setup (front left, front right, rear left, rear right, and a subwoofer), Winnie focused this week on optimizing its use to create directional audio cues. Unlike traditional VR audio, which relies on headphones, CAVERN’s physical speakers require a different approach to spatialization.

    She categorized three key types of spatial sound:

    • Surround Sound – Uses speaker placement for horizontal positioning.
    • Spatial Audio – Software-driven 3D soundscapes, common in VR and headphones.
    • Directional Audio – Achieved with beam-forming speakers, allowing sound to be heard only in specific locations.

    Currently, CAVERN supports standard surround sound, with its middle speaker virtualized by balancing the front left and right channels at half volume. Winnie tested Unity’s 5.1 surround sound settings, finding them to be the most natural spatialization option so far. She also explored how the previous API handled moving sound sources, discovering that objects placed in the scene automatically rendered to the correct speaker. Additionally, while investigating the API, she and other team members identified an issue where the camera was flipped, causing sounds to render on the opposite side; this has now been corrected.

    General diagram of how the speakers are set up in CAVERN, with the center being virtualized.
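
    As a concrete reference, the speaker-mode switch Winnie tested boils down to a few lines of Unity’s audio API. Where this ends up living in our toolkit is still undecided; the wrapper below is just a sketch:

    ```csharp
    using UnityEngine;

    // Sketch: request 5.1 output so fully 3D AudioSources pan across
    // CAVERN's physical speakers.
    public static class CavernAudioSetup
    {
        public static void UseSurround()
        {
            AudioConfiguration config = AudioSettings.GetConfiguration();
            config.speakerMode = AudioSpeakerMode.Mode5point1;

            // Reset restarts the audio system with the new configuration;
            // playing sources briefly stop, so do this at startup.
            AudioSettings.Reset(config);
        }
    }
    ```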

    User Research & Transformational Framework

    While most of our research focused on technical aspects, we also explored how our toolkit can inspire and empower users. As part of this effort, we sent out a questionnaire to past CAVERN teams to gather insights on their experiences, challenges, and best practices. While we are still awaiting responses, we plan to follow up individually to ensure we collect useful data.

    Additionally, Josh & Winnie attended a Transformational Framework workshop, which focused on designing experiences that create a lasting impact on users. While we are not making a traditional game, we want to empower users to build meaningful experiences in CAVERN. This workshop taught us how to define and evaluate good design and intuitive tools.

    Transformational Framework

    Preparing for Quarters & Website Setup

    With Quarters coming up soon, we also began structuring our first formal presentation of the semester. Our goal is to clearly communicate our research, challenges, and early implementations while gathering feedback from faculty and industry experts. Additionally, we started setting up the website structure, which will serve as our central hub for documentation, blog posts, and toolkit resources.


    Next Steps

    Week 2 was all about exploration, research, and laying the groundwork for the next phase of development. We now have a stronger understanding of rendering, input, audio, and UI workflows, which will guide us as we move into implementation and refinement. Next week, we will begin building out our first set of tools and interactions, refining our prototypes, and preparing for our first user tests.

    Stay tuned for more updates in Week 3!


    Gallery

    Our GitHub repository that was set up this week.
  • Week 1 (01/17/2025) – Kicking Off Spelunx, Meeting with Stakeholders, and Tech Setups


    Welcome to the first dev blog for Spelunx! This week, we officially kicked off our project, met with key stakeholders, and began setting up our technical pipeline. With our core hours set, project scope discussed, and initial tools chosen, we’re ready to dive into development.

    A core goal of this project is to make CAVERN development more accessible, not just for our team but for future developers as well. That means prioritizing long-term support, intuitive tools, and well-documented best practices.

    Goals for the Week

    Before jumping into development, we needed to establish our workflow, tools, and project direction. Our main objectives were:

    • Meeting with key stakeholders (faculty, advisors, South Fayette partners) to understand our problem space.
    • Scoping the project goals and deliverables for the semester.
    • Exploring the existing CAVERN API.
    • Setting up version control (Git vs. Perforce) and documentation tools (Doxygen).
    • Defining core working hours for the team.

    Project Pillars Defined

    We met with Drew Davidson (faculty advisor) and Steve Audia (CAVERN builder) to discuss our project direction and expectations. Four core pillars emerged for our development:

    1. Input Systems – Vive trackers and Femto Bolt for motion tracking.
    2. Graphics – Optimize rendering to improve CAVERN visuals.
    3. Audio – Spatial sound experiments for immersive experiences.
    4. UX – Simplifying onboarding and development for new users.

    So that the design and technical knowledge we explore and encounter this semester can be passed down to future developers of the CAVERN, we also recognized and emphasized the importance of thorough documentation supporting our four core pillars.

    Technical Consultation with Ezra

    Ezra Hill, our technical consultant and the CAVERN API developer (also an ETC alum), walked us through the existing API, shared his toolkit, and gave valuable advice on best practices. Here are some key takeaways:

    • The current Unity camera setup offers either 30 Unity cameras (hard on performance) or 1 camera (broken for stereoscopic rendering). Since this is a semester-long project, we decided to tackle fixing the 1-camera solution, as this will greatly improve performance and support higher-fidelity art assets. We were also advised not to create a camera from scratch, but to extend the built-in Unity camera.
    • Vive Tracker and SteamVR integration is inconsistent and needs debugging, while Femto Bolts were previously integrated using Unreal instead of Unity. Both need to be streamlined for inclusion in our toolkit.
    • Since Unity 6 is the newest standard, we will convert everything to this version to ensure long-term support and compatibility for future developers. However, because Universal Render Pipeline (URP) is the default for Unity 6, shaders need modifications to work properly, and packages should be upgraded as well.
    • There is already spatial audio support, but it has not been properly explored, and Unity 6 changes also need to be investigated.
    • Version Control Considerations – Perforce could help with large files, but Git is fine for now.
    • Ezra also recommended starting with a functional rendering pipeline before focusing on additional features. This shaped our Week 2 priorities.

    South Fayette – Understanding Our Key Users

    South Fayette High School students are one of our key user groups, and their needs will shape how we design our toolkit. Our meeting (organized with the help of John Balash, ETC’s Director of Educational Outreach) with Matthew Callison, Director of Innovation & Strategic Partnerships at South Fayette, and Stacey Barth, the teacher of the pilot game design course hoping to develop on the CAVERN, helped us understand how our toolkit can fit into their curriculum.

    What We Learned About SF and Students

    • Most have limited to zero Unity, programming, and 3D experience.
    • The 5 students who self-selected into this pilot course are excited about game development & virtual worlds. One student has even applied for game design programs for college.
    • They are planning on collaborating with a creative writing course for children’s books to build educational experiences using the CAVERN.
    • The school is part of the Digital Promise’s League of Innovative Schools, which will gather for a conference on March 25th, and CAVERN will be showcased to other schools potentially interested in building a CAVERN.

    How We Can Support Them

    • Make it easy for students to get started through simplified onboarding.
    • Provide structured learning materials to assist SF teachers to build lesson plans.
    • Offer intuitive interactions so students can explore without complex coding.

    Apart from calibrating logistical schedules, we were happy to discover that our goals – a beginner-friendly toolkit, clear tutorials, easy-to-use Unity components, flexible interaction examples, and awe-inspiring demos – align closely with our key users’ needs.

    Sample Scene Art & Interactions Brainstorming

    After determining what features we will support in our toolkit, our artists brainstormed art and interactions that can best showcase a CAVERN experience: Vive Trackers and Femto Bolts, stereoscopic rendering, directional sound, and an immersive aesthetic.

    We then settled on a sample scene with a mystical, botanical feel, with a color palette of mainly blues and greens.

    Interactions we will prioritize to support include:

    • Mirror-like interactions through Vive Trackers following player movement.
    • Wind control powered by Femto Bolts.

    Other Progress and Setups

    Version Control & Documentation

    • Git (without LFS/Perforce for now) – We’ll add LFS or Perforce later when needed.
    • Doxygen Setup Completed! – We successfully set up automated API documentation and have it working at the link here.
    • Tutorial Documentation Tools – We explored options like GitBook, MkDocs, and Google Docs. For now, we will document internally in Google Docs, while the final published documentation platform will be decided in the future.

    User Research & Design Explorations

    • Our development will center around rapid iteration and playtesting with our users: South Fayette High School, current ETC students, and Interactive Story Lab, another CAVERN project this semester exploring live-action interactive film in the space.
    • This week, we created a questionnaire for past CAVERN developers to gather insights on how they designed experiences and the technical difficulties they encountered.

    Next Steps

    Moving into Week 2, our focus will be on starting research and development on various parts of our toolkit, including one-camera rendering for stereoscopic view, Vive Tracker integration, the Unity Editor tooling pipeline, spatial audio, and experimentation with art assets.

    We’re off to a great start, and we’re excited to push forward! Stay tuned for more updates next week.

    Gallery

    Composition Box from Friday’s Playtest to Explore Workshop