Category: Weekly Blogs

  • Week 7 (02/28/2025) – Halves Presentation

    Halfway through the semester, we reached an important milestone: Halves Presentation. While our team has been deeply focused on development, this was our moment to step back, reflect on our progress, and effectively communicate our work to the broader ETC community.

    CAVERN development is highly technical, and much of our work happens under the hood—through rendering optimizations, tracking solutions, and workflow improvements. This presentation was an opportunity to not just showcase the toolkit itself, but also highlight the problem-solving process behind it.

    Beyond the presentation itself, this week also included two major public demos—one at the ETC CAVERN Showcase, where students and faculty could experience Spelunx firsthand, and another at South Fayette High School, where we introduced K-12 STEAM teachers to CAVERN and refined the student onboarding experience.

    And, of course, we ended the week with a celebratory brunch, marking an exciting half-semester of progress!


    Final Refinements – Preparing for Halves

    With Halves on Wednesday, we dedicated the first half of the week to finalizing documentation and refining our presentation. A key priority was ensuring that our technical work was not only well-structured for future developers but also clearly explainable to a general audience.

    Documenting the Camera – A Mathematical Guide for Future Developers

    One of the most significant additions this week was a formal mathematical documentation of the CAVERN camera system.

    Since CAVERN uses stereoscopic projection on a curved screen, traditional rendering approaches don’t work out-of-the-box. While we had successfully developed a single-camera rendering pipeline to replace previous inefficient multi-camera solutions, we realized that future developers would struggle to modify or expand upon our work without a clear mathematical breakdown.

    To address this, we documented:

    • How projection from a single camera to a curved screen is achieved.
    • The transformations involved in mapping the 3D scene onto CAVERN’s display.
    • How developers can modify camera parameters if the CAVERN setup changes.
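    As a rough illustration of the first point, the core idea can be sketched in a few lines of Python: map each screen pixel to its point on the cylindrical screen, then form the view direction from the camera used to sample the rendered scene. The dimensions and function name below are illustrative placeholders, not CAVERN's real measurements or our actual shader code.

    ```python
    import math

    def screen_point_to_view_dir(u, v, radius=3.0, height=2.4,
                                 arc_deg=270.0, cam=(0.0, 1.2, 0.0)):
        """Map normalized screen coords (u, v) in [0, 1] on a cylindrical
        screen to a unit view direction from the camera. All dimensions
        here are illustrative, not CAVERN's actual measurements."""
        theta = math.radians((u - 0.5) * arc_deg)   # angle along the screen's arc
        x = radius * math.sin(theta)                # point on the cylinder wall
        z = radius * math.cos(theta)
        y = v * height
        # Direction from the camera to that screen point; this is the
        # direction used to sample the single camera's rendered output.
        dx, dy, dz = x - cam[0], y - cam[1], z - cam[2]
        length = math.sqrt(dx * dx + dy * dy + dz * dz)
        return (dx / length, dy / length, dz / length)
    ```

    The real documentation also covers how these parameters (radius, height, arc) feed in from the toolkit's dimension settings, so the same math adapts if the CAVERN setup changes.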

    Toolkit Usage Diagrams – Bridging the Gap for New Users

    In addition to the camera documentation, we also created diagrams and structured guides to make our toolkit more accessible for non-programmers.

    Since Spelunx is intended for a range of users, from experienced Unity developers to high school students exploring immersive media for the first time, we needed to ensure that our documentation was clear, visual, and easy to follow.

    By refining these materials before Halves, we ensured that we were not just delivering a working toolkit, but also providing the resources needed to make it usable and expandable.


    Halves Presentation and CAVERN Showcase

    On Wednesday, we presented our progress to faculty, peers, and members of the broader ETC community. The response was overwhelmingly positive—people were excited to see how Spelunx was making CAVERN development more accessible, and many were interested in experimenting with the toolkit themselves.

    However, while slides and videos were useful for explaining our process, CAVERN is a space that must be experienced firsthand to be fully appreciated. For this reason, we extended an open invitation to faculty and students to visit the ETC CAVERN Showcase on Thursday, where they could:

    • Experience our sample scene in full stereoscopic 3D.
    • Try out interactions from CAVERN Jam projects.
    • See how different depth cues, motion, and sound work in an immersive space.

    Key Feedback from the Showcase

    As attendees explored the space, we gathered valuable insights into how people perceive and engage with CAVERN environments:

    • The 3D effect was highly convincing, making the screen “disappear.” This reinforced that our sample scene’s depth and spatial design were effective.
    • Horizon alignment felt slightly off in some scenes. This is something we will refine in upcoming iterations.
    • People were drawn to more dynamic, reactive interactions. Suggestions included having objects respond to player presence and using subtle movements to enhance immersion.
    • Ambience and atmosphere were strong, but directional sound could be showcased better. Now that surround sound is properly configured, we plan to incorporate more layered audio interactions in future updates.

    South Fayette Visit – Introducing CAVERN to Educators

    On Friday, we visited South Fayette High School for the second time—this time, not just to engage with students, but to introduce CAVERN to K-12 STEAM teachers.

    Bringing CAVERN to the Classroom

    Our goal was to demonstrate how immersive environments can be integrated into education and to help teachers understand the process of creating interactive experiences in CAVERN.

    During the session, we showcased:

    • The fundamentals of CAVERN as an interactive space.
    • How students can use Spelunx to quickly develop and test ideas.
    • Examples from CAVERN Jam that illustrated creative interaction design.

    The response was enthusiastic—many teachers saw potential applications in storytelling, science visualization, and interactive learning.

    Hands-On Debugging and Support for Students

    After the demo, we worked closely with Stacey and her students to provide technical guidance on working with CAVERN.

    • We walked Stacey through the full process of importing Unity packages, setting up scenes, and configuring CAVERN’s display.
    • We debugged a Blender-to-Unity 6 issue, ensuring that students could properly import 3D models into their projects.

    This session reinforced that beyond just providing a toolkit, our role is also about empowering future creators—ensuring that educators and students feel confident using these tools independently.


    Celebrating Our Half-Semester Milestone

    After an intense week of presenting, testing, and refining, we took a well-deserved break with a celebratory brunch in Shadyside. It was a moment to appreciate how far we had come—from our initial pitch to a fully functional toolkit, a successful game jam, and multiple real-world demos.

    But this was just the halfway point. Looking ahead, we are preparing to:

    • Refine interactions and dynamic responsiveness based on showcase feedback.
    • Continue working with South Fayette to ensure successful student projects.
    • Explore advanced features, including potential support for additional tracking methods beyond Vive Trackers.

    Week 7 was about sharing our work with the world—now, we move forward with clear next steps and renewed energy.

  • Week 6 (02/21/2025) – CAVERN Jam!

    This week was all about CAVERN Jam, the first major public test of Spelunx in the hands of developers outside our team. The two-day event brought together eight participants who created six unique CAVERN experiences, each exploring different aspects of interaction, immersion, and spatial design within the space.

    Beyond being a showcase of creativity, CAVERN Jam was a true usability test. For the first time, we were able to see how developers—some with extensive Unity experience, others primarily artists—navigated our toolkit, documentation, and development workflow. The results were exciting: participants could quickly set up projects, create compelling interactions, and adapt to CAVERN’s unique affordances.

    At the same time, technical challenges emerged, reinforcing that real-world testing is essential. Issues like computer crashes, audio misconfiguration, and installation friction highlighted areas for improvement. However, the overall sentiment was clear: Spelunx made developing for CAVERN dramatically easier, and participants left feeling more confident about creating experiences in this space.


    CAVERN Jam – Six Unique Worlds

    Over two days, participants created and refined six interactive experiences, each showcasing a different strength of CAVERN. The final showcase drew over 20 attendees, including faculty, students, and other ETC developers, all eager to see what was possible in this immersive environment.

    AlexHallFleshWall – The Power of Presence

    A surreal, unsettling experience where giant eyeballs track the person wearing a Vive Tracker, creating an eerie sense of being watched. Interestingly, because CAVERN currently tracks position rather than head orientation, the person inside may not feel like they are being tracked, but everyone else in the space perceives the effect perfectly. This highlights a fascinating design consideration for multiplayer VR-like spaces.

    Additionally, because the art was placed extremely close to the screen, distortion was minimal—a strong contrast to more expansive 3D worlds. Alex, primarily an artist, spent around 10 hours, mostly on modeling, and was able to integrate her work into CAVERN with minimal technical difficulty.

    Josh’s Frog Choir – Multiplayer Interaction with Vive Trackers

    A musical interaction where four Vive Trackers are used to trigger singing frogs, with volume and pitch changing based on player proximity. The design naturally encouraged different playstyles—players could place a tracker in one frog’s zone and leave it singing, or multiple people could move around dynamically to shift the composition.

    This piece demonstrated CAVERN’s multiplayer potential, allowing spatial coordination and emergent behavior between players. Josh, a programmer and Spelunx team member, built the system in around four hours, showing how quickly interactions could be developed with the toolkit.
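    One plausible shape for that proximity rule, sketched in Python (this is not the actual jam code; the function, linear falloff, and radius are invented for illustration):

    ```python
    import math

    def frog_volume(frog_pos, tracker_positions, radius=2.0):
        """Volume rises as the nearest Vive Tracker approaches the frog:
        1.0 at the frog itself, fading to 0.0 at `radius` metres.
        A linear falloff is assumed purely for illustration."""
        if not tracker_positions:
            return 0.0
        nearest = min(math.dist(frog_pos, t) for t in tracker_positions)
        return max(0.0, 1.0 - nearest / radius)
    ```

    A rule like this is what enables the playstyles described above: a tracker parked in a frog's zone holds a steady note, while trackers carried around the space shift the mix dynamically.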

    Jing’s Bubble Game – Transparency and Depth in CAVERN

    Building on our provided sample scene, Jing introduced a bubble interaction mechanic, where players could use Vive Trackers to repel and pop bubbles. While simple, the interaction proved surprisingly engaging—players found joy in physically reaching out and watching bubbles react in real-time.

    One of the most exciting takeaways from this experience was the way bubbles created a visual sense of depth. As they moved across the space, their transparency allowed players to see both the real world behind them and the virtual world inside CAVERN, reinforcing immersion in a way that was uniquely effective.

    Terri’s Head-Tracked Anime Girl – Validating Head Tracking

    A technical proof-of-concept rather than a full experience, Terri’s scene featured a dancing anime character that players could view from different angles using head-tracked rendering. This served as a gold spike, demonstrating that CAVERN could correctly handle dynamic head tracking—a crucial feature for future immersive storytelling and interactive projects.

    Winnie’s Little Match Girl Experience – Immersive Storytelling

    Inspired by a previous mixed-reality project, Winnie’s world aimed to blend spatialized sound and visual storytelling. The experience began with players lighting a candle using Vive Trackers, with a surround sound voice guiding them through a dreamlike transition into a beautiful world featuring a giant, ethereal whale.

    Though technical issues prevented some elements from functioning—the candle script failed, and the computer was set to stereo instead of surround sound—the core environmental design remained effective. Even without full interactivity, attendees found themselves immersed in the scale and atmosphere of the scene, demonstrating that CAVERN’s visual potential alone is powerful.

    Grace & Selena’s Femto Bolt Tracking – Exploring New Input Methods

    As part of Anamnesis, another CAVERN project team focused on live-action filmmaking, Grace and Selena experimented with using Orbbec Femto Bolts for motion tracking. However, because our toolkit currently only supports Vive Trackers, they faced integration challenges.

    Despite the setback, their work reinforced the importance of future-proofing Spelunx for multiple input methods—something we will consider in later iterations.


    Reflections from the Showcase

    With over 20 attendees, the final showcase was an exciting moment—not just for the participants, but for everyone curious about the potential of CAVERN development. The reaction was overwhelmingly positive:

    • Attendees were impressed by the immersive scale of the space, many expressing interest in creating future projects.
    • Developers found the toolkit intuitive, with most participants reaching a fully functional setup in under 20 minutes—a process that previously took hours.
    • The CAVERN Previewer was a highlight, allowing artists to see their work rendered correctly before testing in the space.

    At the same time, developers identified areas for improvement:

    • Manual installation of the toolkit via tarball was cumbersome, reinforcing our plan to move toward automated installation.
    • Some experiences were under-tested before the showcase, leading to small but noticeable issues (such as the surround sound misconfiguration).
    • Error messages in Unity needed better suppression, as certain warnings confused new users.

    Despite these challenges, confidence in developing for CAVERN increased from 3/5 to 4/5 among participants, and our ability to push real-time bug fixes through UPM (Unity Package Manager) proved invaluable—we patched an issue within an hour during the jam itself.


    Post-Jam Refinements and Halves Preparation

    Following CAVERN Jam, the rest of the week was spent analyzing feedback, making refinements, and preparing for Halves presentations. Rather than focusing on what didn’t work, we looked at what worked well and how it could be improved.

    Fixing Surround Sound and Display Configuration

    One of the key refinements was adjusting CAVERN’s audio settings. By switching from 5.1 to 7.1 surround, we ensured that only the front left, front right, rear left, rear right, and subwoofer were active—avoiding unintended stereo configurations. This change, along with automatically setting Unity projects to 7.1, should prevent similar issues in the future.

    Additionally, we worked with Steve to test different display mirroring solutions, ultimately shifting toward a 4th monitor setup to reduce warping issues.

    Finalizing Sample Scene Updates

    To refine the sample scene before Halves, we:

    • Added a butterfly animation to better demonstrate movement across the curved screen.
    • Adjusted skybox and shaders for more atmospheric visuals.
    • Created and tested a CavernAudioSource component, but ultimately decided that audio tutorials were more valuable than an additional technical feature.

    Automating Installation

    Following participant feedback, we removed manual tarball installation in favor of a streamlined automatic install, significantly simplifying the setup process for future users.


    Next Steps – From CAVERN Jam to the Next Stage

    CAVERN Jam validated the strengths of Spelunx, while also highlighting key areas for refinement. With these lessons in hand, our next focus is:

    • Finalizing toolkit improvements for Halves.
    • Continuing to refine usability based on developer feedback.
    • Looking ahead to more structured user testing with South Fayette students.

    Week 6 was a milestone moment, proving that Spelunx is not just a technical tool, but an accessible way to unlock creativity in CAVERN. Now, we move forward with the insights we’ve gained, ready to make development in CAVERN even better.

  • Week 5 (02/14/2025) – Preparing for CAVERN Jam, Stabilizing Vive Trackers, and Learning from the Past

    With CAVERN Jam scheduled for Monday of Week 6, this week was dedicated to preparing for our first major user test. Unlike internal testing, where we control every variable, this would be the first time Spelunx was in the hands of external developers—people who would engage with the toolkit in ways we might not have anticipated. To make the most of this, we needed to refine the toolkit, structure the event to gather meaningful usability insights, and ensure that all technical components were stable.

    At the same time, we had a breakthrough in Vive Tracker integration, solving a long-standing issue that had frustrated previous CAVERN teams. We also continued researching past ETC projects, testing Hycave’s abandoned scene with our toolkit and interviewing former developers to gain insight into common challenges in CAVERN development.


    Structuring CAVERN Jam – A User Test Disguised as a Game Jam

    A major challenge in preparing for CAVERN Jam was ensuring that participants could effectively test Spelunx without getting lost in open-ended experimentation. If we gave them no structure, they might struggle to engage with the toolkit meaningfully; if we overly constrained them, they wouldn’t be using it in real-world development conditions.

    To balance this, we designed the event around a focused challenge:

    “Build one meaningful interaction that showcases CAVERN’s unique affordances using Spelunx.”

    This format allowed participants to explore core features of the toolkit—stereoscopic rendering, Vive Tracker input, surround sound, and sample scenes—without needing to build an entire experience from scratch.

    What We Provided to Participants

    Since this was our first real usability test, we wanted to make sure that barriers to getting started were minimized. To support participants, we prepared:

    • A pre-configured Unity project containing Spelunx’s core features, so developers didn’t have to manually set up anything.
    • Step-by-step onboarding tutorials that walked through the essential tools and how to use them.
    • Live support—our team was present throughout the event to answer questions and observe developer workflows.
    • A pre-survey and post-survey, allowing us to track participants’ backgrounds, pain points, and overall impressions.
    • An incentive system (boba!), ensuring that participation remained high and that people stayed engaged.

    How We Measured Usability

    Observing participants in real time provided critical insights into where Spelunx was intuitive and where it wasn’t. Instead of just asking developers whether they liked the toolkit, we tracked how they used it and looked for patterns, such as:

    • Which tools were discovered naturally, and which needed explanation?
    • How long did it take for participants to build their first interaction?
    • Were there common points of frustration, confusion, or inefficiency?

    We also gave participants freeform interaction with Spelunx, rather than forcing them through a rigid tutorial. By watching where they struggled or where they found workarounds, we could pinpoint what needed refinement in future iterations.

    This wasn’t just about seeing if developers could use Spelunx—it was about understanding how they used it, where they hesitated, and what could be improved.


    Vive Trackers: From Persistent Crashes to a Stable Solution

    One of the biggest remaining technical hurdles before CAVERN Jam was ensuring that Vive Trackers worked reliably. Previous teams had struggled with frequent crashes, inconsistent tracking, and complex configuration setups that added friction to development. Since CAVERN Jam involved testing Spelunx’s ability to handle input systems, we needed a solution that wouldn’t interrupt participants’ workflow.

    Over time, different teams had tried five different approaches to integrate Vive Trackers, each with its own trade-offs:

    1️⃣ OpenXR + OpenVR + SteamVR – The most widely used method, but prone to crashes and outdated dependencies.
    2️⃣ OpenXR + OpenVR + unity-openvr-tracking – Allowed trackers without a headset, but caused Unity to crash upon exiting Play mode.
    3️⃣ OpenXR + Vive Tracker Profile – Could have eliminated OpenVR, but only worked if a VR headset was plugged in.
    4️⃣ Vive Input Utility (VIU) – Supported both Vive Trackers and Controllers, but initially required multiple dependencies.
    5️⃣ Libsurvive – A complete SteamVR replacement, but required reconfiguring all tracking hardware.

    For most of the semester, we had been refining option 2, modifying unity-openvr-tracking to simplify setup. However, the persistent Unity crashes made it clear that this approach wasn’t reliable enough for real-world development.

    The breakthrough came when we revisited option 4, Vive Input Utility (VIU). Initially, VIU seemed overly complex, requiring OpenXR, OpenVR, and SteamVR all running at once. However, after examining its implementation, we realized that VIU handled tracking differently, avoiding the crash issue that plagued OpenVR.

    By combining our previous modifications to unity-openvr-tracking with VIU’s more stable tracking approach, we created a hybrid solution that eliminated crashes while maintaining compatibility with our existing setup.

    The final result was a fully functional, crash-free Vive Tracker integration, ensuring that CAVERN Jam participants could experiment with input tracking without technical disruptions.


    Testing Spelunx Against Past CAVERN Projects

    To validate our work, we also tested Spelunx against past CAVERN projects. A key moment was running Hycave’s abandoned scene through our toolkit. This scene had been scrapped in a previous semester due to camera and rendering limitations, but when tested with Spelunx’s optimized rendering pipeline in Unity 6, it ran smoothly.

    This test confirmed that our improvements weren’t just theoretical—they actively solved real problems that had stopped past teams from completing their projects. Seeing this scene work for the first time was a strong indication that Spelunx was meaningfully improving the development process.

    Additionally, we continued interviewing previous ETC teams to gather long-term insights on recurring challenges in CAVERN development. Many former developers cited a lack of documentation as one of the biggest struggles—each team had to rediscover solutions that previous groups had already found.

    By consolidating this knowledge into Spelunx, we aimed to prevent future teams from losing progress and ensure that technical discoveries build on each other rather than being forgotten each semester.


    Camera 3.0 – Head Tracking

    Since the Cavern is a physical space, players are encouraged to move about within it. The player’s position can also be tracked using accessories such as the VIVE Tracker. An example would be a game where a creepy set of eyes follows a player as they move about.

    However, since the camera is currently assumed to be in the centre of the screen when calculating P, this creates the issue where the perspective of the rendered image is incorrect when the player moves away from the centre.

    In the case of the above game example, the eyes might look like they are looking at the player when the player is standing in the centre of the space, but appear to be looking past them when they walk around.

    The fix is relatively simple. Rather than splitting the screen into quadrants at the centre of the space, we split it into quadrants centred on the tracked head position.
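    In code terms, the only change is which point the quadrant angle is measured from. A minimal Python sketch (names are illustrative; the real logic lives in the rendering shader):

    ```python
    import math

    def quadrant(screen_point, head_pos=(0.0, 0.0)):
        """Return the cardinal quadrant a screen point falls in, with the
        quadrant boundaries measured from the tracked head position.
        Passing the Cavern's centre as head_pos recovers the old behaviour."""
        dx = screen_point[0] - head_pos[0]
        dz = screen_point[1] - head_pos[1]
        # Angle of the screen point as seen from the head: 0 = north (+z),
        # increasing clockwise.
        angle = math.degrees(math.atan2(dx, dz)) % 360.0
        names = ["north", "east", "south", "west"]
        return names[int(((angle + 45.0) % 360.0) // 90.0)]
    ```

    With the head rather than the room centre as the origin, the quadrant a screen point falls in shifts as the player walks around, keeping the rendered perspective consistent.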

    Other Updates

    Outside of CAVERN Jam preparation and technical testing, we made several key refinements:

    • We met with Carl to discuss how to communicate Spelunx’s impact more effectively. His feedback helped us shape our narrative—rather than just listing features, we should emphasize the problems Spelunx solves and the challenges we overcame.
    • We finalized and sent the structured learning roadmap to Stacey at South Fayette, ensuring that students can onboard gradually without being overwhelmed.

    These discussions helped shape our approach for how we present Spelunx moving forward—not just as a toolkit, but as a solution to long-standing CAVERN development challenges.


    Next Steps – Running CAVERN Jam and Analyzing Results

    With all preparations complete, CAVERN Jam is happening on Monday at the start of Week 6. Now, we shift to:

    • Observing how developers engage with Spelunx in real-time.
    • Analyzing usability bottlenecks.
    • Gathering structured feedback for refinement.

    Stay tuned for Week 6 updates, where we break down our findings from CAVERN Jam! 🚀

  • Week 4 (02/07/2025) – South Fayette Visit, Toolkit 1.0, and Team Bonding

    This week, we had our first visit to South Fayette High School, where we met the students who will be using our toolkit. We also reached a major milestone with Toolkit 1.0, bringing together stereoscopic rendering and our beautiful botanical garden sample scene into a fully testable and demo-able package. To wrap up the week, we took a well-earned break for High Tea team bonding, celebrating our progress before diving into the next phase of development.

    Camera 2.0 – Achieving Stereoscopic Rendering (For realsies this time)

    Top view of Cavern rendering with two cubemaps.

    Last week, we had limited success with stereoscopic rendering. To recap the issue: as the screen angle approaches 90°, the stereoscopic effect weakens because the effective IPD (interpupillary distance) between the eyes approaches zero. Furthermore, beyond 90°, the left eye’s view is now to the right of the right eye’s, so the depth perception of objects is reversed.

    To solve these issues, we use four cubemaps, each offset from the centre in one of the cardinal directions. We then split the screen into four quadrants, one per cardinal direction. Finally, within each quadrant, the left and right eyes each sample from a different cubemap.

    When facing northwards, the left and right eyes sample from the west and east cubemaps respectively.
    When facing southwards, the left and right eyes sample from the east and west cubemaps respectively.
    When facing eastwards, the left and right eyes sample from the north and south cubemaps respectively.
    When facing westwards, the left and right eyes sample from the south and north cubemaps respectively.
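    The mapping above boils down to a small lookup. Sketched in Python (the real selection happens per-pixel in our shader, so the names here are purely illustrative):

    ```python
    def eye_cubemaps(facing_deg):
        """Map a facing angle (0 = north, increasing clockwise) to the
        quadrant name and the (left eye, right eye) cubemap pair."""
        quadrants = [
            ("north", ("west", "east")),
            ("east", ("north", "south")),
            ("south", ("east", "west")),
            ("west", ("south", "north")),
        ]
        # Shift by 45 deg so each quadrant is centred on its cardinal direction.
        return quadrants[int(((facing_deg + 45) % 360) // 90)]
    ```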

    A flaw of this method is that it tends to create a vertical seam along the lines where the quadrants meet. Despite that, during testing, the majority of users, who spent about 15 minutes in the Cavern (the average length of an experience), did not notice it unless it was specifically pointed out to them. And if they don’t see the problem, it doesn’t exist. I count that as a win!

    Vertical seam where quadrants connect.

    South Fayette Visit

    Learning from Future CAVERN Developers

    On Wednesday, we traveled to South Fayette High School to introduce students to CAVERN and understand how they engage with interactive development. We wanted to understand what drew them to self-select into this Building Virtual Worlds course, what kinds of experiences they are hoping to build, and what support they need to get started.

    Students shared a mix of interests, with some drawn to programming and game mechanics, while others were excited about art, world design, and immersive storytelling. When asked what they would build if they could make anything, students imagined overgrown ruins, apocalyptic worlds, and turn-based RPGs with rich visuals.

    Collaborating on a Shared Lesson Roadmap

    One challenge students faced was figuring out where to start. They weren’t sure how to break development down into manageable steps, and using the toolkit through UPM (Unity Package Manager) was unfamiliar and difficult. Stacey, their teacher, suggested that if we could provide a structured learning roadmap, she could incorporate it into their lesson plan. In return, we could incorporate their lesson plans into our toolkit for future teachers and developers.

    Working with a Different CAVERN

    South Fayette’s CAVERN setup was different from ours at ETC, which led to unexpected technical hurdles when testing our package:

    • Dimensions – South Fayette’s CAVERN screen sits 4 inches above the ground, as opposed to starting directly from the floor like at ETC. Its radius is also larger and its screen taller. We already have settings for dimensions in our toolkit, but have yet to determine the actual numbers to assist them with development.
    • Display Setup – Unlike our driver-level mirroring, theirs uses VNC remoting, so more documentation on working with different mirroring options is needed.
    • Speaker Configuration – Their four speakers are mounted above the space instead of facing inward, and they use a stereo setup instead of a quad arrangement. This affects spatial sound and will require a follow-up visit to test how to adjust our toolkit.

    Toolkit 1.0

    We also released the first stable version of our toolkit! We integrated what each of us worked on in the previous weeks into a Unity package.

    What is in the Package

    • Stereoscopic Rendering – Our optimized single-camera rendering system is now fully functional, with monoscopic mode as a toggle-button option.
    • Sample Scene – A botanical garden with giant flowers and a swing that showcases 3D immersion, along with spatial sound examples such as 2D ambience and a circulating 3D cricket sound.
    • CAVERN Tools Panel in Unity – A simple UI that allows users to add the CAVERN camera setup with one click.
    • CAVERN Previewer – A Unity tool that lets developers preview their scene at the correct size and curvature before testing in CAVERN. Saves a lot of time during development.

    What is still in Progress

    • Vive Tracker Integration – Not included in this release due to ongoing stability issues, which we plan to fix in the next iteration.

    Team Bonding: A Steampunk High Tea Adventure

    After an intense few weeks of development, we also had our official team bonding! After voting between karaoke, axe throwing, and more, we settled on High Tea at the Inn on Negley. To immerse ourselves in the scene even more, we decided to dress up in a Steampunk theme. Just look at the photo! We had a lot of fun, not only as developers, but as a team.

    Spelunx Team Bonding Photo

    Other Updates

    While the South Fayette visit and Toolkit 1.0 were the big highlights, there were plenty of other developments this week:

    • Interaction scripts: we implemented a mirroring effect and a shy-creature (backing away from the tracker) interaction script for non-programmer CAVERN developers to use directly as assets. These will also be featured in our sample scene in the future.
    • Fullscreen Integration: we moved fullscreen-on-play into our package, so that developers can test in the CAVERN without having to build the project every time.
    • CAVERN Preview: Constantly improving upon feedback, we added a CAVERN screen renderer: when you enter Play mode, you can see what your 3D world will look like on the CAVERN screen, right at your desk. This has helped our artists a lot in visualizing where to place assets in the scene.

    Next Steps for Week 5

    With Toolkit 1.0 in place, our next priority is to rapidly test with developers to understand how we can improve. To that end, we decided to host a CAVERN Jam in Week 6. In Week 5, we will polish all of our current technology: cleaning up the rendering code style, actually fixing Vive Tracker integration, building a more modular sample scene, creating onboarding documentation, and planning the user testing process.

    Stay tuned for Week 5 updates!

  • Week 3 (01/31/2025) – Quarters, Stereoscopic Camera, and Vive Tracker Troubles

    Welcome back! This week, we had our Quarters walkarounds, where faculty provided feedback to help us refine our approach to interactions and toolkit usability. A follow-up discussion with Brenda Harger gave us deeper insights into narrative design and engagement strategies in CAVERN’s environment. On the technical side, we made a major breakthrough in stereoscopic rendering, but Vive Tracker integration remained highly unstable. Finally, we planned our Toolkit 1.0 for next week’s visit to South Fayette.


    Refining How We Communicate Our Project During Quarters

    This week, we had our first major round of faculty feedback through Quarters walkarounds. During these sessions, faculty rotated between project teams, offering guidance and helping us evaluate our initial direction. This was also the first time we formally presented our project since last semester’s pitch, which meant reassessing how we communicate our goals.

    We discovered that faculty had differing expectations for our toolkit. Some envisioned a fully no-code, drag-and-drop system, while we had always planned for a toolkit that still requires coding for non-CAVERN-specific interactions. This raised an important question: How do we define accessibility in our toolkit? Our approach assumes that by designing for high school students as a less technical user base, we will also enable ETC graduate students, regardless of coding experience, to create impactful experiences in CAVERN.

    Another key realization was that the term “novice” can mean many different things—a user could be new to programming, game development, or CAVERN itself. Faculty feedback helped us recognize that we need to clearly define our target audience and ensure that our documentation and onboarding process supports different levels of experience.

    During quarters, we focused on getting feedback on the above top-priority questions.

    Exploring Non-Tech Multiplayer Interaction Techniques with Brenda

    After Quarters, we met in the CAVERN with Brenda Harger, the ETC professor who teaches Improvisational Acting, to explore in person how users engage with the space and how interaction design could be made more intuitive.

    Just as an interactive storytelling children’s game like Going on a Bear Hunt is fun and engaging even without any technology or props, Brenda encouraged us to consider using the wide 20-foot play area CAVERN provides as an opportunity for multiplayer social experiences. Clapping, stomping, and following movements are all simple interactions beyond the digital screen or complex controls, but perfect for the space.

    In addition, CAVERN’s curved wall creates potential for moments of surprise – objects can appear from behind, wrap around players’ periphery, or a sound cue can guide attention subtly without requiring direct instructions. Minimizing explicit verbal guidance and allowing players to naturally discover mechanics can make interactions feel more immersive and intuitive.

    Sometimes, simple environmental cues and physical actions outside of tech solutions can be just as compelling as complex mechanics. This conversation helped us rethink how to blend physical actions with digital interactions to create a seamless, intuitive experience inside CAVERN.


    Camera 1.0 – Achieving Stereoscopic Rendering (Somewhat)

    With the success of last week’s monoscopic camera, this week was the time to start bringing the world into 3D by exploring stereoscopic rendering.

    Stereoscopic rendering allows us to achieve the “popping out” effect we see when watching a 3D movie. To render a stereoscopic view on a flat screen, we render the scene twice, with a slight offset for each eye, and overlay the two images on top of one another. When the player puts on 3D glasses, the glasses filter the images so that each eye sees only one of them, and the brain combines the two to perceive depth.

    The offset is known as the interpupillary distance (IPD): the distance between our eyes. On average, adult humans have an IPD of 63mm. In the case of the CAVERN, the output for each eye is vertically stacked in the output render buffer, and specialised software overlays them when projecting onto the screen.
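
    As a rough illustration (not the actual toolkit code), computing the two per-eye camera positions from a head position, a facing direction, and the IPD can be sketched like this:

```python
import math

IPD_MM = 63.0  # average adult interpupillary distance

def eye_positions(head, facing, ipd_mm=IPD_MM):
    """Offset the head position half the IPD to each side,
    perpendicular to the facing direction on the horizontal plane (y-up)."""
    fx, fy, fz = facing
    norm = math.sqrt(fx * fx + fy * fy + fz * fz)
    fx, fy, fz = fx / norm, fy / norm, fz / norm
    # Right vector on the horizontal plane: up x facing, with up = (0, 1, 0)
    rx, rz = fz, -fx
    rnorm = math.sqrt(rx * rx + rz * rz)
    rx, rz = rx / rnorm, rz / rnorm
    half = ipd_mm / 1000.0 / 2.0  # half the IPD, in metres
    hx, hy, hz = head
    left = (hx - rx * half, hy, hz - rz * half)
    right = (hx + rx * half, hy, hz + rz * half)
    return left, right
```

    Rendering the scene once from each of these two positions, then stacking the outputs vertically, produces the render buffer layout described above.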

    Left eye view is stacked on top of the right eye view.
    Overlaying both images on top of one another.

    We can also approximate the effect using multiple cubemaps for stereoscopic rendering.

    Top view of Cavern rendering with two cubemaps.

    Finally, at 11pm on Friday, our stereoscopic camera 1.0 was created and tested with two naive users of the CAVERN, and it garnered a great response – they were both able to see a cube floating in mid-air, popping out of the screen!

    But! Notice that as the screen angle approaches 90°, the stereoscopic effect weakens, because the effective IPD between the two rendered views approaches zero. Furthermore, beyond 90°, the view from the left eye is now to the right of the right eye’s view, so depth perception is reversed. While it works to some degree, it’s not a perfect solution. Oh well, that’s next week’s problem!
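
    To make the falloff concrete, here is a small illustrative calculation (a simplification we are assuming for illustration: it treats the eye separation as a fixed axis while the view direction sweeps around the screen). The baseline that actually produces stereo disparity is the component of the IPD perpendicular to the view direction, which shrinks with the cosine of the angle and flips sign past 90°:

```python
import math

IPD = 0.063  # metres

def effective_ipd(view_angle_deg, ipd=IPD):
    """Component of a fixed eye-separation axis perpendicular to the
    view direction; this is what produces stereo disparity.
    Zero at 90 degrees, negative (reversed depth) beyond it."""
    return ipd * math.cos(math.radians(view_angle_deg))
```

    At 0° the full 63mm baseline applies; at 90° it vanishes; at 135° it is negative, matching the reversed depth we observed.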


    Debugging Vive Trackers

    On the other hand, Vive Tracker integration met huge obstacles. While last week we successfully integrated it into Unity via the unity-openvr-tracking package, this week it ran into unknown bugs that repeatedly crashed the Unity editor, and sometimes the entire computer, upon exiting play mode.

    On initial inspection, we pinpointed the crash to an asynchronous OpenVR system function still being called after exiting play mode. We tried setting breakpoints and commenting out different lines of code, but all in vain.

    Miraculously, on Friday, right as we were debating whether to include the buggy version in next week’s demo at South Fayette and had pushed a temporary version to the main branch on GitHub, it suddenly started working as intended! We will continue investigating the problem, but for now, a working version is available!


    Other Updates

    Beyond the core technical breakthroughs and design discussions, we also made progress in other areas:

    • Sound Sample Scene: Winnie built a test environment for spatial audio, including 2D background music and a spatialized sound effect circling the camera, which will be integrated into our initial sample scene.
    • Art Development Continues: Mia and Ling continued working on finalizing models for our sample scene, ensuring they are optimized for CAVERN’s projection system. The environment is slowly and steadily taking shape.
    • Planning for South Fayette Visit: Our producers scheduled the first visit to South Fayette, and we started outlining what we want to showcase and how to structure our interactions for student engagement.

    Next Steps for Week 4

    Since we are visiting South Fayette next week, our main priority is integrating our freshly built camera, our initial input solution, and the sample scene’s art and sound assets into our toolkit package.

    Week 3 was all about refining our approach, tackling major technical challenges, and rethinking how users engage with CAVERN. With our first real playtest approaching, Week 4 will be a critical milestone in seeing how our work translates into actual user interactions.

    Stay tuned for Week 4 updates!

  • Week 2 (01/24/2025) – Learning the Tools and Gold Spike

    Welcome back to our Week 2 blog post! This week, our focus was on learning and experimenting with the tools needed for CAVERN, laying the foundation for development moving forward. Since many of the features we aim to implement require a deep understanding of existing systems, this week was all about researching, testing, and iterating on our ideas before committing to long-term solutions. In addition, we set up the GitHub repository for our package and onboarded everyone (including artists) on git commands.


    Rendering – Understanding CAVERN’s Projection System

    Rendering in CAVERN presents a unique challenge due to its curved display system, requiring a fundamentally different approach from traditional game rendering. Terri dedicated much of this week to learning Unity’s ShaderLab and HLSL, as well as understanding the updates to URP in Unity 6. With major changes introduced in this version, existing documentation is limited, making reverse engineering and experimentation essential in finding a viable solution.

    Notes on rendering solutions on the original API

    Previously, the old camera system relied on a multi-camera setup: over 30 cameras per eye, aligned in a circle, whose outputs were combined into a final texture projected onto the screen.

    While it did work, it had many drawbacks.

    1. Firstly, it was a complicated system that relied on a complex hierarchy of GameObjects and cameras to function, making it difficult to work with.
    2. Secondly, the camera could not be rotated in the Unity editor; transformations had to be done via code.
    3. Lastly, having so many separate cameras meant the render pipeline had to run many times, once per camera, and the output from each camera then had to be stitched together to form the final image. This incurred a heavy rendering performance penalty and severely limited the density of objects developers could place in their scenes.

    While there was a single-camera version in the old system, it was still a work in progress and incomplete, and it used Unity’s Built-In Render Pipeline (BIRP), which is deprecated in Unity 6.

    Alongside this research, Terri began reverse engineering the old single-camera system. This process was initially broken down into two steps:

    • Render the world view into a cubemap and convert it into an equirectangular texture.
    • Sample from the equirectangular texture using a custom shader and project it onto the CAVERN display.
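
    As a hedged sketch of the math behind that first pipeline (illustrative only, not the shader code itself), the mapping from a view direction to equirectangular texture coordinates looks like this:

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit direction vector to (u, v) in an equirectangular
    texture: u encodes longitude, v encodes latitude."""
    u = math.atan2(x, z) / (2.0 * math.pi) + 0.5
    v = math.asin(max(-1.0, min(1.0, y))) / math.pi + 0.5
    return u, v
```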

    However, as the old single-camera system was a work in progress, the projection of the world onto the CAVERN display still resulted in cropping errors and warping.

    Taking inspiration from part of the old single-camera system, rather than converting the cubemap into an equirectangular texture, the new approach samples directly from the cubemap. This proved viable, as it is relatively trivial to calculate the direction of a point on the physical screen from the centre of the CAVERN.

    This diagram explains how the cubemap is sampled.
    The photo shows a monoscopic view of a flat floor projected in the CAVERN using the new camera. Despite the curved screen, the floor is projected correctly as a flat plane.
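
    As an illustration of that direction calculation (the radius, height, and coordinate conventions here are placeholders we are assuming, not the CAVERN’s actual measurements), a point on the curved screen can be mapped to a cubemap sample direction like this:

```python
import math

SCREEN_RADIUS = 3.0    # metres (placeholder, not the actual CAVERN dimensions)
SCREEN_HEIGHT = 2.5    # metres (placeholder)
ARC_DEGREES = 270.0    # the CAVERN screen wraps 270 degrees around the viewer

def screen_direction(u, v):
    """Direction from the CAVERN centre to a point on the curved screen.
    u, v are normalised screen coordinates in [0, 1]: u sweeps along the
    arc, v runs bottom to top. The result is the cubemap sample direction."""
    theta = math.radians((u - 0.5) * ARC_DEGREES)  # angle along the arc
    x = SCREEN_RADIUS * math.sin(theta)
    z = SCREEN_RADIUS * math.cos(theta)
    y = (v - 0.5) * SCREEN_HEIGHT  # height relative to eye level
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)
```

    Because every screen pixel reduces to one normalised direction, a single cubemap render can serve the entire curved display.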

    With this new approach, not only was the system simplified from over 30 cameras to just one, rendering performance also improved significantly, as we now only sample the output of a single camera rendering into a cubemap. In our tests, the performance increase ranged from 100% to 200%, depending on the output resolution.


    Input – Exploring Vive Tracker Integration

    Being a non-traditional immersive platform, creators making experiences for the CAVERN tend to gravitate towards non-traditional, immersive controls. In the past, teams have used the Orbbec Femto Bolt body trackers and the HTC Vive position trackers. These both allow creators to get information about where people and objects are located within the CAVERN. The existing CAVERN API had no built-in input functionality, so each team in the past had to figure it out on their own.

    Our first goal is to integrate Vive Trackers, as their use cases come to mind most naturally. Previously, teams ran into many issues with the trackers not working or crashing, and they had to hardcode device IDs into their code, making it difficult to switch to a different Vive Tracker if one stopped working. Our initial goal was to simplify this process.

    Difficulties

    • Vive Trackers were originally designed to be used with a virtual reality headset (head-mounted display, or HMD), so using them in the CAVERN, which has no headset, emerged as our main challenge. This usage of Vive Trackers is unsupported, undocumented, and relatively unknown, so lots of research and debugging was required. The current best practice others have found is to install SteamVR on the PC with a “null driver” that pretends to be a headset.
    • Getting the position, rotation, and pin input data into Unity. Past teams used OpenXR + OpenVR + SteamVR to solve this, but because SteamVR contains a lot of VR-specific code irrelevant to CAVERN, crashes often, and causes issues with Unity 6, we decided to use a GitHub package called unity-openvr-tracking instead.
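
    For reference, the community-documented “null driver” setup typically involves settings along these lines in SteamVR’s configuration (steamvr.vrsettings, plus enabling the null driver in its own default.vrsettings). The exact keys vary by SteamVR version, so treat this as an illustrative sketch rather than a verified recipe:

```json
{
  "steamvr": {
    "requireHmd": false,
    "forcedDriver": "null",
    "activateMultipleDrivers": true
  },
  "driver_null": {
    "enable": true
  }
}
```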

    Editor Tools – Prototyping a User-Friendly Unity Workflow

    Beyond the core technology, we also need to ensure that our toolkit is user-friendly and accessible. This week, Yingjie focused on prototyping a custom Unity Editor panel that will eventually house our CAVERN development tools.

    As a proof of concept, she developed an early prototype that allows users to click a button and change an object’s color to green—a simple but crucial first step toward a fully functional toolkit UI. This work provided insights into how we can integrate CAVERN-specific settings into Unity’s Editor workflow, making it easier for developers to set up and modify their projects without extensive manual adjustments. Additionally, Yingjie explored Unity’s package system, which will be important when we distribute the toolkit for future use.

    Initial toolkit structure that was showcased in quarters in the following week.

    Art & Scene Experimentation – Understanding How CAVERN Renders Visuals

    Visual design for CAVERN is unique, given its curved projection and stereoscopic rendering. Ling & Mia spent the week experimenting with how different types of assets look when rendered inside CAVERN.

    To understand how models behave in CAVERN, Ling tested a swing asset intended for our sample scene, discovering that because of CAVERN’s rendering resolution, diagonal edges become pixelated. Additionally, lower-poly models appear smoother: fine details of high-fidelity models tend to disappear, making them harder to read and therefore a lower priority when making assets for the CAVERN. These findings will help optimize future models and textures, and inform how the rendering technology might be improved.

    Meanwhile, Mia focused on particle effects, exploring how they behave in a projection-based environment. She also worked on the initial logo concepts, creating four draft versions that will be refined in the coming weeks.


    Audio – Researching Spatial & Directional Sound Solutions

    Sound is a crucial part of immersion but is often overlooked until late in development. With CAVERN’s 4.1 speaker setup (front left, front right, rear left, rear right, and a subwoofer), Winnie focused this week on optimizing its use to create directional audio cues. Unlike traditional VR audio, which relies on headphones, CAVERN’s physical speakers require a different approach to spatialization.

    She categorized three key types of spatial sound:

    • Surround Sound – Uses speaker placement for horizontal positioning.
    • Spatial Audio – Software-driven 3D soundscapes, common in VR and headphones.
    • Directional Audio – Achieved with beam-forming speakers, allowing sound to be heard only in specific locations.

    Currently, CAVERN supports standard surround sound, with its center speaker virtualized by playing the center channel through the front left and right speakers at half volume. Winnie tested Unity’s 5.1 surround sound settings, finding them the most natural spatialization option so far. She also explored how the previous API handled moving sound sources, discovering that objects placed in the scene automatically rendered to the correct speaker. Additionally, while investigating the API, she and other team members identified an issue where the camera was flipped, causing sound to render on the opposite side; this has now been corrected.
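
    A minimal sketch of that center-channel virtualization (per audio sample, assuming simple linear mixing; not the actual CAVERN mixer code):

```python
def virtualize_center(front_left, front_right, center):
    """Fold the center channel into the front pair at half volume,
    as CAVERN does in lieu of a physical center speaker."""
    return (front_left + 0.5 * center, front_right + 0.5 * center)
```

    A sound panned dead center thus emerges equally from both front speakers, creating a phantom center image between them.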

    General diagram of how the speakers are set up in CAVERN, with the center being virtualized.

    User Research & Transformational Framework

    While most of our research focused on technical aspects, we also explored how our toolkit can inspire and empower users. As part of this effort, we sent out a questionnaire to past CAVERN teams to gather insights on their experiences, challenges, and best practices. While we are still awaiting responses, we plan to follow up individually to ensure we collect useful data.

    Additionally, Josh & Winnie attended a Transformational Framework workshop, which focused on designing experiences that create a lasting impact on users. While we are not making a traditional game, we want to empower users to build meaningful experiences in CAVERN. This workshop taught us how to define and evaluate good design and intuitive tools.

    Transformational Framework

    Preparing for Quarters & Website Setup

    With Quarters coming up soon, we also began structuring our first formal presentation of the semester. Our goal is to clearly communicate our research, challenges, and early implementations while gathering feedback from faculty and industry experts. Additionally, we started setting up the website structure, which will serve as our central hub for documentation, blog posts, and toolkit resources.


    Next Steps

    Week 2 was all about exploration, research, and laying the groundwork for the next phase of development. We now have a stronger understanding of rendering, input, audio, and UI workflows, which will guide us as we move into implementation and refinement. Next week, we will begin building out our first set of tools and interactions, refining our prototypes, and preparing for our first user tests.

    Stay tuned for more updates in Week 3!


    Gallery

    Our GitHub repository that was set up this week.
  • Week 1 (01/17/2025) – Kicking Off Spelunx, Meeting with Stakeholders, and Tech Setups

    Welcome to the first dev blog for Spelunx! This week, we officially kicked off our project, met with key stakeholders, and began setting up our technical pipeline. With our core hours set, project scope discussed, and initial tools chosen, we’re ready to dive into development.

    A core goal of this project is to make CAVERN development more accessible, not just for our team but for future developers as well. That means prioritizing long-term support, intuitive tools, and well-documented best practices.

    Goals for the Week

    Before jumping into development, we needed to establish our workflow, tools, and project direction. Our main objectives were:

    • Meeting with key stakeholders (faculty, advisors, South Fayette partners) to understand our problem space.
    • Scoping the project goals and deliverables for the semester.
    • Exploring the existing CAVERN API.
    • Setting up version control (Git vs. Perforce) and documentation tools (Doxygen).
    • Defining core working hours for the team.

    Project Pillars Defined

    We met with Drew Davidson (faculty advisor) and Steve Audia (CAVERN builder) to discuss our project direction and expectations. Four core pillars emerged for our development:

    1. Input Systems – Vive Trackers and Femto Bolt for motion tracking.
    2. Graphics – Optimizing rendering to improve CAVERN visuals.
    3. Audio – Spatial sound experiments for immersive experiences.
    4. UX – Simplifying onboarding and development for new users.

    So that the design and technical knowledge we explore and encounter this semester can be passed down to future developers of the CAVERN, we also recognized and emphasized the importance of thorough documentation across all four core pillars.

    Technical Consultation with Ezra

    Ezra Hill, our technical consultant and CAVERN API developer (also an ETC alum), walked us through the existing API, shared his toolkit, and gave valuable advice on best practices. Here are some key takeaways:

    • The current Unity camera setup uses either 30 Unity cameras (hard on performance) or 1 camera (broken for stereoscopic rendering). Since this is a semester-long project, we decided to tackle fixing the 1-camera solution, as this will greatly improve performance, in turn supporting higher-fidelity art assets. We were also advised not to create a camera from scratch, but to extend the built-in Unity camera.
    • Vive Tracker and SteamVR integration is inconsistent and needs debugging, while Femto Bolts were previously integrated using Unreal instead of Unity. Both need to be streamlined for inclusion in our toolkit.
    • Since Unity 6 is the newest standard, we will convert everything to this version to ensure long-term support and compatibility for future developers. However, because Universal Render Pipeline (URP) is the default for Unity 6, shaders need modifications to work properly, and packages should be upgraded as well.
    • There is already spatial audio support, but it has not been properly explored, and Unity 6 changes also need to be investigated.
    • Version Control Considerations – Perforce could help with large files, but Git is fine for now.
    • Ezra also recommended starting with a functional rendering pipeline before focusing on additional features. This shaped our Week 2 priorities.

    South Fayette – Understanding Our Key Users

    South Fayette High School students are one of our key user groups, and their needs will shape how we design our toolkit. Our meeting (organized with the help of John Balash, ETC’s Director of Educational Outreach) with Matthew Callison, Director of Innovation & Strategic Partnerships at South Fayette, and Stacey Barth, the teacher of the pilot game design course hoping to develop on the CAVERN, helped us understand how our toolkit can fit into their curriculum.

    What We Learned About SF and Students

    • Most have limited to zero Unity, programming, and 3D experience.
    • The 5 students who self-selected into this pilot course are excited about game development & virtual worlds. One student has even applied for game design programs for college.
    • They are planning on collaborating with a creative writing course for children’s books to build educational experiences using the CAVERN.
    • The school is part of the Digital Promise’s League of Innovative Schools, which will gather for a conference on March 25th, and CAVERN will be showcased to other schools potentially interested in building a CAVERN.

    How We Can Support Them

    • Make it easy for students to get started through simplified onboarding.
    • Provide structured learning materials to assist SF teachers to build lesson plans.
    • Offer intuitive interactions so students can explore without complex coding.

    Apart from calibrating logistical schedules, we were happy to discover that our goals (a beginner-friendly toolkit, clear tutorials, easy-to-use Unity components, flexible interaction examples, and awe-inspiring demos) align closely with our key users’ needs.

    Sample Scene Art & Interactions Brainstorming

    After determining what features we will support in our toolkit, our artists brainstormed art and interactions that can best showcase a CAVERN experience: Vive Trackers and Femto Bolts, stereoscopic rendering, directional sound, and an immersive aesthetic.

    We then settled on a sample scene with a mystical, botanical feel, with a color palette mainly of blue and green.

    Interactions we will prioritize to support include:

    • Mirror-like interactions, with Vive Trackers following player movement.
    • Wind control powered by Femto Bolts.

    Other Progress and Setups

    Version Control & Documentation

    • Git (without LFS/Perforce for now) – We’ll add LFS or Perforce later when needed.
    • Doxygen Setup Completed! – We successfully set up automated API documentation, available at the link here.
    • Tutorial Documentation Tools – We explored options like GitBook, Mkdocs, and Google docs. For now, we will document internally in Google Docs, while the final published documentation will be decided in the future.

    User Research & Design Explorations

    • Our development will center around rapid iteration and playtesting with our users: South Fayette High School, current ETC students, and Interactive Story Lab, another CAVERN project this semester exploring live-action interactive film in the space.
    • This week, we created a questionnaire for past CAVERN developers to gather insights on how they designed experiences and the technical difficulties they encountered.

    Next Steps

    Moving into Week 2, our focus will be on starting research and development on various parts of our toolkit, including one-camera rendering for stereoscopic view, Vive Tracker integration, the Unity Editor tooling pipeline, spatial audio, and experimentation with art assets.

    We’re off to a great start, and we’re excited to push forward! Stay tuned for more updates next week.

    Gallery

    composition box
    Composition Box from Friday’s Playtest to Explore Workshop
  • Week 0 – How Spelunx Came to Be

    Spelunx began as a technical passion project—four programmers eager to dive deep into rendering, software engineering, and systems development. We wanted to build something from scratch, not just another experience, but tools that would empower future creators.

    Each of us had a different motivation for starting this project:

    • Terri was drawn to optimizing software systems. She was especially interested in digging into the mathematics behind rendering code.
    • Yingjie was passionate about human-centered design, particularly in how development tools could be structured to fit into creative pipelines.
    • Winnie was interested in technical problem-solving with real impact, especially in documentation and tutorials. Also, she is interested in audio design and its immersive impact.
    • Josh wanted to develop tools that enable creativity, focusing on the intersection of human-computer interaction and deeply technical math-heavy systems.

    To better understand how to structure a technical project within ETC, with the help of faculty member Phil Light, we reached out to alumni from Isetta Engine, a past team that built a game engine from scratch. They shared insights into scoping technical work, balancing research with production, and making tools that are actually usable.

    We were also connected with Cort Stratton, an ETC alum at Unity, whose expertise in real-time rendering and graphics programming helped us explore possible areas of focus. But what we needed was a platform that would challenge us technically while having real-world impact—which led us to CAVERN.


    Discovering CAVERN

    CAVERN is a 270-degree projection-based stereoscopic immersive space with a 20-foot play area. Unlike similar high-end virtual production or immersive projection systems on the market, CAVERN was built by Steve Audia as a cost-effective solution, making it affordable enough for educational institutions like high schools to implement. This makes CAVERN uniquely accessible, allowing students and educators to explore large-scale immersive environments without requiring the multi-million-dollar setups used in industry.

    However, with this accessibility comes a unique set of challenges. Because CAVERN is a custom-built system, its technical setup requires careful configuration, including projection alignment, stereo warping, and input tracking. While this modularity makes it highly customizable, it also means that developers must manually integrate various components rather than relying on an out-of-the-box solution. Additionally, since each team working in CAVERN has historically developed its own workflow, past solutions are not always well-documented or standardized, leading to a learning curve for new users.

    To learn more, we met with Steve, and he walked us through the system’s capabilities and pain points. Every semester, teams working in CAVERN struggled with:

    • Cumbersome setup processes – Installing dependencies, configuring software, and troubleshooting setup delays.
    • A lack of standardized tools – Every team had to engineer custom solutions for rendering, tracking, and input.
    • Rendering and camera limitations – Standard Unity camera setups didn’t work properly in the space.
    • No clear documentation – Each new team reinvented solutions from scratch instead of building on past work. Worse yet, teams graduate and leave with their knowledge.

    It became clear that CAVERN had enormous creative potential, but its usability barriers prevented it from being fully realized. This was the perfect challenge—a deeply technical problem that needed solving, with real impact for future ETC teams and beyond.

    At this point, Ling joined our team, drawn to the opportunity to work alongside a technically focused group while developing her skills in environmental art and technical art. Shortly after, Mia, a second-year student with experience on two previous CAVERN projects, joined us as well, bringing valuable insights from past development efforts. With a well-rounded team and a clear technical challenge, Spelunx was born.


    Researching the Problem – Past Teams & Expert Feedback

    With CAVERN as our focus, we started by interviewing past teams, especially the Hycave project that used Unity and the Flow project that used Unreal Engine, to understand exactly what was missing from the development workflow. Across multiple projects, we heard the same frustrations:

    • Input system incompatibility – Vive trackers and other input devices were difficult to integrate.
    • Unclear technical constraints – Teams weren’t sure what art, shaders, and post-processing would actually work.
    • Lack of testing flexibility – Testing required physical access to CAVERN, making iteration slow.
    • No documentation of past solutions – Every project started from zero, even when previous teams had solved similar problems.

    The CAVERN Unity camera setup that included 30 cameras, causing great hurdles in developing with the original API.

    At this stage, Mike Christel, our faculty champion, played a crucial role in guiding our project direction. He helped us refine our focus, ensuring that our work would lead not only to a usable final product but also to a meaningful journey of exploration and discovery for each team member, regardless of background or expertise. Through conversations with nearly all faculty, we received valuable feedback that encouraged us to think beyond technical challenges and consider how our tool design could uncover novel insights into experience design within an immersive platform. This led to our final pitch direction.


    Pitching Our Project

    At the ETC, pitch projects go through two rounds of approval before being officially greenlit. This process ensures that projects are well-defined, have clear goals, and are feasible within a semester. Faculty aren’t just looking for interesting ideas—they want to see a structured plan for execution and real-world impact.

    With our research in hand, we structured our pitch around two key ideas:

    1. CAVERN has untapped creative potential, but technical barriers make it difficult to develop for.
    2. A structured, modular toolkit will make CAVERN more accessible, enabling both novice and experienced developers to create immersive experiences.

    We presented a two-pronged approach:

    • Technical Setup & Configuration – Solving rendering, input tracking, and audio challenges.
    • Documentation & Templates – Providing structured tutorials and best practices to help new teams onboard quickly.

    We made it very clear that our project was not just solving technical hurdles, but a user-centric project rooted in research, documentation, tutorials and testing.

    Brainstorming and defining our deliverables that we will propose.

    To further validate our approach, we reached out to potential toolkit users. South Fayette High School was especially excited about the prospect of using Spelunx, as they are running a game design course this semester with the hope of developing experiences on their newly built CAVERN. This reinforced that this toolkit isn’t just for the ETC; it could also support young creators in learning interactive development.

    After multiple iterations, faculty feedback, and refining our deliverables, Spelunx was officially approved as an ETC project.


    Looking Ahead – Week 1 and Beyond

    Weekly roadmap we proposed at the time of pitching.

    With our pitch behind us, we entered Week 1, focusing on:

    • Meeting with stakeholders (faculty, South Fayette, and technical advisors).
    • Defining our development roadmap to keep scope manageable.
    • Setting up core tools like version control and documentation systems.

    Spelunx started as a technical curiosity—an interest in pushing boundaries in graphics, software engineering, and development tools—but through research, team expansion, and faculty feedback, it became a well-defined project with a clear purpose. Now, it’s time to bring that vision to life.

    Stay tuned for Week 1 updates!