Week 13 – Polishing and Action Steps

November 27th, 2023 – December 3rd, 2023


This week was all about polishing our game and making sure the experience was ready for the ETC Fall Festival. We focused on final touches, such as adding mangrove trees above the sea surface in phase 1 to make the world feel more complete, and adding closing slides that show the restoration work being done in the real world.


The big design task this week was finalizing the ending of our game. We had reached out to the Sanibel-Captiva Conservation Foundation because they were working to restore mangroves in Florida. We asked whether they would provide pictures for the end of our game to show the actual restoration work being done in the field, and they said yes.


Here is our final ending, where we show that there is hope and encourage players to take action.


This week we updated the mangrove trees in our phase 1 scene so that their leaves are visible above the water's surface.

We also gave the dolphin new swim, chat, swim-to-chat, chat-to-swim, and glide animations, along with an eating animation that provides feedback when the player eats fish.
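The chat and swim states above are connected by dedicated transition clips. A minimal sketch of that transition logic might look like the following; the enum, function, and clip names are illustrative assumptions, not the project's actual animation setup (which would normally live in a UE5 animation blueprint or state machine):

```cpp
#include <string>

// Hypothetical looping states for the dolphin companion.
enum class DolphinState { Swim, Chat };

// Returns the one-shot transition clip to play before entering `next`,
// or an empty string if the dolphin can blend directly.
// Clip names mirror the animations listed above.
std::string TransitionClip(DolphinState current, DolphinState next) {
    if (current == DolphinState::Swim && next == DolphinState::Chat)
        return "swim-to-chat";
    if (current == DolphinState::Chat && next == DolphinState::Swim)
        return "chat-to-swim";
    return "";  // same state, or no transition clip required
}
```

In an engine, the returned clip would be queued as a one-shot animation before the target state's loop begins.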



We also adjusted another AI-generated skybox for our final phase so that players can see the sky through the water.

We also added a 2D red tide effect to support the engulfing scene at the end of phase 2.


Given the potential for players to become disoriented in the destroyed environment, we introduced a new feature: dynamically generated fish. When the player uses echolocation, a fish spawns near the player, indicating the correct direction.

From a developer's perspective, we first define the fish generation range in the scene, then place target locations for the fish at appropriate positions. We divide the range into separate blocks, each with its own target, rather than using a single target, because of the obstacles in the level: if a rock sits between the player and the final destination, the fish might spawn on the rock or not spawn at all. Multiple dynamically selected targets avoid this problem.
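The block-based selection described above can be sketched as follows. This is a simplified, engine-free illustration under assumed names (Vec3, BlockTarget, PickSpawnTarget are all hypothetical); in UE5 the obstacle check would typically be a line trace rather than a precomputed flag:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// One pre-placed target per block of the spawn range.
struct BlockTarget {
    Vec3 position;   // hand-placed target location inside this block
    bool blocked;    // true if an obstacle (e.g. a rock) covers the target
};

// On echolocation, pick the nearest unblocked target to the player so the
// guide fish spawns close by and points toward the correct direction.
// Returns the chosen index, or -1 if every target is blocked.
int PickSpawnTarget(const Vec3& player, const std::vector<BlockTarget>& targets) {
    int best = -1;
    float bestDist = 1e30f;
    for (std::size_t i = 0; i < targets.size(); ++i) {
        if (targets[i].blocked) continue;
        float d = Dist(player, targets[i].position);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}
```

Because each block carries its own target, a blocked target simply falls out of consideration instead of breaking the spawn entirely.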

We are incorporating a transformational ending for the game, combining words with real-life pictures. Initially, our intention was to use a video to convey the ending; however, we discovered that the video pipeline in UE5 does not support VR. Consequently, we opted for a manual approach, showing each picture individually to create a 'video-like' experience.
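At its core, the manual approach amounts to stepping through the pictures on a timer. A minimal sketch of that sequencing, under assumed names (SlideIndexAt is hypothetical, not the project's actual code):

```cpp
#include <cstddef>

// Given the elapsed time and a fixed per-slide duration, return the index of
// the picture to display. The index is clamped to the last slide so the final
// real-world photo stays on screen once the sequence ends.
std::size_t SlideIndexAt(float elapsedSeconds, float secondsPerSlide,
                         std::size_t slideCount) {
    if (slideCount == 0 || secondsPerSlide <= 0.f) return 0;
    std::size_t idx = static_cast<std::size_t>(elapsedSeconds / secondsPerSlide);
    return idx < slideCount ? idx : slideCount - 1;
}
```

In the engine this would be driven from a per-frame tick, swapping the texture on a world-space UI panel whenever the returned index changes.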