We went to the client's lab (the Physical Intelligence lab). Currently, our game runs alongside a separate server program that splits the work between the face-recognition and data-collection task and the game itself. We chose this architecture because our clients will host their own server to collect data and configure parameters for multiple games in their future research settings. Although our clients don't have their own server yet, we helped them set up both the game and the server code in their lab.
We will also provide detailed documentation and future support for our clients to ease the process of changing parameters and collecting the data they need. The neck game is now set up in the lab, so feel free to visit and play it.
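As a rough picture of this split, the sketch below shows a minimal server that hands the game its parameters and accepts collected data over HTTP/JSON. The endpoint names, port, and parameter names are illustrative assumptions, not the actual server our clients will host.

```python
# Minimal sketch of the game/server split, assuming HTTP + JSON transport.
# Endpoint names, the port, and the parameters are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CONFIG = {"collection_rate_hz": 10, "game_speed": 1.0}  # assumed parameter names
TELEMETRY = []  # rows of gameplay data posted by the game

class LabServer(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/config":  # the game fetches its parameters at startup
            body = json.dumps(CONFIG).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    def do_POST(self):
        if self.path == "/telemetry":  # the game posts collected data each round
            length = int(self.headers["Content-Length"])
            TELEMETRY.append(json.loads(self.rfile.read(length)))
            self.send_response(204)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), LabServer).serve_forever()
```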
We also help our client track player behavior by collecting data on players' movement paths, and we supply track-related information that lets our client reconstruct a player's progress from each round of gameplay. We also made the collection rate configurable so our clients can control it.
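On the game side, a configurable-rate logger could look something like the sketch below; the field names and the 10 Hz example rate are placeholders, since the real schema is covered in the documentation we hand to our client.

```python
# Sketch of a configurable-rate path logger; field names, the example rate,
# and the game-engine hook are assumptions for illustration.
import time

class PathLogger:
    def __init__(self, collection_rate_hz: float):
        self.interval = 1.0 / collection_rate_hz  # seconds between samples
        self.last_sample = 0.0
        self.rows = []

    def update(self, now: float, x: float, y: float, track_progress: float):
        """Called every frame; records a sample only when the interval has elapsed."""
        if now - self.last_sample >= self.interval:
            self.last_sample = now
            self.rows.append({
                "t": now, "x": x, "y": y,
                "track_progress": track_progress,  # lets the client rebuild the run
            })

# Usage: a 10 Hz logger driven by the game loop's clock.
logger = PathLogger(collection_rate_hz=10)
logger.update(time.monotonic(), x=1.2, y=0.4, track_progress=0.05)
```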
We collected some data during a playtest with a naïve guest and analyzed the completion rate of a given track against the number of rounds played. We could see a trend of players improving over time, which is a good example of learning progress from a blank slate.
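The analysis itself can stay simple; a minimal sketch along these lines is enough to surface the trend (the CSV and its "round"/"completion_pct" columns are assumed placeholders, not our real schema):

```python
# Hedged sketch of the completion-rate analysis; the file and column names
# ("round", "completion_pct") are assumed placeholders.
import pandas as pd

df = pd.read_csv("playtest_rounds.csv")  # one row per round played
trend = df.groupby("round")["completion_pct"].mean()  # average completion per round
print(trend)  # values rising round over round indicate learning
```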
To wrap up the game, we finalized the tutorial, fixed small in-game bugs, and ran a playtest with data collection enabled, which gives our client a sample to review and serves as a reference for future data collection.
Final UI Refinement
Problem: Difficulty Should Come from Design, Not Confusion
With a designed level, another important aspect that influences difficulty is control. A game can be hard, but it shouldn’t feel impossible due to missing information. We should provide the basic “tools” for players to overcome challenges and let their skill determine success. That’s where a tutorial becomes necessary.
Insight: Teach Just Enough, Not Too Much
After confirming with our client, we expect players to start the game with a basic understanding of the controls, which improves our game's data-collection quality by filtering out accidental deaths caused by not knowing the controls at all. However, over-instruction can backfire. We still expect players to fail due to slow reactions, misaligned expressions, or discomfort with this unique control scheme.
The goal is to give an appropriate tutorial that helps players gain a basic understanding.
Solution: Iterating the Tutorial Through Player Feedback
We first introduced the controls through UI-based text instructions, but playtests showed that plain text descriptions of facial expressions caused confusion.
Take "mouth roll," a term from MediaPipe: it is hard to picture from the name alone. We then added themed emojis, but it was still unclear how to perform the expression.
So we layered on more references: rather than "mouth roll," we call it a "fold in lips." Still confusing? We describe it as "the expression you make when trying not to laugh." Lastly, if you are still unsure, you can hover to see a photo of a real human face making the expression. Although a human photo is the most straightforward cue, it clashes with the art theme, so we keep it hidden behind the hover rather than showing it outright.
To validate understanding, players must perform each expression correctly to proceed. This interactivity confirms that they got it right. Through these layered cues and iterations, the majority of players understood what each expression meant.
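For reference, this kind of gate can be checked directly against MediaPipe's blendshape scores. The sketch below is a minimal version; the model path and the 0.5 threshold are assumptions rather than our shipped tuning.

```python
# Minimal sketch of the tutorial gate using MediaPipe Face Landmarker
# blendshapes; the model path and threshold are illustrative assumptions.
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

MOUTH_ROLL_THRESHOLD = 0.5  # assumed value, tuned through playtesting

def mouth_roll_detected(mp_image: mp.Image) -> bool:
    """True when the player is performing the 'mouth roll' expression."""
    result = landmarker.detect(mp_image)
    if not result.face_blendshapes:
        return False  # no face in frame
    scores = {c.category_name: c.score for c in result.face_blendshapes[0]}
    return (scores.get("mouthRollLower", 0.0) > MOUTH_ROLL_THRESHOLD
            and scores.get("mouthRollUpper", 0.0) > MOUTH_ROLL_THRESHOLD)
```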
A Safe Playground to Learn by Doing
Once players understood what they could do, we wanted them to learn when to do it—through play.
After completing the instructions, players enter a tutorial playground. This space introduces every mechanic covered so far in the route setting, starting with simple navigation and turns, moving on to dynamic ones, and introducing each expression-driven skill one by one.
There are no penalties here, just a space to build confidence and comfort with the system. No mastery, just enough familiarity to begin playing.
On the official playtest day on Saturday, March 29th, we tested our game with many guests of various ages, all naïve guests who had not played it before.
Our plan was to test Level 1 and Level 2, starting with the default speed. If players successfully completed a level, we proceeded to gather feedback through a survey designed to collect both qualitative and quantitative data.
Quantitative data included participants' ages and the number of playthroughs it took to pass each level. Qualitative data focused on perceived difficulty, how the track length felt for each level, and which aspects felt challenging or easy. Additionally, we gathered quantitative feedback about the playthrough experience, the clarity of player goals, and the overall gameplay experience.
Feedback:
In short, Level 1 offers a reasonable difficulty that works well as an introductory stage, allowing players to get familiar with navigating using their head. However, some players noted that it still takes time to fully adjust.
Level 2 proved to be more challenging than we expected, at both the default and fast speeds. This difficulty aligns with our client’s goal of making the game highly replayable.
For the mechanics, the idea of eating planets and shooting them out to destroy meteors adds a fun and engaging element to the game, but it’s somewhat challenging for players to grasp on their first try. In contrast, staying on track and reaching the end are much easier to understand. This feedback highlights the need for a clear tutorial, as players are expecting guidance.
Additionally, we asked players about their experience playing the game over a longer duration, since our client envisions a gameplay experience lasting more than 20 minutes. From an ethical standpoint, considering physical comfort is essential. The good news is that over 50% of participants did not feel uncomfortable. However, we observed that older players tend to experience neck discomfort more easily.
Lastly, since we invited various people to play our game, we discovered that having a beard or wearing glasses can significantly interfere with mouth and eye detection. We will make sure to inform our clients about this, helping them clarify the requirements for players who can provide reliable data, while also isolating these external interferences.
Iteration:
We designed a pre-game playground that incorporates every mechanic that might be used (eating planets, shooting meteors, shrinking the body to pass narrow roads, blinking to dash) and helps players get familiar with navigating using their head. In the playground, each mechanic is introduced one by one, starting with getting over small turns, then dynamic turns, and eventually teaching each expression-driven skill.
We want to make this a tutorial where players learn by actively playing, with the assistance of UI instructions and each time focusing on a single technique. We do not expect players to fully master or memorize everything but to gain at least a basic understanding.
This approach also helps improve our game's data collection by filtering out accidental deaths caused by unfamiliarity with the controls (e.g., how far to tilt the head or open the mouth).
During GDC week, the rest of the team continued working on the project, including facial expression mapping, IK setup for character animations, Level 2 design and implementation based on last week's blockout, the landing page, and the level-selection UI.
3D:
We’re currently working on three animations for our character, including ghost skirt waving, a happy turn-around, and a sad face frown. Here are previews of each animation. We plan to embed them as interactive feedback throughout the game—for example, the idle animation will combine floating with the skirt waving; successfully eating a planet will trigger the happy turn-around; and losing will display the sad frown. These animations will help enhance immersion and make the game feel more lively and juicy.
[3 animation videos]
UI
Starting from last week's wireframes, I began adding the art style to the UI elements. To align with the cartoonish style, the rounded character, and the space-and-universe theme, I chose a cool color palette of white, blue, and greenish gray. We also finally have a name for our character: NOMU. Since the character will be eating planets, the name gives a feeling of "eating."
Brought in assets. Focused on scripts that control visual elements. Adjusted the character model's animations and the timing of visual effects. Next week we will focus on mapping face detection to animation, data collection, and fine-tuning the animations.
UI/UX:
Designed the complete game wireframe, including the landing page, in-game hints for planet and meteor encounters, "How to Play" instructions, and various GUIs such as the planet count, energy recharge, and astronaut rescue bay. Also created game-over and completion screens displaying the completion percentage, planet count, and the number of astronauts rescued, which varies based on gameplay. Next week we will work on the UI designs.
3D:
Refining the game's main character, developing animations, and polishing the character design to make it more complete.
Used AI to generate animation references for Nomu, the ghost character, including spinning, happy expressions, and idle floating animations.
Bringing the design concept into the game's theming, we retain the original idea of a character traveling through space. Keeping in mind the constraint of a fixed path, the player will be surrounded by a dangerous meteorite belt, navigating along a single predetermined route to reach the destination safely without hitting the boundary.
At the beginning of this week, we came up with 10 pitches for a face-detection game that uses the face as a controller. We generated interesting ideas and also analyzed the challenges each would bring up: either technical challenges or misalignment with the client's needs.
Summing up all the ideas, we narrowed them down to two approaches that would work for the client while also matching our needs:
1- A rhythm/dancing game in which the player follows a path and reacts at given points
2- A racing game in which players are limited to following the one and only correct track, which we can iterate on from last week's prototype. Given our time constraints, we are more willing to go in this direction to pursue an outcome that meets the client's needs.
Game Design:
Our first prototype is ready: an STG (shooting) game in which players use their face to control a spaceship traveling through space. In this prototype, players use their face to do the following (a rough control-mapping sketch follows the list):
Control the movement of the spaceship, dodging meteors:
Nodding: moves the spaceship back and forth
Shaking the head: moves the spaceship left and right
Special mechanic: opening the mouth (fire at UFOs)
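To make the mapping concrete, here is a rough sketch assuming MediaPipe's facial transformation matrix is available; the axis convention, dead zone, and jaw-open threshold are assumptions, not the prototype's actual values.

```python
# Rough sketch of the prototype's control mapping from head pose + mouth.
# Axis convention, dead zone, and thresholds are illustrative assumptions.
import numpy as np

DEAD_ZONE_DEG = 8.0       # ignore small head motions (assumed value)
JAW_OPEN_THRESHOLD = 0.4  # blendshape score above which we fire (assumed)

def head_angles(transform: np.ndarray) -> tuple[float, float]:
    """Extract pitch (nod) and yaw (shake) in degrees from a 4x4 head-pose
    matrix, using one possible Euler convention (MediaPipe's axes may differ)."""
    r = transform[:3, :3]
    pitch = np.degrees(np.arctan2(-r[2, 1], r[2, 2]))
    yaw = np.degrees(np.arcsin(r[2, 0]))
    return pitch, yaw

def control_signal(transform: np.ndarray, jaw_open_score: float) -> dict:
    """Map head pose and mouth opening to the prototype's spaceship controls."""
    pitch, yaw = head_angles(transform)
    return {
        "forward": -pitch / 45.0 if abs(pitch) > DEAD_ZONE_DEG else 0.0,  # nod
        "strafe": yaw / 45.0 if abs(yaw) > DEAD_ZONE_DEG else 0.0,        # shake
        "fire": jaw_open_score > JAW_OPEN_THRESHOLD,                      # open mouth
    }
```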
Playtest Observation:
Based on playtest feedback and client needs, we found some things that worked and some that did not:
Working:
Based on feedback from our client and some playtesters, we found that the control mapping we designed is easy to pick up. Moreover, some playtesters found our shooting mechanic satisfying, as it provides instant, tight feedback between facial expressions and game interactions.
Not Working:
1- Too much freedom: a clear goal, but no clear path
Although we give players only limited freedom, the game is still open-ended, allowing players to solve problems in ambiguous ways. For example, when encountering an enemy, players can choose to evade or shoot it depending on its position, which makes the outcome of each playthrough different. For our clients' needs and research purposes, this is hard to calibrate compared to a predefined path, where performance can be measured against one and only one correct solution (see the sketch after this list).
2- When players use their head for directional controls, rotating the head to move left and right makes it difficult for them to look at the screen, while nodding up and down sometimes degrades the detection results and accuracy.
3- We found that controlling a spaceship with facial movements alone lacks a tight connection. Players constantly think about their facial expressions instead of immersing themselves in the game, which makes the gameplay less engaging than controlling something with a more visible and intuitive link.
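To illustrate why a single predefined path is easier to calibrate, a deviation score against the reference route could be as simple as the sketch below; the path format and the metric are assumptions for illustration, not the client's actual measure.

```python
# Hedged sketch of scoring a run against the one correct path; the path
# format (N x 2 positions) and the metric are illustrative assumptions.
import numpy as np

def path_deviation(player_path: np.ndarray, reference_path: np.ndarray) -> float:
    """Mean distance from each player sample to the nearest reference point."""
    diffs = player_path[:, None, :] - reference_path[None, :, :]  # (N, M, 2)
    dists = np.linalg.norm(diffs, axis=2)                         # (N, M)
    return float(dists.min(axis=1).mean())

# Example: a run hugging the reference line scores near zero.
ref = np.stack([np.linspace(0, 10, 100), np.zeros(100)], axis=1)
run = ref + np.random.normal(scale=0.1, size=ref.shape)
print(path_deviation(run, ref))  # small value = close to the one correct solution
```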
Second Prototype Iteration:
We made two key changes to limit player freedom. First, we constrained navigation to a fixed path, guiding players along a fixed route with a clear direction. Second, we removed the back-and-forth controls: the control target now moves forward automatically, while players adjust its direction by tilting their heads left or right.
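As a rough illustration of the tilt control, head roll can be estimated from the line between the two outer eye corners in the landmark output. The landmark indices below (33 and 263, the commonly used outer eye corners in MediaPipe's face mesh), the gain, and the full-lock angle are assumptions, not our shipped tuning.

```python
# Rough sketch of the tilt-to-steer mapping; landmark indices, gain, and
# the 30-degree full-lock angle are illustrative assumptions.
import math

STEER_GAIN = 1.5      # assumed sensitivity multiplier
FULL_LOCK_DEG = 30.0  # roll angle treated as maximum steering (assumed)

def roll_angle(landmarks) -> float:
    """Head roll in degrees, from the line between the outer eye corners."""
    left, right = landmarks[33], landmarks[263]
    return math.degrees(math.atan2(right.y - left.y, right.x - left.x))

def steer_input(landmarks) -> float:
    """Map head roll to a steering value in [-1, 1]; 0 means go straight."""
    steer = roll_angle(landmarks) / FULL_LOCK_DEG * STEER_GAIN
    return max(-1.0, min(1.0, steer))
```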
Here is a demo video showing a player's early attempt at the game.
This iteration has three advantages. First, it better fulfills the client's needs by shifting from allowing diverse player strategies to training unfamiliar motor skills: progression is tied to getting used to the head-based controls and memorizing the level. Second, tilting the head is much more accessible while still requiring progressive learning, though we need more playtesting to fine-tune the parameters. Lastly, this version enables scalable level design (gradually adding reaction points).
After the meeting last Friday and this Monday, the team and the client reached the consensus of pivoting towards facial recognition technology as our new focus. We presented four pitches to the client, and received feedback on them. The request was to design the game in a way that the players always have a clear goal paired with a desired behavior at each moment in the game. Rather than defeating the game or reaching a score, the goal here refers to a behavior that the players should perform as a reaction to what happens on the screen. With a clear goal, the player’s behavior can be easily analyzed and evaluated from the researcher’s perspective.
Another desired design is the continuous movements in contrast to the discrete movements like aiming or shooting. We used the famous Atari game Asteroids as reference, and the client liked the idea that players keep reacting to what happens in the game world.
After the client meeting, the team quickly reconsidered the pitches and made adjustments to them. The idea of an STG was born from Asteroids: players use the direction of their faces to control the positioning of a spaceship, and move their mouths and eyes to react to enemies, obstacles, and other mechanisms.