Bon Appétit! This week, the team at AppetizeR is settling in and getting more comfortable with Lens Studio, like a sausage wrapped in a doughy blanket (rendered in multiple styles, ranging from abstract to realistic). Hopefully we’re not just being lured into a false sense of comfort before being devoured…

Our first order of business this week was a change to our workflow: we switched from Perforce to Git for version control across our project. Trying to use Perforce to sync our Lens Studio projects has been a pain for the past few weeks, and we decided it would be better to pivot rather than keep trying to push through the issues. One problem was that our p4ignore files would not successfully ignore all of the local files they needed to. Another was that pulling changes from the depot would leave the local cache files in a bad state and cause compilation errors, which meant each person had to delete all of their local files and let them regenerate every time they wanted to open Lens Studio to work on the project.
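For anyone hitting the same wall, this is roughly the shape of the ignore list we needed, sketched as a .gitignore. The folder and file names here are illustrative rather than an official list; check what your Lens Studio version actually generates locally before copying it:
```gitignore
# Locally generated Lens Studio files that should never be synced.
# These names are examples; verify them against your own project folder.
Cache/
Workspaces/
*.log
.DS_Store
```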
Moving to GitHub has made the process a lot smoother: we can each work on feature branches that are easier to merge, and we no longer hit compilation errors in local files when pulling changes. However, whenever we make a change now, Git also marks every single .meta file as modified, even though inspecting the diff shows no actual difference. To keep our changes easy to track, we have to stage only the files we actually touched and then discard the other hundreds or thousands of no-op .meta modifications. Despite this inconvenience, it’s still a major improvement over Perforce, so for now we have documented our experience and will continue onwards.
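Until we find a cleaner fix, a couple of ordinary Git commands handle the churn. This assumes the no-op .meta changes really are no-ops, so it’s worth spot-checking the diff before discarding anything:
```bash
# Stage only the files we actually touched (path is just an example)...
git add Assets/Scenes/Kitchen.scene

# ...then throw away the hundreds of untouched .meta "modifications".
git restore -- '*.meta'
```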
On the art side, we discovered a fundamental constraint of the technology when comparing how assets look within Lens Studio versus inside the Spectacles themselves. No matter what render settings we change, objects in the Spectacles cannot be rendered fully opaque! The Spectacles use a method called direct see-through to render objects on top of the real world, while the AR devices we’re used to rely on virtual pass-through. In practice, this means the Spectacles only support the additive AR blending mode and cannot render using the opaque AR blending mode. As can be seen below, the difference between the two modes is quite stark, particularly for dark-colored objects, which look almost completely transparent.
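The math behind the limitation is simple to state: with a direct see-through display, the light from the real world is always there, and the lens can only add light on top of it.
```ts
// Additive AR blending (per color channel, values in [0, 1]):
//   displayed = real_world + virtual
// A black virtual pixel (0) adds no light, so it is effectively invisible,
// while a bright pixel adds a lot of light and reads as more solid.
// Opaque blending (displayed = virtual) would require blocking real-world
// light, which a direct see-through display physically cannot do.
```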


To address this, our artists explored how adjusting the lighting and colors of objects could make them show up as clearly as possible. By using ambient lighting and bright colors with flat shading, we can make objects appear more solid than they otherwise would. The artists also explored different rendering styles, comparing more realistic food items to slightly more cartoony ones.
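As a rough illustration of what this means in practice, here is a minimal Lens Studio TypeScript sketch that pushes a material toward a bright, flat look. The baseColor parameter assumes a standard PBR/Uber material, and the input name is ours:
```ts
@component
export class BrightenMaterial extends BaseScriptComponent {
  // Material to brighten (hypothetical input; wire it up in the Inspector).
  @input
  targetMaterial: Material;

  onAwake() {
    // A bright, saturated base color reads as more solid on an additive
    // display than a dark or muted one.
    this.targetMaterial.mainPass.baseColor = new vec4(1.0, 0.85, 0.3, 1.0);
  }
}
```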

Another feature that we’ve been exploring is the use of face filters. Our artists modeled a chef’s hat and moustache that can be placed onto a player’s head using Snap’s head tracking features.


During early tests, we discovered that, similar to image markers, head-tracked objects can only be local objects, meaning they can’t be directly synced. Additionally, Lens Studio does not let us assign these objects to one specific person, such as just the chef player: the software looks for absolutely any face and will attach the objects to anyone it detects. You get a chef hat! And you get a chef hat! Everyone gets a hat!
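For reference, binding an object to a face in script looks roughly like the sketch below. Note that the Head component’s faceIndex only selects the nth detected face (0 for the first one found), not a particular person:
```ts
@component
export class ChefHatBinding extends BaseScriptComponent {
  onAwake() {
    // Attach this scene object (the hat) to a tracked head. faceIndex
    // picks which detected face to follow, not whose face it is.
    const head = this.sceneObject.createComponent("Component.Head");
    head.faceIndex = 0;
  }
}
```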
On the UI/UX side, our designer has been researching the conventions and best practices of user experience design for the Spectacles. We’ve identified three primary types of UI panels: anchored to a position in the world, following the user’s head, and following the user’s hand. Each has different uses for conveying information to the user. Based on the guidelines provided by Snap in their resources for developers, we will want to present different types of interfaces, interactions, and information at varying distances from the user’s head.
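As one small, concrete example of these panel types: a world-anchored panel usually still needs to rotate toward the viewer to stay readable, which Lens Studio’s LookAt component can handle. This is a sketch with a hypothetical input name:
```ts
@component
export class FaceTheUser extends BaseScriptComponent {
  // The camera object representing the user's head (hypothetical input).
  @input
  cameraObject: SceneObject;

  onAwake() {
    // LookAt rotates this panel every frame so it keeps facing the camera.
    const lookAt = this.sceneObject.createComponent("Component.LookAtComponent");
    lookAt.target = this.cameraObject;
  }
}
```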

We also looked into approaches for onboarding users into our experience by examining the tutorials of the existing lenses published for the Spectacles. There, we found that some apps use characters and dialogue bubbles to explain mechanics, while others use UI panels with text or audio cues. We will use the examples with characters as inspiration for our game, as this approach is engaging and memorable.
Our UI designer created a paper prototype for the interface displaying each player’s role and personal ingredients in front of them, which we will next work on integrating into Lens Studio.

Finally, our programmers set out to tackle the issues with syncing image marker objects across the network. Our first idea was to take the location of an invisible object attached to the image marker whenever one player looked at the marker, and move a synced object to that same location, giving us a synced object that moved with the marker. However, this attempt did not work: whenever we tried to get the world transform of the image marker object, it would instead just give us the location of the glasses. While this didn’t accomplish what we needed, we later realized we might be able to use this behavior to position the face filter objects onto one specific person.
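In script form, the failed attempt looked roughly like this (the structure and names are ours, not from any sample):
```ts
@component
export class MarkerPositionRelay extends BaseScriptComponent {
  // Invisible object parented under the image marker, and the networked
  // object we hoped to drag along with it (both hypothetical inputs).
  @input
  markerAnchor: SceneObject;
  @input
  syncedObject: SceneObject;

  onAwake() {
    this.createEvent("UpdateEvent").bind(() => {
      // Copy the marker-tracked position onto the synced object...
      const markerPos = this.markerAnchor.getTransform().getWorldPosition();
      this.syncedObject.getTransform().setWorldPosition(markerPos);
      // ...except getWorldPosition() kept returning the position of the
      // glasses themselves rather than the marker, so this never worked.
    });
  }
}
```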
We then took a closer look at the documentation for the Spectacles Sync Kit package and the ways information is shared between multiple users. The package offers a StorageProperty class, which can hold simple data types; updating the value of a StorageProperty triggers an event that all of the headsets can respond to. We came up with a new approach: one player’s local interaction with their instance of the image marker object updates the value of a synced StorageProperty (e.g. a boolean flag indicating that a certain event occurred), and that change triggers all of the other devices to replicate the same interaction on each user’s local image marker object.
We made a simple prototype to test out the concept. In this example, if the first player that joined the session looks at the image marker, the soup inside of the pot will disappear for all the players.
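For the curious, the heart of that prototype looks roughly like the sketch below. The StorageProperty and SyncEntity classes come from the Spectacles Sync Kit, but we’re paraphrasing from memory: the import paths, input names, and the gaze-trigger hookup are illustrative and may not match your version of the package.
```ts
// Import paths are approximate; check your Sync Kit installation.
import {SyncEntity} from "SpectaclesSyncKit.lspkg/Core/SyncEntity";
import {StorageProperty} from "SpectaclesSyncKit.lspkg/Core/StorageProperty";

@component
export class SoupSync extends BaseScriptComponent {
  // Each player's local (non-synced) soup mesh inside the pot.
  @input
  soupObject: SceneObject;

  // A shared boolean that every connected pair of Spectacles can see.
  private soupVisibleProp = StorageProperty.manualBool("soupVisible", true);
  private syncEntity = new SyncEntity(this);

  onAwake() {
    this.syncEntity.addStorageProperty(this.soupVisibleProp);
    // Every device reacts to the shared value changing and updates its
    // own local copy of the image marker content.
    this.soupVisibleProp.onAnyChange.add((newValue) => {
      this.soupObject.enabled = newValue;
    });
  }

  // Called by the local gaze/marker interaction on one player's device.
  hideSoupForEveryone() {
    this.soupVisibleProp.setPendingValue(false);
  }
}
```
The key design point is that nothing about the marker-tracked object itself travels over the network; only the tiny boolean is synced, and each headset applies the change to its own local objects.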

This approach works! After finally figuring out a system to connect all of the pieces of our concept, we will now be able to build out our prototype in the coming weeks.