Week 4

Bon Appétit! Welcome to the week of ideas being skewered.

Here at AppetizeR, we took the feedback from Quarters, had a long, hard look at our platform, project requirements, needs, limitations, and enthusiasm, and then tried to find the right place to bite in. The team didn't really know what our shish kebab was made of until we had a visit from Dave Culyba.

That meeting and quick chat helped get our grills fired up. He reminded us that the best way to have a good project is to come up with something the team members are genuinely excited about, first and foremost. He also assured us that it was okay to take heavy inspiration from other games for our mechanics, since translating those concepts into AR would necessarily make them distinct from the originals.

Ideas started flowing, and the programmers started checking the feasibility of the interactions we came up with… only to be swiftly shut down every time we explored them. A couple of examples:

  1. We thought about using a deck of image markers (specific images that the glasses could recognize and anchor a 3D object to when the image is seen) in order to create a 3D deck of cards. However, a Spectacles lens project cannot contain more than 10 unique image markers, so a full deck will not work.
  2. Maybe we could use a limited number of image markers to build up a recipe, using each marker as an individual ingredient… if only the Spectacles could render more than one image marker at a time.

We are 4 weeks in, and with so many ideas going nowhere due to the limitations of our platform, we started to feel the heat of the oven and wanted to get some prototyping done. We asked ourselves whether we could keep exploring by building proofs of concept for how AR might be used in the future, when the tech gets better, rather than being limited by what it cannot do today.

We returned to our previous idea of decorating drinks by attaching image markers to the cups, so that virtual objects would stay attached even if the cup is moved around. Our UI designer came up with a storyboard for an experience where players virtually decorate a drink in front of them based on a prompt and then pass the drinks around to get decorated by more people, before they get revealed to the recipient of the drink. At the end, players can take pictures of themselves with the drink and funny face filters. We then started making paper prototypes to test out how we could use social mechanics from party games like Imaginiff and Whoonu to facilitate conversations throughout the experience.

On the tech side, we prototyped attaching an image marker to a cup on the lazy susan and tried to anchor synced objects to it, so the decorations would persist on the cup when other users looked at the marker.

However, we immediately ran into networking issues. Objects tied to markers don't sync across the network, even when we turned them into SyncTransforms, a component from the Spectacles Sync Kit package. So the drink decoration feature only works in a single-player environment… but there's no surprise if you decorate your own drink. After investigating, we found that image markers are tied to each device's local camera and cannot become part of the colocated world that all synced objects share in the Sync Kit. Essentially, each player has their own copy of the image marker object.

There seems to be a common denominator in all of our tech issues. Using image markers to tie digital content to physical objects and take advantage of weight, tactility, and physicality sounded like a perfect application of AR, but image markers don’t seem to want to mesh with the synced communal part of our project. Thus, we keep going back to the drawing board and spinning our wheels, which is starting to feel frustrating.

Luckily, our team isn’t easily charred by the fear of failure, so we kept exploring what we knew we could do. What if we used a marker solely as a visual toy and built the rest of the interaction separately?

We found that image markers report their own rotation, which means we can write a script that reacts when the marker object is rotated (for example, by attaching the marker to a lazy susan on the table and spinning it). Having a lazy susan in the middle of the table also lets us place digital content in the center that everyone can see, tying the whole experience together.
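In an actual lens, the marker's yaw would come from its tracked transform each frame; stripped of the platform API, the spin-detection logic looks something like this sketch (the class and callback names are our own, not Lens Studio's):

```javascript
// Sketch of spin detection for a marker on a lazy susan.
// In a lens, `yawDegrees` would be read from the image marker's world
// rotation every frame; here it is just a number we feed in manually.

class SpinDetector {
  constructor(thresholdDegPerSec, onSpin) {
    this.threshold = thresholdDegPerSec; // how fast counts as "spinning"
    this.onSpin = onSpin;                // callback, receives deg/sec
    this.lastYaw = null;
  }

  // Call once per frame with the marker's current yaw and the frame time.
  update(yawDegrees, deltaSeconds) {
    if (this.lastYaw !== null && deltaSeconds > 0) {
      // Wrap the difference into (-180, 180] so crossing the 0°/360°
      // boundary doesn't register as one giant spin.
      const diff = ((yawDegrees - this.lastYaw + 540) % 360) - 180;
      const speed = diff / deltaSeconds;
      if (Math.abs(speed) >= this.threshold) {
        this.onSpin(speed);
      }
    }
    this.lastYaw = yawDegrees;
  }
}

// Example: the marker turns 6° during one 1/30 s frame → 180 deg/sec,
// which is above our 90 deg/sec threshold, so the callback fires.
const detected = [];
const detector = new SpinDetector(90, (speed) => detected.push(speed));
detector.update(10, 1 / 30);
detector.update(16, 1 / 30);
```

The angle wrapping matters because a lazy susan spins freely: without it, the frame where yaw jumps from 359° back to 1° would look like a near-full rotation in a single frame.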

With that small win, our artists started working on a model of a cauldron of soup to go on a spinning marker, along with a soup shader whose parameters can be adjusted to change how much the broth swirls in the pot. The fact that the pot itself is round makes it a good model to place on a central marker. Right now, this shader is not integrated into the tech prototypes, but we're going to connect this work soon, so that spinning the lazy susan with a marker on it makes the pot of soup visibly mix, which should be a rewarding visual effect.
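Connecting the two pieces would mean turning the detected spin speed into a shader parameter. As a hedged sketch (the parameter name `swirlAmount`, the cap, and the smoothing factor are all our own choices, not anything from the actual shader), the per-frame mapping might look like:

```javascript
// Sketch of driving a "swirl" shader parameter from the marker's spin
// speed. In a lens, the returned value would be written to the soup
// material each frame; here the driver is just a pure function of input.

// Map an angular speed (deg/sec) to a 0..1 swirl amount, capped at
// `maxSpeed`, then ease toward that target so the soup settles
// gradually instead of snapping when the lazy susan stops.
function makeSwirlDriver(maxSpeed, smoothing) {
  let current = 0; // last swirl amount we output
  return function step(speedDegPerSec) {
    const target = Math.min(Math.abs(speedDegPerSec) / maxSpeed, 1);
    // Exponential smoothing: move a fraction of the way to the target.
    current += (target - current) * smoothing;
    return current;
  };
}

// One full rotation per second drives the swirl toward maximum;
// when the spinning stops, it decays back toward zero.
const drive = makeSwirlDriver(360, 0.5);
const spinning = drive(360); // target 1.0, halfway there → 0.5
const settling = drive(0);   // target 0.0, halfway back → 0.25
```

The exponential smoothing is what makes the broth keep swirling for a moment after the spin stops, which reads more like real liquid than an instant cutoff would.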

As for the problems with networking and using the image markers, a solution will come, confidence rises and falls, and that’s just the way a project moves.