Things We Learned
We picked up new information this semester that we will carry into future semesters and into the continued development of this project. Technology gave us interesting obstacles. AR in its current state has limits that are less problems than barriers, and they will remain barriers until the technology progresses far enough to remove them. An example of this limitation is a 3D character that wants to interact with a real-life object but cannot. The characters brought forth this semester were deemed cute or interesting by many playtesters and showcase a good use of this technology, but they sit at the current limits of what AR will allow. The lightbulb torso, one of the options the user can choose in the character customization content, has a transparent texture that works very well in AR, allowing light and background elements to pass through visually. These combined elements add to the immersive nature of the experience.
Two other technical areas we investigated were animation retargeting and inferencing a custom-trained object detection machine learning model in Unity using Barracuda. We learned that there is a solution to animation retargeting both in Maya and in Unity. In Unity it works if the character has the designated bone structure set up; ours did not match, and when we asked the motion capture studio technician whether we could set up a different bone hierarchy, we were told no. Retargeting can also work in Maya using the HumanIK system; however, when we transferred the data from the captured skeleton to the desired character skeleton, the character's knees bent outwards instead of forwards. We believed this to be an IK issue, but we did not have the time to address the knee problem due to scoping concerns. For the custom-trained object detection model, Barracuda does not yet support some of the features the model uses. Although we originally intended to switch the inferencing platform to Barracuda, this lack of full support unfortunately forced us to abandon the custom-trained model in favor of a more general version of the model.
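For reference, the sketch below shows the general shape of Barracuda inference in Unity. The class, the model asset field, and the output handling are illustrative assumptions rather than our production code; with a custom-trained model, unsupported operators surface as errors at the load or execute step, which is the limitation we ran into.

```csharp
using Unity.Barracuda;
using UnityEngine;

// Minimal sketch of running an ONNX detection model through Barracuda.
// Names such as BarracudaDetector and modelAsset are placeholders.
public class BarracudaDetector : MonoBehaviour
{
    public NNModel modelAsset;      // ONNX model imported as an NNModel asset
    private IWorker worker;

    void Start()
    {
        // A model using operators Barracuda does not support will fail here.
        var runtimeModel = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, runtimeModel);
    }

    public Tensor Detect(Texture2D cameraFrame)
    {
        // Convert the camera frame into a tensor with 3 colour channels.
        using (var input = new Tensor(cameraFrame, 3))
        {
            worker.Execute(input);
            // Copy the output so the caller owns (and must dispose) it.
            return worker.PeekOutput().DeepCopy();
        }
    }

    void OnDestroy()
    {
        worker?.Dispose();
    }
}
```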
Obstacles We Navigated
We had to navigate several obstacles to complete this semester's work, and they appeared across all of our team divisions. For the UI/UX portion, we had to figure out how to make the UI fit within any environment in which the app might be used. We achieved this by creating a semi-transparent background with semi-transparent textured buttons, which lets the environment show through without distracting the user, as sketched below. After playtesting, we opted to hint at the scannable objects through the loading screen animation as well as the intro page of the application. To add to the overall feedback the user receives while using the application, scanning an object brings up an overlay with a shifting magnifying glass that shows the user the app is still scanning.
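The sketch below is a minimal illustration of that semi-transparent approach, assuming the menu background is a UI Image with Button children; the component name and alpha values are placeholders rather than our exact settings.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative only: fade the menu background and its buttons so the
// camera feed stays visible behind the UI.
public class TranslucentMenu : MonoBehaviour
{
    [Range(0f, 1f)] public float backgroundAlpha = 0.35f;  // placeholder value
    [Range(0f, 1f)] public float buttonAlpha = 0.6f;       // placeholder value

    void Start()
    {
        // Background panel on this object.
        var panel = GetComponent<Image>();
        var pc = panel.color;
        panel.color = new Color(pc.r, pc.g, pc.b, backgroundAlpha);

        // Keep button textures but make them semi-transparent too.
        foreach (var button in GetComponentsInChildren<Button>())
        {
            var img = button.GetComponent<Image>();
            var bc = img.color;
            img.color = new Color(bc.r, bc.g, bc.b, buttonAlpha);
        }
    }
}
```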
The technical barriers we navigated this semester included running object detection on a mobile device in conjunction with AR, and editing keyframe animation by hand. Because the application only ran smoothly on newer-generation devices, we decided to turn off the object detection function during the animations (sketched below). By limiting when detection runs, we improved the frame rate, the temperature of the device, and the battery consumption of the application. After we collected the motion capture data and decided against the retargeting pipeline, we had to adjust the captured skeleton's bone lengths and starting positions to fit the target character's structure. Once this was done, we edited the keyframes of the animation to allow for a smooth transition from the T-pose into each character's animation.
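A hedged sketch of that on/off scheduling, where DetectionRunner stands in for whatever component drives the detection loop; the names and hook methods are assumptions for illustration.

```csharp
using UnityEngine;

// Illustrative scheduler: pause detection while an AR performance plays,
// resume it afterwards, so inference is not paid for every frame.
public class DetectionScheduler : MonoBehaviour
{
    public MonoBehaviour detectionRunner;   // hypothetical component running inference

    // Hook these up to whatever starts and ends the character's performance.
    public void OnPerformanceStarted()
    {
        detectionRunner.enabled = false;    // stop scanning during the animation
    }

    public void OnPerformanceFinished()
    {
        detectionRunner.enabled = true;     // resume scanning afterwards
    }
}
```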
One of the final obstacles the team faced dealt with the story of the customized character. We navigated it partly by limiting the amount of story variation, and partly by giving the character a motivation within the scene to create a story, since we did not have the benefit of backstories that would otherwise have guided the characters' actions. The scope of the project also dictated that the fox scenes use one or two motions with varying props to initiate their AR performance.
Future Suggestions
Having worked on this application, and knowing that it will only continue to grow after our project semester, we break our suggestions down into the following categories: technical, UI/UX, and artistic.
Technical Suggestions
- Once Unity's Barracuda can fully run custom-trained models, implement one to give the developer the freedom to optimize detection for this application
- Optimize the animation loading stage to make it work on older devices (see the sketch after this list)
- Implement animation retargeting
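For the animation-loading suggestion, one possible direction is loading clips asynchronously rather than up front. The sketch below assumes a placeholder clip in the Animator controller that gets swapped at runtime; the resource path and field names are illustrative.

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch: load an animation clip asynchronously so older
// devices do not stall on a large synchronous load, then swap it into
// the character's Animator via an override controller.
public class AnimationPreloader : MonoBehaviour
{
    public Animator animator;             // character's Animator
    public AnimationClip placeholderClip; // clip slot in the controller to replace

    IEnumerator Start()
    {
        // "Animations/FoxIdle" is a placeholder resource path.
        ResourceRequest request = Resources.LoadAsync<AnimationClip>("Animations/FoxIdle");
        yield return request;             // wait without blocking the frame

        var clip = request.asset as AnimationClip;
        if (clip != null)
        {
            var overrides = new AnimatorOverrideController(animator.runtimeAnimatorController);
            overrides[placeholderClip] = clip;              // swap in the loaded clip
            animator.runtimeAnimatorController = overrides; // apply the override
        }
    }
}
```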
User Interface and User Experience Suggestions
- Use sound and vibration as indirect control
- Eventually open the application to the public
- Include a gallery function
- Include a depth camera function
- Keep memories or a library of scenes the user has seen before
- Add a thematic tutorial at the beginning, possibly a 3D animation featuring Snobs from the world
- Build a community using various social media platforms
- Generate support via trailers, screenshots, etc.
Artistic Suggestions
- More content: more mix-and-match scenes and more body parts
- More dynamic entrances or exits for the characters