Wrap-up & Retrospective


Final Presentation

Live Video of Presentation

Post-Mortem

This semester, we worked with the CMU Psychology Department to create a game that would serve as a research vehicle for studying Human/AI collaboration. As we met with our clients over the first week, we clarified the specifications of our main deliverable as follows:

  • Dual-player collaborative game with asymmetrical roles/information but a common overarching goal
  • Main role-based gameplay can center around foraging and visual classification, two areas of interest for psychological study with both human and AI agents
  • Both roles should be playable by both human and AI agents. We will focus on creating the human-human part of the game, and our clients will build and integrate the AI agents
  • Game analytics/data logging of events that can be used to extract research data

During our team’s initial meetings, we decided that our main focus points for this semester would be to fulfill client requirements, develop using rapid prototyping practices, and aim to achieve fun/interesting gameplay mechanics with high replayability.

From a process perspective, we scheduled weekly meetings with both our faculty instructor and the clients starting from the beginning of the semester. We also scheduled check-in meetings with our faculty consultant.

Final Deliverables

At the end of the semester, our team delivered BitterBuster, a game that fulfills the client requirements above. BitterBuster is a light-hearted game set in the Candy Kingdom where the two players, known as the Explorer and the Selector, must find and correctly diagnose Bitter Candies in order to save them. The game ships with many customizable options that allow researchers to tackle a variety of psychology-related research questions.

We also delivered assets and scripts for a general data generator, sample scripts to parse logged data from the game, and documentation to support the future development and integration of AI into our game.
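As an illustration of what parsing the logged event data might look like, here is a minimal sketch in Python. The log schema shown (JSON lines with `timestamp`, `actor`, and `event` fields, and event names like `candy_found`) is entirely hypothetical and is not taken from the project's actual scripts; it only demonstrates the general idea of turning raw event logs into tallies a researcher could analyze.

```python
import json
from collections import Counter

# Hypothetical example: assumes each log entry is a JSON object on its own
# line, with "timestamp", "actor" ("Explorer" or "Selector"), and "event"
# fields. The real BitterBuster log format may differ.

def parse_events(lines):
    """Parse JSON-lines log entries into event dicts, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

def count_events_by_actor(events):
    """Tally how many logged events each player role generated."""
    return Counter(e["actor"] for e in events)

# Small in-memory sample standing in for a real log file.
sample_log = [
    '{"timestamp": 3.2, "actor": "Explorer", "event": "candy_found"}',
    '{"timestamp": 4.7, "actor": "Selector", "event": "candy_diagnosed"}',
    '{"timestamp": 6.1, "actor": "Explorer", "event": "candy_found"}',
]

events = parse_events(sample_log)
counts = count_events_by_actor(events)
```

From counts like these, a researcher could derive per-role activity rates or other aggregate measures, which is the kind of extraction the delivered sample scripts were meant to support.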

What Went Well

Rapid prototyping was our strongest asset this semester. We presented numerous iterations of the project to our client, which let us receive detailed feedback from both our client and instructor every week and gave us a concrete list of goals to improve upon in the next sprint.

We were also very proactive in playtesting our game at both on-campus playtesting nights and various ETC events. Each playtesting session was conducted rigorously, with surveys and interviews that allowed us to collect feedback from testers effectively and guide our discussions with the client.

In-team communication was also very effective: we maintained multiple channels of communication and held daily syncs on what was done and what still needed to be completed. Blocking items were resolved quickly, ensuring everyone was as productive as possible during our core hours. Additionally, at the beginning of every sprint, we updated and reviewed a common task list so that everyone was on the same page about the sprint goals and what needed to be accomplished.

Finally, along with being effective in their assigned roles, everyone on the team was able to explore other aspects of the project they wanted to work on, with the support of the other members. This kept team motivation high throughout development and enabled us to produce a higher-quality product by the end of the semester.

What Could Have Been Better

Although we met our client's goals very early on (we received a green light on our design by quarters), the majority of the game iterations were made to achieve our own gameplay objectives. Many ideas that would have made our project a more effective game were constrained by our client's requirements, for instance:

  • Adding playfulness through random events and more interactable environments
    • Would introduce noise into research data and may be too complicated for AI agents to learn
  • Stronger interfaces and data visualization that would support the effectiveness of player gameplay, such as a visual memory menu of previously selected candies
    • Often discarded to match the experimental goals of our client
  • Iterating the game to have more interesting/in-depth strategy-building
    • May be too complex for a reinforcement-learning-based AI agent to reasonably learn

Another difficulty we ran into early on was not designing our prototypes in a way that minimized artistic and technical debt during early iterations. For instance, we spent a lot of time iterating on the game map's design: initial creation, expansion, adding more houses, changing the shape from a rectangle to a circle, and so on. Each time the map changed, we also had to adjust every collider within it (houses, walls, barriers, floors, etc.). It would have been more effective to design the map so it could be changed easily, or to iterate on simpler versions of it at the very beginning to gather feedback.

What would we do if we had 5 more weeks?

Constanza: Do actual Human/AI teaming; look into the data

Angela: Implement AI into the game

Aiden: Increase the playfulness

Cherry: Improve art & 3d environment

Takeaways

Throughout development, our team had the opportunity to learn and practice many new skills. From a game design perspective, we faced the challenge of accommodating our clients' requirements and constraints. Knowing that our game would also be played by AI agents, every interaction between players and within the game had to be serializable into data an AI could understand.

On the development side, this project had many aspects that were new to our team members. First, we had to implement a game with asymmetric gameplay and design interfaces that would effectively communicate each player's role. Our programmers were also working with multiplayer for the first time and had to design a code architecture that would support the future integration of AI agents.

Because our process involved rapid prototyping, we gradually learned how to create prototypes of different features with little technical and artistic debt, so we could effectively demonstrate our ideas to the client and receive feedback. This skill proved immensely valuable: it meant we could cycle through multiple iterations of the game without too much wasted effort from the development team. We built this skill throughout the semester and hope to continue honing it in future projects.

Conclusion

Following the end of this semester, our clients will take over the project for future development, focusing in particular on building out the AI aspects of the game.

In the final few weeks, our team presented written artifacts of our project in the form of a documentation wiki website (containing both development and user guides), READMEs in each of our coding repositories, and summaries of our key design decisions made throughout the semester.

We also held multiple wrap-up meetings with our clients in the final month to conduct in-lab playtesting and to ensure they could successfully produce a golden spike build by following our documentation.
