Playtest Day
This week, we had an official playtest day hosted by the ETC on Saturday. Last week, we had already playtested the music prototype from an engagement perspective and found that novel interaction can definitely enhance interest and raise users' curiosity. So for this week's playtest, we wanted to focus on understanding of the music data, though we still included some engagement questions in the questionnaire.
We therefore decided to trick the users to see whether they really understood the haptics and the visualization of the music. We provided three songs in total for this playtest. For all three songs, the haptic feedback matched the visualization. However, only the first song had matching music: for the second and third songs, what the user heard was different from what they saw and felt. The second song was slightly mismatched; the third song was completely mismatched. During the playtest, we didn't tell the playtesters that the songs were mismatched.
This time, we had 22 playtesters in total. The process was the same as before: each player started with a tutorial, experienced the three songs, and then completed the questionnaire.
For this playtest, we measured engagement through satisfaction, willingness to play again, and ease of use. Here are our results. We think the overall experience is engaging and can raise players' curiosity about data science.
We also tested the players' understanding of the music. We asked the question below for each song to see whether players could tell that the music did not match the haptics and visualization.
Do you feel like the haptic feedback represented each part of the music data (beat/accent/note) well? 1–5 scale (Very Confusing, Confusing, Neutral, Clear, Very Clear)
Here are our results. The score for the first song was slightly higher than for the second and third songs. However, most users still felt that the haptic feedback matched all three songs. We were not surprised by this result: the experience is so novel that the playtesters focused only on the novelty, and a five-minute experience is too short for them to figure out what is correct and what is wrong. Therefore, we plan to conduct another playtest next week, in which we will invite ETC students who have already played this prototype to experience it again and see whether they can tell that the music and haptics do not match.
We also received some really interesting feedback:
- I liked the feeling of the vibration in my hand but wanted a little more distinction in the feeling. I also felt like I was not focusing on the sensation in my hand because I was using my other senses to see the beats and hear the music as well.
- It interacted with multiple senses of the body, sight, touch and hearing. The ability to do this is just mind blowing.
- I think the concept of that type of sensation opens a new aspect of music.
- Somewhat gained knowledge about accent/beat/note. i was more focused on the holistic experience and less of the individual components
- I think this can be a tremendously fun way with the right kinda music and experience designed around it. I think experiencing this with just a small haptic pad might not be indicative of the sort of visceral satisfaction that a larger haptic experience + right song + lighting can bring to somebody who loves music
Our sprint-specific hypothesis was:
The novelty of interaction can enhance the level of understanding of music (beat/accent/note).
- No. The novelty of interaction does not necessarily enhance understanding. First-time users were excited to try the machine, and the novelty actually hindered their understanding of the data.
After the playtest, we reflected on our sprint-specific hypothesis and our global hypothesis. Our global hypothesis used to be "providing an engaging experience can enhance the understanding of machine learning and data." However, we felt it was challenging to educate people about the concepts of machine learning in a single playtest. Users come from diverse backgrounds and have different levels of prior knowledge in computer science. Instead of educating people, we hope to encourage them to be less afraid of learning ML.
The revised hypothesis is: "The concept of machine learning has largely been regarded as a black box. Providing an engaging experience can raise people's interest and curiosity, which can shift their attitude toward AI so that they are more inclined to understand it."
Sprint 3 – Reinforcement Learning
This week, we finally decided on our direction for Sprint 3. Our goal is to visualize the process of reinforcement learning. We want to focus on the basic concepts of reinforcement learning: action, state, reward, and evolution. We chose Q-learning as our algorithm. It is a model-free reinforcement learning algorithm that learns the value of an action in a particular state. We chose this model because it is relatively simple, and because it can run in real time, so the player can watch the AI evolve live.
Here is an overview of the prototype. The blue box is the AI, the green box is the goal, and the red boxes are the obstacles. The user can set up the environment and watch the AI train itself to find the shortest path to the goal.
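For readers curious about the mechanics, here is a minimal sketch of tabular Q-learning on a small grid like ours. The grid layout, reward values, and hyperparameters below are illustrative assumptions rather than our prototype's actual settings; the heart of it is the standard update Q(s,a) ← Q(s,a) + α(r + γ·max Q(s',·) − Q(s,a)).

```python
import random

# Illustrative 5x5 grid; the layout, rewards, and hyperparameters are
# assumptions for this sketch, not the prototype's actual settings.
SIZE = 5
GOAL = (4, 4)                                 # green box
OBSTACLES = {(1, 1), (2, 3), (3, 1)}          # red boxes
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2         # learning rate, discount, exploration

# Q-table: one row of action values per cell.
Q = {(x, y): [0.0] * len(ACTIONS) for x in range(SIZE) for y in range(SIZE)}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nx, ny = state[0] + action[0], state[1] + action[1]
    if not (0 <= nx < SIZE and 0 <= ny < SIZE) or (nx, ny) in OBSTACLES:
        return state, -1.0, False    # blocked: stay put, small penalty
    if (nx, ny) == GOAL:
        return (nx, ny), 10.0, True  # reached the goal
    return (nx, ny), -0.1, False     # step cost rewards shorter paths

for episode in range(500):
    state = (0, 0)                   # blue box starts in a corner
    for _ in range(200):             # cap episode length
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = Q[state].index(max(Q[state]))
        nxt, reward, done = step(state, ACTIONS[a])
        # Core Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt
        if done:
            break
```

Because each update is just one cheap table write, a loop like this can keep running while the player edits the environment, which is what makes the real-time visualization possible.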
This week, we finished the basic functionality. Currently, the user can change the environment in real time and see how the AI reacts to it. Here is the demo video.
After finishing the basic functionality, we decided to add visuals for the AI, the goal, and the obstacles to make them easier for the user to understand. We decided to represent them as a dinosaur, food, and predators.
Next week, we will also design some levels that show the user the concept of reinforcement learning. The user will need to solve some "puzzles" in order to pass each level. We believe that giving the user a goal can help them understand reinforcement learning better.
Sprint 4 – VR Painting Gallery using K-means
Our final prototype for this semester is the VR Painting Gallery using K-means; we will bring our first prototype into VR. Our goal is to explore whether an immersive VR experience can improve engagement and help players understand ML. This time, we want to focus on the interactions and on understanding of the data. Here are some of the interactions we designed:
- Visit a virtual gallery
- See how a painting can be split into multiple layers using K-means (see the sketch after this list)
- Walk into the painting
- Interact with each layer
  - Drag
  - Zoom
  - Hide
  - View details (color, percentage, etc.)
- Change the K value in real time
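To make the layer-splitting concrete, here is a minimal sketch of how K-means can separate a painting into color layers, assuming scikit-learn and Pillow are available. The file name painting.jpg and K = 5 are placeholders, not our actual assets or settings.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

K = 5  # placeholder K value; in the prototype the player can change this live

# Load the painting and flatten it into a list of RGB pixels.
img = np.asarray(Image.open("painting.jpg").convert("RGB"))  # placeholder file
pixels = img.reshape(-1, 3)

# Cluster the pixel colors into K groups.
kmeans = KMeans(n_clusters=K, n_init=10).fit(pixels)
labels = kmeans.labels_.reshape(img.shape[:2])

# Build one RGBA "layer" per cluster: pixels in the cluster keep their color,
# everything else is transparent. Each layer could become one plane in VR.
for i in range(K):
    layer = np.zeros((*img.shape[:2], 4), dtype=np.uint8)
    mask = labels == i
    layer[mask, :3] = img[mask]
    layer[mask, 3] = 255
    share = mask.mean() * 100  # percentage of the painting in this layer
    print(f"layer {i}: mean color {kmeans.cluster_centers_[i].round()}, {share:.1f}%")
    Image.fromarray(layer).save(f"layer_{i}.png")
```

Re-running the clustering with a different K regenerates the layers, which is the idea behind letting the player change the K value in real time.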
This week, we finished the VR environment setup. Next week, we will finish the UI, the tutorial, and the interactions.