Back from spring break, our team is feeling refreshed. Some teammates traveled to other cities, while others stayed in Pittsburgh and had some fun. We all gained new energy from the break, and it's time to continue our work!
This week, we received feedback on our half presentation. We reviewed it carefully and reflected on where to go next. We then came up with a concrete idea for Sprint 2 and finished a simple prototype to demonstrate it. We also decided on our direction for Sprint 3.
Feedback from the half presentation
On Monday, we got feedback on our half presentation. The feedback focused on three areas: the content and delivery of the presentation, the team's direction, and the team's plan. Here is a summary.
- Content and delivery of the presentation
  - We read off the monitor too much.
  - The target audience is not clear.
  - The hypotheses are vague.
  - The demos were helpful.
  - The comparison of what is engaging versus what is disengaging was helpful.
- Team direction and plan
  - It is not clear how the two music-related prototypes connect to the overall project theme.
  - The choice of music may be hard to visualize.
We analyzed this feedback carefully and reflected on it. First, we will memorize the content of our slides and keep eye contact with the audience throughout. As for the target audience and hypothesis, it is understandable that some faculty felt confused. Since ours is a discovery project, our overall hypothesis is that an engaging experience helps users better understand data and machine learning algorithms. Over the semester, we will build 5-6 prototypes to explore various interactions, datasets, and ML/data science algorithms. The general target audience is people in higher education, but the specific target audience may change from prototype to prototype.
As for the direction of the next sprint, we also realized that music is a hard topic to visualize, and that it is hard to connect music with ML or data science. Here our instructor Ruth really helped us: she provided us with a new piece of hardware called Ultrahaptics. This device lets users feel virtual objects with their bare hands through ultrasonic vibrations. We therefore decided to use it to explore a new way to "visualize" music: feeling it.
Sprint 2 – Music
Our direction for Sprint 2 is to visualize the music while also making it touchable. First, we chose the fast Fourier transform (FFT) to analyze the beats, notes, and strength of the music; that is how this prototype relates to data science. The following picture shows the result of running the FFT on the music data.
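As a rough illustration of what this analysis step looks like in code, here is a minimal Python sketch (not our actual implementation): it takes one short frame of mono audio samples and uses NumPy's FFT to find the dominant note and its strength. The frame size, sample rate, and function names are assumptions for the sketch.

```python
# A minimal sketch of the FFT analysis step (illustrative, not our real code).
# Assumes one frame of mono samples at a 44.1 kHz sample rate.
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def analyze_frame(samples: np.ndarray, sample_rate: int = 44100):
    """Return (note_name, strength) for the loudest frequency in the frame."""
    window = samples * np.hanning(len(samples))        # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(window))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    peak = np.argmax(spectrum[1:]) + 1                 # skip the DC bin
    # Map the peak frequency to the nearest MIDI note, then to a pitch class.
    midi = int(round(69 + 12 * np.log2(freqs[peak] / 440.0)))
    return NOTE_NAMES[midi % 12], float(spectrum[peak])
```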
Then, after we get the beat, note, and strength data, we will visualize them on the screen and also let the user feel them through the Ultrahaptics. Ultrahaptics is a mid-air haptics device that works together with the Leap Motion hand tracker: first, the Leap Motion detects the positions of your hands, and then the Ultrahaptics array gives you a haptic sensation using ultrasonic vibrations.
Therefore, we decided to map beats, notes, and accents to different sensations. When a beat, note, or accent appears, the user feels a distinct pattern:
- Beat: a tap on the palm
- Note: a tap on a finger
- Accent: an open circle
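In code, this amounts to a small dispatcher. The sketch below is hypothetical: the three play_* calls stand in for whatever sensation-playback functions the Ultrahaptics SDK exposes; their names are our placeholders, not the real API.

```python
# Hypothetical event-to-sensation dispatcher. The play_* methods are
# placeholders for the real Ultrahaptics SDK calls, not actual API names.

def on_music_event(event_type: str, haptics) -> None:
    """Route one analyzed music event to its matching haptic pattern."""
    if event_type == "beat":
        haptics.play_palm_tap()       # beat -> a tap on the palm
    elif event_type == "note":
        haptics.play_finger_tap()     # note -> a tap on a finger
    elif event_type == "accent":
        haptics.play_open_circle()    # accent -> an open circle on the hand
```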
Also, for the notes: since there are 12 different pitch classes in total, we map each one to a finger.
- Thumb: C, C#, A, A#
- Index finger: D, D#, B
- Middle finger: E
- Ring finger: F, F#
- Little finger: G, G#
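In code, this mapping is just a lookup table; here is a minimal sketch of it (the names are illustrative):

```python
# The note-to-finger lookup table described above: each of the 12 pitch
# classes maps to the finger that gets tapped when that note plays.
FINGER_FOR_NOTE = {
    "C": "thumb", "C#": "thumb", "A": "thumb", "A#": "thumb",
    "D": "index", "D#": "index", "B": "index",
    "E": "middle",
    "F": "ring", "F#": "ring",
    "G": "little", "G#": "little",
}
```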
Finally, we want the visualization to match the feeling: the player should see the beats, notes, and accents hit their hand on the screen while feeling them at the same time. This week, we discussed the art principles we want:
- Users can switch between different themes.
- Left-handed and right-handed players get different interfaces.
- A transparent hand, so players can see the beats and notes through it.
- A dark background with bright, glowing objects.
Here are some references we found.
Overall, we all think this is a really interesting prototype to explore, and we made good progress this week: we finished the music-analysis feature and can already let the user feel the beats, notes, and accents. Next week, we plan to focus on the visual part and run a playtest for this prototype.
For Sprint 3
In Sprint 1, the painting prototype and the garden prototype showed visualizations of unsupervised ML (k-means clustering). In Sprint 2, we are exploring a new direction for data visualization using data science: feeling the music. Next, we are going to visualize reinforcement learning.
One direction we want to explore is using reinforcement learning for procedural content generation. The user can interact with the models; for example, they can give rewards and punishments to the AI and watch how it evolves.
There are still some challenges to solve, such as how to design the reward rules and whether the model can be trained in real time. We presented this idea to Google on Friday, and they gave us some really useful advice. They pointed us to tabular reinforcement learning, a small and flexible class of models that can be trained in real time. They also shared a past ETC project called Shadow Agent, an experimental project exploring new gameplay interactions using reinforcement learning; we will read its blog to learn more.
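To get a sense of why tabular methods are cheap enough for real-time training, here is a minimal Q-learning sketch (the standard textbook algorithm, not code from Google or Shadow Agent). The states, actions, and hyperparameters are assumptions; the point is that one update is only a table lookup and a little arithmetic.

```python
# Minimal tabular Q-learning (textbook algorithm, shown only to illustrate
# why it is cheap enough to train in real time). States, actions, and the
# reward signal here are placeholder assumptions.
import random
from collections import defaultdict

ACTIONS = ["left", "right", "up", "down"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

q_table = defaultdict(float)             # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy: mostly exploit the table, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step; cheap enough to run every frame."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (target - q_table[(state, action)])
```

In our setting, the player's rewards and punishments would feed straight into `update(...)`, so the agent's behavior could change while they play.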
In summary, we are still in the design stage for Sprint 3. We have decided on our main direction, reinforcement learning, but we will do more research and come up with a clearer idea next week.