Post Mortem

Introduction and Project Overview

HelloAlice is a semester-long exploratory project created in collaboration with the Alice Team, a group known for developing a block-based programming environment designed to help students learn fundamental programming concepts through storytelling and animation. The HelloAlice team consists of six members: Eva, the producer; Jinyi, the programmer and co-producer; Yifan, the programmer; Jia and Zhanqi, the 3D artists; and Wendy, the UI designer.

The goal of HelloAlice was to explore what the future of programming education might look like in an era shaped by artificial intelligence. With the rapid evolution of AI tools, particularly large language models, we were interested in how these technologies might be leveraged to make programming education more accessible, engaging, and personalized—especially for beginners who are often intimidated by traditional syntax-based coding.

The project aimed to build a tool that combines the power of AI with Alice’s visual programming environment. Our deliverable was a prototype educational tool that allows students to describe a high-level programming idea in natural language. The AI assistant then interprets that input and generates a sequence of editable codeblocks, helping students visualize the logic behind their ideas and connect them to actual programming constructs. Rather than overwhelming beginners with raw, syntax-heavy code, the system supports learning through editable block structures, helping students understand variables and logic without needing to memorize syntax. The final goal for users is to build an interactive story, encouraging them to think structurally and creatively at the same time.
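To make the pipeline concrete, here is a minimal sketch of the "AI output to editable blocks" step. This is not the actual HelloAlice implementation: the `CodeBlock` structure, the `parse_blocks` helper, and the JSON response format are all hypothetical, assuming the model is prompted to return a JSON list of blocks with editable fields.

```python
import json
from dataclasses import dataclass, field

# Hypothetical block structure: each block has a kind (e.g. "say",
# "animate") plus editable fields the student can change afterwards.
@dataclass
class CodeBlock:
    kind: str
    fields: dict = field(default_factory=dict)

def parse_blocks(llm_response: str) -> list[CodeBlock]:
    """Turn an assumed JSON response from the LLM into editable blocks.

    Assumes the model was instructed to reply with a JSON list like:
    [{"kind": "say", "fields": {"character": "Alice", "text": "Hi!"}}]
    """
    data = json.loads(llm_response)
    return [CodeBlock(kind=item["kind"], fields=dict(item.get("fields", {})))
            for item in data]

# Example: a response the model might return for the prompt
# "make Alice wave and say hello"
response = '''
[
  {"kind": "animate", "fields": {"character": "Alice", "action": "wave"}},
  {"kind": "say", "fields": {"character": "Alice", "text": "Hello!"}}
]
'''
blocks = parse_blocks(response)
for b in blocks:
    print(b.kind, b.fields)
```

Keeping the model's output constrained to a small, structured schema like this is what makes the blocks editable after generation: the student modifies `fields` values directly rather than regenerating code.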

What Went Well

Several aspects of the HelloAlice project progressed successfully and contributed to a meaningful and functional prototype.

One of the first effective steps was conducting an early idea pitch. The initial client prompt was intentionally broad, which made it difficult for the team to determine a clear direction. The pitching process gave us an opportunity to explore multiple AI integration scenarios in educational contexts. This phase not only clarified our objectives but also brought the team into alignment with a shared vision.

Another success was the steady and focused progress made on the prototype itself. By the halves milestone, we had already developed a working build of our tool. While it was still unstable at the time, it provided a strong foundation for further refinement. Having a tangible version of the tool early in the semester allowed us to begin thinking critically about usability improvements, stability issues, and feature expansions. It also allowed us to consider how AI should be positioned within the tool—not as a black box, but as a collaborator whose suggestions students could learn from and modify.

We were also pleasantly surprised by how smoothly AI integration went. There was some concern in the beginning that relying on AI might make the system too complex or unpredictable. However, our decision to keep AI’s role relatively focused—breaking down natural language prompts into codeblock structures—proved effective. It enabled us to deliver a responsive and flexible learning experience, while still giving students control over the final output. The AI assistant helped guide students’ creativity without removing the learning challenge entirely.

Finally, user testing—while delayed—showed that students responded enthusiastically to the storytelling aspect and enjoyed playing with visual effects and block manipulations. Many were able to grasp the relationship between different components, such as how editing codeblock fields affected what appeared on screen. This confirmed that our core educational goal—helping students connect high-level ideas to structured logic—was being met.

What Could Have Been Better

Despite our successes, there were several challenges that limited the potential impact of the project and could be improved upon in future iterations.

First and foremost, earlier playtesting with our target audience would have provided valuable insights. Because the tool required a workshop-like setup and was still undergoing major revisions early in the semester, we hesitated to bring it into a classroom setting too soon. In hindsight, even testing a rough version with students could have revealed critical usability issues or misunderstandings in how AI-generated codeblocks were interpreted. Delaying testing meant fewer opportunities to iterate based on real user feedback.

We also faced difficulties deciding on the scope and function of the art assets. Since the core focus of the project was on the programming logic and AI workflow, the art was considered supplementary. However, that created a dilemma—what do we build, and how much visual content is enough to support storytelling without overwhelming the development timeline? This uncertainty slowed us down in the early stages of asset production. While the final assets supported the interactive story functionality well enough, feedback showed that users wanted more content variety and visual storytelling elements than we had time to produce.

Another area for improvement was defining user expectations. During testing, students expressed interest in features beyond the tool’s current scope. They wanted more diverse animations, complex interactions, and story customization options. While it was rewarding to see them engaged, this also showed that our tool, while functional, might need better onboarding to clarify what it can and cannot do. Setting clearer boundaries earlier could help manage expectations and allow us to prioritize features more strategically.

Lessons Learned and Conclusion

As an exploratory collaboration, HelloAlice gave us room to ask important questions about the future of programming education and experiment with ideas that had no guaranteed outcomes. One of the most important lessons we learned was the value of clarity—in both direction and communication. The process of pitching ideas and aligning as a team made a huge difference in our productivity and cohesion.

We also learned that AI integration in education doesn’t have to be disruptive or overly complicated. By narrowing its role to a guided assistant, we found a balanced way to introduce advanced technology into a beginner-friendly space. At the same time, we learned the importance of iterative testing and being willing to let users shape the tool through their interactions and feedback.

We are still unsure where this tool will go from here. However, the results we’ve seen, especially from our most recent playtests, suggest there is great potential for this kind of AI-assisted programming tool. At the very least, we believe we’ve contributed a meaningful proof of concept that the Alice Team and other educators might build upon in the future.

As a team, we are proud of what we’ve accomplished. From concept to prototype, HelloAlice gave us the opportunity to stretch our skills across programming, UI design, storytelling, AI integration, and educational thinking.