Week 9 – November 3rd, 2023

This week, we made some last-minute iterations on the two films to prepare us for the Saturday Playtest Day.

Work Preparation:

This week, we were busy preparing for the Saturday Playtest Day. We want both films updated and iterated on for the playtest, so we made some last-minute changes to them during the week. We updated the approach 3 film with all the new assets and VFX implemented, and we also researched AI rendering extensively in hopes of having an updated approach 1 film to show the playtesters on Saturday.

We also have the finalized Pangu body and face model ready, which will be used in both films starting next week. Next, we will rig the full body and create animations for it.

Progress Report:

  • Updated the project website and weekly blogs.
  • Added new research results into the documentation.
  • Met with Pan-Pan for sound support and scheduled a weekly meeting time.
  • Finalized the Pangu face model.
  • Started on Pangu character rigging and animations.
  • Adjusted some camera angles based on the character.
  • Added more layers in the background and animated the background.
  • Continued researching AI rendering.
  • Worked on testing AI texturing.
  • Updated both versions of the film; sound effects have been added to one of them.

Research Results:

  • AI rendering: 

Based on last week’s progress, we need to look for ways to improve details.

Since we can use at most three ControlNet units at a time, it is important to choose the units wisely. Canny is a powerful unit: it extracts the edge "wireframes" within the image, which helps the model distinguish objects from one another. However, we think it may be time to drop it, since the other two units we have been using, depth and normal, can do a fairly good job on their own.
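To make the comparison concrete, here is a minimal sketch of extracting the three control maps from a single reference frame with the controlnet_aux annotators; the library, model IDs, and file paths are illustrative assumptions, not necessarily the exact setup we use.

```python
# A sketch comparing what each ControlNet unit "sees" for one reference frame:
# canny edges vs. depth vs. surface normals. The controlnet_aux annotators and
# file names below are assumptions for illustration.
from PIL import Image
from controlnet_aux import CannyDetector, MidasDetector, NormalBaeDetector

frame = Image.open("reference_frame.png")  # placeholder path

canny = CannyDetector()
depth = MidasDetector.from_pretrained("lllyasviel/Annotators")
normal = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")

canny(frame, low_threshold=100, high_threshold=200).save("map_canny.png")  # wireframe edges only
depth(frame).save("map_depth.png")     # per-pixel distance: silhouettes and volume
normal(frame).save("map_normal.png")   # surface orientation: shape detail
```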

[Images: AI-generated image vs. reference image]

We first tested generating an image with all three units. The units clearly helped the model recognize the objects, but the image lacked the details needed to define them well. We then kept everything the same except for boosting the canny unit so it would recognize more wireframes, to see whether that would help.

[Images: AI-generated result; enhanced canny edge map]

We could hardly tell whether the AI-generated result improved, but the boosted canny clearly recognized more edges than before. This raised the question of whether canny meets our needs at all, since the job it does seems like it could be replaced entirely by normal and depth, which can provide even more information.

Next, we dropped canny and used only depth and normal to generate the image.

[Image: AI-generated image without canny]

The result is clear: normal and depth can recognize the objects while delivering surface and depth information. There are even more details once canny is disabled, since the generation is no longer constrained to its edges and the AI is free to fill in those spaces.

It was then time to test this new method on animation generation. In the end, we got a result with more details but less consistency; it is a trade-off.

We now have a basic AI rendering pipeline built, but it needs to be refined. Dropping canny is a promising way to improve details, and the consistency we lose is acceptable since we can denoise it. The vacant ControlNet slot is worth considering, and we will keep experimenting to find its best candidate.
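For reference, here is a minimal sketch of what the depth + normal setup (with the canny slot left empty) could look like as a script, assuming the Hugging Face diffusers library; the checkpoints, prompt, paths, and conditioning weights are placeholder assumptions rather than our actual configuration.

```python
# A sketch of a depth + normal only render (canny intentionally omitted), using
# Hugging Face diffusers. Checkpoints, prompt, paths, and weights are assumptions
# for illustration, not our actual configuration.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

depth_net = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
normal_net = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-normal", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[depth_net, normal_net],  # canny slot left vacant
    torch_dtype=torch.float16,
).to("cuda")

# Pre-computed depth and normal maps for one frame (placeholder file names).
depth_map = load_image("frame_0001_depth.png")
normal_map = load_image("frame_0001_normal.png")

frame = pipe(
    prompt="mythological landscape, cinematic lighting, highly detailed",
    image=[depth_map, normal_map],
    controlnet_conditioning_scale=[1.0, 0.8],  # example weights, tuned per shot
    num_inference_steps=30,
).images[0]
frame.save("frame_0001_render.png")
```

In this form, the vacant third slot would simply be another entry in the controlnet list once we settle on a candidate.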

  • AI modeling: 

Daz3D is a character modeling and scene staging tool that allows you to build characters from templates and use sliders to adjust the character models to your preference. Unfortunately, the base package is not very robust. The toolset seems largely geared towards staging scenes for rendering, and the modeling tools are largely hidden behind a paywall. Even then, the selection of available models and morphs is constantly changing, and not especially diverse.

This seems like precisely the kind of tool that could benefit greatly from the integration of generative AI. Given a large enough dataset of character models, you could certainly design an AI model specialized in creating and customizing biped characters. That said, the base tool was not even competitive with character creators in modern games (Street Fighter 6 comes to mind), and a lot could be done to improve the tool sans AI.

Additionally, as AI tools like DALL-E have shown recent advancements in maintaining consistency between images, there is cause for optimism about the possibility of generating models from images supplied as a 3D turnaround.

Plan for next week:

  • Meet with Panpan for sound support.
  • Continue working on the website and weekly blogs.
  • Finalize rigging the Pangu model and work on its animations.
  • Research more on AI rendering and iterate on rendering the film.
  • Improve the background environment more.
  • Improve the Yin & Yang model in the film and its cracking effect.
  • Reflect on the Saturday ETC Playtest Day feedback.
  • Produce a new version of the film with the character and the new crack effect included.

Challenge:

  • Hard to estimate how long each approach will take and what potential problems we will meet throughout the semester.
  • Hard to estimate the cost for AI tools, and how effective they will be.
  • Need to think of better ways to document our research process.
  • Need to make sure we have some powerful shots in the film.
  • Need to prepare for the ETC Soft Opening.
  • Need to keep Panpan on the same page for sound production.
  • AI modeling has proven unsuccessful, so we need to shift focus to AI rigging for approach 2.