Yesterday I went out to see some cherry blossoms in Osaka, the city I'm currently in. I hung around Sakuranomiya Park and Osaka Castle. There are many other places where you can enjoy those beautiful cherry blossoms, but I like that area near the river, and I'd heard there are many cherry blossom trees there, so that's where I went.
On the way, I launched Puppetoon (I am a beta tester) and tried the app yet again. This time I chose a little dude with a high-pitched voice, and we went on an adventure together.
It was funny, because while I was recording a video of myself using Puppetoon, I forgot something important: when you animate the virtual AR puppet, the app records only the location, your voice, and the character animation, not a video. Only the second time, when you press the record button again, does it play back the pre-recorded animation so you can compose your shots while the actual video is being recorded.
So you might ask: well, what's the problem with that? It's not actually a problem, and in many ways it's a benefit. But if you talk about dynamic things in the environment during the first pass, then by the time you replay the animation and the recorded voice, that subject might no longer be in the frame.
For example, while standing by the river I made the puppet say: "Oh, look, that's a beautiful boat." When I then recorded the video, the character said the same line, but by that point there was no boat in the river; it had passed by a minute earlier.
This is why I think it would be good to have an option to record a video while animating the character. You would lose the ability to recompose the shot afterward, and the video would be unassociated with the animation, location, and voice metadata, but it would stay properly contextualized with the dynamic objects in the scene.
Again, Puppetoon is still in beta, and I am just sharing my own experience with it. Many of these things may change or improve by the time the app is officially released.
I made a video sharing my experience, so here you go.