360 Video for Immersive Nature Sounds

Our show, Nature Sounds of Ontario, Canada, is changing to Immersive Nature Sounds and will feature 360 video along with the binaural 3D audio of our nature recordings!

I've been chewing on ideas for the second season of Nature Sounds of Ontario, Canada for a while now, and during our spring visit to the cottage I decided to try something new: using Category5's 360 camera rig to shoot 360×220-degree video along with the immersive binaural audio. I shot two videos, setting up my surround-sound dummy head in a forest and again in front of the campfire.

Executive Producer Nathan Salapat and I had some good brainstorming sessions and decided to change the name to Immersive Nature Sounds to better express what we're hoping to achieve with Season 2. Nathan then put a lot of work into making our 360 cameras work for this purpose, and I've now begun experimenting with the early output.

For my first two tests, I rendered frames 1-1000.

In my first test, which was rendered at just 5% quality for a quick output, I simply wanted to see if this was going to work. I noticed immediately that we have some polar anomalies, which I have affectionately named “equi-anuses”.

For my second test, I increased the render quality to 50%. I also added a cave picture to the background texture and threw our network logo on top of that, since Christa is still working on the new Immersive Nature Sounds logo. I noticed the equi-anuses look a bit like flowing water meeting at the poles; perhaps something is being stretched oddly. I also noticed, to my surprise, that the logo and background are not spherical, but rather appear to the viewer as more of a "wall" behind them. It gives me visions of placing an animated stargate behind the viewer, not because it has any relevance to a nature sounds channel, but because it would pay homage to my geekiness.

Another thing I noticed this time is that the video portion is in fact mirrored horizontally. If you look at my shirt, the evolution joke is that man evolves from monkeys only to be killed by robots. In the test, however, you see the robot first and then devolve into monkeys 🙂 So I'll have to check with Nathan on how to fix the flipped video.

For my third test, I set the quality to 100% and the video output to lossless. I'm rendering frames 1000-2000 this time, using the Linux terminal, which gives me a significantly faster render on my laptop's CPU than going through the GUI. Here's the command I used: blender -b single-camera.blend -a
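As an aside, if I ever need a specific frame range without editing the .blend file, my understanding is that Blender's command line will take it directly via the -s (start frame) and -e (end frame) flags, placed before -a, for example: blender -b single-camera.blend -s 1000 -e 2000 -a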

Also, I pulled our logo into GIMP and made the canvas square, as per Nathan's instructions, so the logo behind you should now look better.

I'm pleased to see the equi-anuses appear to have been an artifact of the low-quality draft renders. The 4K render looks much better and doesn't have the anomaly.

Now, onto my fourth test. Nathan sent me a new .blend file to "flip the UV layout", as he called it. In lay terms, that means he made it so the video isn't backwards, and my shirt can now evolve correctly to the point where robots destroy all humanity [phew!]. The change is subtle, and you may not even really spot it. But to me, seeing the stone wall to my right was really trippy, like seeing something "not quite right" without being able to put your finger on it. So now, all is right with the firepit.
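I don't know exactly what Nathan changed in the .blend, but for my own notes, a UV flip like this boils down to mirroring the U coordinate of every point in the layout. Here's a rough sketch of what that looks like from Blender's Python console; "ProjectionDome" is a hypothetical name for whatever mesh the footage is mapped onto in Nathan's file:

    import bpy

    # Hypothetical object name; the real mesh in Nathan's .blend is named differently.
    obj = bpy.data.objects["ProjectionDome"]
    uv_data = obj.data.uv_layers.active.data

    # Mirror every UV coordinate on the U (horizontal) axis so the
    # projected video is no longer a mirror image.
    for loop in uv_data:
        loop.uv.x = 1.0 - loop.uv.x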

I wanted a fast render, so I changed up some settings so it would still look pretty good, and went with a 50% render (1080-ish). I really think it's starting to come together!
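For reference, the percentage I keep mentioning is (as I understand it) Blender's render resolution percentage, which can also be set from the Python console:

    import bpy

    # Render at half the canvas size for a fast draft pass.
    bpy.context.scene.render.resolution_percentage = 50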

Alright, now that we're getting to the point where the output is looking and working well, we need to figure out how the editing process will go. The problem is, the camera saves the files in 15-minute segments. Well, that's typical and not technically a "problem" – but the issue is this: Nathan set up the file to take one video file, not several files in sequence. He has a few ideas on how we might handle multiple files, but in the meantime, as a test, I thought I'd load all the files back-to-back into Blender and render them out.

This is what the master looks like (Frames 1-1000) before processing in Blender:

So that's the output from Blender, where I put the files in sequence but rendered only the first 1000 frames. It's a good test of whether this approach (creating a new master file that contains all the video in one file) could be a solution.
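For anyone curious, my back-to-back test was nothing fancy. Roughly speaking, it amounts to something like this in Blender's Python console (the filenames are hypothetical stand-ins for the camera's 15-minute segments):

    import bpy

    scene = bpy.context.scene
    scene.sequence_editor_create()

    # Hypothetical filenames standing in for the camera's 15-minute segments.
    segments = ["VID_0001.mp4", "VID_0002.mp4", "VID_0003.mp4"]

    frame = 1
    for i, path in enumerate(segments):
        strip = scene.sequence_editor.sequences.new_movie(
            name="segment_{}".format(i + 1),
            filepath=path,
            channel=1,
            frame_start=frame,
        )
        # Butt the next clip right up against the end of this one.
        frame = strip.frame_final_end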

However, that clip (yes, just 1000 frames) is 4.8 GB at native quality. During my tests I was having to delete files, move things to other drives, and so on, and I eventually ran out of drive space anyway. Short of investing in a killer server farm, there's no way I can take this approach. This one episode is 155,454 frames. We know from the 1000-frame test that each frame works out to about 0.0048 GB, so a master file for this one episode would run roughly 750 GB. That's before editing, before rendering, before converting to equirectangular, and before adding the 3D soundscape.
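That estimate is just simple arithmetic on the test clip, if you want to check it yourself:

    frames_total = 155454        # frames in this one episode
    gb_per_frame = 4.8 / 1000.0  # the 1000-frame test clip weighed in at 4.8 GB

    print(frames_total * gb_per_frame)  # about 746 GB, before any editing or rendering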

Back to the drawing board.

August 4, 2017

Yay, Christa sent our new logo this morning!

She also provided an alternate option, but both Nathan and I thought this was in line with our vision for the program.

Nathan also sent a new file, having figured out how we can import all of the video masters from the camera in sequence. Now I have to re-learn the Blender portion, but hopefully the learning curve isn't too bad. Nathan has been wonderful about documenting the process in detail for me.

September 2, 2017

Now we're getting somewhere! Nathan created a new file, Christa got the logo in, and I've offset the start point and added a fade-in so you won't see me walking away at the beginning of the video.

This is rendered at just 25% canvas size, so it'll look terrible – but it shows the effect is working.

Now, this next one's a big deal! This demonstrates (in very low output quality) frames 26300 through 26900, which makes it the first test to roll over from the first video file to the second. As you can see, the transition from one video to the next turned out to be seamless.

What is achieved here is monumental:
1) The spherical video from the camera is converted directly to equirectangular within Blender: no having to convert the sources to equirectangular first (as would be the case with, say, CyberLink PowerDirector). There's a rough sketch of the camera side of this right after this list.
2) It can now go from one video to the next, to the next.
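On that first point: I don't know the specifics of how Nathan built the file, but as I understand it, the Cycles side of rendering straight to equirectangular comes down to a panoramic camera set up along these lines (a sketch of my assumption, not Nathan's actual settings):

    import bpy

    # Assumes a Cycles scene with an active camera.
    cam = bpy.context.scene.camera.data
    cam.type = 'PANO'                              # panoramic camera
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'   # render straight to an equirectangular frame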

It's starting to feel like we're close to being ready to start releasing! I've got an old 2-processor quad-core Xeon (so 8 total cores, 16 with hyperthreading) which I'm going to retire from my home storage setup. I'm thinking I'll repurpose it into a Blender rendering system… just stick Debian and Blender on it and do it all from the command line… no overhead. It has a miserable video card (it's just an old server, after all) so I'll stick with CPU rendering – but those are some good, powerful CPUs, so hopefully it'll run well, and certainly better than having my laptop out of commission for 2 weeks while the video renders.
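When I get that far, forcing the render onto the CPUs should just be a couple of settings (again, a sketch of my assumption rather than a tested setup):

    import bpy

    scene = bpy.context.scene
    scene.cycles.device = 'CPU'          # the old server's video card isn't worth using
    scene.render.threads_mode = 'AUTO'   # let Blender spread work across all 16 threads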

This is a living blog (being updated as I go) so make sure you check back!
