Foundry Founder Talks Ocula, i3D Live & Stereoscopic Filming

Mar 05, 2011

EXCLUSIVE INTERVIEW WITH THE FOUNDRY FOUNDER SIMON ROBINSON

The Foundry is a world-famous UK visual effects company and software vendor whose software has helped bring Avatar and Tron to 3D life. 3D Focus exclusively interviewed The Foundry Founder and Chief Scientist Simon Robinson, who told us more about The Foundry’s involvement in the BBC i3DLive project – an ambitious project working on digitally creating stereo 3D from standard cameras; the pros and cons of parallel versus converged filming; the surprising problem of colour disparity between cameras in stereo rigs; and how the 3D software Ocula is used to fix such issues in post-production.


3DF: What is The Foundry’s contribution to the BBC i3D Live Project?

Simon Robinson: BBC Lead Technologist Oliver Grau and his team work on a number of different projects, and the one we are involved with is i3DLive. i3DLive is a UK-funded project which involves BBC R&D, us and Surrey University. It is still ongoing and due to finish sometime in the next two months. The project has several different themes, but one of the major ones is this question: if we are capturing multiple viewpoints on set, can we take one of those viewpoints and use another to inform us what a 3D stereo view from it would look like?

3DF: What would be the advantage of free-viewpoint 3D like i3DLive over a conventional stereo rig?

Simon Robinson: Two points – it does involve exploring what we can do with free-viewpoint video, which Oliver Grau mentioned when you spoke to him; that is one component of i3DLive, but from our side it is about generating stereo imagery.

The rationale behind it is twofold – one is that we have, for example, the existing infrastructure around a football pitch in terms of video capture, with conventional cameras dotted around the pitch. Today that is not a stereo set-up, so we are asking whether we can take the standard arrangement of cameras installed around a stadium today and, from each of those camera views, how good we could make 3D look without any of the equipment being upgraded. This is generation of stereo as a post process. So we are asking: can we generate stereo from any one of the camera positions by using the data from the other cameras to give us an idea of what the depth of the shot would be?
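
To make “stereo as a post process” a little more concrete, here is a minimal sketch (our own illustration, not the i3DLive pipeline) of how a second eye could be synthesised from one broadcast camera once a per-pixel depth estimate has been recovered from the other cameras. The function name, the fixed interaxial and the simple forward warp are all illustrative assumptions.

    import numpy as np

    def synthesise_second_eye(view, depth, interaxial_px=20.0):
        # Shift each pixel horizontally by a disparity inversely proportional
        # to its depth: nearer objects move more than distant ones.
        h, w, _ = view.shape
        new_view = np.zeros_like(view)
        disparity = interaxial_px / np.maximum(depth, 1e-3)
        xs = np.arange(w)
        for y in range(h):
            target_x = np.clip(np.round(xs + disparity[y]).astype(int), 0, w - 1)
            new_view[y, target_x] = view[y, xs]
        return new_view  # disocclusion holes would still need filling

A real system would also have to deal with the holes left where foreground objects uncover background, which is one reason the extra camera views matter.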

This is a way off but as a long term goal this is actually quite plausible. For The Foundry, our speciality lies very much in visual effects and we are interested in the scenario where a camera rig, used to film stereo, is slightly less well defined.

3DF: Presumably football matches would be ideal for i3DLive?

Simon Robinson: There are shots you could conceive of, especially tight shots on players for example, where you can’t really guarantee that you have the right number of cameras pointing exactly at the player your tight shot is currently using. For wide shots of the main part of the action it is more plausible, as long as we have the right number of cameras pointing at the same locations.

3DF: Your Ocula product attempts to deal with something called keystoning effects. What are keystoning effects?

Simon Robinson: Keystoning, which is the problem we originally set out to solve, turns out not to be the dominant problem anyway. The whole project kicked off when we had friends down from Weta Digital in New Zealand who were doing pre-production work for Avatar, and between them and us we came up with a set of problems based on our, at the time, fairly limited understanding of 3D. One of those issues was the technical difference between shooting parallel and shooting converged.

There are communities who prefer converged and some who prefer parallel. The detractors of converged shooting point out that the act of toeing the two cameras in towards your chosen convergence point has the artefact that each camera gets a slightly different view of the scene. If you point two cameras, toed in slightly, towards a cube and look at what ends up in each camera, each has a slightly different perspective view of the cube, and therefore slightly different keystoning is visible on the front face of the cube from the left camera compared to the right camera.
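
To picture that keystone effect, the short sketch below (our own illustration, not Ocula code, with arbitrary focal length and distances) projects the corners of a fronto-parallel square through two pinhole cameras toed in towards it and prints the vertical mismatch between the two views – the keystone-induced vertical disparity described above.

    import numpy as np

    def project(points, cam_x, yaw, focal_px=1500.0):
        # Pinhole projection for a camera at (cam_x, 0, 0) yawed about the Y axis.
        c, s = np.cos(yaw), np.sin(yaw)
        world_to_cam = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
        cam = (points - np.array([cam_x, 0.0, 0.0])) @ world_to_cam.T
        return focal_px * cam[:, :2] / cam[:, 2:3]

    # A 1 m square facing the rig, 5 m away; 65 mm separation, converged on its centre.
    square = np.array([[-0.5, -0.5, 5.0], [0.5, -0.5, 5.0], [0.5, 0.5, 5.0], [-0.5, 0.5, 5.0]])
    half_ia = 0.0325
    toe = np.arctan(half_ia / 5.0)
    left = project(square, -half_ia, toe)    # left camera toed in towards +x
    right = project(square, half_ia, -toe)   # right camera toed in towards -x
    print("vertical disparity at the corners (px):", (left - right)[:, 1])

With these modest numbers the mismatch is only a fraction of a pixel, which chimes with the point below that keystone correction turned out to be almost incidental.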

It turns out that the amount of separation and convergence you tend to end up with for a large number of shots isn’t terribly extreme anyway, so initially we focussed too much on keystone correction. In reality there are so many other slight geometrical errors in the way the cameras are positioned, which often have more to do with one camera being lifted slightly compared to the other, pointing slightly higher, or having a slight roll relative to the other camera. All of those swamp the possible keystone effect, so Ocula is used to deal with all the other corrections and the keystone correction turns out to be incidental – but it’s in there!

3DF: What are the disadvantages/advantages of parallel and converged shooting?

Simon Robinson: If you shoot parallel then none of the images converge, so all of the data is behind the screen plane; therefore people who shoot parallel make their convergence choices as a post process. In post-production they will slide the images left and right over the top of each other and choose the point which will become the convergence point, or screen plane, of the shot. That all seems fine, but because you then have to slide the images left and right in post-production you are losing some of your image width in the process. Whatever you have shot is going to be cropped down in post by the amount of the convergence shift; you are losing some of your image assets.
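
Choosing convergence in post on parallel-shot footage amounts to sliding the eyes against each other and cropping. A minimal sketch, assuming the two frames are numpy image arrays from a parallel rig (our illustration, not The Foundry’s implementation):

    def converge_in_post(left, right, shift_px):
        # Slide the eyes horizontally against each other so that features with a
        # left-right disparity of shift_px pixels land at zero disparity (the
        # screen plane). Both images lose shift_px of width, as described above.
        width = left.shape[1]
        return left[:, shift_px:], right[:, : width - shift_px]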

People who care about what they see in camera being what ends up on screen prefer to make sure their convergence choices are done right on set and then hopefully they have to do very little convergence adjustment in post production and they get to keep the whole image.

3DF: Are there really so many noticeable disparities between the cameras in a stereo rig that you need something like Ocula?

Simon Robinson: Well, let me give you the surprising example of colour, which is a massive one. When we started out we didn’t expect colour to be that big a problem, because we thought the people on set are talented, the rigs are impressive, and all that would come out downstream would be a few subtle differences and that would be it.

There is a long story as to why it looked far worse than we thought. When you are filming stereo 3D, the amount of separation you want to force the audience to put up with is fairly small. The amount of positive and negative parallax you can expose audiences to is only about 3% of screen width. Because you do not have a lot to play with when you are shooting, you need to keep everything nice and tight within these bounds, which is why, for 3D shooting of close-up/studio shots, the camera separation is generally very small. It is certainly far smaller than the camera bodies will allow, which is why everyone shoots through mirror/prism rigs.
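
As a back-of-envelope check of that 3% figure (illustrative numbers of our own, not from the interview): on a 1920-pixel-wide frame the entire parallax budget is under 60 pixels, and plugging a typical close-up depth range into the standard disparity relation shows why the camera separation has to be far smaller than two camera bodies sitting side by side would allow.

    budget_px = 0.03 * 1920                   # roughly 58 px of total parallax to play with
    focal_px = 2000.0                         # lens focal length expressed in pixels (assumed)
    z_near, z_far = 2.0, 20.0                 # nearest and farthest objects in the shot, metres
    # The disparity spread between near and far is about
    # focal_px * separation * (1/z_near - 1/z_far), so the largest
    # separation that keeps the shot inside the budget is:
    max_separation = budget_px / (focal_px * (1.0 / z_near - 1.0 / z_far))
    print(round(max_separation, 3), "m")      # about 0.064 m, i.e. only a few centimetres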

Because everyone is shooting with mirror/prism rigs, the glass in them – the reflective part of the process – means one camera is seeing the reflected view, and the act of reflection acts as a polarising filter on the incoming light from the shot. One of your eyes is seeing a view through a linear polariser and the other eye is not. There is a vast amount of material where you can get away with this – there may not be a lot in the shot that is throwing off polarised light, such as reflections, puddles in the street, windows and all that kind of stuff – but there are shots where you really can’t.

There has been a bunch of material let through in productions in the last three and a half years which has had quite extreme polarisation effects due to the way the films were designed. The polarisation effects cause massive colour differences between the left and right eyes, which make stereo viewing that much harder, so something that can seem a trivial problem can become a massively complicated one. It is not something people in a grading suite can trivially correct; it actually takes a lot of manual effort. Every shot requires a somewhat different correction, so there is a lot of dissection work that needs to go on within a shot in order to put it back together with the right colours.
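
For contrast, the kind of single-number fix a grading suite might reach for looks like the sketch below (our own illustration, not Ocula’s algorithm): rescale each channel of one eye so its overall statistics match the other. Because polarisation differences vary across the frame, a global match like this is rarely enough, which is exactly why the per-shot dissection work described above is needed.

    import numpy as np

    def global_channel_match(reference_eye, other_eye):
        # Rescale each RGB channel of other_eye so its mean matches reference_eye.
        # Both images are assumed to be float arrays in the 0-1 range.
        ref_means = reference_eye.reshape(-1, 3).mean(axis=0)
        other_means = np.maximum(other_eye.reshape(-1, 3).mean(axis=0), 1e-6)
        return np.clip(other_eye * (ref_means / other_means), 0.0, 1.0)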

3DF: How does Ocula fix such a problem?

Simon Robinson: Automated would be the wrong word, but it assists the artist massively in getting from nowhere to a really good result with very little tweaking required. For a number of large shows that have been through recently, that has been massive. It is one of the key reasons why Ocula is used.

3DF: Is The Foundry getting involved with 2D – 3D conversion?

Simon Robinson: We have a lot of clients who are already doing 2D to 3D conversion with our software. Obviously the main application we are known for is Nuke, to which Ocula is a plug-in, and obviously not all of our Nuke customers have Ocula. We do, however, have an increasing number of studios using Nuke as part of their pipeline to do conversion work, so we are involved whether we want to be or not!

There aren’t a lot of specialist pieces of software required for 2D to 3D conversion. A lot of what these companies are doing is simply being intelligent about how they apply the existing software out there, and their ability to do the work really comes down to being talented at managing the workflow – that is what makes some companies faster, better and smarter than others. The best thing we can do as software vendors is what we do now: we spend a fair amount of time with these people and they say to us ‘if module x could be a bit faster, or if we could reconfigure it slightly and do it this way, that would greatly improve this part of the workflow’, and we try very much to keep up with those demands. There is nothing about it so far that makes us think we should put a big tick on our box and say we will convert 2D to 3D for you because, in many ways, it is an effects problem like any other and every single facility is doing this in a different way.

3DF: What has The Foundry been involved with, and what is it currently involved with?

Simon Robinson: The big one, where we spent a huge amount of effort, was helping the team at Digital Domain on their work on Tron: Legacy. There was a large amount of effort from people on our team to support the Tron production, particularly in keeping the colour side of the film managed. There are also two major Hollywood productions going on where we are having quite significant day-to-day interaction with the production teams on how the stereo work gets handled in our software, so it is not going away.



For more information about The Foundry visit www.thefoundry.co.uk


