Pixar's Universal Scene Description (pixar.com)
126 points by gdubs on Oct 15, 2013 | 31 comments



Proprietary 3d formats are still a pretty big problem; anything that helps with that is good.

COLLADA tried to help, but formats like FBX have won out even though they are proprietary, bloated, binary, and not well documented, because one company (Autodesk) has no desire to improve interoperability: they essentially own all the major 3d platforms.

There really should be more standardization in 3d by now; this probably came out of necessity for Pixar. It takes things a bit further with full scene descriptions rather than just individual assets.

I have a feeling WebGL will do the most to push standardization of 3d formats/objects/scenes over time, as there is little effort elsewhere. I also hope Unity, as big as they are, would help on this front, as they are sort of doing for animation with Mecanim, although that is still proprietary.


Hi Drawkbox,

Just to be clear, this isn't really a replacement for Collada or FBX. This is really intended for huge complex scenes that still contain live operators.

This also isn't really designed for WebGL. Because it is written in C++, it is really hard to fit into the browser except as a server-side streaming technology with a WebGL front end, which means that for most interactive WebGL experiences it is not applicable.

It isn't really applicable to Unity either, because Unity is real-time and most complex things are usually baked, whereas Pixar's USD is designed for huge, complex scenes.


Would asm.js do the trick? It has OpenGL -> WebGL mapping.


I agree.

Yeah, one could also use NaCl, but that doesn't really solve the issue of local resource assumptions and a complex scripting environment. If you copy all the paradigms into the browser and it is fully dependent upon local resources, why are you running it in the browser at all? You might as well just run a native Python app that uses Qt.

Remember as well that in visual effects a 3TB data set was average a decade ago; it must be in the hundreds of TBs now. You really want to be accessing these as local resources over really good gigabit ethernet.

Some of it can be reused for sure, but direct usage via asm.js will lead to a really poor experience.


COLLADA is a terrible format. It's designed by committee (thanks again, Khronos!).

It's XML, it's overly verbose, the spec is incomplete and has errors in it (at least in 1.4), and there are too many moving parts for it to be useful.

Any format that combines static mesh definitions, animation weighting and keyframe data, shader programs, material definitions, and scene data, and also has ways of smuggling in additional stuff, is doomed to fail.

.OBJ still wins out for simple static meshes, dumb as it is. Other formats win out for animation.

What we ended up doing was defining some simple binary formats and shoving the metadata and the rest into .json, and zipping the things together. Boom. Done.
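
Roughly, the idea looks like this (a made-up sketch, not our actual tooling - the names and layout are invented): raw vertex data goes in a small binary blob, everything else goes in JSON, and the two get zipped together.

    import json, struct, zipfile

    # Hypothetical asset: float32 positions in a binary blob, everything
    # else (names, counts, which blob is which) in JSON, zipped together.
    positions = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0]
    meta = {"name": "triangle", "vertex_count": 3,
            "buffers": {"positions": "positions.bin"}}

    with zipfile.ZipFile("triangle.asset", "w") as z:
        z.writestr("positions.bin", struct.pack("<%df" % len(positions), *positions))
        z.writestr("meta.json", json.dumps(meta, indent=2))

    # Loading is the reverse: read meta.json, then unpack the named blobs.
    with zipfile.ZipFile("triangle.asset") as z:
        meta = json.loads(z.read("meta.json"))
        blob = z.read(meta["buffers"]["positions"])
        positions = struct.unpack("<%df" % (len(blob) // 4), blob)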

EDIT: I don't come from a cinematic CG background, so my sensibilities are weighted more towards realtime and gamedev.


Gamedev and CG will merge, with gamedev winning out as compute bandwidth increases. The bottom end always eats the top.


This format isn't really that relevant to Unity or WebGL (as others have also pointed out).

But you might want to keep an eye out for OpenGEX (the Open Game Engine Exchange format, http://www.opengex.org), which is supposed to be the result of this IndieGoGo campaign: http://www.indiegogo.com/projects/open-3d-model-exchange-for...

It remains to be seen whether OpenDDL or OpenGEX will ever become more than a single-vendor format.


It's disappointing that Autodesk seems to have eaten everything, but does WebGL necessarily help to standardise any sort of 3D asset format?

The (few, simple) WebGL examples I've looked at seem to use JSON to transport the assets from the server to the browser, but the particular representation used is apparently arbitrary.

I admit I'm thinking by analogy: I don't see OpenGL itself having helped standardise asset formats, so I'm wondering if WebGL is different here.


There's a new Khronos project, glTF, which wants to establish a standardized run-time file format for 3D assets: http://www.khronos.org/gltf. Geometry is stored in binary blobs so it can be dumped directly into buffer objects, and the scene description is in JSON for simple (and somewhat fast) parsing. Unlike FBX or COLLADA, this is meant as an "engine file format", not as an asset exchange format between 3D packages.
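
To give a flavour of it (heavily simplified, and not the actual glTF schema - the real spec has accessors, bufferViews, and so on): the JSON part describes the scene, and the geometry arrives as a blob that can go straight into a buffer object without per-vertex parsing.

    import json, struct

    # Heavily simplified, glTF-flavoured illustration (not the real schema).
    # Pretend this blob arrived over the wire next to the JSON description.
    blob = struct.pack("<9f", 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0)

    scene = json.loads(
        '{"meshes": {"tri": {"count": 3, "stride": 12, "byteLength": 36}}}')

    mesh = scene["meshes"]["tri"]
    assert len(blob) == mesh["byteLength"]
    # No per-vertex parsing: in WebGL this blob goes straight to
    # gl.bufferData(gl.ARRAY_BUFFER, blob, gl.STATIC_DRAW).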


glTF seems nice. We are currently adding glTF export support to our tool - http://clara.io. glTF is very similar to the run-time format used by both http://Verold.com and http://Sketchfab.com, but it is standardized.

glTF is based on COLLADA pretty closely, I understand.


I don't think WebGL will have any effect here. The requirements for runtime asset formats vs. interchange formats are completely different.

Generally runtime formats are designed to optimize out all the cruft that interchange formats like COLLADA and FBX have to keep around in order to be portable.

This is the reason most engines (including ours, PlayCanvas) use custom model formats at runtime. They are specifically designed to work well in a particular engine.

In Autodesk's defence, the FBX SDK loads COLLADA and OBJ files very well, so we're able to support all these formats easily. The downside is that other software (such as Blender) has poor FBX support because they can't (or don't want to) use Autodesk's SDK.


I didn't know the FBX SDK loads Collada. Thanks Daredevildave. We've had our own Collada pipeline for http://Clara.io, but maybe we can replace that.

I have to say that I don't see Pixar's USD being useful to PlayCanvas, as PlayCanvas is focused on real-time, whereas USD is focused on huge, complex scene transfers. Maybe you could take a USD and bake it to static assets, but that would be about it. I say this because our Alembic tool set (which is related to USD) is barely used in the games industry (and never used as a run-time format), although it is incredibly popular in the film/animation industry.


It's not standardised. Different frameworks provide parsers for different formats.


This looks more like RIB on steroids than an FBX replacement. FBX started out as Kaydara's "product" (more of an integration scheme for Filmbox/MotionBuilder); Autodesk brought it in through acquisition, as with all of their tech. Actually, if you think about it, Autodesk manages their products really well considering they are a soup of different companies, products, cultures, everything.


There's also the Verse protocol for networked 3D.

http://verse.github.io/

The Love MMO (http://www.quelsolaar.com/love/index.html) is built on top of it, and there was partial support for it in Blender for a while. I don't know if that is still there.


Alembic seems to be helping a lot. It's not a full scene description, but as a way of getting stuff from animation to everything afterward, it's pretty darn good.

Programs like Clarisse iFX (which is pioneering a totally new way of assembling scenes - very impressive) have basically been made possible by Alembic.


Clarisse has really just exposed, as off-the-shelf software, already-existing methods that the high-end VFX studios were using proprietary tools for. They've not pioneered anything other than the pricing.

Katana's been doing the same for 10 years at SPI, and companies like ILM, Weta, etc. have had their own equivalents (to some extent - not as good as Katana, which is why a lot of studios are adopting it), written in Maya, for asset-based pipelines for a while.


While I'm no expert on either, having seen a few product demonstrations of both, could you explain why you feel Clarisse is equivalent to Katana? I understood Katana to be a renderer-agnostic lookdev tool. Clarisse has its own high-performance renderer and seems to have a broader scope.


Clarisse is like a cut-down version of Katana with a built-in renderer.

It's also got limited layer-based compositing functionality.

It's cool in itself, but it's a bit like a "poor man's" Katana in terms of what you can do with it from a high-end pipeline perspective, and Katana can scale to huge scenes, which I'm not convinced Clarisse can (this isn't counting replicators or instances, which are cheap and easy to do and which both applications cope with) - I'm talking unique geometry in the billions of polygons.

Generally the big studios will connect Katana to PRMan or Arnold to do the heavy lifting at render time.

They are both procedural applications for describing and editing scene descriptions, which allows for very powerful workflows: overridable attributes propagate down the scene graph, and they both allow lazy evaluation, meaning you don't open the source asset files until you need to.
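
Very loosely, and nothing to do with either application's actual API, those two ideas look something like this:

    # Toy sketch of those two ideas (not Katana's or Clarisse's real API):
    # attributes override/inherit down the scene graph, and source asset
    # files are only opened when a location is actually expanded.
    class Location:
        def __init__(self, name, parent=None, attrs=None, asset_path=None):
            self.name, self.parent = name, parent
            self.attrs = attrs or {}
            self.asset_path = asset_path   # e.g. an .abc file on disk
            self._payload = None           # not loaded until needed

        def resolve(self, attr):
            # Walk up the hierarchy: the nearest opinion wins.
            node = self
            while node is not None:
                if attr in node.attrs:
                    return node.attrs[attr]
                node = node.parent
            return None

        def expand(self):
            # Lazy evaluation: only now is the source asset file touched.
            if self._payload is None and self.asset_path:
                with open(self.asset_path, "rb") as f:
                    self._payload = f.read()
            return self._payload

    root = Location("root", attrs={"shader": "plastic"})
    chair = Location("chair", parent=root, asset_path="chair.abc")
    print(chair.resolve("shader"))   # "plastic", inherited from root
    chair.attrs["shader"] = "wood"   # local override on just this location
    print(chair.resolve("shader"))   # "wood"; chair.abc still never opened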

But Katana's got awesome recursive procedural support which, linked to PRMan and Arnold, allows rendering of literally infinite scenes, as well as the CEL language, which lets you script node behaviour and modification down the scene graph, basically allowing you to do almost anything in terms of defining nodes, attributes, or items in the scene graph.

Basically, Katana's a huge toolbox that the big VFX studios are using to fit (some of) their pipelines around and write their own tools for. Clarisse is more like an off-the-shelf, out-of-the-box tool which can do some cool stuff that is useful to small teams and maybe independent artists, but it pales in comparison to Katana in terms of capabilities.


Ok, interesting to hear your perspective. You are a Foundry developer though, right? I'd have to give the Isotropix guys the opportunity to make their case as well.


For a bit longer, yeah :)

Of course, and they'd rightly say that Clarisse is useful out-of-the-box, whilst Katana most definitely isn't - you need a renderer, it needs a lot of pipeline work to integrate it with your asset management system, and it needs a lot of customisation to make it "nice" for artists to use.

Out of the box, I wouldn't even class Katana as a good look-dev tool, as the default light manipulators are pretty poor - most studios customise them and make their own versions for artists. But that comes back to my original point (which I deviated from quite a bit), which is that Katana is more a toolbox with very impressive (but not perfect by any means - there are issues with it) infrastructure capabilities that VFX studios can build on to fit into their lighting and rendering pipelines.


Fair point - I'm coming at this from the perspective of someone who doesn't get to play with the proprietary tools :)

For the animator in the street, as it were, Clarisse looks pretty impressive.


This is a nice addition (as in complementary) to the Alembic format; see here:

http://en.wikipedia.org/wiki/Alembic_(Computer_Graphics)

In high-end visual effects it should be quite useful. Just to be clear, it isn't intended for games or real-time 3D graphics. It is designed for huge, complex scenes where one wants delayed loading and renderer integrations.

I have been hearing about this for some time.

It isn't clear how quickly this will be embraced by 3D content creation vendors, since it can significantly increase interoperability and reduce lock-in.

Here is the overview of the format and motivation:

http://graphics.pixar.com/usd/overview.html


I wouldn't call it an "addition" exactly... It could be used in conjunction with Alembic, as the overview points out, with the Abc files being used as the leaf nodes for the actual geometry representation.

But the SceneGraph representation (USD) will need to be the starting point, as Alembic doesn't (yet) provide a nice way of doing overrides or referencing other external files, which is what this spec provides.
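
A crude way to picture "referencing plus sparse overrides" (made-up data structures, nothing to do with USD's actual format or API): a shot layer stores only the opinions it changes, and everything else comes from the referenced asset.

    # Crude sketch of referencing + sparse overrides (made-up structures,
    # not USD's actual file format or API). A shot layer records only what
    # it changes; the rest comes from the referenced asset file.
    import copy

    asset_library = {
        "chair.asset": {"geometry": "<heavy mesh data>",
                        "material": "oak", "scale": 1.0},
    }

    shot_layer = {
        "chair_1": {"reference": "chair.asset",
                    "overrides": {"material": "mahogany"}},
        "chair_2": {"reference": "chair.asset",
                    "overrides": {"scale": 2.0}},
    }

    def compose(prim):
        resolved = copy.deepcopy(asset_library[prim["reference"]])
        resolved.update(prim["overrides"])   # the stronger layer wins
        return resolved

    print(compose(shot_layer["chair_1"])["material"])   # mahogany
    print(compose(shot_layer["chair_2"])["material"])   # oak, untouched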


I meant "addition" in the sense of a complimentary tool set to Alembic. :) But you are right that it can be misinterpreted.


It should be noted that this is designed for huge production scenes, with lots of overrides of attributes down the SceneGraph and lots of referencing.

This is not a replacement for FBX or other object interchange formats - it's for entire scenes.

Most big VFX studios will already have their own version of this in use. What's interesting is that it's Pixar pushing this, and they don't tend to work with other VFX studios on sharing assets and shots.


Is this something that POVray could use? I enjoy playing around with POVray a bit, but it would be cool to have a standard language to move between small, consumer-oriented tools.


For someone not from the world of 3D rendering and animation, can someone please explain in layman's terms exactly what a scene description format does beyond the obvious?


USD being used in Katana: https://vimeo.com/76739820


The PDF mentions Ogawa as a possible replacement for BDB (BerkeleyDB) - does anyone know what it is?


Ogawa is a new multithreaded binary backend for the Alembic file format. I wrote about Ogawa here:

http://exocortex.com/blog/alembic_is_about_to_get_really_fas...



