Hacker News
Thor – A Project to Hammer Out a Royalty Free Video Codec (cisco.com)
422 points by TD-Linux on Aug 11, 2015 | 164 comments



"We also hired patent lawyers and consultants familiar with this technology area. We created a new codec development process which would allow us to work through the long list of patents in this space, and continually evolve our codec to work around or avoid those patents. Our efforts are far from complete, but we felt it was time to open this up to the world."

This burden is becoming far too great when this is the cost necessary to achieve innovation.


Amen. As Thomas Jefferson (correctly) described patents, they're a part of positive law, not natural law. In other words, their only justification is pragmatic, not moral. You can't "own" an idea the way that you can own a couch or a car. We allow for (temporary) patent protection because it's supposed to encourage innovation and help our economy. If it doesn't - and it's clearly reached the point where it hinders rather than helps innovation - then we need to change things.


To be clear, because some forget this part of patent law, the encouragement towards innovation was to encourage people to share and build on a communal set of ideas. The temporary monopoly on ideas was the carrot to get people to register their ideas in a central ___location (patent office) rather than lock ideas behind closed doors and secrecy.

Too many people think the carrot of the temporary monopoly was the point of the patent office.

In other words, we know that the community of ideas gets better with more sharing and building off of other people's ideas. As a society we decided to make laws to encourage this sharing. As a result, technological progress was immense.

Sadly, the current situation seems to strongly suggest that we may need to find better ways to encourage sharing (Open Source has been great for much of software).


Copyright was built with a similar purpose - enriching the public ___domain - and has failed in a similar manner.

Both patents and copyright are failed experiments. They weren't meant to 'benefit creators' or 'guarantee an income', and they cannot take that role in a healthy society.


Look at the direction of TPP, which mandates stronger copyright but has optional public rights:

https://www.techdirt.com/articles/20150805/15521431864/why-d...

"...the copyright section includes thirteen "shall provide" and just the one measly "shall endeavour." And if you add in the enforcement section you get another thirty eight "shall provide" and just a single "shall endeavour" buried in a footnote unrelated to the key points in the document.

So, for those of you playing along at home, the message being sent by the TPP is pretty damn clear: when it comes to ratcheting up copyright and setting the ground rules for enforcement everything is required and every country must take part. Yet, when it comes to protecting the rights of the public and making sure copyright is more balanced to take into account the public... well, then it's optional."


> Copyright was built with a similar purpose - enriching the public ___domain - and has failed in a similar manner.

Copyright has failed in an altogether different manner. The trouble with copyright is what has been erected to enforce it: Laws against circumventing DRM that encourage monopolization of copyright-adjacent markets, absurd statutory damages, internet censorship, easily abused takedown schemes. And the term is far too long.

But if "copyright" is only the ability of an author to sue copyright infringers in court for actual damages, it's basically harmless. If you don't like proprietary software then you can excise it from your life by simply not using it, and actually doing that is continually becoming more practical as free software improves.

The trouble with software patents is that you can't do that. There is no option to build your own system because independent creation is not a defense. And that failure is inherent in the nature of what a patent is. You can't fix it without eliminating software patents entirely.


Copyright has succeeded a thousand times more than it has failed. It's pretty great for the most part. It enables a vast, vast number of jobs and new creations. It could certainly be better, but I'd much rather have copyright as it exists today than have it not exist at all. And don't forget that the GPL is only enforceable due to copyright law.


Good point about GPL. Some people advocate for much shorter copyright terms, just enough to make it worthwhile creating the work, like say 20 years. But that would push some GPL software into the public ___domain, thus damaging the GPL ecosystem by allowing commercial competitors to use the same code.


Pardon me if I have trouble seeing a terrible loss for the GNU ecosystem if releases from 1995 were transferred to the public ___domain.


There are some very old but hard-to-do-yourself things like compilers, early versions of Linux, etc. Sure, it won't be massive, but in the long term it'd slow the growth of the ever-expanding collection of GPL work as the old stuff leaks out the back end.


We are talking about works which would by then have had no patches for 20 years. Even compilers for dead languages get minor patches over time, if nothing else to fix memory leaks, crashes, infinite loops and compatibility with new OS kernels. You would, for example, have to allow mapping of low memory addresses in the kernel to run many 20-year-old programs, since many programs back then assumed they would start at address zero, which modern kernels forbid for security reasons.


A big difference that can make it reasonable to have long or even infinite copyright terms is that copyright works are the creations of people's minds, not discoveries about nature. Somebody else creating something doesn't prevent you from creating your own thing that's just as good, or even the same thing if you can show you did it independently. But somebody else patenting a method for compressing video does prevent you from independently creating and selling your own video compression method if it happens to work the same way. Maybe that's the best way to compress video and the world would be denied it as long as the patent remains valid.

So nobody really needs to use other people's copyright works. Unlike patented ideas, you can always create your own instead. Of course, sometimes it's expensive and feels like a waste of effort reproducing what's already been done, but it's still not a restriction on doing your own thing the way patents are.

The way I see property ownership is that it's like a store of human labor. When you do work, you might get money from it, you might get a physical object from it (I made a table, now I own that table), or you might get a copyright work from it. All of these things can hold the value of the work you did and can be used to trade for other things. I don't think it's unreasonable to imagine that in a fair world, the people who did that work can keep the stored value they created as long as the market is willing to trade things for it. In the case of computer software, some value is lost whenever somebody copies your work, since you're not able to sell it to them anymore. Value is also lost if the demand falls (obsolete software), but that's the risk of investing your labor in something with an uncertain lifetime, just as the value of a table is lost if fashions change and people want steel tables instead of wood.

Some copyright work has value and that's why people want to copy it, so they can gain that value for themselves at the expense of the copyright holder. But wouldn't it be more fair to make your own, instead of siphoning value out of other people's work without compensating them?


It's also less effective because of double-dipping across intellectual property systems.

For a single part of a software product, a company can enjoy trade-secret protection, copyright, and patent protection... and then multiply the force many fold with burdensome, one-sided 'license agreements'.

This is possible because the bar for disclosure in patents has basically been put at zero. If the patent teaches the invention in question to a single hypothetical omniscient expert in the art, then it meets the disclosure bar. In many fields patents are obfuscated beyond belief and it is pointless to read them: you'll learn nothing.

Simultaneously, parties keep their source code secret but distribute binaries, which are obfuscated forms of the source that effectively hide their design, and they use the force of the anti-circumvention provisions of copyright law to threaten people who would reverse engineer the software (perhaps to extract some information about the 99.9% of non-patented ideas in it). And yet they enjoy the same long-lasting copyright protection as other works which are made transparent through their publication.

The patent system on its own is unbalanced, especially for software goods, which have much lower resource requirements for their exploration... but when coupled with the other schemes that were hardly envisioned to overlap, the result creates incredible costs, stifles innovation, and is generally bad for society.


Hate to break it to you, but your treasured natural laws are just as man-made as the positive law that you dislike.

We allow for (temporary) ownership of property because it's supposed to encourage benefits to society of various varieties. If it doesn't - and it's starting to become clear that, say, inequality in the US is becoming too high - then we need to change things.


Natural law doesn't exist. It's essentially a religious belief Jefferson had.


Natural law does exist. The difference between natural law and laws of a civilisation? I'll let Feynman hint at it... "Reality must take precedence over public relations, for nature cannot be fooled". To give the context, this was said in relation to the Challenger Shuttle explosion.


I think what GP meant is that it's like the "Scroll direction: natural" option that appeared in OS X system preferences after Apple changed the default scroll behavior to match what happens on mobile devices.

I am guessing that a lot of us "see what they did there".


Evidence showing the existence of natural law?


Natural (physical) laws are presumed to exist. Natural (social, moral) laws don't. (At least not in a readily usable form.)


Natural laws are laws that any rational nation would arrive at... well, naturally. Killing is bad, so is stealing, etc... These are natural laws. Anyone capable of rational thought can see how these things would ultimately be bad for any society.

The others are laws that a nation creates as a result of specifics surrounding itself. Patents are one such example. These exist solely to spur the sharing of ideas by encouraging inventors to document and share them in detail so that they can later be used by society once the patent expires. A temporary monopoly is granted on an idea to encourage this sharing. This way, ideas aren't lost to eternity simply because the inventor didn't have the means to fabricate them. This concept is purely a construct of western nations and culture. It's unlikely an isolated society would arrive at a similar construct naturally.


Even forbidding killing isn't a natural law. In many societies it has been legal to kill slaves, animals, foreigners, gays, people who break the artificial laws, etc. So killing is often seen as good for society. No laws of property ownership, right to life, right to freedom, etc are natural. They're just things that some societies, sometimes have decided are worth restricting.


To quote the parent to your comment: "Natural laws are laws that any rational nation would arrive at..."


"Rational" isn't objective; it's apparently rational for parts of the US to kill its citizens for committing certain crimes, while most developed nations do not.


Every country with an armed army or police force would be classed as irrational by that standard.


Are we saying you can no longer commercially develop a video codec?

If it's so good for the industry, the patent holders could just choose to make it free. Everyone else could choose to :gasp: pay.

I actually agree with your point. I just want to make it clear what that means. When it comes to patents, to me a compression algorithm is the closest thing in software to a mechanical device. At that level you could patent equivalent ideas as hardware. Are we saying no more patents on mechanical inventions or hardware?

For a long time I've thought of the patent office letting crummy patents through as a separate issue from having patents at all. It also never made sense to me to separate out software patents from anything else.


The "crummy patents" and "having patents at all" are very much intertwined. The problem is the bit about obviousness to someone skilled in the art. Even though I am a programmer, I know nothing at all about video codecs, so nothing seems obvious at all. But to someone skilled at making codecs, it's a different matter.

I remember very clearly when Ogg Theora was being developed the difficulty they had in choosing a technique that would work and was not already patented. It's not like they were looking up algorithms in a big book and saying "I wonder if that one is patented". They were coming up with techniques independently and then having to search to see if it was patented.

At what point is something obvious to a person skilled in the art? What should be patentable? Should you be able to get a patent across a whole field of techniques because you managed to implement one example of that technique?

The overall approach might be obvious to someone skilled in the art, but the devil is in the details. If someone can patent the overall approach because they have implemented an example of that approach, then it shuts down everybody else. If you have a company that goes around buying up (or making strategic partnerships with companies that own) patents that cover all conceivable approaches, then they can completely lock down any new developments for a couple of decades.

This is the reality of codec development right now. Is this what we want? Is it good for the industry and society in general?

Imagine as a programmer being told, "No matter what idea you have, it is already patented. You are not allowed to program without paying someone a fee. If they decide not to sell to you, then you can't program at all". That's the world of a codec developer. It's something that I personally do not want.


You can program it, you can use it. You might have trouble distributing it or selling it.

Is it better to patent something and tell the whole world how to do it? Or is it better to just keep it secret? It seems like some refinements to when you can sue might be in order; specifically, I might outlaw suing if you don't market an implementation of your own. That radically changes the value of a lot of property though.


Actually, the crazy thing about patents is that you can't legally program it, even for your own use.

As a programmer, I am firmly against software patents. They are a burden only. How many programmers trawl through patents thinking, "I'll look for useful techniques so that I can license them in my program"? Even if you wanted to, you couldn't because there are millions of them. And if you start looking, you may get sued for wilful infringement if you forget about something you saw and start using it.

That's aside from any arguments about the patentability of ideas rather than inventions. I would be 100% in favour of software patents if the implementation were covered (i.e. the source code). If I used a different implementation for the same idea, then the patent wouldn't hold. Of course, such a patent would be useless in a world where software is covered by copyright.


You absolutely can implement it for your own use, research, etc. Distribution and sale of code implementing patents is potentially problematic.


"Is it better to patent something and tell the whole world how to do it? Or is it better to just keep it secret?"

That's quite the false dichotomy.


> Is it better to patent something and tell the whole world how to do it? Or is it better to just keep it secret?

Where did this 'patents are good because they make inventions public' idea suddenly come from?

I've seen it bandied around a lot lately from people who are uncomfortable with directly supporting patents.

No. It's not better. Don't be ridiculous.

> Or is it better to just keep it secret?

How do you imagine anyone will do that?

Imagine I come up with a new compression algorithm that achieves 1:4 compression on 80% of compressed files and normal 1:2 on the rest.

In a patent system I can:

1) Use it privately and not tell anyone. Keeping it secret.

2) Patent it and license it to other people. I risk being sued out of existence by existing patent holders and trolls.

In a non-patent system I can:

1) Use it privately and not tell anyone. Keeping it secret.

2) Share and sell it as a black box implementation. People will immediately reverse engineer the compression method.

???

How are privacy and secrets an issue here?

In both cases (1) is the best choice if you don't want your competitors to get access to your algorithm.

In the patent case it's easier for 3rd parties to find the implementation details by doing no work themselves. It's also significantly more risky.

In the non-patent case, people have to actually work to reverse engineer the implementation, sure, but then they're free to use it. There's also no risk in selling and distributing the product.

So, let's see here, things which are better, since both paths lead to the algorithm being made freely available in the end:

1 - Do research, at risk that you'll get sued into oblivion the moment you publish & sell. Even if you don't immediately get sued, you have zero temporary competitive advantage because you just told every competitor what you're doing.

2 - Do research and have temporary competitive advantage once you release it?

The only people who win in the patent scenario are the lawyers.

The 'secrets are bad' argument is a straw man; first you set up the straw man (but then we would never get to know the secret details!!?!!), then you punch it a few times (but sharing knowledge is good! How will the global body of knowledge grow if everything is just secrets??).

It's just daft.

So yes, "It seems like some refinements to when you can sue might be in order"; indeed; ie. never.


Patenting your algorithm is going to make you more likely to be sued by patent trolls? That's not how it works... More importantly, why aren't the hevc groups doing this to each other?

It's not about privacy, it's about the competitive advantage of inventing something better.

Why do insanely profitable companies get patents?


Either to help them wage war against less profitable companies or to help defend themselves against an attack from a fellow superpower.


The thing with ideas is that many of them aren't really unique or particularly innovative. Most build on other ideas and concepts, and this includes audio, video and other compression schemes. The issue at hand is that many ideas take heavy manufacturing and physical models for testing, which are expensive and time consuming, whereas an idea expressed in software takes an extremely small fraction of those resources.

I've always felt that if we allow for software patents (which I'm not sure even that is a good idea), then they should be much more limited (say 5 years) where commercial costs can be recouped, and some profits made as well as being a short enough time that the greater society can still benefit.

Both the scale and scope of what is being approved regarding software patents in this country are ridiculous compared to the natural rate of change and innovation... 20 years ago the average computer would have a lot of trouble trying to display a 1080p video stream. Today just about everyone has something in their pocket that can handle this. We can't limit software expression and bind it for 20 years at a time, for ideas that take a fraction of that time for multiple people to come up with and implement.


Tesla (edit: SpaceX) is not bothering to file patents because a patent is just a blueprint for countries like China, which quickly build knockoffs on someone else's R+D dime.


You are confusing Tesla and SpaceX. SpaceX is not filing for patents.


Didn't Tesla open all their patents as well?

http://www.teslamotors.com/blog/all-our-patent-are-belong-yo...


Do you have any source for this grand claim?


Well put. It's crazy that they're pumping that much money paying lawyers in order to avoid using work that someone else has already done. They're forced to waste money and engineering talent in order to re-invent a technology due to our fucked up IP legal landscape.


Not just re-invent, since if you happen to invent something by yourself the same way as it happens to have been patented then you're still in trouble.


Or (I'm assuming, IANAL) you could pursue a court battle, and attempt to prove that your independent invention means it's too obvious to be patentable.


I think you mean "too obvious" (can't be too novel)

I think it would be a big mistake to admit that yours is the same, because if this doesn't carry you've effectively said you are infringing.


Or they could pay the people who invented it already.

oh that's right. The whole point is they don't want to.


That's because it's prohibitively expensive and negates the possibility of its inclusion in free products. At least, according to the blog post.


Older codecs were not prohibitively expensive, otherwise the proliferation of video products using patented codecs would not exist.

His complaint against H.265 will likely be fixed, and then it too will catch on, just as H.264 did, despite these same types of complaints against it.


> Older codecs were not prohibitively expensive, otherwise the proliferation of video products using patented codecs would not exist.

Because audio and movie pirates don't care what codecs they use, and VLC has supported pretty much everything you throw at it, patents be damned.

What do you think most early iPod adopters filled the disks with? Surely no legally bought music, lol.


There are many more products besides the few that skirt the law.

>What do you think most early iPod adopters filled the disks with?

The first iPods shipped with decoders for several patented codecs, including mp3.

This supports my claim that the codecs were not prohibitively expensive: gadget makers could afford the royalty payments to codec patent holders.


Apple refused to release QuickTime 6 until the MPEG-LA licensing for MPEG-4 was changed. This supports the claim that codec licensing has always been a contentious area.


I agree it has always been contentious. I disagree that it was "prohibitively expensive".


Well Apple thought it would kill the market so I guess we can all decide who to believe.


As has been said elsewhere, this is not the cost needed to achieve innovation; this is the cost needed to avoid paying others who have already invented something before.

And as is often the case with patents, the outcome is more invention. All those workarounds are also innovation.


Many readers would take issue with your use of the word "invented" in this context. Was the Pythagorean theorem also "invented" in your opinion?


Even mathematicians are conflicted over whether mathematics is discovered or invented. My personal belief is that if something is true about the world without the presence of man, it is there to be discovered. If something is created that could not exist without man, it is invented. Of course, there are lots of shades of gray in between, but let's face it: video codecs exist only because of things man created. They are definitely invented, even though the principles on which they operate may be discovered. Example: taking the DCT of an image results in a particular pattern of numbers. That is a discovery. Exploiting the repetition in that pattern to achieve compression is an invention.
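
To make that concrete, here is a minimal Python sketch (assuming NumPy and SciPy; the block and threshold are made up for illustration) of the pattern a codec exploits: the 2-D DCT of a smooth 8x8 block concentrates almost all of its energy in a handful of low-frequency coefficients, so the rest can be zeroed with little loss.

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        # Type-II 2-D DCT with orthonormal scaling, JPEG-style.
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(coeffs):
        return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

    # A smooth 8x8 block (a gentle gradient) stands in for typical image content.
    x = np.arange(8)
    block = 16.0 * x[:, None] + 8.0 * x[None, :]

    coeffs = dct2(block)
    # Keep only the significant coefficients and zero the rest ("compression").
    kept = np.where(np.abs(coeffs) >= 0.01 * np.abs(coeffs).max(), coeffs, 0.0)

    print("coefficients kept:", np.count_nonzero(kept), "of 64")
    print("max reconstruction error:", np.abs(idct2(kept) - block).max())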


Actually, I'm still rooting for Daala (from Xiph.org, the same folks that did so well with Opus). It's still a long ways away from being finished, but their work is awesome and I've been following it for a while now!

Either way, having another effort competing to make a great format is not a problem. Here's hoping it goes well!


Hello, I'm the Daala tech lead.

One of the things that made Opus a success was the contributions of others. We certainly don't have a monopoly on good ideas. We'll take pieces of Daala and stick them in Thor and pieces of Thor and stick them in Daala, and figure out what works best. Some of that experimentation has already begun:

https://github.com/cisco/thor/pull/8

https://review.xiph.org/874/

https://www.ietf.org/proceedings/93/slides/slides-93-netvc-5...

Because none of us have a financial incentive to get our patents into the standard, we're happy to pick whatever technology works best, as long as we end up with a great codec at the end. Hopefully NETVC can replicate the success of Opus this way.


It's this kind of awesome attitude that makes me so happy whenever I look into a Xiph project. I certainly did not mean to imply that the efforts of the org in OP should be moved to Daala and that competing projects are a bad thing; far from it! Rather, I was just voicing my support for a similar project that has had some great progress so far.

You all have done a great job in the past and I'm always on the edge of my seat to see what incredible things you will do. Keep up the rockin' work!


Yeah, we're going to have to start arguing on what to name the hybrid soon, since 'Vopus' is already taken.


Brainstorming fun names combining "Thor" and "Daala" reminded me of the Robert E. Howard villains "Thulsa Doom" and "Thoth-Amon". :)


I propose Kollóttadyngja!


I hope you could do a comprehensive write-up about how codecs work.

Your articles, such as this one http://people.xiph.org/~xiphmont/demo/daala/demo1.shtml, are the best codec introduction I could find online, but I still couldn't follow, say, the deblocking part.

There doesn't seem to be a book covering this area.


Is Thor based on similar principles to Daala (i.e. like lapped transforms) to make it useful for merging?

And unrelated question, what will be the name of the merged codec? I hope it won't remain as NetVC, as that name is awful.


According to the spec, Thor is a more traditional block-based codec with quadtree subdivisions, in the vein of H.264 and onward.

Beyond that, the design is rather conservative and different from Daala's "start over from scratch" methodology. It doesn't look like the two would be poised to merge directly a la Opus at this point, but rather to be mutual test beds and to borrow and exchange unencumbered ideas that work well.
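
For a rough feel of what "quadtree subdivisions" means, here is a toy Python sketch of the general idea (not Thor's actual split-decision logic; the variance-threshold rule and the synthetic frame are made up): a large block is recursively split into four quadrants while its content is busy, so flat regions get large blocks and detailed regions get small ones.

    import numpy as np

    def quadtree_partition(frame, x, y, size, min_size=8, var_threshold=100.0):
        """Return (x, y, size) leaf blocks covering frame[y:y+size, x:x+size]."""
        block = frame[y:y + size, x:x + size]
        if size <= min_size or block.var() <= var_threshold:
            return [(x, y, size)]
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += quadtree_partition(frame, x + dx, y + dy, half,
                                             min_size, var_threshold)
        return leaves

    # Synthetic 64x64 "frame": flat background with a noisy patch in one corner.
    rng = np.random.default_rng(0)
    frame = np.full((64, 64), 128.0)
    frame[:16, :16] += rng.normal(0, 50, (16, 16))

    blocks = quadtree_partition(frame, 0, 0, 64)
    print(len(blocks), "leaf blocks; sizes used:", sorted({s for _, _, s in blocks}))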


I see, thanks.


Apparently, as part of the NetVC[0] effort, Daala has started experimenting with Thor tech; Thor's been added to AreWeCompressedYet[1], and they looked into integrating Thor's motion compensation and its constrained low-pass filter into Daala during the IETF 93 NetVC hackathon[2] a few weeks ago.

[0] https://tools.ietf.org/wg/netvc/

[1] https://arewecompressedyet.com

[2] https://www.ietf.org/proceedings/93/slides/slides-93-netvc-5... (PDF)


I'm betting on Daala and Thor. Both the teams involved are motivated by the same goal: to make a royalty-free codec. I think the odds are good that the IETF standards process will result in combining the best ideas of Daala and Thor and reviewing them from a patent point of view. If the guys doing VP10 wanted to work with us in an open way, that would be great too.


Opus was actually a similar combination of two codecs, CELT from Xiph.org and SILK from Skype.

So similarly, there are now two projects, Daala from Xiph and Thor from Cisco, which are both preliminary but being worked on in the same group, so hopefully the best parts can be taken from each to produce a new, better open codec.


There was a great talk about Daala at Linux.conf.au 2015 (https://www.youtube.com/watch?v=Dmho4gcRvQ4). It has both general, and technical information about Daala, so it's well worth a watch.


looks interesting, Opus is great


The MP4 patent situation needs another close look. MP4, which was first standardized in 1998, ought to come out of patent soon, if it hasn't already. There are a few remaining patents in the MPEG-LA package, but they're mostly for stuff you don't need on the Internet, such as interlaced video, font loading, error tolerance for broadcast, and VRML. This hasn't been looked at hard since 2011[1] and it's time for a new look. Some of the key patents related to motion compensation expired last April.[2]

It looks like the last patent on MP3 audio decoding expires next month.

[1] http://www.osnews.com/story/24954/US_Patent_Expiration_for_M... [2] http://scratchpad.wikia.com/wiki/MPEG_patent_lists#MPEG-1_Au...


If by MP4 you are referring to H.264, there are still many years remaining on most of the patents. MPEG-LA publishes patent lists, if you're interested to look.

You are right in that there are many other encumbered technologies that have patents expiring soon. MPEG-1 and MPEG-2 video, MP3 and AC3 audio, and several container formats are included. Notably, this is almost all of the technologies required to make a DVD.


Yes, MPEG-LA publishes lists, and they need to be looked at closely. Most of the patents have expired. When you go down the list, you see things such as US #6,181,712, which has to do with multiplexing two unrelated video streams into one. Broadcasters and cable systems do this, but Internet video does not. US #6,160,849 only applies to compression of interlaced video, which nobody uses on line. US #7,627,041 is about dealing with missing header data due to transmission noise. US #5,878,080 is about backwards-compatible multichannel (>2 channels) audio, also seldom used on-line. US #6,185,539 is for video with overcompressed extra-low-quality audio. So is US #6,009,399.

MPEG-LA has been padding their patent portfolio by dumping in all these patents on little-used features. Until this year, they still had some good patents, such as the ones on motion estimation, which is a hard problem and is needed to make compression work. But those have now expired. What's left looks like it can be avoided as unnecessary for Internet use.


I think you're right. It's worth the hard look, as it would let us take advantage of the existing format instead of pushing a new one. One's always easier than the other. Looks like we'll be doing the same for MP3 soon, too. Glad you mentioned that one as I was going to dodge it for a future project for licensing reasons. Might not have to. :)


If you're looking at new compression ideas, take a look at FrameFree.[1] This is a technology developed around 2005 at Kerner Optical, which was an effects unit of Lucasfilm. It's a completely different approach to compression, not based on frames. It's based on delaminating the image into layers which seem to be uniform in motion, then interpolating by morphing each layer, then reassembling the layers.

Because the interpolation operation is a full morph of a mesh, (which GPUs can do easily) you can interpolate as much as you want. Ultra slow motion is possible. You can also up the output frame rate and eliminate strobing during pans.

Kerner Optical was spun off as a separate company, then went bust. The technology was sold off, but nobody could figure out how to market it. The delamination phase turned out to be useful for 2D to 3D conversion, which was popular for a while. But FrameFree as a compression system never went anywhere after Kerner tanked. Nobody is doing much with it at the moment, and it could probably be picked up cheaply. At one point, there was a browser plug-in for playback and an authoring program, but they're gone. I'm not sure who to contact; the "framefree.us" and "framefree.com" domains are both dead. Here's its last readable version: [2] The remnants of the technology seem to be controlled by "Neotopy LLC" in San Francisco, which is Tom Randoph's operation.[3]

[1] https://hopa.memberclicks.net/assets/documents/2007_FFV_Comp... (Open with OpenOffice Impress; it's a slide show.) [2] https://web.archive.org/web/20120905065521/http://www.framef... [3] https://www.linkedin.com/in/neospace


Very different type of technology. I could see this being much easier to cheaply accelerate in hardware, too, due to its simplicity. Thanks for the link! Will keep copies of this and see who's interested.


I don't want to disagree just to be disagreeable, but I don't find much correlation between the applicability of patents and their ability to prevent adoption of libre codecs. I've run through a few video patents myself (from the ones Nokia/MS claimed VP8 infringed on) and they were absurdity hidden behind legalese. Daala's approach of taking a radically different design is much more comforting. That being said, even IPR claims have scared off adoption of Opus by Digium : (


MP4 usually means MPEG-4 Part 2 or MPEG-4 Visual (ISO 14496-2). That's the codec that DivX and XviD implemented.


<sarcasm mode> No wonder Big Media hates tech, they are trying to take all their money away. </sarcasm mode>

I think this is a great effort, and if you'll recall, Google went and attempted to do the same thing with VP8, but found that people could file patents faster than they could release code[1]. I would certainly support a 'restraint of trade' argument, and a novelty argument which implies (although I know it's currently impossible to litigate this way) that if someone else (skilled in the art) could come up with the same answer (invention) given the requirements, then the idea isn't really novel, it is simply "how someone skilled in the art would do it." I've watched as the courts stayed away from that theory, probably because it could easily be abused.

[1] Conspiracy theory or not, the MPEG-LA guys kept popping up additional patent threats once the VP8 code was released.


Why attack novelty instead of non-obviousness? An expert can attest to obviousness, but not necessarily novelty (you shouldn't need an expert for that: simply produce the prior version).


In my experience (and that experience is limited, I've only participated as an expert witness in two cases that have gone to trial), looking backward through time at "obviousness" is really hard. Once you know how a magician does his trick, it's obvious to you, but before you knew, it wasn't. Compare that to multiple people from different places, trying to solve the same problem, who came to implement the solution in exactly the same (or an infringing) way; that speaks much more strongly to the notion of non-novelty. And it's something that has already happened prior to disclosure, so you cannot argue that the other people were "taught" by the patent disclosure. (One of the tenets of the patent system is that a patent should teach others skilled in the art how to do what it is you're patenting.)

Anyway, I'm not a lawyer, and none of this is legal advice or patent advice. Just my thoughts (or perhaps frustrations) on how hard it is to generate deliberately patent free technology. That difficulty suggests to me a way in which patent law could be improved.


Sure, determining past obviousness can be hard. That's why you bring an expert or many experts to attest to how obvious the technique is.

But you don't need an expert for novelty. Either you can show prior art or you can't. I'll grant that there may be some edge cases where the prior art needs some nuanced interpretation from an expert witness.


I think we agree :-). I was thinking of the more subtle version of novelty, which is perhaps best expressed as "as requested". Here is a fictional example of what I'm thinking about.

Let's say someone asks you to make a mud pie[1] and put bits of lava stone in it. You make your mud pie and then you patent "system and method for creating a mud pie with lava stones."

Perhaps there is no prior art because nobody asked for a mud pie with lava stones, perhaps there is no prior art because others who made mud pies with lava stones didn't see anything useful about it. But someone, somewhere, filed a patent. And the patent office grants it.

The question I pose is how to come up with a defense that anyone skilled in the art of making a mud pie would make one with lava stones in just that way? And yes, I know all the legal arguments why it doesn't work like that, so my point is: how do we fix the patent system so that utility patents aren't granted on methods, or combinations of methods, that would likely be independently arrived at by anyone skilled in the art?

How do we fix it so that Cisco, writing their patent free video codec in the open, doesn't get "scooped" by someone taking their project, projecting out a month or a year in advance of what it is going to need to work, and then throwing together a provisional that pre-dates the open source project getting there, thus depriving the people working on Cisco's efforts their ability to ship without hindrance?

[1] Really, just dirt and water.


> Let's say someone asks you to make a mud pie[1] and put bits of lava stone in it.

This is begging the question (in the original sense of the phrase). The process is usually not somebody saying "make me a mud pie with lava stones". It usually starts with "how do I make a more attractive mud pie?" There are countless ways of doing so. You could use marbles, leaves, different mud, different levels of consistency... But maybe using lava stones gives you the most bang for the buck. So then you are really filing a patent on "method and system for increasing mud pie attractiveness".

The infamous Amazon one-click patent can similarly be viewed that way. The patent is not really solving the problem of "how do we enable purchases with one-click?" (the solution to which is blindingly obvious) but of "how do we get people to buy more things on our online store?" Now, the path from there to "one-click buying" may also be obvious, no doubt, but it's not as obvious as the path from "how do you build one-click?" to "here's how" simply because the solution space is so much bigger.


Well, firstly, I think we should probably throw the entire patent system out. It's a hindrance to innovation. <--- yes, that's a period

In your case, I think you argue that there is nothing special about lava stones, and that, in fact, dirt/mud contains many stones, probably including tiny lava stones, whose only real difference from the added lava stone bits is size.

So, adding stones is not very novel. There's also not much difference between a lava stone and a non-lava stone; if I can put in a non-lava stone, I can probably just as easily put in a lava stone. Is it not obvious that if I can put a quartz into a mud pie, I could also put a lava stone?

I guess the general strategy is to find the more general pattern and then show that the patent is just a specific instance of a larger, known pattern.


Why not throw the weight behind VP9? edit: I actually am curious, this isn't a question pointed at the validity of Thor. I just really want to see a great, open-source standard emerge and see people get behind it.


Apparently because they're innovating with Thor. According to [0], Daala people are experimenting with Thor tech (IETF 93 was a few weeks ago), and Daala tech is being experimented with for Thor[1].

[0] https://www.ietf.org/proceedings/93/slides/slides-93-netvc-5...

[1] https://github.com/cisco/thor/pull/8


I'm no expert, but the article lists VP9 as "proprietary". Which I take to mean not open source, and potentially not free. Though the proprietariness could be a response to the issues they had with VP8 and suspected reactionary patenting.


VP9 is open source and royalty-free though. I'm confused.


And it's not just open source, it's BSD, which is about as non-restrictive as you can get. So now: 1) Open source 2) Royalty-free 3) Free as in beer 4) Free as in freedom (open source, OSI-certified BSD)

Software like that can now be called proprietary?

pro·pri·e·tar·y : of or relating to an owner or ownership.

So what is this guy saying? That now anything with a company behind it is proprietary? Linux has got LMI, so I guess that's proprietary, and Firefox has got Mozilla. LibreOffice... By this guy's twisted version of reality, what is not proprietary?


VPx reference software is open source. The article is not referring to the software but the specification which is developed and published by Google (a private company), as opposed to an open standards consortium like ISO, ITU or the IETF.

VP8 was published as an informational RFC under the IETF, but not as part of a standards track, see "Not All RFCs are Standards": https://tools.ietf.org/html/rfc1796


Parent has the right idea. One of the few valid criticisms of VP8 was that the code /was/ the standard, so you had to reverse engineer the encoder/decoder. This is not only a PITA but also prompts questions such as whether an obvious bug is part of the spec.

It also took a full year after Google bought the company behind VP8 to actually release the code. Someone from Firefox actually wrote an open letter to Google basically asking WTF was going on.

I don't have any first or even second hand knowledge of the current situation, but I suspect that Google has continued to ... not collaborate as much as everyone else would like.

(Caveat emptor: the above is based off of memory of events that took place a few years ago.)


Is Google also contributing to the NetVC effort?


Wasn't Ogg Theora created under just the same principles? I'm not smart enough in all things codec to know how it stacks up technically, but best I can tell, it's unencumbered.

https://en.wikipedia.org/wiki/Theora


Theora simply can't do what H264 can.

example: http://www.streaminglearningcenter.com/articles/ogg-vs-h264-...


Deleted a prior comment on "why not theora?" because this link is what I was asking for. Thanks.



Theora is a few generations old now-- it's available for anyone to use, but there is a lot of interest in improved performance.

Theora has also suffered somewhat from not being developed through an open process: On2 code-dropped one of the older proprietary formats that they were uninterested in supporting anymore, and the Xiph side (which previously had just done audio) picked it up, formally specified it, and radically improved the implementation (of both the encoder and decoder) while retaining compatibility.

Codecs thrive on network effect-- you might be happy to take some efficiency loss for improved licensing terms and patent assurance, but there are plenty of others who don't care (or at least won't care until the lawsuit arrives on their desk). It's not good enough to be very good in one dimension; to be wildly successful a codec needs to be great in many dimensions. A decade ago you could argue that Theora was in that space (and I did!), less so today.

... and wild success is itself the ultimate performance objective, not for the sake of the egos of the developers but because only through ubiquity do codecs stop being an issue that people trying to deliver great works have to worry about. 99.9% of the time an application developer is not thinking at all about what his TCP stack is doing, who made it, or what its settings are-- ultimately that kind of effortlessness needs to move up the stack in order for the world to move forward.


Yes, Theora is also royalty free. NETVC's goal is to surpass the quality per bit of existing codecs though, which Theora doesn't do.


The key principle in Thor development is doing it in the IETF process. Theora is much more like VP8 and VP9 in approach (it was based on VP3). The technical approach, basically use fairly standard approaches while avoiding the patent landmines, is similar across all these projects, it's doing it in the IETF that's different for Thor and may aid acceptance in some quarters, and help avoid some patent issues.


> Google’s proprietary VP9 codec

That's an odd choice of phrase; it's unfortunate that a press release chooses to disparage alternatives without explanation.


Why is that odd? It is pretty much a textbook example of a proprietary standard.

Here are some definitions of "proprietary" as used by members of the FOSS community when talking about standards:

https://news.ycombinator.com/item?id=4634957 (BrendanEich)

>"Proprietary" as in "sole proprietor" is appropriate for a project with zero governance, launched by Google after some incubation closed-source, dominated by Googlers.

https://news.ycombinator.com/item?id=9395992 (pcwalton, Mozilla employee and Rust core developer)

>In a competitive multi-vendor ecosystem like the Web, public-facing protocols that are introduced and controlled by a single vendor are proprietary, regardless of whether you can look at the source code. NaCl and Pepper, for example, are proprietary, even though they have open-source implementations.


That is a really odd thing to claim, given that there are so many proponents of the MIT license. People claim that MIT is "more free than GPL" because MIT has fewer restrictions. GPL has more restrictions, and although those restrictions are there to ensure that the same rights are passed on to other people, MIT proponents don't buy that argument: they argue that even if someone forks an open source project, slaps a GUI on it, and sells it as a proprietary product, no freedom is lost because the original is still available. It does not matter that you cannot contribute to the fork. The ability to make proprietary forks is seen as good.

Yet when applied to Google's products, this is suddenly viewed in a different manner? Even if the maintainer does not accept patches, you can still fork it, so no freedom is lost. And it's ok for other people to make a proprietary fork, but not ok for the author to make a proprietary fork? That sounds like hypocrisy to me.


Did you reply to the correct comment? Your post doesn't make much sense. We are talking about standards, not code. Code is an implementation of the standard, and the license of the code is irrelevant to the status of the standard.

Forking standards is completely different to forking a codebase. It should be obvious why.


It's referring to the lack of standardization of the VP9 codec, and the terminology is pretty common in standards bodies - it does sound weird and misleading when intermingled with free software though.


I am sure that some entity holds a broad enough patent that all your bases will belong to a Texas court.


And that's the real problem. Heck, it doesn't even need to be an NPE, it just needs to be one of the patent holders they're "avoiding" who wants to fire up some litigation.

They don't even need to be able to win. An existing "legit" patent holder might choose to simply throw lawyers at it as a tactic to delay or defeat a potential competitor. In that case, it comes down to a cost/benefit analysis for the would-be litigator.


Certainly the risk is better with a royalty-free video codec, though. In the case of Daala, the goal is to be sufficiently different to avoid these broad patents, too.

Also, any companies contributing to the NETVC standard are required to declare IPR, which is not the case for MPEG standards.


> Certainly the risk is better with a royalty-free video codec, though.

I'm not sure what this means. A royalty-free video codec basically has a bullseye painted on it from the perspective of existing rightsholders. The only reason such entities might exercise restraint is because the cost/benefit analysis doesn't support litigation. Even if they don't think a competitor codec is a threat at the outset, there's nothing stopping an attacker from just sitting on the sidelines and waiting until the threat profile (and depth of infringers's pockets) becomes clearer. I.e. the "submarine patent" model, except it could even be a known patent in this case.

edit: clarity.


Er, sorry, you're right, that was a bad assertion. The submarine patent risk is pretty much the same in both cases - RF does not make it better. Early declarations to discourage use are more likely towards RF codecs, but are also easier to deal with.

I think Daala's development process (and IPR disclosure policies at the IETF) reduce the risk substantially. However, this is not automatically true of any RF codec.


This risk is there for pretty much any engineering you might want to do. The point here is to make a royalty-free codec, with as much IPR defense as possible.


So... this is a separate project from Daala which Cisco also works on. Is there a story here?


Both Xiph's Daala and Cisco's Thor projects are contributing to the NetVC Working Group at IETF - https://datatracker.ietf.org/wg/netvc/charter/ - to attempt to create a new, competitive, royalty-free video codec - in much the same way as Skype contributed SILK and Xiph contributed CELT to the audio codec WG to create the truly excellent Opus audio codec.

We'll see what comes out of it!


*Daala - but yes, I was wondering the same thing...


Oops, I somehow missed it. Good catch.


What I would like to see is a video codec that has a library implementation for reading and writing video in that format, that is cross-platform and relatively easy to build, like libjpeg or libpng does for images. I have tried to build VP9 on windows and it was a tedious and ultimately unfruitful process.

I don't really care about the compression ratios achieved, or speed of compression/decompression.

Something like motion JPEG would be good, if it was actually a proper standard (AFAICT it isn't).


Motion JPEG isn't resilient. H264/H265/VP9, etc. all build on some of the ideas that JPEG introduced, but add features that allow the stream to be resilient to dropped packets or frames.

It's a cool idea, it just doesn't work in practice, especially since a lot of these video standards are transmitted over UDP.


Interesting. I would have thought it would be reasonably resilient though, due to its intra-frame nature. If you get lost in the stream you could scan forward to the next JPEG header.
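
To make the resync idea concrete, here is a minimal Python sketch (the byte string is a made-up example) that scans forward for the next JPEG start-of-image marker and resumes from there:

    # JPEG frames begin with the SOI marker 0xFF 0xD8, normally followed by 0xFF
    # (the start of the next marker segment); matching all three bytes reduces
    # false positives inside compressed data.
    SOI = b"\xff\xd8\xff"

    def next_jpeg_offset(buffer: bytes, start: int = 0) -> int:
        """Return the offset of the next JPEG start-of-image marker, or -1."""
        return buffer.find(SOI, start)

    # Example: a corrupted tail of one frame followed by the start of a new one.
    stream = b"\x00\x13\x37garbage" + b"\xff\xd8\xff\xe0" + b"...rest of JPEG..."
    print("resync at byte offset", next_jpeg_offset(stream))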


That's true in a sense, actually. The thing is, with H264/H265/VP9, etc., dropped packets or missing data are somewhat acceptable as long as you have a key frame. You just end up with interpolated or 'guess' data (aka the droopy or ice-cream frame effects). With Motion JPEG the frame typically just freezes until another good frame is retrieved and decoded. Motion JPEG isn't really a standard, though, so perhaps what I've seen doesn't match others' experiences. MJPEG is cool for stuff on your local network or other places where you know you'll have a reliable dedicated network. If you have a nice wifi or wired network you can crank up the quality on an MJPEG stream and get some really gorgeous streaming video, as long as the receiving devices can effectively buffer all that network data :)


Depends on what you mean by MJPEG, it's not really standardized anywhere. I'm aware of at least two meanings:

1) Take a stream of images, calculate the differences between subsequent frames, and then JPEG-encode the first image and the subsequent differences. The decoder then does the opposite. Once you lose a frame you're done, so it's better to actually encode a full frame once in a while (do I and B frames like normal codecs).

2) Just push a set of full JPEG images as individual frames. This is the most common usage of MJPEG these days as it's what you get from IP surveillance cameras and stuff like that. This is actually reasonably standardized as it's basically HTTP multipart where each of the parts is just a new JPEG. If you point an HTML img tag at an HTTP GET endpoint like that, most browsers will display a video stream (see the sketch below).
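
As a minimal sketch of that multipart scheme (Python's standard http.server; load_frames() is a made-up placeholder for wherever your pre-encoded JPEG bytes come from, and the port and boundary name are arbitrary):

    import glob
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BOUNDARY = b"frameboundary"

    def load_frames():
        # Placeholder: read pre-encoded JPEGs from disk (e.g. frame000.jpg, ...).
        return [open(path, "rb").read() for path in sorted(glob.glob("frame*.jpg"))]

    class MJPEGHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type",
                             "multipart/x-mixed-replace; boundary=" + BOUNDARY.decode())
            self.end_headers()
            # Each part is a complete JPEG; browsers replace the previous one.
            for jpeg in load_frames():
                self.wfile.write(b"--" + BOUNDARY + b"\r\n")
                self.wfile.write(b"Content-Type: image/jpeg\r\n")
                self.wfile.write(b"Content-Length: %d\r\n\r\n" % len(jpeg))
                self.wfile.write(jpeg + b"\r\n")
                time.sleep(1 / 25)  # ~25 fps pacing

    if __name__ == "__main__":
        # <img src="http://localhost:8080/"> in a page will display the stream.
        HTTPServer(("", 8080), MJPEGHandler).serve_forever()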


Didn't we already go through this with VP8/VP9/WebM?


VP9 is a good choice if you want a royalty free video codec right now. NETVC is shooting for the next generation. In addition, the goal is to get the video codec standardized at the IETF.

NETVC participation is open to anyone though, so it is possible more players will show up.


"NETVC is shooting for the next generation."

What makes a codec "next generation"? I assume, broadly, it involves trading off yet more computation for a tighter encode? (As a nearly embarrassingly-parallel problem, video coding continues to get faster with more silicon even if serial performance is stagnant.) What kind of gains can we expect from the "next generation"?

All honest questions, BTW. Links welcome, though something focused on this question and not just a laundry list of features with no reference to the past would be preferred.


From https://xiph.org/~xiphmont/demo/daala/demo1.shtml

> Our performance target is roughly a generation beyond current 'next-generation' codecs like VP9 and HEVC, making Daala a next-next-generation effort.

and

> The next-generation VP9 and HEVC codecs are the latest incremental refinements of a basic codec design that dates back 25 years to h.261. This conservative, linear development strategy evolving a proven design has yielded reliable improvement with relatively low risk, but the law of diminishing returns is setting in. Recent performance increases are coming at exponentially increasing computational cost.

> Daala tries for a larger leap forward— by first leaping sideways— to a new codec design and numerous novel coding techniques. In addition to the technical freedom of starting fresh, this new design consciously avoids most of the patent thicket surrounding mainstream block-DCT-based codecs. At its very core, for example, Daala is based on lapped transforms, not the traditional DCT.


Video encoding is not embarrassingly parallel; no kind of compression ever can be, because if any bit doesn't depend on the previous bit you've wasted it. It is pretty suited to ASICs.

Codecs are only efficient up to a certain image size, and then stop working because all the details are too large-scale for them. HEVC works much better than H.264 on 4K. Besides that, there's higher bit depth pixels, 3D, that kind of stuff.

Also there's usually so many mistakes and compromises in any standard that you can always find something to fix in the next one.


"Video encoding is not embarrassingly parallel; no kind of compression ever can be, because if any bit doesn't depend on the previous bit you've wasted it."

That objection makes no sense. It just implies that, at worst, parallelization may cost some encoding efficiency. In general, we are quite often willing to pay for that encoding efficiency with gusto given the speedup we can obtain. For instance, http://compression.ca/pbzip2/.


I don't think you two are using the same definition of *embarrassingly* parallel (emphasis mine). Given real world constraints, video compression is nowhere near as embarrassingly parallel as, say, motion JPEG. Many of the recent improvements (eg. better motion vector estimation) only see any benefit when you're encoding sequentially.


* no compression aiming for efficiency can be

If you have that much need for a speedup, you probably have multiple video streams going (like you're Youtube or a livestream broadcaster). In that case, it's better to do one video per CPU, and now you really are parallel.

Also, you can get up to 4x parallel through slice-threads safely on one video, or 16x through x264's frame-threads if you don't care about your target bitrate. I wouldn't consider that embarrassingly parallel until it's up to 1024x or so, but maybe you do.


Are there not stages of compression that are highly parallelizable, though? Like basic transformations that operate locally on the image (maybe DCT, per-block motion vector calculation)?


Sure, if they worked on the original image.

But that doesn't happen - when you're encoding, the DCT isn't actually run on the image but on the output of previous compression steps (prediction) which are based on the last encoded block. So there's a dependency on every pixel of the image to the upper left of you.

And when you're decoding, it just never ends up being worth it to read ahead through the whole bitstream so that you have a whole frame of motion vectors to process at once. The whole data locality thing.
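
A toy Python sketch of that encode-side dependency (DC-only prediction and plain scalar quantization stand in for a real codec's intra prediction and DCT; block size, frame and quantizer are made up): each block is predicted from already-reconstructed neighbors, so block (i, j) cannot be encoded until the blocks above and to its left are done.

    import numpy as np

    B = 8  # block size

    def encode_frame(frame, q=16):
        h, w = frame.shape
        recon = np.zeros_like(frame, dtype=float)  # what the decoder would see
        residuals = {}
        for by in range(0, h, B):
            for bx in range(0, w, B):
                # Predict from reconstructed neighbor pixels (128 at the borders).
                neighbors = []
                if by > 0:
                    neighbors.append(recon[by - 1, bx:bx + B])   # row above
                if bx > 0:
                    neighbors.append(recon[by:by + B, bx - 1])   # column to the left
                pred = np.mean(np.concatenate(neighbors)) if neighbors else 128.0

                residual = frame[by:by + B, bx:bx + B] - pred
                quantized = np.round(residual / q)               # stand-in for DCT+quantize
                residuals[(by, bx)] = quantized
                recon[by:by + B, bx:bx + B] = quantized * q + pred
        return residuals, recon

    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, (32, 32)).astype(float)
    residuals, recon = encode_frame(frame)
    print("mean abs reconstruction error:", np.abs(recon - frame).mean())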


Next generation is indeed a vague term, but I would use HEVC as a good example of a next generation codec.

For another codec to be next gen, it should have the same or better compression than HEVC: about half as many bits to encode the same video as H.264 required.


Okay, then maybe the link text should say "Thor – A Project to Hammer Out the Next Royalty Free Video Codec".

I'd be happy to see VP9 actually get deployed on any significant scale before putting another new codec into the mix.


I was thinking the same thing.


There should be efforts outside of large corporations dedicated to building these standards, because in general, even when large corporations promise free/open-source licensing, they really only mean non-commercial licensing or "open with caveats". So they pretty much own the commercial rights.

I want the open-source community to subsidize a small team of engineers to create a completely open standard that no single entity owns and that everyone is free to branch or fork.


There really needs to be a change to patent law around independent derivation of a concept. At the very least, we need to look into generalised thicket-busting laws. The current situation is fundamentally unscalable.


It seems to me that success will depend on quality and on whether chip manufacturers embrace this for hardware encoding/decoding. Right now it looks to me like H.265 is the winning horse.


Thor is part of the NetVC effort, tentatively a royalty-free successor to HEVC.


Are they? I thought they would be scared off by the existence of two distinct patent royalty groups.


I wonder how many orders of magnitude slower this one will be compared to x264. VP8/9 was something like 9x slower last time I checked.


It is currently quite a bit slower, but the goal is to make a codec fast enough for real-time communication use.

VP9 is still about 9x slower than x264, but yields the same quality at half the bitrate. You can set VP9 to run a lot faster, but you'll lose some of the bitrate advantage. Still, VP9 is practical for a lot of applications, such as YouTube.


VP9 producing the same quality as x264 at half the bitrate is hard to believe. Do you have a citation? Which `--preset` for x264 are you basing this on?


It's based on --best for VP9 1.4.0 and placebo for x264. Generally the improvement tends to be 30-50%, depending on the quality target and content (the lower the bitrate, the greater the improvement). I have objective metrics which test this at http://arewecompressedyet.com/.

If you're more of a visual person, you can take a look at some images here, compressed to 60KB: https://people.xiph.org/~tdaede/pcs2015_vp9_vs_x264/0.25/

Certainly libvpx still has a lot of optimization and tuning work left to do. But there's only so much x264 can do with a 15-year-old bitstream format.


Those example images are of really low-quality video, and people who care about image quality usually don't care about that regime. Can you show some ~720p 5 Mb/s x264 versus 3-4 Mb/s VP9 samples?


Low-resolution samples are the norm, I think, because low resolution lets you see how the compression algorithm works. If you did a high-resolution comparison, you would need to zoom in to see the difference anyway. There are, of course, compression artifacts that are readily apparent even at high resolution (chain-link fences, transparent wipes, etc.), but I suspect they are in the minority.


VP9 being 9x slower than x264 is hard to believe. Do you have a citation?


"VP9 encoding (using libvpx) is horrendously slow – like, 50x slower than VP8/x264 encoding. This means that encoding a 3-minute 1080p clip takes several days on a high-end machine. ... libvpx multithreading [encoding] performance is deplorable. It gains virtually nothing."[1]

1. https://blogs.gnome.org/rbultje/2014/02/22/the-worlds-fastes... n.b. x264 comparisons were taken with `--preset veryslow` which understates x264's potential performance by an order of magnitude. From the same link: "it can be fast, and it can beat x264, but it can’t do both at the same time."


This is old. libvpx 1.4.0 is a lot faster now and has multithreading. On my i7-4900MQ laptop, I get about 3 fps encoding 1080p content. Still very slow, but that's 24 minutes for a 3-minute clip, not days.
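(That arithmetic is consistent with roughly 24 fps source material: 3 minutes is about 3 × 60 × 24 ≈ 4320 frames, and 4320 frames at 3 fps is about 1440 seconds, i.e. 24 minutes.)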


I can anecdotally agree with him. I ran a service that did a lot of media encoding, and VP9 was very, very slow to encode.


Could that be due to lack of hardware support?


No, encoding is generally done in software. It's just that VP9 is a much more complex format with many more coding possibilities to search. It also hasn't been around as long as x264, so it hasn't been hyper-optimized yet.


There are probably some issues with missing SIMD optimizations, but beyond that, no further hardware support would help.

It's purely an implementation issue - you don't get software as good as x264 just by paying for it.


Comparing x264 to hardware-accelerated encoders (QuickSync, NVENC, VCE), the speed/quality/bitrate tradeoff is massively in favour of the CPU-only x264. So I think hardware encoding could help VP9, but it's not a magic bullet (unless your CPU is busy with other work at the same time).


This seems like fantastic news after the HEVC patent disaster.

Has anyone tested this, or does anyone have more information on its performance/quality vs. other codecs?


I have; objective metrics are at my website, http://arewecompressedyet.com/

The summary is that Thor currently performs slightly better than Daala and worse than VP9 or H.265. But it's also missing a lot of features right now, and the encoder is only tuned for PSNR.
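For reference, PSNR is simple to compute; a minimal Python sketch, assuming two same-sized 8-bit frames as NumPy arrays:

    import numpy as np

    def psnr(reference, distorted, peak=255.0):
        # Peak signal-to-noise ratio in dB between two frames of equal shape.
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(peak ** 2 / mse)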


Why not simply work with the VP9 project rather than starting a new effort? Per Wikipedia: "VP9 is an open and royalty free[3] video coding format being developed by Google."


So will Daala be fused with Thor, like what happened with CELT and SILK to create Opus? Does that make sense technically, or are they radically different?


Is it common to characterize BSD-licensed software as proprietary? As in, 'Google’s proprietary VP9 codec'?


This refers to the fact that the bitstream format of VP9 has not been standardized anywhere, whereas the NETVC group intends to produce a standard.


Yay another one!



The comparison would be more accurate if most of those 14 competing standards were stated to be proprietary and burdened with patents galore; the stated goal of Thor isn't to unify a bunch of standards, but to create a proper non-proprietary standard.


See also the massively popular and critical-to-Rails gem https://github.com/erikhuda/thor - naming is hard.


Naming may or may not be hard. But the fact that two projects in hugely different contexts happened to pick the same name does not bear that out, nor does anyone confuse that Ruby gem with an even more massively popular Avengers member.

Perhaps if you stopped worrying about name collisions with any existing project ever, you might find naming things a little easier.


Isn't Thor a Norse god, too?


And a washing machine, and a family of satellites, and a ramjet engine. https://en.wikipedia.org/wiki/Thor_%28disambiguation%29 . Not listed there: also a special-purpose DBMS for small-molecule chemistry data.


And they're both named after the superhero!



