Ogg vs World (as picked up from Slashdot)

Ogg has been discussed a lot lately. Having messed a bit with it, and having felt the pain of dabbling with it and with Vorbis and Theora, I figured I could chip in.

I said “a pain”; that is quite subjective, but I think the term summarizes my overall experience with it. Whether that is because Ogg is “different”, “undocumented” or just “bad” is left for you readers to decide. Even messing with NUT hadn’t been this painful, and NUT is anything but mature. Now let’s get back to Ogg. Mans stated what he dislikes and why; Monty defended his format and stated that his next container format will address some of the shortcomings he agrees are present in Ogg. Both the defense and the criticism are quite technical; I’ll try to explain why I think Ogg, as is, should not be considered a savior, using more down-to-earth arguments.

Tin cans, glass jars, plastic bottles… Containers!

Let’s think about something solid and real. If you consider a container format and a real-life container you might see some similarities:

  • Usually you’d like the container to be robust so it won’t break if it falls
  • You’d like to be able to know what its content is without having to open it
  • You’d prefer it to weigh as little as possible if you are going to carry it around
  • You’d like to be able to open and close it with minor hassle.
  • If it has compartments, you’d like them not to break, and you’d like picking out and telling apart what it contains to be as easy as possible.

There could be other points, but I think those are enough. Now let’s see why some people dislike Ogg, using those 5 points: robustness, transparency, overhead, accessibility and seekability.

Robustness

I think Ogg is doing fine on robustness; the other containers are fine as well, in my opinion.

Transparency

In this case the heading is a bit strange, so let me explain what I mean. Think about a tin can or a glass jar: you can usually figure out what’s inside a transparent container better than with an opaque one, though obviously you can have labels with useful information. Usually you feel better if you can figure out some details about the content of a can even if you don’t know how to cook it.

In Ogg, in order to get lots of the data that the other containers provide as is, you need to know some intimate details about the codecs muxed in. “What’s the point of knowing them if I’m not able to decode it?” is an objection I saw raised in the Slashdot comments. Well, what if you do not want to DECODE it, but just report information about it or serve it in a different way, you know, actual streaming, not solutions based on HTTP? (e.g. Feng and DSS, to name a couple)

Overhead

People like plastic bottles over glass ones since the latter are heavier; when you move and store them, that is an actual concern.

Monty states that with a recent implementation of the muxer (libogg 1.2) the overhead in Ogg is about 0.6–0.7%; Mans states that it ranges between 0.4% and 1%, usually nearer to 1% than to 0.4%. So, in my opinion, they agree. How does that value fare against other containers? Mans states that the ISO mp4 container can easily achieve about a tenth of the Ogg overhead. Monty went rambling about lots of different containers, stating some depressing numbers, and discussing random access on remote storage over HTTP and other protocols that are not meant for that.

I think there is a large margin for improvement, or at least some benchmarking is required to see to what degree that is true.
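As a sanity check on those percentages, here is a back-of-envelope computation based on the Ogg page layout (a 27-byte page header plus a segment table of up to 255 one-byte lacing values, each segment carrying at most 255 payload bytes):

```python
# Best case: maximally filled Ogg pages.
header = 27                 # fixed Ogg page header
lacing = 255                # segment table, one byte per segment
payload = 255 * 255         # each segment carries at most 255 bytes

overhead = (header + lacing) / (header + lacing + payload)
print(f"best-case Ogg overhead: {overhead:.2%}")  # → 0.43%
```

That matches the lower bound of Mans’s 0.4%–1% range; real muxers emit smaller pages (especially for low-latency streaming), which pushes the figure toward 1%.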

Accessibility

As in “which tools do I need to use them?”. Ogg has several implementations, some even in hardware. Mans states that in some situations (e.g. embedded/minimalistic platforms) it isn’t the best choice; Monty states that similar problems would be there for the other containers as well.

As I stated before, in order to be able to process an Ogg you need to be able to decode it, or at least to have a knowledge of the codec that nears the ability to fully decode. That means you cannot do certain kinds of processing that in other containers do not require such a quantity of code and, to a degree, such CPU resources. Being able to do a stream copy, or to pick just a range out of the content, shouldn’t require decoding ability; those features are quite nice when you do actual streaming.

Seekability

Mans thinks that Ogg’s ability to seek to a random offset within the media has a great number of issues, some related to the previously mentioned requirement to know a lot about the codec inside the container, others due to the strategy used to actually find the requested offset within the file. Monty again goes rambling about accessing remote files over HTTP and how the other containers aren’t that much better. The Slashdot article is already full of people stating that they DO find seeking in Ogg SLOW, no matter the player, and that alone kills Monty’s argument.

In the end

Obviously nothing is perfect and everything is perfectible. I do not like Ogg, and I’m not afraid to state it.

Quite often you get labeled as “evil” if you state that or, God forbid, say that Theora isn’t good enough, and that maybe, if you are that concerned about patents, MPEG-1 is the way to go since its patents have expired.

I’m quite happy with mkv and mov; I’ll probably use NUT more if/once it gets more spin and community. I’ll watch with curiosity how transOgg evolves.

PS: I liked this comment a lot. Monty, do your homework better =P

VideoLAN Web Plugin: xpi vs crx

One of the main issues while preparing a streaming solution is answering the obnoxious question:

  • Question: Is it possible to use the service through a browser?
  • Answer: No, rtsp isn’t* http; a browser isn’t a tool for accessing just any network content.
  • * Actually it would be neat to have rtsp support within the video tag, but that’s yet another large can of worms

Once you say that, you have half of your audience leaving. Non-technical people are too used to considering the browser the one and only key to the internet. The remaining ones will ask something along these lines:

  • Question: My target user is a ~~complete idiot~~ ~~technically impaired~~ naive and unaccustomed, and could not be confronted with the hassle of a complex installation procedure; is there something that fits the bill?
  • Answer: VideoLAN Web Plugin

Usually that makes some people happy, since it’s something they actually know, or at least have heard about. Some might start complaining since they experienced an old version and, well, it crashed a lot. What you should beware of is the following one:

  • Question: Actually I need to install the VideoLAN Web Plugin and it requires attention, isn’t there a quicker route?
  • Answer: Yes, an xpi and a crx, for Firefox and Chrome

Ok, that answer is more or less from the future, and it is the main subject of this post: seamlessly bundling something as big and complex as vlc, and making our non-technical and naive target user happy.

I picked the VideoLAN web plugin since it is actually quite good already, has a nice javascript interface that lets you do _lots_ of nice stuff, and there are people actually working on it. Additional points since it is available on Windows and MacOSX. Some time ago I investigated how to use the extension facility of Firefox to get the fabled “one click” install. The current way is quite straightforward and has already landed in the vlc git tree, for the curious and lazy:


  
<?xml version="1.0"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:id>vlc-plugin@videolan.org</em:id>
    <em:name>VideoLAN</em:name>
    <em:version>1.2.0-git</em:version>
    <em:targetApplication>
      <Description>
        <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
        <em:minVersion>1.5</em:minVersion>
        <em:maxVersion>3.6.*</em:maxVersion>
      </Description>
    </em:targetApplication>
  </Description>
</RDF>

Putting that as install.rdf in a zip containing a directory called plugins, with libvlc, its modules and obviously the npapi plugin, does the trick quite well.
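That packing step can be sketched with Python’s zipfile module (a sketch: the manifest contents and the plugin file names below are placeholders standing in for the real files):

```python
import io
import zipfile

# An xpi is just a zip: install.rdf at the root, the npapi plugin and
# its libvlc modules under plugins/.  Contents here are placeholders.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as xpi:
    xpi.writestr("install.rdf", '<?xml version="1.0"?>...')
    xpi.writestr("plugins/npvlc.dll", b"\x00")   # placeholder plugin
    xpi.writestr("plugins/libvlc.dll", b"\x00")  # placeholder library
xpi_bytes = buf.getvalue()
```

Firefox only cares that install.rdf sits at the archive root and that the plugin directory is named plugins.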

Chrome now has something similar, and it seems even easier; this is what I put in the manifest.json:

{
"name": "VideoLAN",
"version": "1.2.0.99",
"description": "VideoLan Web Plugin Bundle",
"plugins": [{"path":"plugins/npvlc.dll", "public":true }]
}

Looks simpler and neater, doesn’t it? Now we get to the problematic part of chrome extension packaging:

It is mostly a zip, BUT you have to prepend a small header containing more or less just the signature.

You can do that either by using Chrome’s built-in facility or with a small ruby script. Reimplementing the same logic in a Makefile using openssl is an option; for now I’ll stick with crxmake.
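For reference, the header in question (the crx format version 2 of the time) is just a magic, a format version and two lengths, followed by the DER-encoded public key, the signature of the zip, and the zip itself. A minimal sketch of what crxmake does, with placeholder key and signature bytes:

```python
import struct

def wrap_crx2(zip_bytes: bytes, pubkey_der: bytes, signature: bytes) -> bytes:
    """Prepend the crx2 header: "Cr24" magic, format version 2, then the
    lengths of the public key and of the signature (little-endian u32)."""
    header = struct.pack("<4sIII", b"Cr24", 2,
                         len(pubkey_der), len(signature))
    return header + pubkey_der + signature + zip_bytes
```

The real signature is an RSA (SHA-1, at the time) signature of the zip contents made with the extension’s private key; here only the framing is shown.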

The first test builds for win32 are available as xpi and crx, hosted on lscube.org as usual.

Sadly, the crx file layout and the not-so-tolerant Firefox xpi unpacker make it impossible to have a single zip, containing both the manifest.json and the install.rdf, that is served both as xpi and as crx.

by the way, wordpress really sucks

The zoom factor in webkit and gecko

Apparently all the major browsers have tried to provide a zoom facility to improve overall accessibility for web users. Sadly, it often breaks your layout horribly; if you are developing pixel-precise interaction you might get a flood of strange bug reports that you cannot reproduce.

We got bitten by it while developing Glossom, severely…

Our way of figuring out its value is quite simple once you discover it: Firefox scales the borders proportionally and keeps the block dimensions constant, while Webkit seems to do the opposite. It’s enough to check whether an element with known dimensions and border width has its values reported as different, and you can find out what the factor is.
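The check itself boils down to a ratio. Modelled outside the browser (a sketch: the function name is made up, and the real code reads the element’s offset dimensions and computed border via javascript):

```python
def zoom_factor(css_border: float, reported_border: float,
                css_width: float, reported_width: float) -> float:
    """Infer the page zoom from one element with known CSS dimensions.
    In Gecko the border scales while the block width stays constant;
    in WebKit it is the other way around.  So whichever reported value
    differs from its CSS value carries the zoom factor."""
    if reported_border != css_border:      # Gecko-style scaling
        return reported_border / css_border
    return reported_width / css_width      # WebKit-style scaling
```

At zoom 1.0 both ratios are 1, so either branch gives the right answer.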

This is obviously quite browser dependent, and nobody guarantees it won’t change between versions; anyway, so far it seems to serve us well.

ACR got support for user permissions

ACR, the open source Turbogears2 CMS, got support for user permissions, allowing users to edit only certain pages and to create child pages only in certain sections. This should make it possible to split the work between multiple people on ACR-based sites.

ACR also got support for a blog/news slice template. This makes it possible to create a blog in ACR with just two clicks, instead of having to declare the ACR slicegroup yourself.

As usual, you can download ACR from http://repo.axant.it/hg/acr using mercurial.