Packt Publishing kindly gave us more books to review: MySQL for Python. We had our share of experience binding Python and MySQL together in pyhp, and we use the combination on most of our infrastructure, albeit usually through SQLAlchemy. Puria will probably enjoy it since he likes thin-layer/no-layer approaches, and this book seems a good way to learn how to interface quite directly with MySQL.
Python 3 Object Oriented Programming
Python 3 Object Oriented Programming has already been reviewed by a number of people in the Python community, and I’m lucky to have been given the occasion to read it as well.
Like most of the other reviewers, I found the book pleasant: it’s easy to follow and crystal clear. It gives a good overview of everything you can expect from Python.
The book is composed of 12 chapters:
- The first five introduce the reader to the basic concepts of object-oriented programming, such as objects, classes, exceptions and inheritance: Object-oriented Design; Objects in Python; When Objects are Alike; Expecting the Unexpected; When to Use Object-oriented Programming
- The next two delve into more Python-specific features and uses: Python Data Structures; Python Object-oriented Shortcuts
- Two chapters on common design patterns, and on the patterns commonly used in Python, follow: Python Design Patterns I; Python Design Patterns II
- The remaining three provide a basic I/O introduction (Files and Strings) and a good introduction to useful libraries and tools: Testing Object-oriented Programs and Common Python 3 Libraries
Each chapter shares the same internal structure: introduction, details, case study, exercises, summary.
The book seems really good for teaching at university: it explains very clearly lots of basic concepts that are usually assumed to be already known, and it introduces concepts like UML, design patterns and test-driven development in a quite soft and gentle way.
It is perfect for an introductory object-oriented programming course, especially if it runs alongside or one term before a software engineering course.
- The usage of UML is to the point and isn’t heavy at all, making it good for people who have just learnt what UML is or who are going to learn it in other courses.
- The chapters about design patterns touch on most of the patterns commonly used in Python and explain why some patterns aren’t used at all. In an academic context those are quite good for showing how the abstract concepts get used in real code.
- Since we use nose a lot, I would probably have preferred to see it used while explaining unittest; still, the chapter is quite good for getting somebody without previous knowledge to start playing with tests.
- The end-of-chapter summary and exercise sections are quite useful for reviewing and for a bit of self-checking. An appendix with the exercises solved would probably have made the book even more student-friendly.
All in all I consider this book quite well suited for universities (both professors and students will enjoy it) and for Python newcomers. More skilled readers will still find it a good read about Python 3.
Cross Building Linux
Just a quick note: even if the kernel Makefile is quite easy to read, sometimes you’d rather have the solution right away.
This is what I use for the efika images. The layout is the following:
/dev/sdc1 on /mnt/efika/boot type vfat (rw)
/dev/sdc2 on /mnt/efika/ type ext4 (rw)
CROSS_COMPILE and INSTALL_MOD_PATH are described quite well in the linux Makefile.
The former tells the build which toolchain should be used, the latter where to install the modules. Pretty easy, isn’t it?
Be careful not to forget the trailing “-” in CROSS_COMPILE; the trailing “/” in INSTALL_MOD_PATH can be omitted.
make CROSS_COMPILE=armv7a-unknown-linux-gnueabi- ARCH=arm menuconfig
make CROSS_COMPILE=armv7a-unknown-linux-gnueabi- ARCH=arm
make CROSS_COMPILE=armv7a-unknown-linux-gnueabi- ARCH=arm uImage
make CROSS_COMPILE=armv7a-unknown-linux-gnueabi- ARCH=arm INSTALL_MOD_PATH=/mnt/efika modules_install
In this specific case we need the u-boot tools to bake a uImage; on Gentoo:
emerge u-boot-tools
GCC using C++
I got this news http://gcc.gnu.org/ml/gcc/2010-05/msg00705.html and it puzzled me a bit: you have the system C compiler depending on C++, in fact making it no longer self-hosting.
That alone makes me think whoever decided and whoever requested that is next to suicidal. GCC is known for having _quite_ a shaky C++ standard library AND ABI, as in having at least one incompatibility every major version, and sometimes even between minor ones.
I dislike C++ usage mostly on this basis, not to mention the fact that it is an overly large language, with not enough people dabbling in it properly, let alone being proficient.
There are already compilers written in C++; one that many people find interesting is LLVM. It doesn’t aim to be a system compiler and it’s not exactly self-hosting.
Many have already stated that they would switch to LLVM’s clang front-end once it reaches full maturity (and FreeBSD has now proved that this level has pretty much been achieved). I didn’t consider fully switching to it precisely because it concerned me that it depends on C++, and because of how easy it is to get subtle yet major breakages in implementations of that language.
The LLVM people look to me way more capable of managing C++ than the GCC ones, and I welcomed with pleasure the fact that they already have a libc++ implementation.
Back to the suicidal point: if I have to pick between people who did well with C++ and people who botched it many times in the same field, whom would I pick?
The current discussions on the GCC mailing list are about C++ coding style, which features to pick and which to forbid, rearchitecting the whole beast to use a “proper” hierarchy and so on; basically some (many?) want to redo everything with the new toy. That makes me think again that llvm will be a better target for the next months/year.
I hope there are enough GCC developers and/or concerned parties willing to fork gcc now and keep a C branch. A radical cleanup and refactor is probably a completely orthogonal issue and should be done no matter whether they pick C++ or C as their implementation language; GCC has lots of cruft, starting from its bad usage of the autotools.
Ogg vs World (as picked up from Slashdot)
Ogg has been discussed a lot lately. Having messed a bit with it, and having felt the pain of dabbling with it and with Vorbis and Theora, I can probably chip in.
I said “a pain”: quite subjective, and I think the term summarizes my overall experience with it. Whether that’s because it is “different”, “undocumented” or just “bad” is left to you readers to decide. Even messing with NUT hadn’t been this painful, and NUT is anything but mature. Now let’s go back to digressing about Ogg. Mans stated what he dislikes and why; Monty defended his format and stated that his next container format will address some of the shortcomings he agrees are present in Ogg. Both defense and criticism are quite technical; I’ll try to explain why I think Ogg, as is, should not be considered a savior, using more down-to-earth arguments.
Tin cans, glass jars, plastic bottles… Containers!
Let’s think about something solid and real. If you consider a container format and a real-life storage container you might see some similarities:
- Usually you’d like the container to be robust, so it won’t break if it falls
- You’d like to be able to know what its content is without having to open it
- You’d prefer it to weigh as little as possible if you are going to carry it around
- You’d like to be able to open and close it with minor hassle
- If it has compartments, you’d like them not to break, and picking out and telling apart what it contains should be as easy as possible
There could be other points but I think those are enough. Now let’s see why there are people who dislike Ogg, using those five points: robustness, transparency, overhead, accessibility and seekability.
Robustness
I think Ogg is doing fine on robustness; the other containers are fine as well in my opinion.
Transparency
In this case the heading is a bit strange, so let me explain what I mean. If you think about a tin can or a glass jar, you can usually figure out better what’s in a transparent container than in an opaque one; obviously you can also have labels with useful information. Usually you feel better if you can figure out some details about the content of a can even if you don’t know how to cook it.
With Ogg, in order to get lots of the data that the other containers provide as-is, you need to know some intimate details about the codecs muxed in. “What’s the point of knowing them if I’m not able to decode it?” is an objection I saw raised in the Slashdot comments. Well, what if you do not want to DECODE it, but just report information about it or serve it in a different way, you know, actual streaming, not solutions based on HTTP? (e.g. Feng and DSS, to name a couple).
Overhead
People like plastic bottles better than glass ones since the latter are heavier, and when you move and store them that is an actual concern.
Monty states that with a recent implementation of the muxer (libogg 1.2) the overhead in Ogg is about 0.6-0.7%; Mans states that it ranges between 0.4% and 1%, usually nearer to 1% than to 0.4%. So in my opinion they agree. How does that value fare against other containers? Mans states that the ISO MP4 container can easily achieve about a tenth of the Ogg overhead. Monty went rambling about lots of different containers, stating some depressing numbers and discussing random access on remote storage over HTTP and other protocols that are not meant for that.
I think there is a large margin for improvement there; at the very least some benchmarking is required to see to what degree that is true or false.
Accessibility
As in “which tools do I need to use it?”. Ogg has several implementations, some even in hardware; Mans states that in some situations (e.g. embedded/minimalistic platforms) it isn’t the best choice. Monty states that similar problems exist for the other containers as well.
As I stated before, in order to process an Ogg you need to be able to decode it, or at least to have knowledge of the codec that nears the ability to fully decode it. That means you cannot do certain kinds of processing that, with other containers, don’t require such a quantity of code and, to a degree, such CPU resources. Doing a stream copy, or picking just a range out of the content, shouldn’t require decoding ability; those features are quite nice when you do actual streaming.
Seekability
Mans thinks Ogg’s ability to move to a random offset within the media has a good number of issues, some related to the previously mentioned requirement to know a lot about the codec inside the container, others due to the strategy used to actually find the requested offset within the file. Monty again goes rambling about accessing remote files over HTTP and about how the other containers aren’t that much better. The Slashdot article is already full of people stating that they DO find seeking in Ogg SLOW, no matter the player, and that alone kills Monty’s argument.
In the end
Obviously nothing is perfect and everything is perfectible. I do not like Ogg, and I’m not afraid to state it.
Quite often you get labeled as “evil” if you state that or, god forbid, say that Theora isn’t good enough, and that maybe, if you are that concerned about patents, MPEG-1 is the way to go since its patents have expired.
I’m quite happy with mkv and mov; I’ll probably use NUT more if/once it gets more traction and community. I’ll watch with curiosity how transOgg will evolve.
PS: I liked this comment a lot. Monty, do your homework better =P
VideoLAN Web Plugin: xpi vs crx
One of the main issues while preparing a streaming solution is answering the obnoxious question:
- Question: Is it possible to use the service through a browser?
- Answer: No, rtsp isn’t* http; a browser isn’t a tool for accessing just any network content.
* Actually it would be neat to have rtsp support within the video tag, but that’s yet another large can of worms.
Once you say that, half of your audience leaves. Non-technical people are too used to considering the browser the one and only key to the internet. The remaining ones will ask something along these lines:
- Question: My target user is a complete idiot… technically impaired… naive and unaccustomed, and could not be confronted with the hassle of a complex installation procedure. Is there something that fits the bill?
- Answer: VideoLAN Web Plugin
Usually that makes some people happy, since it’s something they actually know or at least have heard about. Some might start complaining since they experienced an old version and, well, it crashed a lot. The one you should beware of is the following:
- Question: Actually I need to install the VideoLAN Web Plugin and it requires attention; isn’t there a quicker route?
- Answer: Yes, an xpi and a crx for Firefox and Chrome.
OK, that answer is more or less from the future and it’s the main subject of this post: seamlessly bundling something as big and complex as vlc and making our non-technical and naive target user happy.
I picked the VideoLAN web plugin since it is actually quite good already, has a nice javascript interface that lets you do _lots_ of nice stuff, and there are people actually working on it. Additional points since it is available on Windows and Mac OS X. Some time ago I investigated how to use the extension facility of Firefox to get the fabled “one click” install. The current way is quite straightforward and has already landed in the vlc git tree, for the curious and lazy:
<?xml version="1.0"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:id>vlc-plugin@videolan.org</em:id>
    <em:name>VideoLAN</em:name>
    <em:version>1.2.0-git</em:version>
    <em:targetApplication>
      <Description>
        <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
        <em:minVersion>1.5</em:minVersion>
        <em:maxVersion>3.6.*</em:maxVersion>
      </Description>
    </em:targetApplication>
  </Description>
</RDF>
Putting that as install.rdf in a zip containing a directory called plugins, with libvlc, its modules and obviously the npapi plugin, does the trick quite well.
Chrome now has something similar and it seems even easier; this is what I put in the manifest.json:
{
"name": "VideoLAN",
"version": "1.2.0.99",
"description": "VideoLan Web Plugin Bundle",
"plugins": [{"path":"plugins/npvlc.dll", "public":true }]
}
Looks simpler and neater, doesn’t it? Now we get to the problematic part of Chrome extension packaging:
It is mostly a zip, BUT you have to prepend a small header with more or less just the signature.
You can do that either by using Chrome’s built-in facility or with a small ruby script. Reimplementing the same logic in a Makefile using openssl is an option; for now I’ll stick with crxmake.
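For reference, this is roughly the logic that the ruby script (or a hypothetical Makefile+openssl replacement) would implement: the crx is the extension zip prefixed by a tiny header carrying the format version, the DER-encoded public key and an RSA/SHA-1 signature of the zip. A minimal sketch in TypeScript/Node, assuming the zip and the key files have already been produced with openssl (the file names are made up):

import { createSign } from "crypto";
import { readFileSync, writeFileSync } from "fs";

// Inputs: the plain extension zip, the RSA private key (PEM) and the matching
// DER-encoded public key, e.g. produced with: openssl rsa -pubout -outform DER
const zip = readFileSync("vlc-plugin.zip");
const privKeyPem = readFileSync("key.pem");
const pubKeyDer = readFileSync("key.pub.der");

// The crx2 format signs the zip contents with RSA over SHA-1.
const signature = createSign("RSA-SHA1").update(zip).sign(privKeyPem);

// 16-byte header: magic, format version, key length, signature length (little-endian).
const header = Buffer.alloc(16);
header.write("Cr24", 0, "ascii");
header.writeUInt32LE(2, 4);                  // crx format version
header.writeUInt32LE(pubKeyDer.length, 8);
header.writeUInt32LE(signature.length, 12);

writeFileSync("vlc-plugin.crx", Buffer.concat([header, pubKeyDer, signature, zip]));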
The first test builds for win32 are available as xpi and crx, hosted on lscube.org as usual.
Sadly the crx file layout and the not-so-tolerant Firefox xpi unpacker make it impossible to have a single zip containing both the manifest.json and the install.rdf, served as both xpi and crx.
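Once the bundle is installed, the javascript interface mentioned above is what makes the plugin worth the effort. A rough sketch of driving it from a page follows; the element id, the MRL and the exact API surface are assumptions here and depend on the plugin version:

// Assumes an <embed type="application/x-vlc-plugin" id="vlc" ...> element on the page.
const vlc = document.getElementById("vlc") as any;

// Queue an RTSP stream through the plugin's playlist object and start playback.
const item = vlc.playlist.add("rtsp://example.org/live.sdp");
vlc.playlist.playItem(item);
vlc.audio.volume = 80;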
by the way, wordpress really sucks
The zoom factor in webkit and gecko
Apparently all the major browsers try to provide a zoom facility to improve overall accessibility for web users. Sadly that often breaks your layout horribly, and if you are developing pixel-precise interactions you might get a flood of strange bug reports you cannot reproduce.
We got bitten by it while developing Glossom, severely…
Our way to figure out its value is quite simple once you discover it: Firefox scales the borders proportionally and keeps the block dimensions constant, while WebKit seems to do the opposite. It’s enough to check whether an element with known dimensions and border width has its values reported as different, and you can find out what the factor is.
This is obviously quite browser-dependent and nobody guarantees it won’t change across versions; anyway, so far it seems to serve us well.
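For reference, a minimal sketch of the probe described above; which of the two reported values ends up scaled is an assumption here and, as noted, may change between browsers and versions:

// Probe element with hard-coded, known dimensions: a 100px wide block with a 10px border.
// Whichever reported value drifts from what we declared gives the zoom factor.
function detectZoomFactor(): number {
  const probe = document.createElement("div");
  probe.style.cssText =
    "position:absolute;visibility:hidden;width:100px;border:10px solid black;";
  document.body.appendChild(probe);

  const border = parseFloat(getComputedStyle(probe).borderLeftWidth); // scaled by Gecko (assumption)
  const width = probe.offsetWidth - 2 * border;                       // scaled by WebKit (assumption)
  document.body.removeChild(probe);

  const borderRatio = border / 10;
  const widthRatio = width / 100;
  return Math.abs(borderRatio - 1) > Math.abs(widthRatio - 1) ? borderRatio : widthRatio;
}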
Remote desktop, meet multimedia; school, meet remote desktop
Recently we got contacted about crafting some kind of solution for remote participation in school lessons. Hospitalized students may have a hard time catching up, and current technologies, even the overpriced and underused “interactive blackboards”, may help a bit in this picture.
What’s an “interactive blackboard”? It is more or less a projector plus some kind of tracking pointer, usually the IR flavour you can see in wide use thanks to the Nintendo Wii. Not exactly a breakthrough: it’s something you could craft with about 50€ of components, plus whatever you want to spend on a projector (like the relatively inexpensive and highly portable ones from 3M). You might have some “value added software” that gives you a UI more “blackboardish” than your standard desktop, but that’s all.
How is it used during lessons? Pretty much like a normal blackboard; worst case you have a dumb teacher feeding his poor students dull, badly made slides.
That said, it gets pretty easy to think about a way to keep the hospitalized student and the rest of the class linked: put a remote desktop solution (nx, vnc, whatever) on the system wired to the blackboard and arrange some controls so that the teacher can give the “chalk” to the remote student and take it back.
Simple enough isn’t it?
Problems:
– What if the teacher would like to see and hear the remote student?
Well, there are plenty of streaming solutions (I’m currently eyeing sip-communicator since they really put on a great show at Fosdem, but ekiga or skype could do as well).
– What if the teacher starts using the “interactive board” to show a DVD and wants the remote student to enjoy it too?
OK, there we have a problem: having a large surface updated quite often, demanding _good_ quality, and expecting the remote student to have just a wireless link like umts, edge or gprs gets really painful.
There are some solutions that have heuristics in place to discover when a surface is holding a video, and they try to compress it with something not-so-lossy and with low enough delay. A bit suboptimal, but it should work somehow. I wonder if somebody has already thought about harnessing XV, XvMC and libVA capabilities and trying to wire the not-yet-fully-decoded bitstream across this way. Given a vaapi implementation on both endpoints and the right codec, you may get a perfect movie and probably also spare some bandwidth. If I have time I’ll probably try to put together a proof of concept using the efikamx as the endpoint, given that it will get a vaapi bridge to its hardware accelerators.
As for the audio, you can compress it quite well without many complaints, and wiring it from one desktop to another is relatively easy (hi, pulse!).
So, I have already described some months of work; now the last problem:
– What if I want many remote students to interact with the same class and blackboard?
Oops. Given that the class may have a link no better than umts as well, if we are talking about bare remote desktop it might be feasible.
If we could cut the video feeds and keep just the voices of the remote students, we could still survive with 3-4 of them at most.
If we want them to enjoy the video the teacher is about to show to the class, then… we need something else entirely. The whole blackboard-and-such software would be better off on a server with enough bandwidth and CPU to serve all the remote nodes with ease; the class could then use a thin client wired to the interactive blackboard, and more or less everybody could be happy. Sadly such technologies aren’t quite ready. There is spice, which is quite promising, but not ready yet.
That’s all for now; I’ve spent enough time rambling. We’ll see whether these “cloud”-y ideas will end up in an implementation or not. And I haven’t yet started thinking about which software would run on this contraption… Does anybody have any experience with educational software for middle/high schools?
Syslog(3)
Recently Diego complained to me about the mod_accesslog I quickly drafted for Feng.
I didn’t look much into supporting bare file logs; I just used syslog since that’s what I use for all the services that support it. Luckily he fixed the glitches I left, since his usage patterns are quite different from mine.
Bare file logging is usually the default in certain applications mostly because:
- You want to keep per-usage/per-user/per-deploy logs (think certain apache deploys)
- Your application doesn’t support syslog at all
- You didn’t know about syslog and/or your logger daemon is a pain to configure
- You are fond of logrotate and/or you like to get your disk full of historic data
Syslog-based logging has some disadvantages over bare files just because you have to configure both the logger and the application, but it gets quite handy when you need to route all the logs in particular ways, like sending them over the network or having the server automatically notify you of critical issues over email.
I really dislike bare file logging, mostly because I’m quite fond of metalog (so I’m not afraid of configuring my logger), and while deploying gluster for the Stack! Studios render farm I really hated having those stupid bare files around while the rest of the well-behaved applications had their logs correctly routed from the storage nodes to the almost centralized logging facility.
Having proper centralized logging is surely useful when you have to administer a system with a large number of applications, and with a big and increasing number of nodes it becomes a boon.
Moreover, if you plan to have read-only netbooted root images and almost zero local storage (more on the crazy solution I’m baking with bartek at Stack! may appear sooner or later), you really start to love it (and to have a love-hate relationship with syslog-ng for its less than crystal clear documentation).
Gluster Experience (part two)
Luckily the issue I was experiencing with gluster 2.0.0rc1 was just an ugly bug, squashed in the 2.0.0rc2 release. Right now I’m keeping the configuration I blogged about, and we are thinking about topologies and expansion.
Right now the big issue is trying to provide enough bandwidth for replicated writes, since a single Gbit link isn’t enough. It’s too late to order InfiniBand, so I’m stuck thinking about the best topology given that we have a single writer, 70 readers, 3 storage (gluster) nodes, about 4 24-port gigabit switches with unused 10Gbit expansion links, and at least 2 gigabit interfaces per node.
More will follow soon
PS: I’m wondering how hard it would be to get a round-robin translator to accelerate replicated writes by issuing the write from the client node to just one of the N replicating nodes and then having them sync automatically among themselves…