Saturday, November 2nd, 2013 | Opensource | Comments
TurboGears 2.3 has been a major improvement for the framework: most of its code was rewritten to achieve fewer dependencies, a cleaner codebase, a cleaner API and a faster framework. This resulted in a reduction to only 7 dependencies in minimal mode and a roughly 3x faster codebase.
While those are the core changes of the release, there are a lot of side effects that users can take advantage of. This is why I decided to start this series of posts describing some of those hidden gems and explaining how to get the best out of the new release.
The first change I’m going to talk about is how response management was refactored and simplified. While this has some direct benefits, it also introduced some interesting side effects that are worth exploring.
How TurboGears on Pylons did it
TurboGears tried to abstract away a lot of response complexity through the tg.response object, and as there were few reasons to override TGController.__call__, the response body was in practice always set by TurboGears itself.
Because Pylons controllers were themselves roughly WSGI-compliant, the TGController was in charge of calling the start_response function, providing all the headers the user had set on tg.response:
response = self._dispatch_call()

# Here the response body got set, removed for brevity

if hasattr(response, 'wsgi_response'):
    # Copy the response object into the testing vars if we're testing
    if 'paste.testing_variables' in environ:
        environ['paste.testing_variables']['response'] = response

    if log_debug:
        log.debug("Calling Response object to return WSGI data")
    return response(environ, self.start_response)
While this made sense for Pylons, where you are expected to subclass the controller to perform advanced customizations, it was never really exposed to TurboGears users.
TurboGears made it possible to change application behaviour using hooks and controller wrappers. So subclassing the TGController was in practice only useful for custom dispatching methods, which was usually better solved by specializing TGController._dispatch (tgext.routes is a simple enough example of this).
Cleaning Up Things
This led to a curious situation where the TGController needed to speak with TGApp through WSGI to make Pylons happy, so it needed to call start_response and return the response iterator itself. TGApp was supposed to be the WSGI application, but in fact most of the real work happened in TGController. In the end we had two WSGI applications: both TGController and TGApp were callables that spoke WSGI.
The 2.3 rewrite was a great occasion to resolve this ambiguity by giving TGController and TGApp a clear communication channel and assigning each a specific responsibility.
In TG2.3 only the TGApp is in charge of exposing the WSGI application interface. The TGController is expected to receive a TurboGears request context object and return a TurboGears Response object. The TGApp then uses the returned response object to submit the headers and the response body.
The TGController code became much more straightforward, and the whole testing and response-calling part moved to the TGApp itself:
try:
    response = self._perform_call(context)
except HTTPException as httpe:
    response = httpe

# Here the response body got set, removed for brevity

return response
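The contract above can be sketched in a few lines of plain Python. This is not the actual TurboGears code; the FakeResponse, FakeController and FakeApp names are made up for illustration, but they show the division of responsibilities: the controller builds and returns a response object, and only the application object speaks WSGI.

```python
class FakeResponse:
    """A minimal stand-in for a TurboGears/WebOb Response object."""
    def __init__(self, body, status='200 OK', headers=None):
        self.body = body.encode('utf-8')
        self.status = status
        self.headers = headers or [('Content-Type', 'text/html')]

class FakeController:
    def __call__(self, context):
        # The controller never touches start_response anymore:
        # it just builds and returns a response object.
        return FakeResponse('Hello from the controller')

class FakeApp:
    """Only this object exposes the WSGI application interface."""
    def __init__(self, controller):
        self.controller = controller

    def __call__(self, environ, start_response):
        # Ask the controller for a response, then submit its
        # status, headers and body over WSGI ourselves.
        response = self.controller(context={'environ': environ})
        start_response(response.status, response.headers)
        return [response.body]

# Drive it with a throwaway start_response to see the flow.
collected = {}
def start_response(status, headers):
    collected['status'] = status

app = FakeApp(FakeController())
body = app({}, start_response)
```

With this split there is exactly one WSGI callable, and the controller's return value is an ordinary object the application layer can inspect or replace.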
This was possible without breaking backward compatibility because the only TGController subclass common in the TurboGears world was the BaseController class implemented by most applications.
The BaseController usually acts just as a pass-through between TGApp and TGController, to set up some shortcuts to authentication data and other helpers for each request. So the fact that the parameters received by BaseController.__call__ changed didn’t cause a huge issue, as they were just forwarded to TGController.__call__.
A little side effect
One of the interesting effects of this change is that your controllers can now return any instance of webob.Response.
In previous versions you could in practice only return webob.exc.WSGIHTTPException subclasses (as they exposed a wsgi_response property which was consumed by Pylons), so it was possible to return an HTTPFound instance to force a redirect, but not a plain response.
A consequence of the new change is that your controller can call third-party WSGI applications by using tg.request.get_response with the given application. The returned response can be provided directly as the return value of your controller.
This behaviour also makes it easier to write reusable components that don’t need to rely on tg.response and mutate it. Your application can forward the request to them and proxy back the response they return.
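To make the proxying idea concrete, here is a standard-library-only sketch of what a helper like tg.request.get_response does under the hood: invoke a third-party WSGI callable against an environ and capture its status, headers and body as a response you can hand back. The call_wsgi_app helper and third_party_app are hypothetical names written for this example, not TurboGears APIs.

```python
from wsgiref.util import setup_testing_defaults

def third_party_app(environ, start_response):
    # Stands in for any third-party WSGI application you want to proxy.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'proxied body']

def call_wsgi_app(app, environ):
    """Run a WSGI app and capture its status, headers and body."""
    captured = {}
    def start_response(status, headers, exc_info=None):
        captured['status'] = status
        captured['headers'] = headers
    body = b''.join(app(environ, start_response))
    return captured['status'], captured['headers'], body

# Build a minimal valid WSGI environ and forward the "request".
environ = {}
setup_testing_defaults(environ)
status, headers, body = call_wsgi_app(third_party_app, environ)
```

In a real TurboGears 2.3 controller you would not need this helper: you would call tg.request.get_response(app) and return the resulting response object directly.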
Part #2 will cover Application Wrappers, which greatly benefit from the new response management.
Monday, January 16th, 2012 | Opensource, Software Development | Comments
We are coders, and we love to code. Our job is coding, and we have fun doing it.
So we decided to have some fun with a 24-hour non-stop coding experiment in our favourite and beloved Italian restaurant (well, actually a taverna)!
We will have a mini-hackathon tomorrow at 3pm our time, to celebrate together the past 2011 working year, which was great! Free food, free alcohol and a couple of cots are all we need to brainstorm and put into action our plan to take over the world!
Actually this “coding marathon” has a subject: all the projects and ideas developed should be related to “Food / Dining”!
The products of the day, if useful, will be refined in the field and used by a restaurant and a couple of bars, free of charge. So if the results are good, this could be the first step towards a new free-software project.
In fact we probably won’t have just software projects tomorrow, because the team is heterogeneous: lawyers, designers, creative people, musicians… and obviously coders!
The plan includes a couple of hours to study the problems of a restaurant, and in particular of our host. The owner will walk us through his business processes, and we will try to understand where we could help with our skills.
The next step is to turn those thoughts into reality in the following 20 hours (: that’s it, nothing easier, huh?
In the next post I’ll report the outcomes and the people behind them!
Saturday, March 19th, 2011 | Opensource, Software Development, Web | Comments
Stroller is our way of doing e-commerce! We already have some clients using this module, written in Python and easily importable to get a fully working e-commerce section.
After testing it in production for some customers, and with it already in stable shape, we decided to release a community edition as free software.
In fact you can now download it from pypi:
or just check out the sources from our repositories:
hg clone http://repo.axant.it/hg/stroller
For now stroller does not have a site and home of its own ;( but fortunately all of our projects are listed in a fancy “temp house” on projects.axantlabs.com, stroller included: just check out http://projects.axantlabs.com/Stroller
There you’ll find the main features of this piece of software. Enjoy!
Updates are coming: we have some strategic moves in mind to make it a ‘first choice’ product.
Wednesday, December 8th, 2010 | Opensource, Software Development, Web | Comments
As we go deeper with ACR, our CMS, it’s becoming a full-featured product. We are using it more and more in production, and we decided to release a new version on PyPI.
New features include:
- Plugin support: you can now add your own sections to the administration panel, add views to the add-slice menu and implement new functions or views.
- Multisite support: serve multiple sites from one single WSGIDaemonProcess.
- Theme support: create your own themes for ACRCMS.
- A simple scripting language to automate some actions during theme setup.
- User Defined Views: create new content types without having to write a single line of code.
- SliceGroup Admin, which lets editors change the content of image galleries and news without needing access to the admin panel.
- Support for Disqus comments.
- Export HTML slicegroups as RSS feeds.
- Various plugins for slideshows, image galleries, accordions, etc.
- RDisk should now be faster and has content caching.
- “Change my password” for the currently logged-in user.
- Binary data can be stored inside contents as base64 and served through the /data call.
- Integration with the Stroller e-commerce (Stroller will be released as open source in the near future).
Here is the complete changelog since our latest release:
* work around to make it go with buggy sprox 0.6.10 (patched version still not released)
* Fix problem with Tg2.0 not casting headers to str
* Add delete udv and preview template for udvs
* Finalize user defined views with support for single selection fields, html and files
* Permit to serve fields encoded as per HTML5 base64 data source definition
* Add etag caching to rdisk based on file modified time
* Skeleton for user defined views
* Merge changes from master branch which improve multisite support
* Move every reference to stroller inside the stroller plugin itself
* Working delete action for slicegroup admin
* ACR might be mounted averywhere, never suppose its url, always use url() from libacr to generate acr urls
* Fix for TG2.0.3 (before tg2.1 remainder is a tuple and is not editable)
* SliceGroupAdmin plugin seems to work fine for adding things
* Make it work with Pylons1.0
* First import of sgadmin
* Stroller plugin preparatives
* Google Analytics change section
* Added new LinkedImage view same as the Image, just with a link utility. Must be evolve it, in the future to make him reuse the original view, without code duplication.
* Try to solve problems with repoze.who and repoze.what when running multiple acr sites inside the same daemon process and group
* Remove code render template from views as it collides with plugins
* Add setup script support to themes and fix default page url
* MCE options
* Fix icons and section for new plugins
* Fix for setup-app failing due to plugins
* gmap working (hopefully)
* gmap.js from static to plugin
* Merged single process mode, seems to be stable enough
* Moved GoogleMaps viewer to GoogleMaps plugin
* Disqus plugin
* Google Analytics from plugin static to site instance
* GoogleMaps key moved to database & added modify plugin
* Added tag classes for slices
* Add slice cloning
* Make rdisk_root dynamic to be able to run multiple instances in same process
* Permit to force lang from request
* Use genshi dict dotted access instead of a module
* permit to access and manipulate content from genshi slice
* add tabs plugin
* Added a class to the page as the uri of the page to make possible Custom css classes for page, hence different style for different pages
* Style fixes to administration menu
* fix crash when unable to contact pypi
* tests with multiengine
* engine from config
* first attemp at making a single process acr in the simplest way
* bind engine at each request
* cache session per db
* More experiments to make acr work on single process
* try to make acr work inside one single process
* Automated merge with ssh
* make section use id instead of class and declare in a less colliding way
* detect script slices also derived from default page
* Removed unused imports
* Fix done to correct the position of the excerpts under IE7
* Closes #43 it adds exception handling in case of failure of pypi version check, and just logs a warning to notify
* Minor edits
* Replaceing file templating, with python's Template string module
* Minor edits
* Added new plugin to insert google analytics tracking to the site
* removed useless import
* Moved update check under helpers as it is more appropriate and, changed naming convention to follow standards
* Added new plugin to add uservoice feedback tab to pages
* add slice type class to slices, refactor properties management and add find_by_property helper
* Added help on accordion plugin, to help user interaction on creation
* Added same size of the edit button menus as the slice/slicegroup element
* Created new container for the heading admin section as pseudo-tabbed links + minor style edits on the css
* Added release update notification
* make script view wrap content with script tag and migrate existing plugins to use it
* add script slice type and menu
* Make rdisk upload view type dependant, fix videos and make deletion work on actual slice content instead of slice name
* Removed oops, console.log + Removed timestamp from end of slicegroup names to allow, reusability of the acrodion within a page, if you edit and recreate the sliceroup
* add description placement management for image slice
* Added Accordion Plugin, it will put a template for creation of Accordion galleries filters by tag on uploaded images
* fix problem with image thumbnails not showing if not logged in and add Slideshow plugin
* plugin injected resources
* add image, video and file slices to add slice menu if there is rdisk available
* themes plugin for acr
* initial work to make acr_cms working on multiple sites with only on installation, necessary to implement themes support
* improvements to edit bar
Friday, August 27th, 2010 | Opensource, Web | Comments
Like every year, Packt Publishing is organizing the Open Source CMS Awards, and the nomination phase has just started.
The following categories make up the contest.
Open Source CMS
Hall of Fame CMS
Most Promising Open Source Project
Open Source E-Commerce Applications
Open Source Graphics Software
We decided to propose ACR for the Most Promising Open Source Project and Open Source CMS.
Even though it is young, our CMS is already interesting: it’s quite easy to deploy, it integrates with other TurboGears applications (like we did for iJamix) in a breeze, and it already has most of the features you would expect from a full-fledged CMS.
In our humble opinion it is the best and most promising TurboGears2-based CMF/CMS out there.
Saturday, July 31st, 2010 | Opensource | Comments
Just a quick note: even if the Makefile is quite easy to read, sometimes you’d rather have the solution right away.
This is what I use for the efika images. The layout is the following:
/dev/sdc1 on /mnt/efika/boot type vfat (rw)
/dev/sdc2 on /mnt/efika/ type ext4 (rw)
CROSS_COMPILE and INSTALL_MOD_PATH are described quite well in the Linux kernel Makefile.
The former tells the build which toolchain should be used, the latter where to install the modules. Pretty easy, isn’t it?
Be careful not to forget the trailing “-” in CROSS_COMPILE; you can omit the trailing “/” in INSTALL_MOD_PATH.
make CROSS_COMPILE=armv7a-unknown-linux-gnueabi- ARCH=arm menuconfig
make CROSS_COMPILE=armv7a-unknown-linux-gnueabi- ARCH=arm
make CROSS_COMPILE=armv7a-unknown-linux-gnueabi- ARCH=arm uImage
make CROSS_COMPILE=armv7a-unknown-linux-gnueabi- ARCH=arm INSTALL_MOD_PATH=/mnt/efika modules_install
In this specific case we also need the u-boot tools (they provide mkimage) to bake a uImage; on Gentoo, install the u-boot tools package.
Wednesday, July 28th, 2010 | Computer Science, Opensource, Software Development, Web | Comments
Now you just need to:
pip install libacr
and you are done, as easy as that!
More details on http://pypi.python.org/pypi/libacr/
Tuesday, July 27th, 2010 | Opensource | Comments
Recently we decided to move our TurboGears 2 CMS, ACRCMS, from being an example of how to use libacr to something more complete. Following this decision we implemented a plugin architecture inside libacr, which permits adding plugins to libacr at run-time.
Until now there were no real plugins; the only available ones were the three acr slice templates, which were moved from an internal function to a plugin.
Today the first real plugin for ACR was pushed to a separate branch: the Theme engine, which adds theme support to ACR. The plugin lives inside the ACRCMS acr_plugins directory and is now loaded by default by ACRCMS; it will officially be part of ACRCMS in the next release.
For now you can test it from the themable branch of ACR
Monday, May 31st, 2010 | Opensource, Software Development | Comments
I came across this news http://gcc.gnu.org/ml/gcc/2010-05/msg00705.html and it puzzled me a bit: the system C compiler now depends on C++, which in fact makes it no longer self-hosting.
That alone makes me think whoever decided and whoever requested that is next to suicidal. GCC is known for having _quite_ a shaky C++ standard library AND ABI, with at least one incompatibility every major version and sometimes even between minor ones.
I dislike C++ usage mostly on this basis, let alone the fact that it is an overly large language, with not enough people dabbling in it properly, let alone being proficient.
There are already compilers written in C++; one that many people find interesting is llvm. It doesn’t aim to be a system compiler, and it’s not exactly self-hosting.
Many have already stated that they would switch to the llvm clang front-end once it reaches full maturity (and FreeBSD has proved that this level has pretty much been achieved). I didn’t consider fully switching to it precisely because it depends on C++, and because of how easy it is to get subtle yet major breakages in implementations of that language.
The llvm people look to me far more capable of managing C++ than the GCC ones, and I welcomed with pleasure the fact that they already have a libc++ implementation.
Back to being suicidal: if I have to pick between people that did well with C++ and people that botched many times in the same field, who would I pick?
The current discussions on the GCC mailing list are about C++ coding style, which features to pick and which to forbid, rearchitecting the whole beast to use a “proper” hierarchy and such; basically some (many?) want to redo everything with the new toy. That makes me think again that llvm will be a better target for the next months and years.
I hope there are enough GCC developers and/or concerned parties that will fork gcc now and keep a C branch. A radical cleanup and refactor is a completely orthogonal issue and should be done no matter whether they pick C++ or C as the implementation language; GCC has lots of cruft, starting from its bad usage of the autotools.
Wednesday, April 28th, 2010 | Multimedia, Opensource | Comments
I said “a pain”: quite subjective, and I think the term summarizes my overall experience with it. Whether that’s because it is “different”, “undocumented” or just “bad” is left for you readers to decide. Even messing with NUT hadn’t been this painful, and NUT is anything but mature. Now let’s get back to digressing about Ogg. Mans stated what he dislikes and why; Monty defended his format and stated that his next container format will address some of the shortcomings he agrees are present in Ogg. Both the defense and the criticism are quite technical; I’ll try to explain why I think Ogg, as is, should not be considered a savior, using more down-to-earth arguments.
Tin cans, glass jars, plastic bottles… Containers!
Let’s think about something solid and real. If you compare a container format to real-life storage you might see some similarities:
- Usually you’d like the container to be robust, so it won’t break if it falls
- You’d like to know what its content is without having to open it
- You’d prefer it to weigh as little as possible if you are going to carry it around
- You’d like to be able to open and close it with minor hassle
- If it has compartments, you’d like them not to break, and picking out and telling apart what it contains should be as easy as possible
There could be other points, but I think those are enough. Now let’s see why some people dislike Ogg, using those 5 points: robustness, transparency, overhead, accessibility and seekability.
Robustness
I think Ogg is doing fine on robustness; the other containers are fine as well in my opinion.
Transparency
In this case the heading is a bit strange, so let me explain what I mean. If you think about a tin can or a glass jar, you can usually figure out what’s inside a transparent container better than an opaque one; obviously you can also have labels with useful information. Usually you feel better if you can figure out some details about the content of a can even if you don’t know how to cook it.
With Ogg, in order to get a lot of the data that other containers provide as-is, you need to know some intimate details about the codecs muxed in. “What’s the point of knowing them if I’m not able to decode it?” is an objection I saw raised in the Slashdot comments. Well, what if you do not want to DECODE it, but just report information about it or serve it in a different way (you know, actual streaming, not solutions based on HTTP)? See e.g. Feng and DSS, to name a couple.
Overhead
People like plastic bottles over glass ones since the latter are heavier. When you move and store them, that’s an actual concern.
Monty states that with a recent implementation of the muxer (libogg 1.2) the overhead in Ogg is about 0.6-0.7%; Mans states that it ranges between 0.4% and 1%, usually nearer to 1% than to 0.4%. So in my opinion they agree. How does that value fare against other containers? Mans states that the ISO mp4 container can easily achieve about a tenth of the Ogg overhead. Monty went rambling about lots of different containers, stating some depressing numbers and discussing random access on remote storage over HTTP and other protocols that are not meant for that.
I think there is large room for improvement here, or at least some benchmarking is required to see whether that is true or false.
Accessibility
As in: “which tools do I need to use them?”. Ogg has several implementations, some even in hardware. Mans states that in some situations (e.g. embedded or minimalistic platforms) it isn’t the best choice; Monty states that similar problems exist for the other containers as well.
As I stated before, in order to process an Ogg you need to be able to decode it, or at least to have knowledge of the codec that nears the ability to fully decode it. That means you cannot do some kinds of processing that in other containers don’t require such a quantity of code and, to a degree, such CPU resources. Being able to do a streamcopy, or to pick just a range out of the content, shouldn’t require decoding ability; those features are quite nice when you do actual streaming.
Seekability
Mans thinks Ogg’s way of moving to a random offset within the media has a great number of issues, some related to the aforementioned requirement of knowing a lot about the codec inside the container, others due to the strategy used to actually find the requested offset within the file. Monty again goes rambling about accessing remote files over HTTP and about how the other containers aren’t much better. The Slashdot article is already full of people stating that they DO feel seeking in Ogg is SLOW, no matter the player, and that alone kills Monty’s arguments.
In the end
Obviously nothing is perfect and everything is perfectible. I do not like Ogg, and I’m not afraid to state it.
Quite often you get labeled as “evil” if you state that, or, god forbid, if you say that Theora isn’t good enough and that maybe, if you are that concerned about patents, mpeg1 is the way to go since its patents are expired.
I’m quite happy with mkv and mov; I’ll probably use NUT more if and once it gets more spin and community. I’ll watch with curiosity how transOgg evolves.
PS: I liked this comment a lot. Monty, do your homework better =P