The origins of the class Meta idiom in Python

So I keep finding this class Meta idiom in Python APIs lately. I found it in factory-boy and WTForms, and I suspected they both got it from Django, but I googled and couldn't find any explanation of the reason for it, where it came from, or why they all call it class Meta. So here it is!

TL;DR What it is

The inner Meta class has absolutely no relation to Python's metaclasses. The name is just a coincidence of history (as you can read below).

There's nothing magical about this syntax at all. Here's an example from Django's documentation:

class Ox(models.Model):
    horn_length = models.IntegerField()
    class Meta:
        ordering = ["horn_length"]
        verbose_name_plural = "oxen"

Having an inner Meta class makes it easier for both the users and the ORM to tell what is a field on the model and what is just other information (or metadata, if you like) about the model. The ORM can simply pop 'Meta' out of the model's attributes to retrieve the information it needs. You can do the same in any library you implement, just as factory-boy and WTForms have done.
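
To see the mechanics, here is a minimal sketch of the trick (my own illustration, not Django's actual implementation): a metaclass pops the inner Meta class out of the class namespace before the class is built, so it can never clash with a field:

class ModelBase(type):
    def __new__(mcs, name, bases, attrs):
        meta = attrs.pop('Meta', None)  # remove Meta so it never looks like a field
        cls = super().__new__(mcs, name, bases, attrs)
        # keep the options on the class, out of the way of field names
        cls._meta = ({key: value for key, value in vars(meta).items()
                      if not key.startswith('_')} if meta else {})
        return cls

class Model(metaclass=ModelBase):
    pass

class Ox(Model):
    horn_length = 42  # stands in for a real field object
    class Meta:
        ordering = ["horn_length"]
        verbose_name_plural = "oxen"

print(Ox._meta)             # {'ordering': ['horn_length'], 'verbose_name_plural': 'oxen'}
print(hasattr(Ox, 'Meta'))  # False: Meta was popped before the class was built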

Some early Django history

Now for the longer story. I did some software archaeology (which should totally be a thing!) and discovered the first commit that mentions class Meta (actually class META [1]) in Django: commit 25264c86. There is a Release Notes wiki page which includes that change.

From there we can see how Django models were declared before the introduction of the internal class Meta. A Django Model class had a few special attributes. The db_table attribute held the SQL table name. A fields attribute was a tuple (!) of instances of field types (e.g. CharField, IntegerField, ForeignKey); these mapped to SQL table columns. Another interesting attribute was admin, which was mostly used to describe how that model would behave in Django's admin interface. All of these classes were defined in the django.core.meta package, i.e. meta.Model, meta.CharField, meta.ForeignKey, meta.Admin. That's so meta! (and probably where the name came from in the end)
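
To picture it, a model from that era would have looked roughly like this (my reconstruction from the attributes described above, not code copied from the 2005 tree; note the field names passed as first arguments):

from django.core import meta  # the old (pre-0.95) package layout

class Poll(meta.Model):
    db_table = 'polls'                    # the SQL table name
    fields = (
        meta.CharField('question', maxlength=200),
        meta.DateTimeField('pub_date', 'date published'),
    )
    admin = meta.Admin()                  # admin-interface behaviour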

In a django-developers mailing list thread from July 2005 titled Cleaner approach to ORM fields description, user deelan suggests bringing some of SQLObject's ideas to Django's ORM. This seems to be the first seed of the idea of having an inner class in Django to store part of a model's attributes:

it's desiderable to avoid name clashes between fields, so it would be
good to have a way to wrap fields into a private namespace.
— deelan, Cleaner approach to ORM fields description

At the end of the thread, Django ticket 122 is created, which seems to contain the first mention of a separate internal Meta class.

What started off as a backwards-compatible change soon turned backwards-incompatible and became the first really big community-driven improvement to Django, as Adrian Holovaty would later describe it in the release announcement which included the change.

The first patch on ticket 122, by Matthew Marshall, started by suggesting that fields should be definable directly on the model class, as class attributes (Build models using fieldname=FieldClass), rather than in the fields list. So:

class Poll(meta.Model):
    question = meta.CharField(maxlength=200)
    pub_date = meta.DateTimeField('date published')

rather than:

class Poll(meta.Model):
    fields = (
        meta.CharField('question', maxlength=200),
        meta.DateTimeField('pub_date', 'date published'),
    )

The patch also suggested that there should be two ways of defining a ForeignKey:

ForeignKey = Poll, {'edit_inline': True, 'num_in_admin': 3}

# the attribute name is irrelevant here:
anything = ForeignKey(Poll, edit_inline=True, num_in_admin=3)

In his first comment, mmarshall introduces the inner class Meta to hold anything that’s not a field: the table name (strangely renamed to module_name) and the admin options. The fields would be class attributes.

The decision over what goes in an inner class and what goes in the outer class seems to be left to the user. An optional inner class Field would also be supported, so the fields could live there and the metadata would live as class attributes (this seemed to offer the advantage of being backwards-compatible with the admin class attribute, while allowing tables to have a column that's also named admin).

There are some other ideas thrown around and the syntax for ForeignKey is also discussed. At one point, Adrian Holovaty (adrian) intervenes to say (about the original class Meta/class Field suggestion):

It’s too flexible, to the point of confusion. Making it possible to do either class Meta or class Field or plain class attributes just smacks of “there’s more than one way to do it.” There should be one, clear, obvious way to do it. If we decide to change model syntax, let’s have class Meta for non-field info, and all fields are just attributes of the class.
— Adrian Holovaty, django ticket 122, comment 9

The thread goes on from there. There are some detractors of the idea (citing performance and conformance with other Python APIs), there are discussions about implementation details, and there is more talk about the ForeignKey syntax.

Then, in a dramatic turn of events, Adrian Holovaty closes the ticket as wontfix!:

Jacob [Kaplan-Moss] and I have talked this over at length, and we’ve decided the model syntax shouldn’t change. Using a fieldname=FieldClass syntax would require too much “magic” behind the scenes for minimal benefit.
— Adrian Holovaty, django ticket 122, comment 33

It's interesting, because IMHO this was a huge differentiator in making Django's models API more human, and it was also what other frameworks like Rails and SQLObject were doing at the time.

An IRC discussion is then referenced in the ticket. [2] From that discussion, it seems that adrian's reasons for closing were mostly concerns about the ForeignKey syntax and about making a backwards-incompatible change to the model. rmunn does a great job of moderating the discussion, clarifying the situation and everyone's opinions, while strongly pushing for the new syntax.

The trac ticket is reopened as a consequence, and it looks like smooth sailing from then on. A few days later the new syntax is merged and the ticket is once again closed, this time with Resolution set to fixed.

Adrian later announced the change in a django-developers mailing list post. Here are some interesting fragments from that post:


I apologize for the backwards-incompatibility, but this is still unofficial software. ;-) Once we reach 1.0 — which is much closer now that the model syntax is changed — we’ll be very dedicated to backwards-compatibility.

I can’t think of any other backwards-incompatible changes that we’re planning before 1.0 (knock on wood). If this isn’t the last one, though, it’s at least the last major one.
— Adrian Holovaty, IMPORTANT: Django model syntax is changing

Things didn't go as planned. In May 2006 came commit f69cf70e, exactly another let's-change-everything-in-one-huge-branch commit, which was released as part of Django 0.95. As part of this API change, class META was renamed to class Meta (because it's easier on the eyes). You can find the details on the RemovingTheMagic wiki page. It's funny how in ticket 122 all the comments use the Meta capitalization, except for the last person (who I guess submitted the patch), who uses META. There was some discussion about it, both in the ticket and on IRC: a few people had concerns that users of Django would actually want to have a field called Meta in their models, and the inner class name would clash with that.

That’s it. Almost…

Anyway, so that's the end of the story of how Django got its class Meta. Now, what if I told you that all of this had already happened more than a year earlier in the SQLObject project? Remember that first post to django-developers which said Django models should hold some of their attributes in a separate inner class, like SQLObject already does?

In April 2004, Ian Bicking (creator of SQLObject) sent an email to the sqlobject-discuss mailing list:

There’s a bunch of metadata right now that is being stored in various instance variables, all ad hoc like, and with no introspective interfaces. I’d like to consolidate these into a single object/class that is separated from the SQLObject class. This way I don’t have to worry about name clashes, and I don’t feel like every added little interface will be polluting people’s classes. (Though most of the public methods that are there now will remain methods of the SQLObject subclasses, just like they are now) So I’m looking for feedback on how that should work.
— Ian Bicking, Metadata container

His code example:

class Contact(SQLObject):
    class sqlmeta(SQLObject.sqlmeta):
        table = 'contact_table'
        cacheInstances = False
    name = StringCol()
SQLObject's community did not seem nearly as animated as Django's. There were a couple of emails on the sqlobject-discuss mailing list from Ian Bicking which included the proposal and asked for feedback. I suspect some discussion happened through other channels, but this community was neither as big nor as good at documenting its workings as Django's. (And SourceForge's interface to the mailing list archives and CVS logs does not make this easy to navigate.)

A year later, Ian Bicking took part in the django-developers mailing list discussion, where he made some small syntax suggestions, but it does not seem that he made any other contributions to the design of this part of the Django models API.

Conclusion

As far as I could tell, Ian Bicking is the originator of the idea of storing metadata in a metadata container inner class, although it was the Django project that settled on the class Meta name and popularised it outside of its own community.

Anyway, that's the end of the story. To me, it shows just how awesome open source and the open internet can be. The fact that I was able to find all of this 11 years later, complete with the original source code, commit logs and all the discussion around the implementation on the issue tracker, mailing lists and IRC logs, is just amazing community work and brings a tear to my eye.

Hope you’ve enjoyed the ride!

[1] Because in 2005, people were less soft-spoken.

[2] It's very fun, you should read it. At some point someone's cat catches a blue jay. And I think they meant it literally.


FOSDEM 2012 review

I went to FOSDEM this year. Thanks SUSE for sponsoring my trip! Here is a short review for the projects that I found interesting at this year’s FOSDEM.

SATURDAY

The Aeolus Project

Francesco Vollero – Red Hat

This is a very interesting project, if you can get past how meta it is. It wants to be an abstraction over all the existing private and public cloud solutions. The aim of the project is to be able to create and control a virtual system throughout its life cycle. An image can be converted from one VM format to another and be deployed or moved from one cloud provider to another. Groups of images can be set up and controlled together. The way resources are managed and billed would also be cloud-independent.

It relies heavily on the DeltaCloud project.

Open Clouds with DeltaCloud

Michal Fojtik – Red Hat

DeltaCloud aims to be a RESTful API that is able to abstract all of the other public or private cloud APIs, allowing for the development of cloud-independent software. The project says it wants to be truly independent (esp. from Red Hat). It was accepted as a top-level Apache project.

DMTF CIMI and Apache DeltaCloud

Marios Andreou – Red Hat

The CIMI API is a specification for interacting with various cloud resources. A lot of very big companies are part of the DMTF Cloud Management Working Group: Red Hat, VMware Inc., Oracle, IBM, Microsoft Corporation, Huawei, Fujitsu, Dell. It is currently being implemented as part of the DeltaCloud API. The presenter also showed some implementation details: a lot of the code is shared between the DeltaCloud and CIMI APIs.

Infrastructure as an opensource project

Ryan Lane – Wikimedia Foundation

The talk went into some detail about the whole Wikimedia setup. It is built on top of open source projects and aims to be entirely free and available to anyone who wants to know more about it. The speaker presented some of the issues that the Wikimedia organization faced when they decided to give full root access to their machines to volunteers and how to allow for different levels of trust.

Orchestration for the cloud – Juju

Dave Walker – Canonical

Juju is a system for building recipes of configurations and packages that can then be deployed on OpenStack/EC2 systems. The project aims to integrate with tools like Chef and Puppet to be able to manage deploying, connecting, configuring and running suites of applications in the cloud.

OpenStack developers meeting

This was a rather informal discussion. Four major distros were present: Fedora, Ubuntu, SUSE and Debian, as well as some other contributors. Upstream asked about the problems that distributions face, and some minor one-time occurrences were discussed briefly. Stefano Maffulli, the OpenStack community manager, was also present, and there were some heated discussions about the way the project is governed. There are still a lot of things being discussed behind closed doors: negotiations about the future of the project and fundraising are conducted with only a few big companies at a very high level. The community, on the other hand, was very vocal about wanting to rule itself with no enterprise interference.

Rethinking system and distro development

Lars Wirzenius

He advanced the idea of maintaining groups of packages, all locked at specific versions. Having the maintainers always know which combination of versions a bug comes from would make the whole environment easier to replicate and the bug easier to reproduce. This would also, supposedly, reduce some of the complexities of dealing with dependencies.

These groups of packages would be built directly from the upstream’s sources, following rules laid out in a git repository. The speaker also said he wants to get rid of binary packages completely.

If this were to be implemented, distributions could write functional tests against whole systems (continuously built images), rather than individual binary packages and ensure that a full configuration works.

Someone from the audience mentioned that a lot of the ideas in the talk are already implemented in NixOS (nixos.org), which looks like a very interesting project in itself.

SUNDAY

Continuous Integration / Continuous Delivery

Karanbir Singh – CentOS

The speaker discussed the system which CentOS uses for continuous integration. I liked their laissez-faire approach to the choice of functional test language. They basically allow any language/environment to be used when running tests. The only requirement is that a test returns 0 on success and something else on failure. Anyone can write functional tests in any language they want (they just specify the packages as requirements for their test environment). People can choose to maintain different groups of packages along with the tests associated with them.
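
To make that contract concrete, a test under such a scheme could be as small as this (my illustration of the exit-code convention, not an actual CentOS test):

#!/usr/bin/env python
# passes iff the centos-release package is installed;
# the exit code is the whole interface: 0 = pass, anything else = fail
import subprocess
import sys

result = subprocess.run(["rpm", "-q", "centos-release"])
sys.exit(result.returncode)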

The Apache Cassandra Storage Engine

Sylvain Lebresne

A lot of interesting concepts about the optimizations made in the Cassandra project in order to speed up writes and make reads twice as fast (almost as fast as writes): different levels of caching, queuing writes, merge sorting the read cache with the physical data on reads, etc.

Freedom, Out of the Box!

Bdale Garbee

An interesting project about making a truly free and easily available system, in software as well as hardware. Some interesting concepts are used in this project, like GPG keys for authentication, but also for the trust required to provide a truly decentralized, peer-based network, free from DNS.


I’ve been to a few other talks that I can’t remember anything from either because of the bad quality of the presentation or because I didn’t have the prerequisite knowledge to understand what they were talking about. Next time I should also take notes.

A lot of the talks were recorded and are available over here (with more coming): FOSDEM 2012 videos. The quality of the recordings (esp. in the main room) is sometimes even better than being there live: the voice is clearer and there is no ambient noise. Also, it was really cold in most of the rooms, so I had to keep my jacket and hat on.


SQL and Relation Theory Master Class

This video course is perhaps the best way to meet the famous C. J. Date and his astonishingly comprehensive style. The lectures are a great introduction to database theory, while at the same time laying a very solid foundation for any database practitioner or theorist. The author introduces some very useful theoretical notions that are essential to grasping the more subtle concepts of database design, and he does so in high-class fashion.

C. J. Date's style of explaining and teaching, which can also be seen in his books, is didactic and very thorough, while at the same time astonishingly clear. Many times while reading the book that these videos are based on, and even afterward while watching the videos, I had to stop in order to reflect on the great volume of information that I had absorbed in a surprisingly simple manner. These videos are full of very deep notions about databases and can really benefit from a review at a later time, just to cement the knowledge or reflect on certain topics which come up during everyday practice.

C. J. Date sets out to demolish SQL as a language fit for relational theory and databases in general. While going through all the database theory concepts, he presents the ideal case and an ideal query language (actually not ideal but, as he demonstrates, the correct ones), contrasting them with generic SQL. He also posits, and sets out to prove in a very interesting argument, that relational databases are the only way to store data and that all other data models will not endure.

These are the days of NoSQL databases, but I think the information contained in these lectures will remain useful far longer, and in far more settings, than just the conventional SQL databases used in the majority of current systems. I oftentimes find myself thinking in relational terms even while designing the redis data model that I'm currently working on.

The only problem I have is that I sometimes felt the lectures were a bit dull. It is also possible that I got this impression because I was watching too many in a row :). While the content of the lectures is excellent, the presentation could be improved. Oftentimes I felt that the audience present in the classroom could have done more to improve the dynamism of the lectures; it seemed that the only reason they were there was so that the presenter wouldn't feel alone. I would have enjoyed more challenging questions, and especially some skeptical comments from industry veterans. I'm sure those would have led to very interesting debates, considering the high class of the lecturer and, presumably, the attendants.


Copr final report

Fedora Summer Coding is now over for me and I’m really glad of what I learned and coded this summer.

Our initial goal was to develop a TurboGears2 web app and JSON API for Fedora Copr. When finished, Copr should give everyone a place to build Fedora packages and host custom repositories for all to enjoy. This is a project that should prove quite popular in the Fedora community when it gets released, and I'm glad to have played a role in its development.

At first I worked on the web app, modeling the database and the relationships between coprs, repos and packages, and then developing the JSON API. When the midterm came, my mentor and I decided that I should also contribute to the other parts of Copr. The original schedule had a simple command-line client planned, but we went further than that. In the end, all of the functionality of the JSON API also got implemented in a client library (based on and very similar to python-fedora) and in a command-line client. I also got to dive into python-fedora's and repoze.who's internals in order to get basic HTTP authentication working for TurboGears2.

My latest work has been on the func module. This is the buildsystem part of Copr. Func minions running this module will be commanded by headhunter (Copr's scheduler) to build packages in mock and then move them into repositories. The module also creates, updates and deletes package repositories, and will check the built packages for Fedora conformance (e.g. licensing) – this last part is not yet implemented. I got to play with virtual machines, func, mock and createrepo.

There is a more concise overview on the wiki of all the different things that got implemented.

Overall, I'm really glad of what I learned this summer. This project got me involved in a lot of different levels of the architecture of a web service and a lot of different technologies. Some of the things I worked on looked really scary at first, but as I got closer and read more code, the mist slowly vanished.

My mentor, Toshio Kuratomi, was great as always. This isn't the first project I've had him as my mentor on. He was always there to talk to and always had great answers to all of my questions, and great patience in answering and explaining anything I asked about. Our discussions were mostly about the architecture of the app we were building, but he also gave me great tips on the inner workings of python-fedora or on deploying the web app. I felt I had a lot of liberty to decide how things would get implemented. Regardless of whether we ever work together again, Toshio will always be a great inspiration for me, as a programmer and as a person.


FSC: moving to the buildsystem

I started working on the buildsystem part of Copr this week. Right now, I'm still getting familiar with func. That's what we'll be using to communicate with the builder machines: sending them errands to run and getting back status reports at any time. I spent a lot of time getting a virtual machine set up with libvirt; networking especially was a pain (mostly because of my PPPoE connection, I think).

One nice feature of func that I think we'll be using a lot is the async mode. A mock build takes a lot of time, what with all the yumming and compiling. So starting a task via one of the user interfaces and then choosing whether or not to keep watching it, and what to watch for, will probably be an essential part of the buildsystem's functionality.
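
From my reading of the func docs so far, the async pattern looks roughly like this (an untested sketch from memory, written against Python 2-era func; treat the exact names as assumptions):

import func.overlord.client as fc

# async=True makes calls return a job id immediately instead of blocking
# (func is Python 2 software; 'async' predates the Python 3 keyword)
client = fc.Client("builder1.example.com", async=True)
job_id = client.command.run("mock rebuild something.src.rpm")

# ...later, from whichever user interface is watching the build:
return_code, results = client.job_status(job_id)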

In the meantime, we're slowly getting resources for the deployment of Copr. Toshio got an instance of the current state of the TG app running on publictest1. It looks just like a quickstarted TG app, because it doesn't have any web UI yet. But it can CRUD coprs, handle dependencies between them, handle permissions and CRD packages. Most of the functions require a FAS account, but you don't need one to see a list of all the coprs, or a list of packages in a copr.


the Copr client part II

I spent this week finishing up the copr client. It now supports all the functionality that the Copr TG API supports. It’s not much, but it’s a starting point.

I spent a lot of time trying to understand the way repoze.who works and the authentication plugins that we're using for python-fedora's FAS authentication. I finally understood it, I think… The Fedora client library didn't support basic HTTP authentication for TG2 apps, so I had to figure out how to integrate that into our authentication plugin. It was quite fun all in all; repoze.who has a very interesting way of doing authentication, and writing WSGI middleware is always exciting ;). This patch will hopefully go upstream to python-fedora now.
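
To give a taste of that interesting way of doing authentication: repoze.who splits the work across small plugins, and an identifier plugin for basic auth has roughly this shape (a simplified sketch of the interface as I understand it, not the actual python-fedora plugin):

import base64

class BasicAuthIdentifier(object):
    """A repoze.who IIdentifier-style plugin, sketched for illustration."""

    def identify(self, environ):
        # pull the credentials out of the Authorization header, if present
        auth = environ.get('HTTP_AUTHORIZATION', '')
        if not auth.lower().startswith('basic '):
            return None
        try:
            decoded = base64.b64decode(auth[6:]).decode('utf-8')
            login, password = decoded.split(':', 1)
        except Exception:
            return None
        return {'login': login, 'password': password}

    def remember(self, environ, identity):
        # basic auth is stateless: the client resends credentials itself
        return None

    def forget(self, environ, identity):
        # headers that tell the client to re-authenticate
        return [('WWW-Authenticate', 'Basic realm="Copr"')]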

This next week I’ll probably start working on the buildsystem part of Copr. There are a lot of new things to learn there.


the Copr client

This last week I started working on the command line client for Copr. Luckily, python-fedora already has a lot of code in place to make the task of writing clients for TurboGears apps a lot easier. Some of the apps in Fedora Infrastructure are already using this library, which makes for some good examples.

So I'm building a client library and a client um… command line client. The command line client is basically one big argparse application that calls the functions in the client library and sometimes does a bit of formatting on the output. The client library extends fedora.client.BaseClient and mostly just calls JSON methods on the Copr server.
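
The resulting library is roughly this shape (a sketch from memory of python-fedora's BaseClient; the method name and URL here are hypothetical):

from fedora.client import BaseClient

class CoprClient(BaseClient):
    """Thin wrapper: every method is one JSON call to the Copr server."""

    def list_coprs(self):
        # send_request does the HTTP work and decodes the JSON response
        return self.send_request('list', auth=False)

client = CoprClient(base_url='https://copr.example.org/')
print(client.list_coprs())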

It's all pretty simple. The hard part is deciding what the command line client's interface will look like: in argparse parlance, which things should be positional arguments and which should be optional arguments. So far I've been inclined to use something that looks like a VCS's interface. Here's what it looks like so far:

$ python client/bin.py -h
usage: bin.py [-h] [-v] [-u USERNAME] [-p PASSWORD] [--url URL]
              {info,edit,create,list,delete}

Command line tool for interacting with Fedora Copr

positional arguments:
  {info,edit,create,list,delete}
    list                list all the available Coprs
    info                get information about a specific Copr
    create              create a new Copr
    edit                edit an existing copr
    delete              delete an existing copr

optional arguments:
  -h, --help            show this help message and exit
  -v, --version
  --url URL             provide an alternate url for the Copr service

authentication:
  -u USERNAME, --username USERNAME
  -p PASSWORD, --password PASSWORD

Right now, all the copr functions are top-level. I wonder if I’ll have to create a deeper level of nesting when I start implementing package-related functions.
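
For the record, this VCS-style interface is mostly a matter of argparse subparsers; here is a trimmed-down sketch (hypothetical, not the real bin.py). It also hints at the answer to the nesting question: a deeper level would just be another round of add_subparsers.

import argparse

parser = argparse.ArgumentParser(
    description="Command line tool for interacting with Fedora Copr")
auth = parser.add_argument_group('authentication')
auth.add_argument('-u', '--username')
auth.add_argument('-p', '--password')
parser.add_argument('--url', help="provide an alternate url for the Copr service")

# each positional subcommand gets its own parser, with its own arguments
subparsers = parser.add_subparsers(dest='command')
subparsers.add_parser('list', help="list all the available Coprs")
info = subparsers.add_parser('info', help="get information about a specific Copr")
info.add_argument('copr_name')

args = parser.parse_args()
print(args)  # dispatch to the client library based on args.command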

I'm also having a few problems with BaseClient that I'll probably have to solve this week. All of the other client libraries were written for TurboGears 1.x, and it seems that authentication has changed in TurboGears 2. There's also no support for HTTP PUT and DELETE, which I would like to use since I used RestControllers in the API. I also had to write a patch for file upload support; that seems to work well so far.


Ruby koans

I had a great time today with Ruby Koans. It took me about 5 hours in all. A good way to spend a Sunday afternoon I suppose.

These Ruby Koans are a great way to go on a quick journey through a lot of Ruby’s common features. You basically have to edit tests in order to get them working. It’s mostly reading tests actually, but the fact that you have to fill in some blanks keeps the mind from wandering. There are also a couple of exercises which imply a bit more coding.

I have a good knowledge of Python and have worked with Ruby in the past on a little Rails project. I had forgotten everything I knew about Ruby, though. Yesterday, I don't think I would've been able to write a foobaz in Ruby without looking for help online. This proved to be a welcome refresher. Solving these koans gives a great tour of Ruby. As I went through them, I kept thinking of how I would do those things in Python. I really like Python's philosophy, and maybe solving all these Ruby koans has made me appreciate Python's simplicity and predictability a bit more. Ruby allows for a lot more flexibility, however, and the koans left me wondering at what amazing feats this language could accomplish.

I wouldn't recommend this to a beginner, however. While I think I now have a pretty good idea of what the language can do, there were no whys or recommendations about all these features. Maybe it would be a good starting point (or a dive) for someone coming from a similar language (like Python), before moving on to a good Ruby book. The website claims that they teach culture in addition to Ruby. I would've liked more of that. Maybe it was too subtle for me, but I didn't notice anything other than some references to oriental philosophy: test_assert_truth has damaged your karma. You have not yet reached enlightenment ...

There are a lot of ports of the Ruby Koans. There's one for Python, and there are also a bunch for functional languages: Clojure, F#, Haskell and Scala. Those look like a lot of fun; maybe I'll try them next week.


Fedora Summer Coding midterm

This midterm scared me when I found out about it on Friday, looking at the schedule I had set myself. However, I have done the work that I should have done by this point in the project. When I wrote the proposal, I had assumed that the buildsystem would already be built before I started coding on the TG app, but that is not the case. Therefore I could only code the user-facing JSON interface, which interacts with the DB as it would if the buildsystem were providing it with packages and repos. Except that there are no packages and repos at this stage.

So for this midterm, we've got working Copr CRUD, dependency handling and release/repo editing on a Copr. I also coded the Package CRUD, which basically allows for uploads of SRPMs, stores the info in the DB, and also allows for information retrieval and package deletion. Actually building packages and retrieving packagebuilds will have to wait for the buildsystem to be built.

After I finish polishing things a bit, I will probably start working on a basic client and then maybe move on to working on the buildsystem part of Copr. That should be loads of fun especially since I haven’t done anything quite like this before. So it will be hard, but fun :).

If anyone wants to check out what Copr looks like so far, you’ll just have to install TurboGears 2.0.x and then:

 $ bzr branch bzr://bzr.fedorahosted.org/bzr/copr/devel
 $ cd devel
 $ python setup.py develop
 $ paster setup-app development.ini

And you should have a working Copr. You can run the unit tests with the nosetests command and all 52 of them should run fine. Yay!

Congratulations to everyone who is finishing their FSC adventure today! I’ll still be coding for another month or so.


Copr design - being all things to all people

Lots of things happened this third Fedora Summer Coding week. Most people are already wrapping up, but I feel like I’m still at the beginning.

The biggest accomplishment of this week has got to be the fact that we (my mentor, Toshio, and I) settled on a stable design for representing Coprs, Repos and their relationships. It was harder than it might seem, since we've got all these different entities in Fedora: we've got repos that you could look at as being either a directory with a release and an architecture, or a repofile that is the same across releases and arches. When talking about releases, we've got Fedora releases (e.g. Fedora 13, Fedora 14) and then we've got the packages for other distros with their own releases: EPEL and OLPC.

Now, on top of all of this, we've got Coprs and (at least) two groups of users for the API: the end users of the Coprs – the people that install the repos and the packages – and the developers of the packages in the Coprs. The end users shouldn't have to deal with the intricacies of the Copr/Repos/Releases model; ideally they'd just have one big button per distribution they're using, so they can install the repo once and have it keep working even after they've upgraded their distro three times or reinstalled five times (which is sort of how a repofile works). The package developers, on the other hand, could get hurt by the differences between distro releases and their different packages – when depending on different package versions, for example.

So finally we get to Coprs, which should basically be collections of packages that are available for one or more distros, each with one or more releases. The package maintainer gets to create a Copr and choose a number of releases which they want to support with that Copr. One Copr can depend on as many other Coprs as needed. When the maintainer creates a Copr, the Copr app will automatically create repos for all of the specified releases and for each of the architectures that are supported by the buildsystem.

Everything I said until now is already implemented at the level of the TurboGears app, which will provide the API for the web interface and any number of JSON clients. The schema is built and the database interaction works fine, but repos don't actually get created, because that's not part of my proposal and will be handled at a different level. Oh, and it's all unit tested!

This week wasn't just designing and building, though. I spent a lot of time digging through TurboGears2 and its sub-packages' documentation for things that should make the code simpler: raising JSON errors from nested functions, sending list arguments to JSON functions via WebTest POST requests, and even returning a flat list from a SQLAlchemy query on a single table column. All of these seem to me like things that should already be implemented and easy to use, which makes me waste time searching for them. In fact, they either are bugs or require coding them myself (at least from what I understand so far). I'll have to investigate further, especially now that the weekend is over and I hope there'll be more people answering questions on IRC and on issue trackers.
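
The single-column case is easy to show (a self-contained sketch with a toy model, in modern SQLAlchemy 1.4+ spelling; the flattening at the end is the part you have to write yourself):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Copr(Base):  # toy model, just for the example
    __tablename__ = 'coprs'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Copr(name='copr1'), Copr(name='copr2')])
    # querying one column still yields rows (one-element tuples)...
    rows = session.query(Copr.name).all()   # [('copr1',), ('copr2',)]
    # ...so the flat list has to be built by hand:
    names = [name for (name,) in rows]
    print(names)                            # ['copr1', 'copr2']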

This next week I'll mostly start worrying about what happens when a package maintainer submits a package to be built and that package has the right dependencies available in some releases but not others, even though the Copr should support all of them. Will she have to submit different SRPMs for each release, or should the Copr have the same version of the package in all of its releases? This will be a matter of settling upon a contract that the Copr provides its users, and how uniform the Copr's content has to be.

Fedora Summer Coding! Yay!


CRUD for Coprs and testing

This last week I worked on the first controller for the Copr TG app. There is now a JSON API to CRUD Coprs in the TG app's database. It also supports adding/removing Copr dependencies. And everything in this first controller is (mostly?) tested with nose unit tests. The happy thing is I'm still on schedule, though I'm not ahead of it anymore, which I actually expected.

I encountered a couple of problems while setting up testing. I installed python-fedora's FAS authorization repoze.who plugin and wasted a lot of time trying to get it working with WebTest. In the process, I managed to screw up something in my TurboGears installation. Since I was already too deep down the rabbit hole, I gave up on it. (I also figured out that I don't actually need to test anything about the FAS integration, so I don't even need to install it.) So I proceeded to install TG2 inside a Python virtualenv, which feels a lot more hygienic and will be a lot easier to replace in case of future screw-ups. I had a few problems there as well, since the documented way to install TurboGears2 without distro packages is broken ATM, but I now have a virtualenv! Yay!

Now the next step is to figure out the right relationship between Coprs and Repos and write some code to manage Repos transparently for the user. I also have to learn to write more frequent status updates.


The late Fedora Summer coder

I started my Fedora Summer Coding last week. Although most people started almost two months ago, I chose (and was allowed to – Yay, FSC!) a different schedule because I just finished college last week.

This summer I'll be working on a new project for Fedora – Copr. Fedora Copr will allow any Fedorian to have their own package repository, with packages built and hosted by Fedora's infrastructure. My mentor this summer will be Toshio; I've always enjoyed working with him, and this summer will be no different. Here is my actual FSC proposal. Although the things written in that proposal are turning out to be a bit inaccurate, it's still a good bird's-eye view of what I'm going to do this summer.

So, about the first week. Things started really slow. I did a lot of orientation, certainly more than I thought I would. I hadn't used TurboGears2 before, though I had worked with TurboGears 1.x on Fedora's pkgdb. When I started out, I had only a TG2 automatically generated skeleton app – well, it's mostly the same now, though at least I now know a lot more about what's in there. The fact that I had to start it up myself meant I had to learn a lot of things about TG2 that I would've normally just copied from other parts of a fully functional project. And that was a great experience; in a way, it's fulfilling to be able to pioneer things like this ;). I'm trying to only ask my mentor questions about designing the actual app and solve my "How do I … in TurboGears/Python?" questions elsewhere. My mentor has always given me a lot of independence when working on things, and that feels really good, though at times I feel inexperienced. There's the thought that the project I'm working on will be used by a lot of technical users, and I'm really not sure what impact my decisions will have on the whole project.

I'm mostly on time with my mock-up schedule, because I had set the first week aside for orienteering. I also wrote the DB schema for Coprs, though that was in the second week. That doesn't mean I'm ahead of schedule, however, because I'll probably have a lot of work on the Copr controllers, and a lot of documenting and designing.

I'm proud that I set up testing after a day of wading through the scattered documentation of TurboGears2 testing. There's mostly no documentation on testing on the TurboGears 2.0 docs website, so I went to the Python nose webpage. But that has no info on the TurboGears2 web helpers which I needed to use. So I went to the Pylons docs about testing, but they use a slightly different syntax, because they're using paste.fixture. I finally found the TurboGears 2.1 testing docs, which was what I really needed. It turns out that TurboGears 2.x uses WebTest.

So now I have testing. My project is not supposed to have any web interface at this point, so writing tests is the easiest way to prove that things are actually working.
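
The WebTest pattern that TG2 testing boils down to is pleasantly small (a minimal sketch with a toy WSGI app, not the actual Copr test suite):

from webtest import TestApp

def wsgi_app(environ, start_response):
    # stand-in for the real TurboGears2 application
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [b'{"coprs": []}']

app = TestApp(wsgi_app)
response = app.get('/coprs', status=200)  # status asserts the response code
assert response.json == {'coprs': []}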

This next week I'll probably get some work done on the Copr controllers, implementing the ability to CRUD Coprs and Repos.


How to update yum metadata automatically

One of the things that annoys me the most about yum as a desktop user is that every time I want to search for or install a package, I have to wait a good few seconds while it updates the metadata for all the active repositories. Today I had the time to look for a way to get rid of this annoyance, and it turned out to be quite easy to find.

The solution is not to disable metadata updates completely, because then we might try to install packages whose dependencies have been updated and whose exact versions are no longer in the repository => dependency hell.

It turns out there is a program called yum-updatesd (su -c "yum install yum-updatesd") which can update the metadata automatically.

After installing it, we can edit /etc/yum/yum-updatesd.conf if we want to do questionable things, such as letting it install updates automatically (because we live in a utopia where updates never break anything) or, less questionably, just download them.
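
The relevant knobs look something like this (option names from memory of the yum-updatesd docs; double-check against your installed file):

# /etc/yum/yum-updatesd.conf (excerpt)
[main]
run_interval = 3600      # how often to refresh metadata, in seconds
do_update = no           # don't install updates automatically
do_download = yes        # just download them
do_download_deps = yes   # along with their dependencies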

Now that we're done with the settings, we can start the service with:

 $ su -c "service yum-updatesd start"
 $ cacamaca

And we can set it to start automatically at every boot:

 $ su -c "chkconfig yum-updatesd on"

Beginning packaging for Fedora

With GSoC now over (which I should write a blog post about soon), I've taken a break from pkgdb and web programming and started developing another skill.

The fact that this page is empty has been bugging me for too long, so I set out to fix it. I also wanted to find out more about the packaging process and the road a package takes before it gets accepted into Fedora's official repos, which is a bit complex. This knowledge should also help me better understand the parts of pkgdb which packagers interact with.

It was not my first time trying to make a package for Fedora; I think this is actually my third attempt. I'd given up before, scared by all the different tools and the scattered documentation. In a previous life, I had made some AUR packages, but the experience is a lot different. I'm now starting to get used to all the different tools that scared me before, like mock and rpmlint, and I can now find my way around the Fedora wiki for package-related information. There is a wealth of information in the wiki, but you need a lot of patience.

I started slow, with quite a complex application to package: Calibre. Having all those different distros is great when you're a packager, because you have somewhere to look for help, and the Debian package of Calibre helped me a lot. It took me about two days to get a somewhat acceptable version of Calibre packaged, which I then posted to Red Hat's Bugzilla. The following days I found more small apps to package, and it became easier and easier to do. Last night, for example, I was just browsing Hacker News as usual when I found a link to Facebook's open-sourcing of their web server framework. I just rushed to the download and installation instructions page and quickly got it packaged. I'm not saying it's perfect, it probably needs a lot of improvement, but it was fun to do. It's fun to think that you're making things easier for someone and learning a lot at the same time. I now have 5 packages waiting to be reviewed and I've found someone willing to sponsor me into the packager group. Anyone in a package-review mood? ;)

My journey into the packaging world has been enlightening so far, and the good thing is I'm just beginning. There's a whole new world to be discovered out there, and also another part of the Fedora community.


GSOC - it begins...

My Fedora proposal got accepted into this year's Google Summer of Code program. You can look at a short abstract here. Now I'm going to try to explain what this project is about and what I did to prepare for being accepted, hopefully without going mad about how happy I am about it.

I started work on the Fedora Project almost a year ago. One day I popped onto the mailing list, and then onto the IRC channel of the Infrastructure team, and asked for something to do. Luckily, Toshio Kuratomi was on the watch, and after giving me a short tour of the various projects he could help me get familiar with, I picked the package database. Most of the work I've done so far is in the pkgdb (the search capability is the most obvious thing I worked on). The overview on the front page describes it quite well: it's got package information and it's aimed at package developers. It's not a very famous part of the Fedora Project websites, certainly not as famous as something like packages.ubuntu.com is for Ubuntu. But that's not what it was intended for, even if that's what attracted me to the project at first. I liked the exposure of such a website, but also the fact that, at the time, it was easier for me to understand what it did and how it worked :).

The idea of making the package database more user-friendly, as opposed to developer-centric, wasn't a new one. Toshio, the main developer, had been thinking about it for a long time, but I guess it never really became a priority. The idea had also been proposed for last year's GSoC, but it hadn't been accepted (which scared me a bit when I found out). I picked this idea on a whim: when I told Toshio I wanted to participate in this year's GSoC working on pkgdb, he asked me what exactly I wanted to do. I wasn't expecting the question, so I answered with the first thing that came to mind. Looking back, I think it was a good choice.

All my involvement with the Fedora Project owes a lot to the best possible person who could have become my mentor for GSoC. The Infrastructure team is a great one to work with, and the Fedora contributor community is made up of a lot of smart, fun and selfless people. I say this after having spent a lot of time lurking in the IRC channels, the various mailing lists, the planet etc., and, to a somewhat lesser extent, interacting with other contributors. However, I wouldn't have continued contributing if it weren't for the continuous support and guidance of Toshio. I probably wouldn't have been able to participate in GSoC without the many discussions with Toshio about the proposal (starting in February) and his support when explaining the idea to other community members. Having said that, I think that being familiar with the pkgdb also helped a lot with writing the proposal. I didn't have to waste time getting to know the code, the community and the devs, as I would have if I had written a proposal for a different project. I also had a fair idea of what would constitute a good proposal and a rough idea of how it could be implemented. I think this helped my credibility in the eyes of the mentors who ranked my proposal.

I was never convinced I would get a spot on Fedora & JBoss's list of accepted proposals, but it was a great thing to dream of. The butterflies in my stomach were killing me at the end of the waiting period, especially since it had lasted for more than 2 months. I now have a summer to work full time on my hobby :).

At the end of the summer, the Fedora community will hopefully have a package database with package versions, sizes, dependencies, RSS feeds, tagging, package reviews etc. There's even a detailed schedule in my proposal you can drool over if you're so inclined.

And hello, fedora planet! Sorry for being late.
