msn-pecan 0.1.4 released


Long time no see! I’ve been busy with many other things and haven’t had time to work on msn-pecan, but I’m back.

This is a minor bugfix release, mostly to fix issues while reconnecting. These fixes will be useful for HTTP non-persistent connections, which are a work in progress. There are plenty of other patches pending in the queue, some of which you can try on the master branch.

Felipe Contreras (6):
      node: avoid writes when connection is closed
      ssl-conn: cleanup properly on errors
      io: properly cancel connect operations
      build: fix platform detection
      win32: update instructions for Arch Linux
      win32: version bump

Download from the usual place.

Without dissent, there’s no progress

We all know free speech is “good”, but many people don’t know why, or even have a misconception of what free speech actually means. Others argue that free speech is good in society at large, but not in certain communities, like online forums or technical mailing lists, and are quick to ban the dissidents. I have been called a troll simply for speaking my mind, which happened to go against the established view, but is that really the case? Should everyone that disagrees vigorously be called a troll and be banned? Should dissidence be prohibited? Are there communities where free speech is not desirable? Let’s explore.

Dissent is defined as a difference of opinion or, more broadly, as actively challenging an established doctrine, policy, or institution[1]. In some cases it’s clear that there’s nothing wrong with dissent; it might even be desirable–as in the case of Mahatma Gandhi opposing the British Empire. In other cases it’s not so clear, or so it would appear judging from the reactions of many people.

If you were a king (before the French Revolution) you probably were against peasant dissidence, but if you were a peasant, you would probably be OK with it–in fact, you would think it’s essential for the well-being of the republic and would enshrine free speech as an inalienable right, as many revolutionaries in fact did after getting rid of monarchies, dictatorships, etc.

Watch Stephen Colbert roast George W. Bush to his face, and note that it’s fine–that’s how a modern society is supposed to work.

Is this offensive? Is this demeaning? Of course, but there’s nothing wrong with that; criticism is part of a modern society, even when it’s directed at the most important man in that society (in theory), and it should be protected especially in those cases. In the past it would have been inconceivable to do this to the king, and it still is in some modern societies–try it against Vladimir Putin. It certainly would have been convenient for the Bush administration to just shut up the press, but that shouldn’t happen in a modern society.

And that is something that is often forgotten: free speech laws are specifically designed to protect the weak; the majority of the population doesn’t need a government to grant them free speech. It’s not the majority that loves the king that needs protection to speak against him; it’s the minority. It wasn’t the majority, which thought slavery was OK, that needed protection in the past; it was the minority that stood against the establishment. It is unpopular ideas that need protection.

Free speech has limits, of course; your right to free speech ends when you violate the right of somebody else, as with defamation. The problem is that a lot of people don’t understand what free speech actually means, and what the law actually says (as in the case of defamation). For example, in modern societies I wouldn’t have any problem saying that Obama is stupid, that Jesus is gay, or that Muhammad the prophet can kiss my ass. Some of this might be false, offensive, and maybe even stupid, but there’s no law that protects people from offense, lies, or stupidity. And there’s a reason for that: it’s not easy to define what is wrong or stupid, and what should be offensive. A king might say that criticizing the king is offensive–very conveniently for himself–and then nobody would be safe throwing criticism; unless the king was indeed flawless, that’s a problem.

Even the UN Secretary-General seems not to understand this, as he pushed to make blasphemy a crime, probably in response to the increasing violence from Muslims around the world angry about videos and cartoons that are protected by this very basic freedom in modern societies.

Barack Obama does a pretty good job of explaining why free speech is important.

With hindsight it’s easy to see that Martin Luther King Jr. was doing something good in fighting the establishment, but that was not always the consensus; many people are considered activists, inciters, and even terrorists, just for speaking truth to power. Nelson Mandela was considered a terrorist by the USA until 2008[2]. History determines, time and again, that the people who fought to silence the dissidents were plain wrong.

Julian Assange is an example of a modern dissident still in mid-fight, and there are still people who claim he should be killed, even though there are laws that protect the media from having to divulge their sources, precisely for this reason: for the media to criticize the government effectively, government secrets should be exposed, especially those that are damning to the government.

This is all well and good for freedom fighters such as Gandhi, Assange, and Mandela–you probably share a lot of ideas with them–but what about that annoying guy who criticizes everything, or the conspiracy nut? They also need protection; they also need free speech.

Christopher Hitchens goes to great lengths to explain not only why free speech–all free speech–is important, but also what exactly free speech is, something many people get wrong.

Hopefully at this point it is clear that free speech is good and dissent shouldn’t be squashed, and yet one of the first things online communities do when encountering unpopular (or unpleasant) opinions is ban the messenger. Have we not learned anything?

I have explained before why I think Linux is the most important software project in history, but I forgot one aspect: freedom of expression. Anybody can say anything they want on the mailing list, and they won’t be banned. If you are truly a pest and have nothing to contribute, the worst that will happen is that you will be ignored. Most open source projects don’t work that way (to their detriment), and that’s one of the reasons why many people feel compelled to fork, which would be analogous to a revolution: the old dictator didn’t listen, so we took over. Unfortunately, the forks end up making the same mistake, and eventually ban the people who disagree with them.

So, the next time you feel somebody is annoying, or stupid, or just so plainly, obviously wrong that you want them to shut up forever with a ban… don’t. Everybody deserves the right to say what they want, but more importantly, everybody should have the right to listen to them. You (and everybody else) have the right to ignore them.

Without dissent, there’s no progress.

Top 10 suggestions from the 2011 GNOME user survey

I’ve taken the time to read the suggestions from the 2011 GNOME user survey one by one. I’ve only managed to read 20% of them so far, but I don’t think the rankings will shift much (I might update this post if they do).

The next version of the survey will have these and other suggestions already listed, so it will be easier for users’ voices to be heard, and for the results to be summarized.

Note: I already posted most of these on my previous post, but apparently people didn’t like my critical tone, so these are the suggestions only, not my opinion.

1. Better customization

This is by far the most requested change; people don’t want to manually fiddle with gconf/dconf, extensions, or gnome-tweak–they want a whole lot more options integrated. One suggestion was to have an advanced mode.

Not only do people want more options, but they think some of the current ones are useless, and that the defaults are all wrong. They also want more options for power management, like deciding what happens when you close the lid, and more options to change the appearance: font, icons, keyboard bindings, screensaver, etc. Also: being able to disable the accessibility stuff.

2. GNOME 2

People love their GNOME 2. A lot of them suggested having two interfaces–the traditional one and the shell–others requested improving the fallback mode, but most of them demanded getting rid of GNOME Shell completely. A lot of people also asked specifically to bring back the GNOME 2 panel and taskbar.

3. Improve performance and footprint

A lot of people think GNOME is too bloated and asked for better performance: less CPU usage, less memory usage, smoother animations, faster start-up times, etc. Especially the ones that have used GNOME Shell, which apparently is a resource hog–or at least that’s what users say.

4. Nautilus

People are definitely not happy with Nautilus. They complain it’s too slow, takes too much time to start, and lacks a lot of functionality. Many suggested getting rid of it and using one of the already existing alternatives.

5. Notifications

The new notifications are annoying; one shouldn’t need to move the mouse cursor to the corner to see them. That makes it easy to miss them, which defeats the purpose of a notification.

6. Shutdown / Restart / Suspend

This is a no-brainer; just add these options. Yes, you can see them by pressing “alt”, but people don’t want that. There’s no excuse for removing basic functionality, and users are complaining hard about this particular feature.

7. Improve theming

Users want better themes and icons; they don’t like the default theme, want a dark theme, and think the amount of customization a theme can achieve is not enough.

8. Better multi-monitor support

Another frequently requested feature was better multi-monitor support, which apparently works very badly in GNOME Shell.

9. Improve Evolution

A considerable number of people suggested getting rid of Evolution completely, because it has so many problems and because there are better alternatives (e.g. Thunderbird). Some people only want to use the calendar part of it, because it’s integrated with other parts of GNOME’s UI, so they suggested splitting that component. The rest suggested many ways to improve it, like stopping the constant crashing.

10. Listen to users

Lastly, a lot of people think that GNOME developers are ignoring users and their suggestions. They also don’t like the developers’ attitude that users are idiots and developers/designers know best.

The problem

As #10 shows, the problem is that GNOME developers don’t listen to the users–or at least that’s the perception of a lot of users–and since the community rejected the idea of running and/or blessing this survey, it would be safe to assume that they will ignore the results. The story of this survey is rather interesting, and explains a lot, but you can read about it in Bruce Byfield’s words instead of mine, in his article ‘The Survey That GNOME Would Rather Ignore’.

Even before the survey was run, GNOME developers said the results would be worthless. However, the only valid criticism was the possibility of non-response bias, which fortunately didn’t materialize, and I explained why in my analysis of the survey results. I still haven’t received any comments from GNOME developers since I published the analysis, so if there’s any valid criticism remaining, I haven’t heard it.

I wish they would look at these results critically, not outright reject them, and if there’s any problem with the survey, tackle the problems for the 2012 survey, and even more: run the survey themselves. Wouldn’t that be great?

Other suggestions

  • Reduce dependencies (especially PulseAudio)
  • Improved reliability / stability
  • Minimize / Maximize
  • Tiling
  • alt+tab
  • Faster shell search
  • Collaborate more with other communities
  • Fix ATI fglrx issues
  • Integrate Zeitgeist
  • Render KDE apps seamlessly
  • Reduce dead space
  • Compiz compatibility

Analysis of the 2011 GNOME user survey results

Last year Phoronix launched the 2011 GNOME user survey, an effort I initiated to try to give a voice to GNOME users–a voice I thought was sorely lacking. I knew the effort wasn’t going to be accepted by GNOME developers, because through the years they have always rejected anything that resembles user feedback: surveys, polls, brainstorm, or something as easy as enabling voting in Bugzilla. I gave it a try anyway because I made a bet with Zeeshan Ali that this would happen, and it did (although he never accepted that).

However, in the process the effort received a lot of positive feedback and the survey improved quite a lot; it would have been a shame to throw it in the trash. So, as Alan Cox suggested, I tried to make the survey happen anyway, and thanks to Michael Larabel from Phoronix, it did.

According to GNOME people, such a survey–in fact, any survey–will always give bad results, because it’s impossible to get rid of all the biases. This was discussed extensively on the GNOME desktop-devel mailing list, and despite all my efforts to convince them otherwise, I didn’t succeed (which wasn’t a surprise at all). To give an idea of the sort of resistance we are talking about: I made the argument that even presidential elections across the globe would have similar problems, and they argued that even those are flawed. One has to wonder what method in their minds would give proper statistically significant user feedback, but the answer is not surprising: nothing. Yet they claim they listen to users, somehow. Their method of “listening to users” remains mysterious, but it probably involves personal experience: talking with their friends, family, and close circles (which are more biased than any survey could be).

But in this discussion there was valid criticism, mainly the non-response bias. It was argued that only people who hate GNOME would answer the survey, and thus the survey would be biased towards negative comments and the results worthless. There is some truth to that claim, and the survey tried to identify and mitigate that factor–with success, I think. I will start by explaining how this was done, because otherwise the survey would indeed be worthless for many purposes, which is not the case. Other arguments against the results of the survey are invalid in my opinion, but unlike the blogs of some GNOME developers, this blog has comments open, so if you disagree and find other valid criticism of this survey, you are free to comment below.

Nevertheless, the most important point that GNOME people missed is that this was a survey v1.0, and like all open source software, it can (and will) improve. Even if they were correct (they aren’t) and the results of v1.0 were totally worthless, that doesn’t mean the results of v2.0, or later versions, will be. Unfortunately, in their minds neither v1.0 nor v100.0 can provide any helpful results at all; nor, for that matter, can any other survey in the history of humanity.

I will present the rationale for why I think these results are good and valid, and I think any rational person would agree, but you will be the judge of that.

If you want to see the straight results of the survey, go to this Phoronix page.

Non-response bias

Let’s invent a scenario where we have a population of 1000 people, and we want to figure out how many of them like shopping. To do that we sample 100 people, and it turns out around 25% of them say they like shopping. That’s it, right? Job done. Well, what happens if most of the people interviewed were male–say 2/3 male, 1/3 female? We know from other sources that we should expect the male/female ratio to be around 50/50, so we know there was a disproportion, but is that a problem? If it turns out that, say, females have a bias towards liking shopping, then yes, there is a problem.

If we find results like that, we know there is bias, but we can use our knowledge of the male/female ratio expected in the total population to calculate the real proportion of people who like to shop in the total population. But as GNOME people were quick to mention, we don’t have a census of Linux desktop users, so we have no idea what proportions to expect for different biases. In the example above, if we don’t know the male/female ratio of the total population, then we have no idea whether 33/66 was disproportionate or not; all we know is that there is bias. And that’s the end of the game: the 25% our survey returned is worthless. We can say how many females and males like shopping, but nothing about the total population. I wouldn’t call this totally worthless, but GNOME people would.

But that is only if there is a bias. If it turns out that males and females like shopping in roughly the same proportion, then it doesn’t matter what the ratio of the total population is, and it doesn’t matter whether we sampled disproportionately or not. All we can do is identify the bias; if we don’t even ask the question “What is your sex?”, then we are truly lost, because we can’t even know whether there was bias or not. So all we can do is try to identify all possible sources of bias, and then check whether each bias is indeed happening or not.
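The reweighting described above can be sketched in a few lines. All the numbers here are the hypothetical ones from the shopping example (a 100-person sample, 2/3 male, known 50/50 population split), not survey data:

```python
# Post-stratification: reweight each group's sample rate by its known
# share of the total population to estimate the population-wide rate.
# All counts below are made up for the shopping example.

sample = {
    # group: (respondents, how many of them like shopping)
    "male":   (66, 10),   # ~15% of sampled males like shopping
    "female": (34, 15),   # ~44% of sampled females like shopping
}

# Known (or assumed) share of each group in the total population.
population_share = {"male": 0.5, "female": 0.5}

def poststratified_rate(sample, population_share):
    total = 0.0
    for group, (n, likes) in sample.items():
        total += population_share[group] * (likes / n)
    return total

naive = sum(likes for _, likes in sample.values()) / \
        sum(n for n, _ in sample.values())
adjusted = poststratified_rate(sample, population_share)
# The naive 25% understates the population rate (~30% here) because
# males were oversampled and like shopping less.
```

This only works when the population shares are known; without a census (as the post argues), all you can do is detect that the groups differ.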

Even worse is what happens if no females participate in the survey at all. That is highly unlikely in this case, but consider online surveys: people without Internet access would be unlikely to participate, and if they have a bias then the results would be skewed. This was also used by GNOME people as an argument against any kind of online survey.

However, I had an idea: what if we ask people to print the survey and give it to people without Internet access? This would in effect be no different from a survey that is not online; the only difference is that instead of paying people to go around finding GNOME users, we ask the community to do so–crowd-sourcing the problem. We didn’t ask for this directly (perhaps for v2), but I added a question to identify how people were filling in the survey: on their own, on behalf of somebody else, and so on.

We did spend a considerable amount of time trying to identify all these possible biases, and then add questions in the survey to see if there’s bias or not, and that’s all anybody can do.

So all the criticism GNOME people threw at the survey was taken care of one way or another, even if they didn’t agree. The only ways the results of this survey could be worthless are if somebody comes up with a possible bias that no question in the survey can identify (then we need a new survey), or if it turns out there’s a real bias (then we need a census).

Was there bias?

Let’s examine the main claim of GNOME people: that people who answer the survey would have a bias towards hating GNOME.

[Chart: “Overall, how satisfied are you with GNOME?” (Barely / Completely / Halfway / Mostly / Not At All), broken down by “How are you taking this survey?” (I am acting on behalf of somebody else / Completely on my own / Other / Somebody is pushing for me to do it)]

As you can see, irrespective of how people answered the survey, there’s a clear tendency: most people avoided the extremes and answered primarily “Mostly”, and then “Halfway”. The people who answered “I am acting on behalf of somebody else” went more for the extremes, but not significantly differently from the people who answered “Completely on my own”. So, fortunately, it seems GNOME people were wrong; there’s no such bias.

Just to be sure, let’s look at another, similar question:

[Chart: “How do you compare your current GNOME version with the version from one year ago?” (Better / No Changes / Cannot Say / Worse), broken down by how people took the survey]

Again we see no significant differences depending on how people answered the survey; most people said either better or worse at about the same rate. The people who answered on behalf of somebody else had a tendency to answer “Better”, and the people who got pushed answered “Worse” a bit more, but in general there’s no big difference.

Clearly, if there is a bias, it doesn’t depend on how people are answering the survey. There’s another possibility: we didn’t get enough proxy responses. Statistically speaking, based on the number of responses, we would expect the results for “I am acting on behalf of somebody else” to be up to ±18% off target with 95% confidence, but those for “Somebody is pushing for me to do it” only ±8%. So it would be nice to get more of these responses in the next survey, but we can be relatively certain that if there is a bias, it’s not that great–certainly not enough to declare the whole survey worthless.
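The ±18% and ±8% figures are the standard worst-case margins of error for an estimated proportion. As a sketch (the response counts below are back-calculated from those percentages, not the survey’s published numbers):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion p estimated from n responses;
    p=0.5 gives the worst case, z=1.96 gives 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# Around 30 "on behalf of somebody else" responses gives roughly ±18%,
# and around 150 "somebody is pushing me" responses roughly ±8%.
# (These counts are inferred from the quoted margins, not actual data.)
moe_proxy  = margin_of_error(30)
moe_pushed = margin_of_error(150)
```

The margin shrinks with the square root of the sample size, which is why even a modest number of extra proxy responses would tighten these bounds noticeably.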

I did similar analyses on the other questions of the survey, and I couldn’t find any significant biases from groups that could be underrepresented; therefore, the results of the survey are valid.
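The kind of check described here can be done with a chi-square test of independence on a satisfaction-by-group contingency table. A minimal sketch in pure Python, with made-up counts (these are not the survey’s actual numbers):

```python
# Chi-square statistic for a contingency table: compares observed
# counts against the counts expected if rows and columns were
# independent (i.e. if both groups answered alike).

def chi_square(table):
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# rows: "completely on my own" vs "on behalf of somebody else"
# cols: Not At All, Barely, Halfway, Mostly, Completely (made-up counts)
table = [[50, 100, 300, 450, 100],
         [ 4,   8,  22,  30,   8]]
stat = chi_square(table)
df = (len(table) - 1) * (len(table[0]) - 1)  # 4 degrees of freedom
# The 95% critical value for df=4 is about 9.49; a statistic below it
# means we cannot reject the hypothesis that the groups answer alike.
```

With counts this similar the statistic comes out well under the critical value, matching the post’s conclusion that the response groups don’t differ significantly.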

However, there are certain interesting results.

Interesting results

This is the same improvement question, but depending on whether or not people used GNOME 3.

[Chart: “How do you compare your current GNOME version with the version from one year ago?” (Better / No Changes / Cannot Say / Worse), broken down by GNOME 3 usage]

People that haven’t used GNOME 3 mostly say that it hasn’t changed, but the ones that have used it have very different opinions, which matches what we have seen in online discussions: people either hate it or love it. It’s interesting to see that the split is 50/50.

Another theory that popped up in the discussions was that people who use terminals would hate GNOME 3, but “normal people” would love it.

[Chart: “Overall, how satisfied are you with GNOME?” (Barely / Completely / Halfway / Mostly / Not At All), broken down by “How often do you use a terminal/console?” (Is there anything else? / When I have no other option / I can’t live without them / What is that?)]

However, it seems to be the opposite: people who don’t even know what a terminal is certainly don’t like GNOME; it’s the rest that do, from people who use a terminal only when there’s no other option, to people who use nothing else.

One of my theories was that contributors to the project would have a bias toward liking it.

[Chart: “Overall, how satisfied are you with GNOME?” (Barely / Completely / Halfway / Mostly / Not At All), broken down by “Have you contributed to the GNOME project?” (No / Yes)]

But it appears I was also wrong; contributors to the project have roughly the same opinion as the non-contributors.

There is a much clearer differentiator, though: people who have tried to contact the project, unsuccessfully.

[Chart: “Overall, how satisfied are you with GNOME?” (Barely / Completely / Halfway / Mostly / Not At All), broken down by “Have you ever contacted the GNOME team?” (No, I don’t know how / No, never had the need / Yes, successfully / Yes, unsuccessfully)]

The people who have tried to contact the project unsuccessfully tend to say they are either barely satisfied with GNOME, or not at all. Of all the people who have contacted the project, two thirds say the contact was unsuccessful. This should be worrying.

And then, people that like to use another window manager.

[Chart: “Overall, how satisfied are you with GNOME?” (Barely / Completely / Halfway / Mostly / Not At All), broken down by “Have you ever contacted the GNOME team?” (Yes, successfully / Yes, unsuccessfully / No, I don’t know how / No, never had the need)]

Clearly GNOME has problems interacting with other window managers. Or at least people seem to think so.

What needs to change?

Unfortunately there was no easy way to ask GNOME users this question, so it was more or less free-form, which is very difficult to analyze. However, I’ve taken the time to read the responses one by one and count them; so far I have covered 20% of the 10,000 responses, but I doubt reading the rest will change the results much. I will try to do so in the future, as time permits, but I don’t promise much.

Better customization (397)

This is by far the most requested change; people don’t want to manually fiddle with gconf/dconf, extensions, or gnome-tweak–they want a whole lot more options integrated. One suggestion was to have an advanced mode, which is something I have suggested in the past, but GNOME developers are adamantly against it, for no rational reason.

Not only do people want more options, but they think some of the current ones are useless, and that the defaults are all wrong. They also want more options for power management, like deciding what happens when you close the lid, and more options to change the appearance: font, icons, keyboard bindings, screensaver, etc. Also: being able to disable the accessibility stuff.

GNOME 2 (113)

People love their GNOME 2. A lot of them suggested having two interfaces, others requested improving the fallback mode, but most of them demanded getting rid of GNOME Shell entirely. A lot of people also asked to bring back the GNOME 2 panel and taskbar.

Improve performance and footprint (89)

A lot of people think GNOME is too bloated and asked for better performance: less CPU usage, less memory usage, smoother animations, faster start-up times, etc. Especially the ones that have used GNOME Shell, which apparently is a resource hog.

Nautilus (57)

People are definitely not happy with Nautilus. They complain it’s too slow, takes too much time to start, and lacks a lot of functionality. Many suggested getting rid of it and using one of the already existing alternatives.

Notifications (46)

The new notifications are annoying; one shouldn’t need to move the mouse cursor to the corner to see them. That makes it easy to miss them, which defeats the purpose of a notification.

Shutdown / Restart / Suspend (44)

This is a no-brainer; just add these options. Yes, you can see them by pressing “alt”, but people don’t want that. There’s no excuse for removing basic functionality, and users are complaining hard about this particular feature.

Improve theming (32)

Users want better themes and icons; they don’t like the default theme, also want a dark theme, and think the amount of customization a theme can achieve is not enough.

Better multi-monitor support (32)

Another frequently requested feature was better multi-monitor support.

Others (in order)

  • Improve Evolution
  • Listen to users
  • Reduce dependencies (especially PulseAudio)
  • Improved reliability / stability
  • Minimize / Maximize
  • Tiling
  • Get rid of Evolution (or split the calendar component)
  • alt+tab
  • Faster shell search
  • Collaborate more with other communities
  • Fix ATI fglrx issues
  • Integrate Zeitgeist
  • Render KDE apps seamlessly
  • Reduce dead space
  • Compiz compatibility
  • Developer attitude


What is clear is that most people want more configuration options; a lot liked GNOME 2 much better than GNOME 3, and they want an option to have an interface similar to GNOME 2 built with GNOME 3 technology. For every user that likes GNOME 3, there is one that hates it. There are plenty of people who think they are ignored by GNOME developers, and that users are treated like idiots (and they don’t like that). Users want a better attitude from the developers, not only towards users, but towards other FOSS projects like KDE (GTK+ apps work fine under KDE; Qt apps don’t under GNOME), and less of the not-invented-here syndrome.

Hopefully it has become clear to rational people that there is value in the results of this survey, even if GNOME developers reject them. All the known biases were identified, and fortunately the ones that could cause non-response bias didn’t do so, so the results are valid and nobody was underrepresented :)

The next survey will incorporate some of the findings here, and hopefully more people will be aware of the need to reach people who normally wouldn’t answer this survey. Perhaps at some point GNOME developers will accept that there is no significant non-response bias, and start listening to the results.

Half a million views, and stats

This blog finally reached the 500,000 views milestone, which might not be interesting to many people, but perhaps the stats will, so here they are :)

From the looks of it, most of the readers come from RSS feeds, not really from other blogs or planets. The busiest day came when I wrote the blog post My disagreement with Elop on MeeGo, which was even featured in Finnish online newspapers, so clearly there was a huge bump, but not enough to make it the #1 post. Also, a lot of people are interested in GNOME 3 suckage; unfortunately the only post I have about GNOME 3 is rather small (and even then it got criticism from GNOMErs), so I shouldn’t pull my punches and should write a full-blown criticism (there’s plenty of material). I’m already working on an analysis of the GNOME user survey results, which is already very interesting on its own, but it will take time.

So, here they are. Enjoy :)



  • views all-time: 501,133
  • views on the busiest day, June 22, 2011: 11,762

Top twenty posts / pages

  • Home page / Archives – 55,470
  • Mercurial vs Git; it’s all in the branches – 47,406
  • My disagreement with Elop on MeeGo – 42,153
  • After two weeks of using GNOME 3, I officially hate it – 39,933
  • GStreamer hello world – 25,940
  • Installing scratchbox 1 and 2 for ARM cross-compilation – 16,041
  • Free TV episodes: Penn & Teller: Bullshit! (fun and enlightening) – 15,660
  • git send-email tricks – 14,374
  • Public Domain Torrents: Classic Movies and B-Movies For FREE! – 10,709
  • GStreamer hands-on introduction – 10,565
  • Automounting a storage device with GNOME – 8,684
  • No, mercurial branches are still not better than git ones; response to jhw’s More On Mercurial vs. Git (with Graphs!) – 8,579
  • New project: gst-dsp, with beagleboard demo image – 8,041
  • Fedora 8 network install from USB – 7,782
  • msn-pecan on the N900; great address-book integration – 7,638
  • MeeGo scales, because Linux scales – 6,503
  • gst-openmax demo on the beagleboard – 6,287
  • Alternative MSN plugin for Pidgin – 5,821
  • Scrobbler for Maemo, now both on N900, and N9 – 5,820
  • Nokia; from a burning platform, to a sinking platform – 5,801

Top ten searches

  • mercurial vs git – 8,465
  • gstreamer tutorial – 6,106
  • gnome 3 sucks – 3,433
  • penn and teller bullshit episodes – 2,610
  • gstreamer example – 2,095
  • felipe contreras – 1,989
  • git vs mercurial – 1,585
  • tortillas – 1,581
  • git send-email – 1,375
  • bullshit episodes – 1,131

Top ten referrers

  • Search Engines – 65,202
  • – 21,009
  • Reddit – 9,421
  • – 4,875
  • Hacker News – 4,565
  • – 4,448
  • – 3,433
  • Google Reader – 2,725
  • Twitter – 2,294
  • – 2,223

Top ten countries

  • United States – 20,059
  • United Kingdom – 4,127
  • Germany – 3,973
  • India – 3,128
  • Finland – 2,740
  • France – 2,347
  • Canada – 2,286
  • Italy – 2,219
  • Russian Federation – 1,899
  • Australia – 1,777

The meaning of success

I once had a quite extensive discussion with a colleague about several topics related to the software industry, and slowly but methodically we reached a fundamental disagreement on what “success” means. Needless to say, without agreeing on what “success” means, it’s really hard to reach a conclusion on anything else. I now believe that many problems in society — not only in the software industry — can be traced to mismatches in our understanding of the word “success”. It is like trying to decide whether abortion is moral without agreeing on what “moral” means — and we actually don’t have such an agreement — and in fact, some definitions of morality might rely on the definition of “success”.

For example: which is more successful, Android, iPhone, or Maemo? If you think a successful software platform is the one that sells more (as many gadget geeks probably do), you would answer Android; if on the other hand you think success is defined by what is more profitable (as many business people would), you would answer iPhone. But I contend that success is not only relative to a certain context; there’s also an objective success that gives a clear answer to this question, and I hope to reach it at the end of this post.

This is not a meta-philosophical exercise; I believe “success” in the objective sense can be clearly defined and understood, but in order to do that, I need to show some examples and counter-examples from different disciplines. If you don’t believe in the theory of evolution of species by natural selection, you should probably stop reading.


The Merriam-Webster dictionary defines success as (among other definitions):

  • a : to turn out well
  • b : to attain a desired object or end <students who succeed in college>

From this we can say there are two types of success: one is within a specific context (e.g. college), and the other is general. In this blog post I will talk about generic success, with no specific goal, or rather with the generic ultimate goal. Since it’s highly debatable (and difficult) how to define this “ultimate goal”, I will concentrate on the opposite: trying to define the ultimate failure in a way that no rational person would deny.

Humans vs. Bacteria

My first example is: which organisms are more successful, humans or bacteria? There are many angles from which we could compare these two organisms, but few would reach a definite answer. The knee-jerk reaction of most people would be to say: “well, clearly humans are more evolved than bacteria, therefore we win”. I’m not an expert in the theory of evolution, but I believe the word “evolved” is misused here. Both bacteria and humans are the result of billions of years of evolution; in fact, one could say that some species of bacteria are more evolved, because Homo sapiens is a relatively new species that appeared only a few hundred thousand years ago, while many species of bacteria have been evolving for millions of years. “Kids these days with their fancy animal bodies… I have been killing animals since before you got out of the water… Punk” — a species of bacteria might say to the younger generations, if such a thing were possible. At best humans are as evolved as bacteria. “Primitive” is probably the right word: bacteria are more primitive because they closely resemble their ancestors. But being primitive isn’t necessarily bad.

In order to reach a more definitive answer I will switch the comparison to dinosaurs vs. bacteria, and come back to the original question later. Dinosaurs are less primitive than bacteria, yet dinosaurs died, and bacteria survived. How can something dead be considered successful? Strictly speaking not all dinosaurs are dead (some evolved into birds), but that’s beside the point; let’s suppose for the sake of argument that they are all dead (which is in fact how many people think of them). A devil’s advocate might suggest that this comparison is unfair, because in different circumstances dinosaurs might not have died, and might in fact be thriving today. Maybe luck is an important part of success, maybe not, but it’s pointless to discuss what might have been; what is clear is that they are dead now, and that’s a clear failure. Excuses don’t turn a failure into a success.

Let me be clear about my contention: anything that ceases to exist is a failure; how could it not be? In order to have even the smallest hope of winning the race, whatever the race may be, even if it doesn’t have a clear goal, or has many winners, you have to be in the race. It could not be clearer: what disappears can’t succeed.

Now, being more evolved, or less primitive, is not the trump card it might appear to be; nature is a ruthless arena, and there are no favorites. The vast majority of species that have ever lived are gone now, and it doesn’t matter how “unfair” that might seem; to nature only the surviving descendants matter, everyone else was a failure.

If we accept that dinosaurs failed, then one can try to use the same metric for humans, but there’s a problem (for our exercise): humans are still alive. How do you compare two species that are not extinct? Strictly speaking all species alive today are still in the race. So how easy would it be for humans to go extinct? This is a difficult question to answer, but let’s suppose an extreme event turns the average temperature of the Earth 100°C colder; that would quite likely kill all humans (and probably a lot of plants and animals), but most certainly a lot of bacterial species would survive. It has been estimated that there are 5×10³⁰ bacteria on Earth, countless species, possibly surpassing the biomass of all plants and animals. In fact, human beings could not survive without bacteria, since they are essential to the human microbiome, and the bacterial genes in a human body probably outnumber the human genes by a factor of 100 to 1. So, humans, like dinosaurs, could disappear rather easily, but bacteria would still be around for a long, long time. From this point of view, bacteria are clearly more successful than humans.

Is there any scenario in which humans would survive, and bacteria would not (therefore making humans more successful)? I can think of some, but they would be far into the future, and we are most definitely not there yet. We are only now realizing the importance of our microbiome, and in the process of running the Human Microbiome Project, so we don’t even know what roles our bacteria play, and therefore we don’t know how we could replace them with something else (like nanorobots). If bacteria disappeared today, so would we. It follows then that bacteria are more successful, and there’s no getting around that.

Fundamentals and Morality

Could we define something more fundamental about success? I believe so: a worse failure than dying is not being able to live in the first place, like a fetus that is automatically aborted because of a bad mutation, or, even worse, an impossibility. Suppose “2 + 2 = 5”; this of course is impossible, so it follows that it’s a total failure. The opposite would be “2 + 2 = 4”; this is as true as anything can be, therefore it’s a total success.

There’s a realm of mathematics that is closer to what we consider morality: game theory. But don’t be fooled by its name; game theory is as serious as any other realm of mathematics, and its findings are as conclusive as those of probability theory. An example from game theory is the prisoner’s dilemma — here’s a classic version of it:

Two men are arrested, but the police do not possess enough information for a conviction. Following the separation of the two men, the police offer both a similar deal—if one testifies against his partner (defects/betrays), and the other remains silent (cooperates/assists), the betrayer goes free and the one that remains silent receives the full one-year sentence. If both remain silent, both are sentenced to only one month in jail for a minor charge. If each ‘rats out’ the other, each receives a three-month sentence. Each prisoner must choose either to betray or remain silent; the decision of each is kept quiet. What should they do? If it is supposed here that each player is only concerned with lessening his time in jail, the game becomes a non-zero sum game where the two players may either assist or betray the other. In the game, the sole worry of the prisoners seems to be increasing his own reward. The interesting symmetry of this problem is that the logical decision leads each to betray the other, even though their individual ‘prize’ would be greater if they cooperated.

There are different versions of this scenario; with different rules and more complex agents game theory arrives at different conclusions as to what rational agents should do to maximize their outcomes, but these strategies are quite factual and universal. We are not just talking about human beings; they are independent of culture or historicism; the rules are as true here as they are on the other side of the universe. So if game theory determines that a certain strategy fails in a certain situation, that’s it; it’s as hard a failure as “2 + 2 = 5”.

With this notion we might be able to dive into more realistic and controversial examples — like slavery. Nowadays we consider slavery immoral, but that wasn’t the case in the past. One might say that slavery was a failure (because it no longer exists, at least as a desirable concept), but that is only the case in human society; perhaps there’s another civilization on an alien planet that still has slavery, and is still debating it, so one might be tempted to say that slavery’s failure is still contested (perhaps even more so if you live in Texas). But we got rid of slavery for a reason: it’s not good for everybody. It might be good for the slave owners, and good for some slaves, but not good for everybody. It is hard to imagine how another civilization could arrive at a different conclusion. Therefore it is quite safe to say that in all likelihood slavery is a failure, because of its tendency to disappear. Perhaps at some point game theory will advance to the point where we can be sure about this, and the only reason it took so long to get rid of slavery is that we are not rational beings, and it takes time for our societies to reach this level of rationality.

Objective morality and the moral landscape

Similarly to the objective success I’m proposing, Sam Harris proposes a version of objective morality in his book The Moral Landscape. I must admit I haven’t read the book, but I have watched his online lectures on the topic. Sam Harris asserts that the notion that science shouldn’t deal with morality is a myth, and that advances in neuroscience (his field of expertise) can, and should, enlighten us as to what should be considered moral. Thus, morality is not relative, but objective. The different “peaks” in the landscape of morality are points society aims for in order to be a good one, and historically the methods to find these “peaks” have been rather rudimentary, but a new field of moral science could be the ultimate tool.

Regardless of the method we use to find these “peaks”, the important notion (for this post), is that there’s an abyss; the lowest moral point. The worst possible misery for all beings is surely bad:

The worst-possible-misery-for-everyone is ‘bad.’ If the word ‘bad’ is going to mean anything, surely it applies to the worst-possible-misery-for-everyone. Now if you think the worst-possible-misery-for-everyone isn’t bad, or might have a silver lining, or there might be something worse… I don’t know what you’re talking about. What is more, I’m reasonably sure you don’t know what you’re talking about either.

I want to hijack this concept of the worst-possible-misery-for-everyone that is the basis of (scientific) objective morality, and use it as a comparison to my contention that ceasing-to-exist is the basis for objective success.

Today our society is confronted with moral dilemmas such as gay marriage and legal abortion; many of these debates are hijacked by religious bigotry and irrationality, and it’s hard to move forward because many still define morality through religious dogmas, and even people who don’t, and are extremely rational, still cling to the idea that morality comes from “God” (whatever that might mean). Many scientists claim that morality cannot be found through science, and others that morality is relative. Yet others disagree and have tried to define morality in universal terms, like Sam Harris. The jury is still out on this topic, so I cannot say that morality should definitely be defined in terms of what is successful for our worldwide society, merely that it is a possibility — a rather strong one, in my opinion.


It’s a little more tricky to define what constitutes a successful life, because all life ends. The solution must be one in terms of transcendence: offspring, books, memes, etc. People living a more hedonistic life might disagree, but let’s be clear: a life can be unsuccessful in the grand scheme of things and still be good, and the other way around. It might be tempting to define success in different terms: “if my goal is to enjoy life, and I do, I’m succeeding”, and while that is true, that’s being successful in relative terms, not general terms.

Some people might have trouble with this notion, so let me give an example: Grigori Perelman vs. Britney Spears. Most people probably don’t know Grigori, but he solved one of the most difficult problems in mathematics, and was awarded one million USD for it. Clearly this would have helped him become famous, but he rejected interviews and rejected the money. Does this mean he rejected success? Well, let’s try to view this from the vantage point of 500 years into the future: both Britney Spears and Grigori Perelman will be dead by that time, so the only things that remain will be their transcendence. Most likely nobody will remember Britney Spears, or listen to any of her music, while it’s quite likely that people will still be using Grigori Perelman’s mathematics, as he will be one of the giants upon whose shoulders future scientists stand. In this sense Grigori is more successful, and any other sense of success would be relative to something else, not objective.


Hopefully my definition of success is clear enough by now to apply it to the initial example.


The iPhone is clearly successful in being profitable, but many products have been profitable in the past and are now gone with the wind. The real question is: what are the chances that the iPhone will not disappear? It is hard to defend the position that the iPhone will remain for a long period of time, because it’s a single product from a single company, and especially considering that many technology experts can’t find an explanation for its success other than the Apple cult. While it was clearly superior from an aesthetic point of view when it was introduced, many competitors are on par today. Maybe it will not disappear in 10 years, maybe it will; it’s totally unclear.


Compared to the iPhone, Android has the advantage that many companies work on it, directly and indirectly, and it doesn’t live in a single product. So if a single company goes down, that would not kill Android, even if that company is Google. So, as a platform, it’s much more resilient than iOS. For this reason alone, Android is clearly more successful than the iPhone — according to the aforementioned definition.


Maemo is dead (mostly), so that would automatically mean it’s a failure. However, Maemo is not a single organism; it consists of many subsystems that are part of the typical Linux ecosystem: the Linux kernel, Qt, WebKit, GStreamer, Telepathy, etc. These subsystems remain very much alive; in fact, they existed before Maemo, and will continue to exist and grow. Some of these subsystems are used in other platforms, such as WebOS (also dead (mostly)), Tizen, MeeGo (also dead (kinda)), and Mer.

A common saying is that open source projects never die. Although this is not strictly true, the important thing is that they are extremely difficult to kill (just ask Microsoft). Perhaps the best analogy in the animal kingdom would be to compare Maemo to a sponge. You can divide a sponge into as many parts as you want, put it into a blender, even make sure the pieces pass through a filter with very minute holes. It doesn’t matter; the sponge would reorganize itself again. It’s hard to imagine a more resilient animal.

If this is the case, one would expect Maemo (or its pieces) to continue as Tizen, or Mer (on Jolla phones), or perhaps another platform yet to be born, even though today it seems dead. If this happens, then Maemo would be even more successful than Android. Time will tell.


Like any scientific theory, the really interesting bit of this idea is its predictive power, so I will make a list of things in their order of success; if I’m correct, the less successful ones (or their legacies) will tend to disappear first:

  • Mer > Android > iOS > WP
  • Linux > Windows
  • Bill Gates > Steve Jobs > Carlos Slim (Richest man in the world)
  • Gay marriage > Inequality
  • Rationality > Religious dogma
  • Collaboration > Institutions

To me, this definition of “success” is as true as “2 + 2 = 4” (in fact, it’s kind of based on such fundamental truths); unfortunately, it seems most people don’t share this point of view, as we still have debates over topics which are, in my opinion, a waste of time. What do you think? Are there examples where this definition of success doesn’t fit?

No, mercurial branches are still not better than git ones; response to jhw’s More On Mercurial vs. Git (with Graphs!)

I’ve had plenty of discussions with mercurial fans, and one argument that keeps popping up is how mercurial branches are superior. I’ve blogged in the past about why I think the branching models are the only real difference between git and mercurial, and why git branches are superior.

However, I’ve noticed J. H. Woodyatt’s blog post More On Mercurial vs. Git (with Graphs!) has become quite popular. I tried to engage in a discussion on that blog, but commenting there is a painful ordeal (and my comments have been deleted!).

So, in this blog post I will explain why mercurial branches are not superior, and how everything can be achieved with git branches just fine.

The big difference

The fundamental difference between mercurial and git branches can be visualized in this example:

Merge example

In which branches is the commit ‘Quick fix’ contained? Is it in ‘quick-fix’, or is it both in ‘quick-fix’ and master? In mercurial it would be the former, and in git the latter. (If you ask me, it doesn’t make any sense that the ‘Quick fix’ commit is only on the ‘quick-fix’ branch)

In mercurial a commit can be only on one branch, while in git, a commit can be in many branches (you can find out with ‘git branch --contains‘). Mercurial “branches” are more like labels, or tags, which is why you can’t delete them, or rename them; they are stored forever in posterity just like the commit message.

That is why git branches are so useful; you can do absolutely anything you want with them. When you are done with the ‘quick-fix’ branch, you can just remove it, and nobody has to know it existed (except for the fact that the merge commit message says “Merge branch ‘quick-fix’”, but you could have easily rebased instead). Then, the commit would only be on the ‘master’ branch.
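A minimal sketch of this workflow, in a throwaway repository created just for illustration (the branch and commit names are from the example above):

```shell
# Throwaway repository showing that a merged commit is contained in both
# branches, and that the topic branch can then be deleted.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the branch is 'master'
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m 'Initial commit'
git checkout -q -b quick-fix
git commit -q --allow-empty -m 'Quick fix'
git checkout -q master
git merge -q --no-ff -m "Merge branch 'quick-fix'" quick-fix
git branch --contains quick-fix   # lists both 'master' and 'quick-fix'
git branch -d quick-fix           # done with it; nobody has to know
```

After the delete, ‘Quick fix’ is still reachable from ‘master’; only the label is gone.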

Bookmarks are not good enough

Mercurial has another concept that is more similar to git branches; bookmarks. In old versions of mercurial these were an extension, but now they are part of the core, and also new is the support for repository namespacing (so you could have upstream/master, backup/master, and so on).

However, these are still not as useful as git branches because of the fundamental design of mercurial; you can’t just delete stuff. So for example, if your ‘quick-fix’ bookmark didn’t go anywhere, you can delete it easily, but the commits won’t be gone; they’ll stay around as an anonymous head (explained below). You would need to run ‘hg strip‘ to get rid of them. And then, if you have pushed this bookmark to a remote repository, you would need to do the same there.

In git you can remove a remote branch quite easily: ‘git push remote :branch‘.

And then, bookmark names are global, so you can’t push a branch with a different name like in git: ‘git push remote branch:branch-for-john‘.
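Both operations can be sketched locally, using a bare repository as a stand-in for the remote (all names here are made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q --bare server.git              # stand-in for the remote
git init -q work
cd work
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m 'Initial commit'
git remote add origin ../server.git
git push -q origin master
git checkout -q -b quick-fix
git commit -q --allow-empty -m 'Quick fix'
# push the branch under a different name on the remote
git push -q origin quick-fix:fix-for-john
# delete the remote branch with the colon syntax
git push -q origin :fix-for-john
```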

Anonymous heads

Anonymous heads are probably the most stupid idea ever; in mercurial a branch can have multiple heads, so you can’t just merge, or check out a branch, or really do any operation that needs a single commit.

Git forces you to either merge or rebase before you push; this ensures that nobody else will need to do it. If you have a big project with hundreds of committers this is certainly useful (imagine 10 people trying to merge the same two heads at the same time). In addition, you know that a branch will always be usable for all intents and purposes.

Even mercurial would try to dissuade you from pushing an anonymous head; you need to do ‘hg push -f‘ to override those checks.

The rest of the uses of anonymous heads were solved in git in much simpler ways; ‘git pull’ automatically merges the remote head, and remote namespaces of branches allow you to see their status after doing ‘git fetch’.

Anonymous heads only create problems and solve none.

Nothing is lost

So, let’s go ahead with jhw’s blog post by looking at his example repository:


According to him, it’s impossible to figure out what happened in this repository, but it’s not. In fact, git can automatically find the corresponding branch for a commit with the ‘git name-rev‘ command (e.g. ‘release~1‘).
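A quick sketch of ‘git name-rev‘ in action, on a made-up throwaway history:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m 'Initial commit'
git checkout -q -b release
git commit -q --allow-empty -m 'First release commit'
git commit -q --allow-empty -m 'Second release commit'
# name the older release commit relative to the nearest ref
git name-rev release~1
```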

Now let’s assign colors based on the output of ‘git name-rev‘:

Repository with names

The colors are exactly the ones that jhw used for his mercurial example.

Now the only difference is that there is no ‘temp’ branch, but that is actually good; it was removed. Why would we want to see a branch that was removed? We wouldn’t. Either way, the information remains; “Merge branch ‘temp’ into release” says it all; that all those commits come from the ‘temp’ branch.

Of course, one would need to manually look through the commit messages to find those removed branches, but that is fine, because you would rarely (never?) need that. And if he really needs that information readily, he can write a prepare-commit-msg hook to store the branch name the commit was originally created from.
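Such a hook could look something like this (a minimal sketch; the ‘Made-on-branch’ trailer name is made up, and the throwaway repository is only there to exercise the hook):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name You
# record the originating branch in every commit message
cat > .git/hooks/prepare-commit-msg <<'EOF'
#!/bin/sh
if branch=$(git symbolic-ref --short -q HEAD); then
	printf '\nMade-on-branch: %s\n' "$branch" >> "$1"
fi
EOF
chmod +x .git/hooks/prepare-commit-msg
git commit -q --allow-empty -m 'Some change'
git log -1 --format=%B   # the message now carries 'Made-on-branch: master'
```

The trailer survives branch deletion, so the originating branch name is never lost.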

Real use-cases

jhw tried to defend the need for this information by presenting some use cases:

A more clever rebuttal to my question is to ask in return, “Why do you need to know?” Let me answer that preemptively:

A) I need to know which branch ab3e2afd was committed to know whether to include it in the change control review for the upcoming release

It’s easy to find out what commits are relevant for the next release with ‘git log master^..release‘:

Release commits

But then he said:

I didn’t ask for a list of all the commits that are currently included in the head of the branch currently named ‘release’ that are not included in the head of the branch currently named ‘master’. I wanted to know what was the name of the branch on which the commit was made, at the time, and in the repository, where it was first introduced.

How convenient; now he doesn’t explain why he needs that information, he just says he needs it. ‘git log master..release‘ does what he said he was looking for.

B) I need to know which change is the first change in the release branch because I’d like to start a new topic branch with that as my starting point so that I’ll be as current as possible and still know that I can do a clean merge into master and release later

Easy: ‘git merge-base master^ release‘, which would return ‘master~1’ (76ae30ef).
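Both the range selection and the merge base can be sketched on a made-up history:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m 'Common base'
git checkout -q -b release
git commit -q --allow-empty -m 'Release-only work'
git checkout -q master
git commit -q --allow-empty -m 'Master-only work'
git log --oneline master..release   # commits in release but not in master
git merge-base master release       # the newest commit reachable from both
```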

But then he said:

I didn’t want to know the most recent commit included in both the currently named ‘master’ and ‘release’ heads, because that may have actually occurred either prior to, or after, the creation of either the branch currently named ‘release’ or the branch currently named ‘master’.

And again he doesn’t explain why on earth would he need that.

To find the most current commit from the ‘release’ branch that can also be merged into ‘master’ cleanly you can use ‘git merge-base‘; the first commit of the ‘release’ branch doesn’t actually help as it has already diverged from ‘master’ and it’s not even “as current as possible” as there will probably be newer commits on the release branch.

Either way, if he really wants that, he can pick any commit that he wants from ‘git log master..release‘.

C) I need to know where topic branch started so that I can gather all the patches up together and send them to a colleague for review.

Easy: ‘git send-email --to john release..topic‘.
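‘git send-email‘ needs a configured SMTP server, so here is an offline sketch of the same range selection using ‘git format-patch‘ (the history and file names are made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m 'Initial commit'
git checkout -q -b release
echo base > base.txt
git add base.txt
git commit -q -m 'Release work'
git checkout -q -b topic
echo feature > feature.txt
git add feature.txt
git commit -q -m 'Topic work'
# the patches that comprise the topic branch, ready for review
git format-patch -o patches release..topic
```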

But then he said:

I didn’t want to know all the commits present in the head of the branch currently named ‘topic’ that aren’t present in head of the branch currently named ‘release. I wanted to know the first commit that went into a branch that was called ‘topic’ at the time when the change was committed. Your command may potentially include commits that were in a different branch that wasn’t called ‘topic’ at the time.

Why would you send patches for review that depend on commits your colleague has no visibility of? No, you want to send all the patches that comprise the ‘topic’ branch; doing anything else would be confusing.

If for some reason you don’t want to send the patches that were part of another branch, you can select them out with ‘^temp’.


All the use-cases jhw explained are supported just fine in git, he is just looking for corner-cases and then complaining because we would need to do extra stuff.

I have never seen a sensible use-case in which mercurial “branches” (branch labels) would be more useful than git branches. And bookmarks are still not as good.

So git’s branching model wins.

gst-av 0.6 released; more reliable

gst-av is a GStreamer plug-in that provides support for libav (a fork of FFmpeg). It is similar to gst-ffmpeg, but without GStreamer politics, which means all libav plugins are supported, even if there are native GStreamer alternatives: VP8, MP3, Ogg, Vorbis, AAC, etc.

This release takes care of a few corner-cases, and has support for more versions of FFmpeg.

Here are the goods:

And here’s the short-log:

Felipe Contreras (19):
      adec: flush buffer on EOS
      adec: improve timestamp reset
      adec: avoid deprecated av_get_bits_per_sample_fmt()
      adec: avoid FF_API_GET_BITS_PER_SAMPLE_FMT
      vdec: properly initialize input buffer
      parse: add more H.264 parsing checks
      parse: fix H.264 parsing for bitstream format
      get_bits: add show_bits function
      build: set runpath for libav
      vdec: fix potential leaks
      vdec: use libav pts stuff
      vdec: get delayed pictures on eos
      build: trivial improvements
      parse: trivial fix
      h264enc: fix static function
      vdec: add support for old reordered_opaque
      adec: add support for old sample_fmt
      adec: add support for really old bps()
      adec: add support for all MPEG-1 audio

Mark Nauwelaerts (1):
      parse: be less picky regarding some reserved value

EFI adventures

If you are like me, you probably have heard of EFI, but don’t know what it means in practice, how to eat it, or what to mix it with. But I recently bought a new PC, and it happened to come with EFI, so I decided to enable it and then I learned many things about it.

Whole new world

First of all, EFI is not something you can just “enable”. Even if your kernel has support for EFI, that won’t make a difference unless you boot it through EFI, and you can’t seamlessly boot through EFI unless you create a boot entry, which is done with efibootmgr, which only works once you are in EFI mode. So it’s a bit of a chicken-and-egg problem.

So how do you enter this world in the first place? First of all, you would need a shell. Some BIOSes come with one, but otherwise you would need to install one, and you can find public links on this wiki page.

But where to put this “shell”? Well, the EFI system partition–yes, you need a dedicated partition (more info here). This partition should be formatted as FAT32, and marked in a special way, so the EFI BIOS can identify it and use its contents.

You copy the shell to this partition as /shellx64.efi (I tried UEFI Shell 2.0 (Beta), but it didn’t work for me, so I used the old one).

Your BIOS should now be able to boot this shell… Somehow. It took me a long time to figure out that in my BIOS, that’s a hidden option on the exit menu, where you can “exit” into a shell.

Congratulations, now you are in EFI mode, and you can type some commands, like in the old DOS days, but you won’t really be able to do anything (unless you have some EFI version of Prince of Persia).

The bootloader

There are a few EFI bootloaders out there, like ELILO and efilinux, but I decided to go with the safest and most familiar choice: GRUB2 (well, GRUB was familiar, GRUB2 not so much). Arch Linux’s wiki describes in great detail how to set up GRUB2 for EFI, so I’ll just mention that you need to use a different command (‘grub_efi_x86_64-install’ in the case of Arch Linux), and the files need to be installed in the aforementioned EFI system partition.

You would end up with a file /efi/arch/grubx64.efi, and this is an EFI binary that you can use in the shell. Theoretically you should be able to enter the command “\efi\arch\grubx64.efi”, but for some reason that didn’t work for me, so I had to “cd \efi\arch”, and then “grubx64.efi”. Of course, you first need to choose the right partition, but the shell makes it easier by having aliases to the EFI system partitions, so you first type “fs0:” (or something like that), like I said; very similar to DOS.

Directly, please

But going into the shell and typing those commands every time you boot is cumbersome, which is why you would want to add a boot entry. So, once you have been able to boot Linux through EFI, you should be able to load the ‘efivars’ module, and thus access ‘/sys/firmware/efi/vars/’, which is what ‘efibootmgr’ needs.

Then you can do something like:

efibootmgr --create --gpt --disk /dev/sdX --part Y --write-signature --label "Arch Linux (GRUB2)" --loader '\efi\arch\grubx64.efi'

WARNING: You shouldn’t use this on Macs; there’s a different process for those.

In your BIOS you should see the option of booting into “Arch Linux (GRUB2)”, like you have the option of booting into a CD-ROM, and then you can choose that as the first one in the boot order :)


But wait a second, that starts to look like two bootloaders: the BIOS has a boot entry, and so does GRUB. Why not boot directly into Linux?

A couple of patches from Matt Fleming enable just that. They should be in version 3.3. Enable CONFIG_EFI_STUB, and copy bzImage to /efi/linux/linux.efi.

So now you can boot directly from the BIOS into Linux (or Windows) with no bootloader at all :)

Update: it’s also possible to create the boot entry with the kernel arguments embedded:

echo "initrd=\efi\arch\initramfs.img root=/dev/sda3 ro quiet" | iconv -t ucs2 | efibootmgr --create --gpt --disk /dev/sda --part 1 --label "Arch Linux" --loader '\efi\arch\vmlinuz.efi' --append-binary-args -


While you are at it, why not enable this new partitioning scheme (GPT)? It’s more EFI-friendly, and you can’t actually install a version of Windows that works with EFI without it. For Windows 7 it’s either EFI and GPT, or non-EFI and MBR; you can’t mix them (don’t ask me why).

Fortunately it’s very easy to switch from MBR to GPT; just run gdisk (from a rescue disk, of course). It’s also easy to switch back, as long as you haven’t used more than four partitions.

Android vs. Maemo power management: static vs. dynamic

Some of you might have heard about the Android team’s proposal to introduce wakelocks (aka suspend-blockers) to the Linux kernel. While there was a real issue being solved on the kernel side, the benefits on the user-space side were dubious at best, and after a huge discussion, they finally didn’t get in.

During these discussions dynamic and static power management were described and discussed at length, and there was a bit of talk about Maemo (MeeGo) vs. Android approaches to power management, but there was so much more than that.

Some people have problems with the battery on Android devices, for some people it’s just fine; some people have problems with Maemo, others don’t. So in general, your mileage may vary. But given the extremely different approaches, it’s easy to see in which cases you would have better luck with Maemo, and in which with Android–although I do think it’s obvious which approach is superior (then again, I am biased).

An interesting achievement was shared by Thiago Macieira, who managed to get ‘5 days and a couple of minutes‘ out of the Nokia N9 while traveling, and actually using it–and let’s remember this is a 1450 mAh battery. Some Android users might have trouble believing this, but hopefully this blog post will explain what’s going on.

So let’s go ahead and explore the two approaches.

Dynamic Power Management

Perhaps the simplest way to imagine dynamic power management is the gears of a manual transmission car: you shift up and down depending on how much power the system actually needs.

In some hardware, such as OMAP chips, it’s possible to have fine-grained control over the power states of a great many devices, plus different operating power points on the different cores. For example, some devices, such as the display, might be on and active; others partially off, like the speaker; and others completely off, like USB. And based on the status of the whole system, whole blocks can be powered off, others run at low voltage levels, etc.

Linux has a framework to deal properly with this kind of hardware: runtime power management. It originally came from the embedded world, and owes a lot to OMAP development, but is now available to everyone.

The idea is very simple: devices should sleep as much as possible. This means that if you have a sound device that needs chunks of 100ms, and the system is not doing anything but playing sound, then most of the devices go to sleep, even the CPU, and the CPU is only woken up when it needs to write data for the audio device. Even better is to configure the sound device for chunks of 1 second, so the system can sleep even more.
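To see why bigger audio chunks matter, here is a back-of-the-envelope sketch (the chunk sizes mirror the hypothetical numbers above; nothing here comes from a real driver):

```python
# Rough sketch: the CPU must wake once per chunk to refill the audio
# buffer, so larger chunks mean fewer wakeups and longer sleep.

def wakeups_per_minute(chunk_ms):
    """How often the CPU is forced awake for a given buffer size."""
    return 60_000 // chunk_ms

small = wakeups_per_minute(100)   # 100 ms chunks
large = wakeups_per_minute(1000)  # 1 s chunks

print(small, large)  # 600 vs. 60 wakeups per minute
```

Going from 100ms to 1s chunks cuts the forced wakeups tenfold, which is exactly the kind of win dynamic pm is after.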

Obviously, some co-operation between kernel and user-space is needed. Say you have an instant messenger program that needs to do work every minute, and a mail program that is configured to check mail every 10 minutes; you would want them to do their work at the same time, when they align every 10 minutes. This is sometimes called an IP heartbeat: the system wakes up for a beat, and then immediately goes back to sleep. And there are kernel facilities as well, such as range timers.
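The range-timer idea can be sketched in a few lines: each timer declares a window in which it is willing to fire, and overlapping windows are merged so one wakeup serves several timers. This is only an illustration of the concept, not the kernel’s actual algorithm:

```python
# Greedy coalescing of range timers: each timer supplies an
# (earliest, latest) window in seconds; windows that overlap are
# narrowed to their intersection so a single wakeup serves them all.

def coalesce(windows):
    groups = []
    for lo, hi in sorted(windows):
        if groups and lo <= groups[-1][1]:
            # Overlaps the current group: narrow the group's window.
            glo, ghi = groups[-1]
            groups[-1] = (max(glo, lo), min(ghi, hi))
        else:
            # No overlap: this timer starts a new wakeup group.
            groups.append((lo, hi))
    return groups

# An IM timer (60-120s), a sync timer (90-150s), and a mail timer
# (600-660s) collapse into two wakeups instead of three.
print(coalesce([(60, 120), (90, 150), (600, 660)]))
```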

All this is possible thanks to the very small latencies required for devices to go to sleep and wake up, and to the intermediary modes (e.g. on, inactive, retention, off); so, for example, a device might be able to go into inactive mode in 1ms, retention in 2ms, and off in 5ms (totally invented numbers). Again, the more sleep, the better. Obviously, this is impossible on x86 chips, which have huge latencies–at least right now, and it’s something Intel is presumably working hard to improve. All these latencies are known by the runtime pm framework in the kernel, and based on them and the usage, it figures out the lowest power state possible without breaking things.
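The selection logic can be sketched like this, reusing the invented latencies from the paragraph above (this is a toy model of what a runtime-pm governor does, not the framework’s real code):

```python
# Pick the deepest sleep state whose transition latency still fits in
# the idle window before the next known event. States are ordered
# shallow -> deep; latencies are the made-up numbers from the text.

STATES = [("on", 0.0), ("inactive", 1.0), ("retention", 2.0), ("off", 5.0)]

def pick_state(idle_window_ms):
    """Deepest state we can enter and leave before the next event."""
    best = "on"
    for name, latency_ms in STATES:
        if latency_ms <= idle_window_ms:
            best = name  # keep going: later states are deeper
    return best

print(pick_state(0.5))  # "on": no time to transition at all
print(pick_state(1.5))  # "inactive": retention's 2ms wouldn't fit
print(pick_state(100))  # "off": plenty of time, deepest state wins
```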

Note I’m not a power management expert, but you cant watch a colleague of mine explain the OMAP 3 power-managment on this video:

Advanced Power Management for OMAP3

And there’s plenty of more resources.

Update: That was the wrong link, here are the resources.

Static Power Management

Static power management has two modes: on and off. That’s it.

OK, that’s not exactly the case in general, but it is in the Android context; the system is either suspended, or active, and it’s easy to know in which mode you are; if the screen is on, it’s active, and if it’s off; it’ is suspended (ideally).

There’s really not much more than that. The complexity comes from the problem of when to suspend; you might have turned off the display, but there might be a system service that still needs to do work, so this service grabs a suspend blocker which, as the name suggests, prevents the system from suspending (until the lock is released). This introduces a problem; a rouge program might grab a ‘suspend blocker’ and never release it, which means your phone will never suspend, and thus the battery would drain rather quickly. So, some security mechanisms are introduced to grant permissions selectively to use suspend blockers.

Moreover, Android developers found race conditions in the suspend sequence that in certain situations were rather nasty (e.g. the system tries to suspend at the same time the user clicks a button, and the system never wakes up again). These were acknowledged as real issues that would happen on all systems (including PCs and servers, albeit rarely, because they don’t suspend so often), and got fixed (or at least they tried).


First of all, it’s important to know that if you have dynamic pm working perfectly, you reach exactly the same power usage as static pm, so in the ideal case they both behave exactly the same.

The problem is that it’s really hard for dynamic pm to reach that ideal case. In reality, systems are able to sleep for certain periods of time, after which they are woken up, oftentimes unnecessarily, and as I already explained, that’s not good. So the goal of a dynamic pm system is to increase those periods of time as much as possible, thus maximizing battery life. But there’s a point of diminishing returns (read this or this for expert explanations); if the system manages to sleep 1s on average, there’s really not much more to gain if it sleeps 2s, or even 10s (invented numbers, not the actual ones). These goals were quite difficult to achieve in the past, but not so much any more, thanks to several mechanisms that have been introduced and implemented through the years. So it’s fair to say that the sweet spot of dynamic pm has been achieved.
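The diminishing returns are easy to see with a little arithmetic: with a fixed energy cost per wakeup, average power falls steeply at first and then flattens. All figures here are invented for illustration:

```python
# Toy model: average power = a constant sleep floor plus the energy
# cost of each wakeup amortized over the average sleep period.

SLEEP_MW = 5.0         # power draw while sleeping (mW), invented
WAKEUP_COST_MJ = 50.0  # energy burned per wakeup (mJ), invented

def avg_power_mw(avg_sleep_s):
    """Average power for a given average sleep period, in mW."""
    return SLEEP_MW + WAKEUP_COST_MJ / avg_sleep_s

for period in (0.1, 1.0, 2.0, 10.0):
    print(period, avg_power_mw(period))
```

With these numbers, stretching the average sleep from 100ms to 1s saves 450 mW, but going from 1s to 10s saves only 45 mW more: past the sweet spot, extra effort barely pays off.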

This means that today a system that has been fine-tuned for dynamic pm can reach a decent battery life compared to one that uses static pm in most circumstances. But for some use-cases, say, you leave your phone on your desk and don’t use it at all, static pm would allow it to stay alive for weeks, or even months, while dynamic pm can’t possibly achieve that any time soon. Hopefully you would agree that nobody cares about those use-cases where you are not actually using the device.

And of course, you only need one application behaving badly and waking up the system constantly, and the battery life is screwed. So in essence, dynamic pm is hard to achieve.

Android developers argued that this was one of the main reasons to go for static pm: it’s easier to achieve, especially if you want to support a lot of third-party applications (Android Market) without compromising battery life. While this makes sense, I wasn’t convinced by this argument; you can still have one application that behaves badly (grabbing suspend blockers the whole time), and while permissions should help, the application might still request the permission, and the user grant it (who reads incomprehensible warnings before clicking ‘Yes’ anyway?).

So, both can get screwed by bad apps (although it’s likely that it’s harder in the static pm case, albeit not that much).

But actually, you can have both static and dynamic power management, and in fact, Android does. That doesn’t mean Android automatically wins, though; as I explained, the system needs to be fine-tuned for dynamic pm, and that has never been a focus of Android (there are no APIs or frameworks for it, etc.). So, for example, a Nokia N9 might be able to sleep 1s on average, while an Android phone (when not suspended) only 100ms. This means that when you are actually using the device (the screen is on), chances are a system fine-tuned for dynamic pm (the Nokia N9) will consume less battery than an Android device.

That is the main difference between the two. tl;dr: dynamic pm is better for active usage.

So, if Android developers want to improve battery usage during active usage (which I assume is what users want), they need to fine-tune the system for dynamic pm, and thus sleep as much as possible, hopefully reaching the sweet spot. But wait a second… if Android uses dynamic pm anyway, and they tune the system to the point of diminishing returns, there is no need for static pm. Right? Well, that’s my thinking, but I didn’t manage to make Android developers realize that in the discussion.

Plus, there’s a bunch of other reasons while static pm is not palatable for non-Android systems (aka. typical Linux systems), but I won’t go into those details.

Nokia’s bet was on dynamic, Google’s bet was on static, and in the end we agreed to disagree, but I think it’s clear from the outcome in real-world situations who was right–N9 users experiencing more than one day of normal usage, and even more than two. Sadly, only Nokia N9 users would manage to experience dynamic pm in full glory (at the moment).


But not all is lost, and this in my opinion is the most important aspect. Dynamic pm lives on in the Linux kernel mainline through the runtime power management API. This is not a Nokia invention that will die with the Nokia N9; it’s a collaborative effort where Nokia, TI, and other companies worked together, and it now benefits not only OMAP, but other embedded chips, and even non-embedded ones. Being upstream means it’s good: it has been blessed by many parties, has gone through many iterations, and by now it probably doesn’t look like the original OMAP SRF code at all. Sooner or later your phone (if you don’t have an N9) will benefit from this effort, and it might even be an Android phone, your netbook (if not already benefiting in some way), and even your PC.

Android’s suspend blockers are not upstream, and it’s quite unlikely that they will ever be, or that you would see them used in any system other than Android, and there’s good reasons for that. Matthew Garrett did an excellent job of summarizing what went wrong with the story of suspend blockers on his presentation ‘Android/Linux kernel: Lessons learned’, but unfortunately the Linux foundation seems to be doing a poor job of providing those videos (I cannot watch them any more, even though I registered, and they haven’t been helpful through email conversations).

Update: I managed to get the video from the Linux Foundation and pushed it to YouTube:

Here is part of the discussion on LKML, if you want to read it for some strange reason. WARNING: it’s huge.