N9 Swipe undocumented feature; activate sane behavior

Ed Page recently blogged about his idea to improve the Swipe UI. Fortunately for him, a bunch of people and I had the same idea inside Nokia :)

Update: apparently this is not present in the images distributed with the Nokia N950s; it was introduced after 25-3.

If you open ~/.config/mcompositor/mcompositor.conf, you’ll see a bunch of swipe-action-foo configs, all of them set to “away” by default.

You can, however, change them to something like:
swipe-action-up: switcher
swipe-action-down: close
swipe-action-left: events
swipe-action-right: launcher

And voilà! The action that happens now depends on the direction you swipe. You need to kill mcompositor for the changes to become active (or send SIGTERM/SIGINT, I don’t remember which one).
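
For instance, this is roughly the whole procedure from a terminal (just a sketch; I'm assuming SIGTERM is the right signal and that the system respawns mcompositor on its own):
# edit the swipe actions
vi ~/.config/mcompositor/mcompositor.conf
# kill mcompositor so it picks up the new values
kill $(pidof mcompositor)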

This was possible because I got fed up arguing about the benefits of swiping down to close applications, and decided to implement the thing myself. While doing that, I also decided to implement an idea that was flying around: do something different depending on the direction of the swipe. It took me about two days, including making it configurable; most of that was spent familiarizing myself with the code, which was split into multiple packages, and wrapping my head around C++, which I have avoided as much as possible. That was much less time than the time spent discussing it before and after I implemented the patches. Fortunately, after seeing the thing in action, many people jumped on the bandwagon, and at least swipe down to close is available to the masses, which people seem to like. Guerrilla design ;)

However, after a lot of thought, I realized the configuration I mentioned above is ideal. This is my rationale:

Rationale

The current design is inconsistent; when you swipe away, you never know in which desktop view you are going to end up, especially if you just unlocked the device. You might end up in ‘switcher’, ‘launcher’, or ‘events’. Same action, different behavior equals inconsistency.

There’s a simple solution: utilize the simple mental model required to navigate the desktop views.

Swipe mental model

This way, it’s easy to know exactly what will happen when swiping in each direction, decreasing the number of actions needed from the user. The same action always produces the same behavior.

It is unclear if this mode is ever going to be officially supported on the device, despite the fact that I think it’s obviously superior (and Ed Page seems to agree, as he came up with the exact same idea by himself). Maybe some people are worried about updating manuals and whatnot, but at least it’s trivial to activate, and anybody can create a 3rd party app for that ;)

BTW, this is yet another reason why I distrust people telling me I’m a “geek” and that “designers” know better when it comes to design. In the words of Elaine Morgan: yes, they can all be wrong; history is strewn with occasions when they all got it wrong.

Enjoy :)

msn-pecan 0.1.3 released

Hi,

This is a normal maintenance release, mostly to fix authentication reliability, along with
offline messaging fixes.

Felipe Contreras (19):
      build: use makensis
      win32: update README with full instructions
      gitignore: update
      build: trivial cleanup
      build: remove redundant -ansi specifier
      pn_util: remove clang warnings
      auth: properly cleanup pending request
      Improvements from clang analyzer
      Trivial cleanups from clang analysis
      pn_util: trivial improvement in pn_html_unescape
      http: fix crash when connecting from another location
      sync: fix gression in LST command
      msn: send messages offline when invisible
      notification: properly update the status on CHG
      node: extra check for open signal handler
      auth: fix a few memory leaks
      node: fix trivial warning
      build: use GIO by default
      win32: tag 0.1.3

Download from the (usual place).

Why geeks should be your target for marketing

It’s simple; because they listen. As opposed to the masses, which are really good at ignoring you.

Do you really think Average Joe will listen when you say your Samsung Galaxy S II has a 1.2Ghz dual-core processor? No, he will ask his friend, Geek Mike, which phone he should buy.

Seth Godin, marketing guru, explains it perfectly:

He explains that the TV-industrial complex is gone; the golden years of the 60s, when all companies had to do was flood TV with ads, are now history. There are too many choices, and way less time. Nowadays there are complex networks of information, and everybody knows where and how to get recommendations for services and products, which will probably only be noticed if they are remarkable.

How do you explain that Google became such a big company without advertising? You make a good product, and people talk about it. First the hard-core geeks, then normal geeks, then normal people, and eventually even grandmas and grandpas. They started by doing one thing, but doing it very well, and that made them remarkable. That’s why even though Google+ today has 74% males and 25% engineers (reference), they will spread the word, and eventually everybody will be there.

That is the viral effect of social networks. If you want something to spread through the network, you have to target the nodes that are more tightly connected, the ones at the center of the network. These would obviously be geek bloggers, many of which have thousands of followers, each of which will spread it to more and more people, until eventually you get to the masses. Google, Twitter, Facebook, the iPhone, they all seemed to be targeting only a small niche of technical people, but what many people didn’t consider, is that these geeks would find the value, and spread the message.

This should be obvious, but for some people it’s not. Some people say “oh, the geeks don’t like this, but it’s OK, they are not our target market” (Nokia comes to mind), which completely misses the point that if geeks don’t spread the message, you already lost.

And that is why GNOME 3, for example, has already lost. They are ignoring the huge backlash from existing users, effectively telling them “we don’t care about you, you are a tiny minority”.

In this hugely interconnected world, you should not disregard the tiny minority that loves you in order to chase the huge majority that doesn’t care about you; that is not only dangerous, but doomed to fail. Moreover, I would go even further and say that this tiny minority can actually tell you what you should do to reach the big majority better.

gst-av 0.5 released; now with video encoding and decoding support

gst-av is a GStreamer plug-in that provides support for libav (formerly FFmpeg). It is similar to gst-ffmpeg, but without the GStreamer politics, which means all libav codecs are supported, even if there are native GStreamer alternatives: VP8, MP3, Ogg, Vorbis, AAC, etc.

In addition, it is much simpler (2654 vs 16575 LOC), has better performance, has a few extra features (such as lower latency), and doesn’t use deprecated APIs. In a previous post I measured exactly how much of an improvement over gst-ffmpeg there is; it’s not much, but it’s something.

IOW; it’s possible that gst-av is the only GStreamer codec plug-in you would ever need :)

Getting proxy support on GNOME; for real (libproxy-simple)

If you open gnome-network-properties and set up your proxy accordingly, you will notice that most network applications don’t really work. There are several reasons for that.

For as long as I can remember I have looked for a way to add proxy support to my applications. There have been a number of network libraries, none of which has ever fit the bill for me. Finally came GIO, now part of GLib, which has many advantages (although IMO not ideal, it works perfectly fine). Unfortunately, no proxy support.

There’s also libproxy, which one might think has proxy support, but it doesn’t; it merely provides a mechanism to find proxy configuration. This is good, but not the full picture.

But since GIO has plug-ins, it might be possible to provide proxy support transparently, with the help of libproxy… right? That’s where glib-networking comes in. Great! So we have everything we need.

Not really.

Pidgin picking the wrong DVCS again; Mercurial

I have long criticized Pidgin’s move to Monotone. I have tried to analyse this rather marginal DVCS tool, I wrote my own mtn->git conversion tool, and I helped validate and improve Monotone’s official tool afterwards. I have spent countless hours identifying Pidgin’s contributors, finding their real names, email addresses, etc. I have also manually dug through old commits to properly identify the real authors of a patch (as opposed to the committer). Finally, I also wrote scripts that automatically create a nice, clean git repo.

pidgin-git-import

Now that Pidgin is thinking of switching away from Monotone (which was my recommendation a long time ago), they are considering Mercurial, and I’ve also helped with their conversion scripts.

pidgin-mtn-conv-files

However, I feel they are making yet another mistake, because they haven’t actually analysed their decision. In order to demonstrate the double standards, cognitive dissonance, and lack of follow-up, I’m posting chunks of the recorded history on the mailing list, with links.

msn-pecan 0.1.2 released; critical bug-fix

Hi,

This is an important maintenance release; everybody should update. Apparently Microsoft shut down the Nexus servers that were used for authentication, which meant msn-pecan stopped working completely. Fortunately I found a trick to circumvent the problem without requiring an update to the protocol used: use Passport 3.0 authentication. Had I known the fix would be so easy, I would have tried it earlier. Sorry for the long delay.

Plus:

  • Fix authorization; Passport 3.0 instead of nexus
  • Fix offline message reception
  • Improve reconnection/disconnection detection
  • Ignore reverse-list; the server returns wrong info
  • Show correct alias in chat window

Enjoy :)

Demétrio (1):
     Ignore reverse presence

Felipe Contreras (19):
     plugin: show proper alias in the chat window
     contactlist: fix for existing "null" group
     Improve disconnections
     ns: add time out detection
     Ignore reverse list completely
     Update libmspack to 0.2 alpha
     libmspack: fix compilation warnings
     libsiren: fix compilation warnings
     Fix some compilation warnings
     build: check for more warnings
     Fix authentication
     Improve auth parsing
     auth: reorganize to have a callback
     auth: add private header
     oim: use generic auth stuff
     oim: improve and fix message parsing
     Get rid of pn_auth_start()
     hack: mingw32 workarounds
     win32: tag 0.1.2

Download from the (usual place).

Why Linux is the most important software project in history

Here’s another post whose point is obvious to some people, but there are others (e.g. high-level managers) who might not necessarily see the importance of Linux; in fact, I have been surprised by many open source developers who don’t seem to be familiar with how Linux works (they think it’s just something that works?). The fact of the matter is that Linux is light years ahead of any other software project, open or closed, and I’ll try to explain why.

BTW, by “Linux” I mean the kernel, not the ecosystem; the kernel is what the name of the project refers to.

Everywhere

First of all, Linux runs everywhere; desktops, smartphones, routers, web servers, supercomputers, TVs, refrigerators, tablets, even on the stock market (London, NY, Johannesburg, etc.).

Not only does it run everywhere, but in many areas it’s the undisputed #1. In smartphones, Android already has the most market share, and is grabbing more and more. In supercomputers, Linux has 92% of the TOP500. On servers, it’s 64%.

There are a lot of benefits to having a single kernel that runs on all kinds of hardware; I’ll mention two examples.

One is that the improvements in power consumption that all the embedded people have been pushing for benefit not only laptops and desktops, but even servers. Likewise, features like dynamic power management that work really well on embedded hardware influence desktop and server hardware.

Another one is the VFS scalability patches. Nick Piggin found some issues on machines with 64 CPUs that required some reorganization of the VFS. The problem is that it’s tricky to test issues with these patches; however, since Linux also has a real-time community (they have their own patches, which will eventually get merged), issues in these patches could be found more easily.

Here’s a nice interview with Jim Zemlin from the Linux Foundation that explains where Linux comes from, where it is, and how it might very well become the building block of all devices in the future.

Collaboration

There are many open projects, but not many where everyone is involved and working together. The Linux Foundation issues a yearly report on how the kernel is being developed, and you can see competing companies working together, such as Red Hat (12.4%), Novell (7.0%), IBM (6.9%), Intel (5.8%), Oracle (2.3%), Renesas (1.4%), SGI (1.3%), Fujitsu (1.2%), Nokia (1.0%), HP (1.0%), Google (0.8%), AMD (0.8%), etc.

This is true synergy, not the management bullshit kind; nobody alone (Microsoft) can compete with what everyone together can produce (Linux).

Communication

The traffic of the main mailing list (LKML) is astronomical: 250 messages per day. But that’s only one mailing list; there are around 200 subsystem lists, and many of them have a lot of traffic as well. One I’m subscribed to is linux-media, which has around 30 messages per day.

And there’s a reason why so many people can follow so much traffic without it becoming a total mess: all kernel mailing lists follow common guidelines. You don’t have to be subscribed, there is no reply-to munging, cross-posting is encouraged, you cc the right people, trim unnecessary context, and don’t top-post.

Also very important is sending patches through the mailing list. You don’t even have to think about it: just type ‘git send-email’ with a proper --cc-cmd, and Linux’s get_maintainer.pl script will find the right maintainers and contributors to cc, and the proper mailing lists to send the patch to. Again, no need to be subscribed. More on that next.
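
For example, from inside a kernel tree it boils down to something like this (a sketch; the patch file name and the --to address are just illustrations, and get_maintainer.pl will suggest the right lists anyway):
git format-patch -1                     # produce 0001-*.patch from the last commit
git send-email --to=linux-media@vger.kernel.org \
    --cc-cmd='scripts/get_maintainer.pl --norolestats' 0001-*.patch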

Patches

I have explained before why sending patches through the mailing list is superior to bugzilla. It can be summarized as: you don’t need to be subscribed, you don’t need to log in anywhere, you don’t need to search for the right component, etc.

Take myself as an example. I am paid by Nokia to work on GStreamer stuff, yet, even though I have a bugzilla account and everything, it’s easier for me to submit (and get merged) patches to Linux (mostly in my free time); it’s just one command. It’s not only easier for me to submit patches, but also to review them: just click reply. I’m not even going to mention closed source, which is horrible in this area.

It is also rewarding that the response to patches is usually immediate; however, sometimes there are so many comments that patch series go through 3, 5, 10, even 30 revisions before they are accepted. This is great for quality, and not only for the project: the developers involved also learn a lot.

Using a mailing list also means that it’s easy to switch from reviewing a patch to a new discussion, based on the patch, thus helping communication.

Speed

It is not surprising that such a fine-tuned process results in stable releases every three months like clockwork, each introducing major features, and merging 5 to 6 patches per hour.

I can’t really explain how many things are going on in Linux, but Jonathan Corbet does in his Kernel Report.

Conclusion

So, Linux is an incredibly massive endeavor, easy and fun to work with, an unstoppable behemoth in the software industry, and IMO companies trying to stand in its way are going to realize their mistake in a painful way.

MeeGo scales, because Linux scales

To me, and a lot of people, it’s obvious why MeeGo scales to a wide variety of devices, but apparently that’s not clear to other people, so I’ll try to explain why that’s the case.

First, let’s divide the operating system:

  1. Kernel
  2. Drivers
  3. Adaptation
  4. System Frameworks
  5. Application Framework
  6. Applications

“Linux” can mean many things, in the case of Android, Linux means mostly the Kernel (which is heavily modified), and in some cases the Drivers (although sometimes they have to be written from scratch), but all the layers above are specific to Android.

On Maemo, MeeGo, Moblin, and LiMo, “Linux” means an upstream Kernel (no drastic changes) and upstream Drivers (which means they can be shared with other upstream players as they are), but it also means the “Linux ecosystem”: D-Bus, X.org, GStreamer, GTK+/Qt/EFL, etc. That means they take advantage of already existing System and Application Frameworks, and all they have to do is build the Applications, which is not an easy task, but certainly easier than having to do all the previous layers as well.

Now, the problem when creating MeeGo is that, for reasons I won’t (can’t?) explain here, Maemo and Moblin were forced to switch from GTK+ to Qt. This might have been the right move in the long term, but it means rewriting two very big layers of the operating system; in fact, the two layers that differentiate the various mobile platforms for the most part. And this of course means letting go of a lot of the talent that helped build both Maemo and Moblin.

For better or worse, the decision was made, and all we could do was ride along with it. And maturing MeeGo essentially means maturing these two new layers, which are being written not entirely from scratch (as Qt was already there), but pretty much (as you have to add new features to it, and build on top).

Now, did MeeGo fail? Well, I don’t know when this UI can be considered mature enough, but sooner or later it will be (I do think it will be soon). The timeframe also depends on your definition of “mature”, but regardless of that, it will happen. After that, MeeGo will be ready to ship on all kinds of devices. All the hardware platform vendors have to do is write the drivers and the adaptation, which they already do anyway for other software platforms.

Needless to say, the UI is irrelevant to the hardware platform.

So, here’s the proof that the lower layers are more than ready:

Just a few months after MeeGo IVI was announced, these guys were able to write a very impressive application thanks to QML, ignoring the official UI.

The OMAP4 guys went for the full MeeGo UI. No problems.

Even though Freescale is probably not that committed to MeeGo, it’s easier to create a demo using it (Qt; Nomovok) than with other platforms. It’s even hardware accelerated.

Renesas also chose the Nomovok demo to show their hardware capabilities.

MeeGo 1.1 running on HTC’s HD2

One guy; yes, one guy, decides to run MeeGo on his HTC, and succeeds. Of course, he uses the work already done by Ubuntu for the HD2, but since MeeGo is close to upstream, the same kernel can be used. Sure, it’s slow (no hardware acceleration), and there are many things missing, but for the short amount of time spent by hobbyists, that’s pretty great already.

This one is not so impressive, but it also shows the work of one guy porting MeeGo to the Nexus S.

And here it is running on the Archos 9. Not a very impressive UI, but the point is that it runs on this hardware.

Conclusion

So, as you can see, MeeGo is already supported on many hardware platforms; not because the relevant companies made a deal with Nokia or Intel; they don’t have to. The only thing they have to do is support Linux: Linux is what allows them to run MeeGo, and Linux is what allows MeeGo to run on any hardware platform.

This is impossible with WP7 for numerous reasons: it’s closed source, it’s proprietary, it’s Microsoft, etc. It’s not so impossible to do the same with Android, but it’s more difficult than with MeeGo because they don’t share anything with a typical Linux ecosystem; they are on a faraway island of their own.

Mercurial vs Git; it’s all in the branches

I have been thinking of comparing Git to other SCMs. I’ve analyzed several of them, and so far I’ve only done the comparison with Monotone (here). Nowadays it’s pretty clear that Git is the most widely used DSCM out there, but Mercurial is the second. So, here is a comparison between Git and Mercurial.

Note: I’m a long-time advocate of Git, and a contributor to the project, so I am biased (but I hope my conclusions are not).

Google’s analysis

So, let’s start with Google’s analysis where they compared Git with Mercurial in order to choose what to use in Google Code. Here I’ll tackle what Google considered to be Mercurial advantages.

Learning Curve. Git has a steeper learning curve than Mercurial due to a number of factors. Git has more commands and options, the volume of which can be intimidating to new users. Mercurial’s documentation tends to be more complete and easier for novices to read. Mercurial’s terminology and commands are also a closer to Subversion and CVS, making it familiar to people migrating from those systems.

That is a value judgement, and while that seems to be the consensus, I don’t think Git is much more difficult than Mercurial. In the last Git user survey, people mostly answered the question ‘Have you found Git easy to learn?’ with ‘Reasonably easy’, and few answered ‘Hard’ or ‘Very hard’. Plus, the documentation is not bad, as people answered ‘How useful have you found the following forms of Git documentation?‘ very positively. However, it’s worth noting that the best documentation is online, not the official one (according to the survey).

Regarding the number of commands, most of them can be ignored. If you type ‘git’, only the important ones will be listed.

The last point is that Mercurial is easier for people migrating from Subversion and CVS. That is true, but personally I don’t consider that a good thing; IMO it’s perpetuating bad habits and mental models. For people who have not been tainted by CVS/Subversion, it might be easier to learn Git.

Windows Support. Git has a strong Linux heritage, and the official way to run it under Windows is to use cygwin, which is far from ideal from the perspective of a Windows user. A MinGw based port of Git is gaining popularity, but Windows still remains a “second class citizen” in the world of Git. Based on limited testing, the MinGW port appeared to be completely functional, but a little sluggish. Operations that normally felt instantaneous on Linux or Mac OS X took several tenths of a second on Windows. Mercurial is Python based, and the official distribution runs cleanly under Windows (as well as Linux, Mac OS X, etc).

That was probably true at the time, but nowadays msysGit works perfectly fine. There has been a lot of work to make Git more portable, and the results have been positive.

Maintenance. Git requires periodic maintenance of repositories (i.e. git-gc), Mercurial does not require such maintenance. Note, however, that Mercurial is also a lot less sophisticated with respect to managing the clients disk space (see Client Storage Management above).

Not any more.

History is Sacred. Git is extremely powerful, and will do almost anything you ask it to. Unfortunately, this also means that Git is perfectly happy to lose history. For example, git-push --force can result in revisions becoming lost in the remote repository. Mercurial’s repository is structured more as an ever-growing collection of immutable objects. Although in some cases (such as rebase), rewriting history can be an advantage, it can also be dangerous and lead to unexpected results. It should be noted, however, that a custom Git server could be written to disallow the loss of data, so this advantage is minimal.

This was an invalid argument from the beginning. Whether history is sacred or not depends on the project; many Git projects have such a policy, and they don’t allow rebases of already published branches. You don’t need your SCM to be designed specifically to prevent your developers from doing something (in fact, rebases are also possible in Mercurial); this should be handled as a policy. If you really want to prevent your developers from doing this, it’s easy to do with a Git hook. So really, Mercurial doesn’t have any advantage here.
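
For instance, here is a minimal sketch of a server-side ‘update’ hook that refuses non-fast-forward pushes (Git even ships a configuration option, receive.denyNonFastForwards, that does much the same without any custom code):
#!/bin/sh
# hooks/update on the server; called as: <refname> <oldrev> <newrev>
refname="$1" oldrev="$2" newrev="$3"
zero=0000000000000000000000000000000000000000
# let branch creations and deletions through; only check updates
[ "$oldrev" = "$zero" ] && exit 0
[ "$newrev" = "$zero" ] && exit 0
# a fast-forward means the old tip is an ancestor of the new tip
if [ "$(git merge-base "$oldrev" "$newrev")" != "$oldrev" ]; then
    echo "refusing non-fast-forward push to $refname" >&2
    exit 1
fi
exit 0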

In terms of implementation effort, Mercurial has a clear advantage due to its efficient HTTP transport protocol.

Git has had smart HTTP support for quite some time now.

So, of all the supposed advantages of Mercurial, only the learning curve stands, and as I already explained, it’s not that strong an argument.

IMO Google made a bad decision; it was only a matter of time before Git resolved the issues they listed, and in fact Google could have helped resolve them. Now they are thinking of adding Git support; “We’re listening, we promise. This is one of the most starred issues in our whole bugtracker.” (Update: already done.) My recommendation to other people facing similar decisions is to choose the project that has a brighter future; chances are the “disadvantages” you see in the present will be resolved soon enough.

stackoverflow’s comparison

The best comparison I’ve seen is the one on stackoverflow’s question Git and Mercurial – Compare and Contrast, the answer from Jakub Narebski is simply superb.

Here’s the summary:

  • Repository structure: Mercurial doesn’t allow octopus merges (with more than two parents), nor tagging non-commit objects.
  • Tags: Mercurial uses versioned .hgtags file with special rules for per-repository tags, and has also support for local tags in .hg/localtags; in Git tags are refs residing in refs/tags/ namespace, and by default are autofollowed on fetching and require explicit pushing.
  • Branches: In Mercurial basic workflow is based on anonymous heads; Git uses lightweight named branches, and has special kind of branches (remote-tracking branches) that follow branches in remote repository.
  • Revision naming and ranges: Mercurial provides revision numbers, local to the repository, and bases relative revisions (counting from tip, i.e. current branch) and revision ranges on this local numbering; Git provides a way to refer to revisions relative to a branch tip, and revision ranges are topological (based on the graph of revisions); see the sketch after this list
  • Mercurial uses rename tracking, while Git uses rename detection to deal with file renames
  • Network: Mercurial supports SSH and HTTP “smart” protocols; modern Git supports SSH, HTTP and GIT “smart” protocols, and HTTP(S) “dumb” protocol. Both have support for bundles files for off-line transport.
  • Mercurial uses extensions (plugins) and established API; Git has scriptability and established formats.
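
To make the point about revision ranges concrete, here is a quick sketch of the Git side (assuming a linear history for the first example; branch names are illustrative):
git log master~3..master        # the last three commits on master
git log origin/master..topic    # commits in 'topic' that are not yet in origin/master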

If you read the whole answer you will see that most of the differences between Mercurial and Git are very subtle, and most people wouldn’t even notice them; however, there’s a fundamental difference that I’ll tackle in the next section: branches.

It’s all in the branches

Update: here’s a new blog post that goes deeper into the branching model differences.

Let’s go directly to an example. Say I have a colleague called Bob, and he is working on a new feature and has created a temporary branch called ‘do-test’. I want to merge his changes into my master branch; however, the branch is so simple that I would prefer it to be hidden from the history.

In Git I can do that in the following way:
git checkout -b tmp-do-test bob/do-test # temporary branch based on Bob's
git rebase master                       # replay his commits on top of master
git mergetool # resolve conflicts
git rebase --continue
git mergetool # resolve conflicts
git rebase --continue
git checkout master
git merge tmp-do-test                   # fast-forward; no merge commit
git branch -D tmp-do-test               # throw the temporary branch away

(Of course this is a simplified case where a single ‘git cherry-pick’ would have done the trick)

Voilà. First I create a temporary branch ‘tmp-do-test’ that is equal to Bob’s ‘do-test’, then I rebase it on top of my master branch and resolve the conflicts; Git is smart enough to notice that, since not a single line of Bob’s last commit was picked, the commit should be dropped. Then I go to the master branch and merge this temporary branch, and finally I remove the temporary branch. I do variations of this flow quite a lot.

This is much more tricky in Mercurial (if possible). Why?

hg branch != git branch

In Git, a branch is merely one of the many kinds of ‘refs’, and a ‘ref’ is simply a pointer to a commit. This means that there’s nothing fundamentally different between ‘bob/do-test’, ‘tmp-do-test’, or ‘master’; they are all pointers to commits, and these pointers can easily be deleted, renamed, fetched, pushed, etc. IOW you can do pretty much whatever you want with them.
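
A quick illustration of how cheap these pointers are to manipulate (just a sketch; ‘do-test-v2’ is a made-up name):
git branch -m tmp-do-test do-test-v2    # rename: only the pointer changes
git branch -D do-test-v2                # delete the pointer; the commits are untouched
git fetch bob                           # update remote-tracking pointers such as bob/do-test
git push origin master                  # publish the pointer called 'master'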

In Mercurial, a branch is embedded in a commit; a commit done on the ‘do-test’ branch will always remain on that branch. This means you cannot delete or rename branches, because you would be changing the history of the commits on those branches. You can ‘close’ branches, though. As Jakub points out, these “named branches” are better thought of as “commit labels”.

Bookmarks

However, there’s an extension (now part of the core) that provides something relatively similar to Git branches: bookmarks. These are like Git ‘refs’, and although they were originally intended to be kept local, since Mercurial 1.6 they can be shared, but of course both sides need the extension enabled. Even then, there is no namespace to separate, say, my ‘do-test’ bookmark from Bob’s ‘do-test’ bookmark, like Git does automatically.
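
A rough sketch of how the bookmark flow looks (assuming both repositories have the extension enabled; 'bob' is just an illustrative path alias):
hg bookmark do-test       # create a bookmark pointing at the current changeset
hg push -B do-test        # push the bookmark itself, not just its changesets
hg pull -B do-test bob    # pull the 'do-test' bookmark from the 'bob' path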

Future

In the future, perhaps Mercurial will add namespaces to bookmarks, and will make the bookmarks and rebase extensions part of the core. At that point it would be clear that traditional Mercurial branches didn’t make much sense anyway, and they might be dropped. Then, of course, there would not be much practical difference between Mercurial and Git, but in the meantime Git branches are simply better.

Conclusion

The conclusion shouldn’t be that surprising: Git wins. The way Git deals with branches is incredibly simple, yet so powerful, that I think that’s more than enough reason for it to crush Mercurial with respect to functionality. Of course, for simple use-cases they are the same, and perhaps Mercurial would be easier for some people, but as soon as you start doing some slightly complicated handling of branches, the “complexity” of Git translates into very simple commands that do exactly what you want. So, every workflow is supported in Git in a straightforward way. No wonder most popular projects have chosen Git.

Projects using Git: Linux Kernel, Android, Clutter, Compiz Fusion, Drupal, Fedora, FFmpeg (and libav), Freedesktop.org (X.Org, Cairo, D-Bus, Mesa3D), GCC, GNOME, GStreamer, KDE, LibreOffice, Perl5, PulseAudio, Qt, Ruby on Rails, Samba, Wine, VLC

Update: Eclipse is also using Git for a bunch of projects.

Projects using Mercurial: Adium, Go Programming Language, Mozilla, OpenJDK, OpenSolaris, Pidgin, Python, Vim