Freedom of speech in online communities

I have debated freedom of speech countless times, and it is my contention that today (in 2021) the meaning of that concept is lost.

The idea of freedom of speech didn’t exist as such until censorship started to be an issue, and that was after the invention of the printing press. It was only after people started to argue in favor of censorship that other people started to argue against it. Freedom of speech is an argument against censorship.

Today that useful meaning is pretty much lost. Now people wrongly believe that freedom of speech is a right, and only a right, and worse: they equate freedom of speech with the First Amendment, even though freedom of speech existed before that law, and exists in countries other than the USA. I wrote about this fallacy in my other blog in the article: The fatal freedom of speech fallacy.

The first problem with considering freedom of speech a right is that it shuts down discussion about what it ought to be. This is the naturalistic fallacy (confusing what is with what ought to be). If we believed that whatever cannabis laws we currently have are the ones we ought to have, then we couldn’t discuss any changes to those laws, because the answer to everything would be “that’s illegal”. The question is not whether X is currently illegal; the question is whether it should be. When James Damore was fired by Google for criticizing Google’s ideological echo chamber, a lot of people argued that Google was correct in firing him because it was legal, but that completely misses the point: the fact that something is legal doesn’t necessarily mean it’s right (or that it should be legal).

Today people are not discussing what freedom of speech ought to be.

Mill’s argument

In the past people did debate what freedom of speech ought to be, not in terms of rights, but in terms of arguments. The strongest argument comes from John Stuart Mill, who presented it in his seminal work On Liberty.

Mill’s argument consists of three parts:

  1. The censored idea may be right
  2. The censored idea may not be completely wrong
  3. If the censored idea is wrong, it strengthens the right idea

It’s obvious that if an idea is right, society will benefit from hearing it, but if an idea is wrong, Mill argues that it still benefits society.

Truth is antifragile. Like the immune system, it benefits from failed attacks. Just like a bubble boy who is shielded from the environment becomes weak, so do ideas. Even if an idea is right, shielding it from attacks makes the idea weak, because people forget why the idea was right in the first place.

I often use the example of flat-Earth. Obviously the Earth is round, and flat-Earthers are wrong, but is that a justification for censoring them? Mill argues that it’s not. I’ve seen debates with flat-Earthers, and what I find interesting are the arguments trying to defend the round Earth, but even more interesting are the people who fail to demonstrate that the Earth is round. Ask ten people you know how they would demonstrate that the Earth is round. Most would have less knowledge about the subject than a flat-Earther.

The worst reason to believe something is dogma. If you believe Earth is round because science says so, then you have a weak justification for your belief.

My notion of the round Earth only became stronger after flat-Earth debates.

Censorship hurts society, even if the idea being censored is wrong.

The true victim

A common argument against freedom of speech is that you don’t have the right to make others listen to your wrong ideas, but this commits all the fallacies I mentioned above, including confusing the argument for freedom of speech with the right, and it ignores Mill’s argument.

When an idea is being censored, the person espousing this idea is not the true victim. When the idea was that the Earth circled the Sun (and not the other way around, as was believed), Galileo Galilei was not the victim: he already knew the truth; the victim was society. Even when the idea is wrong, like in the case of flat-Earth, the true victim is society, because by discussing wrong ideas everyone can realize for themselves precisely why they are wrong.

XKCD claims the right to free speech means the government can't arrest you for what you say.
XKCD doesn’t know what freedom of speech is

The famous comic author Randall Munroe–creator of XKCD–doesn’t get it either. Freedom of speech is an argument against censorship, not a right. The First Amendment came after freedom of speech was already argued, or in other words: after it was already argued that censorship hurts society. The important point is not that the First Amendment exists, the important point is why.

This doesn’t change whether the censorship is overt, or the idea is merely suppressed through social opprobrium. The end result for society is the same.

Censorship hides truth and weakens ideas. A society that holds wrong and weak ideas is the victim.

Different levels

Another wrong notion is that freedom of speech only applies in public spaces (because that’s where the First Amendment mostly applies), but if you follow Mill’s argument, when Google fired James Damore, the true victim was Google.

The victims of censorship are at all levels: society, organization, group, family, couple.

Even at the level of a couple: what would happen to a couple that just doesn’t speak about a certain topic, say abortion?

What happens if the company you work for bans the topic of open spaces? Who do you think suffers? The people that want to criticize open spaces, or the whole company?

The First Amendment may apply only at a certain level, but freedom of speech, that is: the argument against censorship, is valid at every level.

Online communities

Organizations that attempt to defend freedom of speech struggle because while they want to avoid censorship, some people simply don’t have anything productive to say (e.g. trolls), and trying to achieve a balance is difficult, especially if they don’t have a clear understanding of what freedom of speech even is.

But my contention is that most of the struggle comes from the misunderstandings about freedom of speech.

If there’s a traditional debate between two people, with an audience of one hundred, and one person in the audience starts to shout facts about flat-Earth, would removing that person from the venue be a violation of freedom of speech? No. It’s just not part of the format. In this particular format an audience member can ask a question at the end, in the Q&A part of the debate. It’s not the idea that is being censored; it’s the manner in which the idea was expressed that is the problem.

The equivalent of society in this case is not hurt by a disruptive person being removed.

Online communities decide what format they wish to have discussions in, and if a person not following the format is removed, that doesn’t hide novel ideas nor weaken existing ideas. In other words: the argument against censorship doesn’t apply.

But in addition the community can decide which topics are off-topic. It makes no sense to talk about flat-Earth in a community about socialism.

But when a person is following the format, and talking about something that should be on-topic, but such discussion is hindered either by overt censorship (e.g. ban), or social opprobrium (e.g. downvotes), then it is the community that suffers.

Ironically, when online communities censor the topic of vaccine skepticism, the only thing being achieved is that the idea becomes weak, that is: the people who believe in vaccines do so for the wrong reasons (even if they are correct), so they become easy targets for anti-vaxxers. In other words: censorship achieves the exact opposite of what it attempts to solve.

Online communities should fight ideas with ideas, not censorship.

The visual style of a programmer

Recently I heard a person say that we “geeks” don’t have a good sense of style, presumably because we typically wear very plain clothes (check pictures of Steve Jobs or Mark Zuckerberg). However, what I think many people don’t see is that we do have style, but where it matters: our computer screens, not our clothes.

Most of the time a programmer is staring at code, and the program that shows code and allows you to edit it properly is called a text editor.

This is vim, one of the most popular text editors for programmers with the default configuration.

By default it works, however, staring at these colors for hours gets tedious; I want better colors. Fortunately vim has the concept of “color schemes”, so you have several hundred themes to choose from.

After trying tons of them, I found none were exactly what I wanted, so I decided to create my own.

Color theory

I have been choosing colors for websites for about 20 years, so I am familiar with the ways colors are programmed, but many people are not.

While sometimes you can tell a program “red” and it will use the right color, sometimes you need a slightly darker red, or something between orange and red. So in order to be perfectly specific, the most common system to tell a computer a color is called RGB (red, green, blue). In this system, red is 100%, 0%, 0% (100% of the red component), green would be 0%, 100%, 0%, and yellow (which is a combination of red and green), 100%, 100%, 0%.

But computers don’t naturally deal with percentages; they are digital, so they need concrete numbers, which is why 100% is translated to 255 (the maximum value), and thus 50% would be 128. And they don’t even use the decimal system; they use binary, and the closest compromise between decimal and binary is hexadecimal, in which 255 is “FF”. Just like in decimal 9 is the biggest digit (1 less than 10), in hexadecimal F is the biggest digit, representing 15 (1 less than 16).

So, red in hexadecimal RGB (the standard) is “FF0000”.
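
To make the mapping concrete, here is a minimal sketch in Python (the helper name is mine, just for illustration):

    # Convert RGB components given as percentages into the standard
    # hexadecimal notation (100% -> 255 -> "FF").
    def rgb_percent_to_hex(r, g, b):
        return "".join(f"{round(p / 100 * 255):02X}" for p in (r, g, b))

    print(rgb_percent_to_hex(100, 0, 0))    # red    -> FF0000
    print(rgb_percent_to_hex(100, 100, 0))  # yellow -> FFFF00
    print(rgb_percent_to_hex(100, 50, 0))   # orange -> FF8000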

I can do the translation in my mind between many hexadecimal values and their corresponding human colors, and do some alterations, like making an orange more red, or making a cyan darker or less saturated.

This method of selecting colors has served me well for several years, and I have created aesthetically pleasing colors for many interfaces, but it’s always trial and error, and although the colors look OK, I could never be sure they were precisely what I wanted.

For example if yellow is “FFFF00” (100% red and 100% green), I could figure out orange would be “FF8000” (50% green). But for more complicated colors, like a light red “FF8080”–where green is already halved–it’s not so clear how to combine it with a light yellow “FFFF80” where green is full, or how to make a purple that looks similar.

I wanted a programmatically precise method of generating the colors I wanted, and in order to do that I researched color wheels and learned that there are in fact many color systems, and many different color wheels.

What I wanted was a way to generate the RGB color wheel, but without using the RGB color model. It turns out there are two alternate representations of the RGB model: HSL (hue, saturation, lightness) and HSV (hue, saturation, value). I was familiar with HSV, but it turns out HSL is the one that better serves my purposes.

In HSL red is 0°, 100%, 50%, yellow is 60°, 100%, 50%, orange is 30°, 100%, 50%; the saturation and lightness are not changing, only the hue. So now it’s clear how to generate the light orange, since light red is 0°, 100%, 75%, light yellow is 60°, 100%, 75%, so obviously light orange is 30°, 100%, 75%.

I can easily generate the full color wheel by simply changing the hue: red 0°, orange 30°, yellow 60°, chartreuse green 90°, green 120°, spring green 150°, cyan 180°, azure 210°, blue 240°, violet 270°, magenta 300°, rose 330°.
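
Since HSL is just an alternate representation of RGB, the whole wheel can be generated with a few lines of code. Here is a rough sketch using Python’s colorsys module (which calls the model HLS and works with values between 0 and 1):

    import colorsys

    # One hue every 30 degrees, with saturation 100% and lightness 50%.
    names = ["red", "orange", "yellow", "chartreuse green", "green",
             "spring green", "cyan", "azure", "blue", "violet",
             "magenta", "rose"]

    for i, name in enumerate(names):
        hue = i * 30
        r, g, b = colorsys.hls_to_rgb(hue / 360, 0.50, 1.0)
        print(f"{name:16} {hue:3}° "
              f"{round(r * 255):02X}{round(g * 255):02X}{round(b * 255):02X}")

    # The light variants only change the lightness; for example light orange
    # (30°, 100%, 75%) comes out as FFBF80.
    r, g, b = colorsys.hls_to_rgb(30 / 360, 0.75, 1.0)
    print(f"light orange          "
          f"{round(r * 255):02X}{round(g * 255):02X}{round(b * 255):02X}")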

My color scheme

I have been using my own color scheme for about 10 years, but armed with my new-found knowledge, I updated the colors.

I cannot stress enough how incredibly different this looks to my eyes, especially after hours of programming.

These are the colors I ended up picking.

Is this not style?

If you are a programmer using vim, here’s my color scheme: felipec.
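
If you want to try it, and assuming the scheme file (felipec.vim) is installed under ~/.vim/colors (or pulled in through your plugin manager), enabling it is a single line in your ~/.vimrc:

    " ~/.vimrc: use the felipec color scheme
    colorscheme felipec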

Font

But wait, there’s more. Colors are part of the equation, but not the whole. When reading so much text, it matters what font that text is rendered in.

Generally speaking there are three kinds of typefaces: “serif”, “sans-serif”, and “monospace”. The kind virtually everyone uses for code is monospace, which looks like: this.

There are tons of different monospace fonts, many created specifically for reading code. In fact, there are even sites that allow you to compare code in different programming languages with different fonts to see which one you like best, for example: Coding Fonts.

This is how I found my new favorite coding font: Input. Not only has the font been carefully designed, it can also be configured to accommodate different preferences, such as the shape of the letter “g”, which I decided to change. You can play with different preferences and preview how it looks in different languages (and in fact different vim color schemes).

This is what it looks like:

Probably most people don’t notice the difference between the DejaVu and Input fonts, but I do, and plenty of programmers do too, which is why these fonts were created in the first place.

There there

So there it is: just because most people don’t see it doesn’t mean there’s no there there.

Programmers do have style. It’s just that we care more about the color of a conditional than about the color of our shirt.

Why renaming Git’s master branch is a terrible idea

Back in May (in the inauspicious year of 2020) a thread was started in the Git mailing list with the title “rename offensive terminology (master)”. It lasted for more than a month, and after hundreds of replies, no clear ground was gained. The project took the path of least resistance (as you do), and the final patch to do the actual rename was sent today (November).

First things first. I’ve been a user of Git since 2005 (before 1.0), and a contributor since 2009, but I stopped being active and only recently started to follow the mailing list again, which is why I missed the big discussion. Just today I read the whole enchilada, and now I’m up-to-date.

The discussion revolved around five subjects:

  1. Adding a new configuration (init.defaultbranch)
  2. Should the name of the master branch be changed?
  3. Best alternative name for the master branch
  4. Culture war
  5. The impact on users

I already sent my objection, and my rationale as to why I think the most important point–the impact on users–was not discussed enough, and in fact barely touched.

In my opinion the whole discussion was a mess of smoke screen after smoke screen, and it never touched the only really important point: users. I’m going to tackle each subject separately, leaving the most important one for the end, but first I would like to address the actual context and some of the most obvious fallacies people left on the table.

The context

It’s not a coincidence that nobody found the term problematic for 15 years, and suddenly, at the height of wokeness–2020 (the year of George Floyd, the BLM/ANTIFA uprising, and so on)–it magically becomes an issue. This is a solution looking for a problem, not an actual problem, and it appeared precisely at the same time the Masters Tournament received attention for its name. The Masters being more renowned than Git certainly got more attention from the press, and plenty of articles have been written explaining why it makes no sense to link the word “masters” to slavery in 2020 in this context (even though the tournament’s history does have some uncomfortable relationship with racism) (No, the masters does not need renaming, Masters Name Offensive? Who Says That?, Will Masters Be Renamed Due to BLM Movement? Odds Favor “No” at -2500, Calls for The Masters to change its name over ‘slave’ connotations at Augusta). Few are betting on The Masters actually changing its name.

For more woke debates, take a look at the 2 + 2 = 5 debate (also in 2020).

The obvious fallacies

The most obvious fallacy is “others are doing it”. Does it have to be said? Just because all your friends are jumping off a cliff doesn’t mean you should too. Yes, other projects are doing it; that doesn’t mean they have good reasons for it. This is the bandwagon fallacy (argumentum ad populum).

Even if it were desirable for the git.git project to change the name of the master branch for itself–just like the Python project did–it’s an entirely different thing to change the name of the master branch for everyone. The bandwagon argument doesn’t even apply.

The second fallacy comes straight out of the title “offensive terminology”. This is a rhetorical technique called loaded language; “what kind of person has to deny beating his wife?”, or “why do you object to the USA bringing democracy to Iraq?”. Before the debate even begins you have already poisoned the well (another fallacy), and now it’s an uphill battle for your opponents (if they don’t notice what you are doing). It’s trying to smuggle a premise in the argument without anyone noticing.

Most people in the thread started arguing why it’s not offensive, while the onus was on the other side to prove that it was offensive. They had the burden of proof, and they inconspicuously shifted it.

If somebody starts a debate accusing you of racism, you already lost, especially if you try to defend yourself.

Sorry progressives, the word “master” is not “offensive terminology”. That’s what you have to prove. “What kind of project defends offensive terminology?” is not an argument.

Adding a new configuration

This one is easy. There was no valid reason not to add a new configuration. In fact, people already had configurations that changed the default branch. Choice is good, this configuration was about making it easier to do what people were already doing.
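
For reference, this is all it takes to use the new setting (it shipped in Git 2.28, if I recall correctly):

    # Pick whatever default branch name you prefer for newly created repositories:
    git config --global init.defaultBranch main

    # which simply adds the following to your global configuration file:
    # [init]
    #     defaultBranch = main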

The curious thing is that the only place in the thread where the configuration was brought up was as part of a diversion tactic called motte and bailey.

What they started with was a change of the default branch, a proposition that was hard to defend (bailey), and when opponents put enough pressure they retreated to the most defensible one (motte): “why are you against a configuration?”

No, nobody was against adding a new configuration, what people were against was changing the default configuration.

Should the name of the master branch be changed?

This was the crux of the matter, so it would make sense for this to be where most of the debating time was spent. Except it wasn’t.

People immediately jumped to the next point, which is what a good name for the default branch would be, but first it should have been established that changing the default is desirable at all, and that was never done.

You don’t just start discussing with your partner what color of apartment to choose. First, your girlfriend (or whatever) has to agree to live together!

Virtually any decision has to be weighed in terms of pros and cons, and they never considered the cons, nor established any real pro.

Pro

If the word “master” is indeed offensive, then it would be something positive to change it. But this was never established to be the case, it was just assumed so. Some arguments were indeed presented, but they were never truly discussed.

The argument was that in the past (when slavery was a thing), masters were a bad thing, because they owned slaves, and the word still has that bad connotation.

That’s it. This is barely an argument.

Not only is it very tenuously relevant to the present moment, but it’s not necessarily true. Slavery was an institution, and masters simply played a role in it; they were not inherently good or bad. Just because George Washington was a slave owner, that doesn’t mean he was a monster, nor does it mean the word “master” had any negative connotation back then. It is an assumption we are making in the present, which, even if true, is still an assumption.

This is called presentism. It’s really hard for us to imagine the past because we didn’t live it. When we judge it, we usually judge it wrong because we have a modern bias. How good or bad masters were really viewed by their subjects is a matter for debate, but not in a software project.

Note: A lot of people misunderstood this point. To make it crystal clear: slavery was bad. The meaning of the word “master” back then is a different issue.

Supposing that “master” really was a bad word in times of slavery (something that hasn’t been established), with no other meaning (which we know isn’t true), this has no bearing on the modern world.

Prescriptivism

A misunderstanding many people have about language is the difference between prescriptive and descriptive language. In prescriptivism, words are dictated (how they ought to be used). In descriptivism, words are simply described (how they are actually used). Dictionaries can be found in both camps, but they are mainly on the descriptive side (especially the good ones).

This misunderstanding is the reason why many people think (wrongly) that the word “literally” should not mean “virtually” (even though many people use it this way today). This is prescriptivism, and it doesn’t work. Words change meaning. For example, the word “cute” meant “sharp” in the past, but it slowly changed meaning, much to the dismay of prescriptivists. It does not matter how much prescriptivists kick and scream; the masses are the ones that dictate the meaning of words.

So it does not matter what you–or anyone–thinks, today the word “literally” means “virtually”. Good dictionaries simply describe the current use, they don’t fight it (i.e. prescribe against it).

You can choose how you use words (if you think literally should not mean virtually, you are free to not use it that way). But you cannot choose how others use language (others decide how they use it). In other words; you cannot prescribe language, it doesn’t matter how hard you try; you can’t fight everyone.

Language evolves on its own, and like democracy; it’s dictated by the masses.

So, what do the masses say about the word “master”? According to my favorite dictionary (Merriam-Webster):

  1. A male teacher
  2. A person holding an academic degree higher than a bachelor’s but
    lower than a doctor’s
  3. The degree itself (of above)
  4. A revered religious leader
  5. A worker or artisan qualified to teach apprentices
  6. An artist, performer, or player of consummate skill
  7. A great figure of the past whose work serves as a model or ideal
  8. One having authority over another
  9. One that conquers or masters
  10. One having control
  11. An owner especially of a slave or animal
  12. The employer especially of a servant
  13. A presiding officer in an institution or society
  14. Any of several officers of court appointed to assist a judge
  15. A master mechanism or device
  16. An original from which copies can be made

These are not all the meanings, just the noun meanings I found relevant to today, and the world in general.

Yes, there is one meaning which has a negative connotation, but so does the word “shit”, and being Mexican, I don’t get offended when somebody says “Mexico is the shit”.

So no, there’s nothing inherently bad about the word “master” in the present. Like all words: it depends on the context.

By following this rationale the word “get” can be offensive too; one of the definitions is “to leave immediately”. If you shout “get!” to a subordinate, that might be considered offensive (and with good reason)–especially if this person is a discriminated minority. Does that mean we should ban the word “get” completely? No, that would be absurd.

Also, there’s another close word that can be considered offensive: git.

Prescriptivists would not care how the word is actually used today; all they care about is dictating how the word should be used (in their opinion).

But as we saw above; that’s not how language works.

People will decide how they want to use the word “master”. And thanks to the new configuration “init.defaultbranch”, they can decide how not to use that word.

If and when the masses of Git users decide (democratically) to shift away from the word “master”, that’s when the Git project should consider changing the default, not before, and certainly not in a prescriptive way.

Moreover, today the term is used in a variety of contexts that are unlikely to change any time soon (regardless of how much prescriptivists complain):

  1. An important room (master bedroom)
  2. An important key (master key)
  3. Recording (master record)
  4. An expert in a skill (a chess master)
  5. The process of becoming an expert (mastering German)
  6. An academic degree (Master of Economics)
  7. A largely useless thing (Master of Business Administration [MBA])
  8. Golf tournaments (Masters Tournament [The Masters])
  9. Famous classes by famous experts (MasterClass Online Classes)
  10. Online tournament (Intel Extreme Masters [IEM])
  11. US Navy rank (Master-at-Arms [MA])
  12. Senior member of a university (Master of Trinity College)
  13. Official host of a ceremony (master of ceremonies [MC])
  14. Popular characters (Jedi Master Yoda)
  15. A title in a popular game (Dungeon Master)
  16. An important order (Grand Master)
  17. Vague term (Zen master)
  18. Stephen Hawking (Master of the Universe)

And many, many more.

All these are current uses of the word, not to mention the popular BDSM context, where having a master is not a bad thing at all.

Subjectiveness

Even if we suppose that the word is “bad” (which it is not), changing it does not solve the problem, it merely shuffles it around. This notion is called language creep (also concept creep). First there’s the n-word (which I don’t feel comfortable repeating, for obvious reasons), then there was another variation (ending in ‘o’, which I can’t repeat either), then there was plain “black”, but even that was offensive, so they invented the bullshit term African-American (even for people that are neither African, nor American, like British blacks). It never ends.

This is very well exemplified in the show Orange Is The New Black where a guard corrects another guard for using the term “bitches”, since that term is derogatory towards women. The politically correct term now is “poochies”, he argues, and then proceeds to say: “these fucking poochies”.

Words are neither good nor bad; it’s how you use them that makes them so.

You can say “I love you bitches” in a positive way, and “these fucking women make me vomit” in a completely derogatory way.

George Carlin became famous in 1972 for simply stating seven words he was forbidden from using, and he did so in a completely positive way.

So no, even if the word “master” were “bad”, that doesn’t mean it’s always bad.

But supposing it’s always bad, who are the victims of this language crime? Presumably it’s black people, possibly descended from slaves, who actually had masters. Do all black people find this word offensive? No.

I’m Mexican; do I get offended when somebody uses the word “beaner”? No. Being offended is a choice. Just like nobody can make you angry (you are the one who gets angry), nobody inflicts offense on other people; it’s the choice of the recipient. There are people with all the reason in the world who don’t get offended, and people with no reason at all who get offended easily. It’s all subjective.

Steve Hughes has a great bit explaining why nothing happens when you get offended. So what? Be offended. Being offended is part of living in a society. Every time you go out the door you risk being offended, and if you can’t deal with that, then don’t interact with other people. It’s that simple.

Collective Munchausen by proxy

But fine, let’s say for the sake of argument that “master” is a bad word, even in modern times, in any context, and that the people who get offended by it have all the justification in the world (none of which is true). How many of these concerned offended users participated in the discussion?

Zero.

That’s right. Not one single person of African descent (or whatever term you want to use) complained.

What we got instead were complainers by proxy; people who get offended on behalf of other (possibly non-existent) people.

Gad Saad coined the term collective Munchausen by proxy, which explains the irrationality of modern times. He borrows from the established disorder called Munchausen syndrome by proxy.

So you see, Munchausen is when you feign illness to gain attention. Munchausen by proxy is when you feign the illness of somebody else to gain attention towards you. Collective Munchausen is when a group of people feign illness. And collective Munchausen by proxy is when a group of people feign the illness of another group of people.

If you check the mugshots of BLM activists arrested, most of them are actually white. Just like the people pushing for the rename (all white), they are being offended by proxy.

Black people did not ask for this master rename (though probably many don’t appreciate the destruction of their businesses in riots either).

Another example is the huge backlash J. K. Rowling received for some supposedly transphobic remarks, but the people that complained were not transgender, they were professional complainers that did so by proxy. What many people in the actual transgender community said–like Blair White–is that this was not a real issue.

So why on Earth would a group of people complain about an issue that doesn’t affect them directly, but according to them it affects another group of people? Well, we know it has nothing to do with the supposed target victim: black people, and everything to do with themselves: they want to win progressive points, and are desperate to be “on the right side of history”.

They are like a White Knight trying to defend a woman who never asked for it, and in fact not only can she defend herself, but she would prefer to do so.

This isn’t about the “victim”, it’s all about them.

The careful observer probably has already noticed this: there are no pros.

Cons

Let’s start with the obvious one: it’s a lot of work. This is the first thing proponents of the change noticed, but it wasn’t such a big issue since they themselves offered to do the work. However, I don’t think they gauged the magnitude of the task, since just changing the relevant line of code basically breaks all the tests.

The tests are done now, but all the documentation still needs to be updated. Not only the documentation of the project, but the online documentation too, and the Pro Git book, and plenty of documentation scattered around the web, etc. Sure, a lot of this doesn’t fall under the purview of Git developers, but it’s something that somebody has to do.

Then we have the people that are not subscribed to the mailing list and are completely unaware that this change is coming, and from one day to the next they update Git and they find out there’s no master branch when they create a new repository.

I call these the “silent majority”. The vast majority of Git users could not tell you the last Release Notes they read (probably because they haven’t read any). All they care about is that Git continues to work today as it did yesterday.

The silent majority doesn’t say anything when Git does what it’s supposed to do, but oh boy do they complain when it doesn’t.

This is precisely what happened in 2008, when Git 1.6.0 was released, and suddenly all the git-foo commands disappeared. Not only did end-users complain, but so did administrators in big companies, and distribution maintainers.

This is something any project committed to its user-base should try to avoid.

And this is a limited list; there’s a lot more that could go wrong, like scripts being broken, automated testing on other projects, and many, many more things.

So, on one side of the balance we have a ton of problems, and on the other: zero benefits. Oh boy, such a tough choice.

Best alternative name for the master branch

Since people didn’t really discuss the previous subject and went straight to the choice of name, this is where they spent a lot of the time, but it’s also the part I paid the least attention to, since I don’t think it’s interesting.

Initially I thought “main” was a fine replacement for “master”. If you had to choose a new name, “main” makes more sense, since “master” has a lot of implications other than the most important branch.

But then I started to read the arguments about different names, and really think about it, and I changed my mind.

If you think in terms of a single repository, then “main” certainly makes sense; it’s just the principal branch. However, the point of Git is that it’s distributed, there’s always many repositories with multiple branches, and you can’t have multiple “main” branches.

In theory every repository is as important as any other, but in practice that’s not what happens. Humans–like pretty much all social animals–organize themselves in hierarchies, and in hierarchies there’s always someone at the top. My repository is not as important as Junio’s (the maintainer).

So what happens is that my master branch continuously keeps track of Junio’s master branch, and I’d venture to say the same happens for pretty much all developers.

The crucial thing is what happens at the start of the development; you clone a repository. If somebody made a clone of you, I doubt you would consider your clone just as important as you. No, you are the original, you are the reference, you are the master copy.

The specific meaning in this context is:

an original from which copies can be made

Merriam-Webster

In this context it has absolutely nothing to do with master/slaves. The opposite of a master branch is either a descendant (most branches), or an orphan (in rare cases).

The word “main” may correctly describe a special branch among a bunch of flat branches, but not the hierarchical nature of branches and distributed repositories of clones of clones.

The name “master” fits like a glove.

Culture war

This was the other topic a lot of time was spent on.

I don’t want to spend too much time on this topic myself–even though it’s the one I’m most familiar with–because I think it’s something in 2020 most people are faced with already in their own work, family, or even romantic relationships. So I’d venture to say most people are tired of it.

All I want to say is that in this war I see three clear factions. The progressives, who are in favor of ANTIFA, BLM, inclusive language, have he/him in bio, use terms like anti-racism, or intersectional feminism, and want to be “on the right side of history”. The anti-progressives, who are pretty much against the progressives in all shapes or forms, usually conservatives, but not necessarily so. But finally we have the vast majority of people who don’t care about these things.

The problem is that the progressives are trying to push society into really unhealthy directions, such as blasphemy laws, essentially destroying the most fundamental values of modern western society, like freedom of speech.

The vast majority of people remain silent, because they don’t want to deal with this obvious nonsense, but eventually they will have to speak up, because these dangerous ideologies are creeping up everywhere.

For more about the subject I can’t recommend enough Gad Saad’s new book: The Parasitic Mind: How Infectious Ideas Are Killing Common Sense.

It really is a parasitic mindset, and sensible people must put a stop to it.

Update: The topic has been so controversial that as a result of this post reddit’s r/git decided to ban the topic completely, and remove the post. Hacker News also banned this post.

The impact on users

I already touched on this in the cons of the name change, but what I didn’t address are the mitigation strategies that could be employed.

For any change there’s good and bad ways of going about it.

Even if the change from “master” to “main” were good and desirable (which it isn’t), simply jumping to it in the next version (Git 2.30) is the absolute worst way of doing it.

And this is precisely what the current patch is advancing.

I already briefly explained what happened in 2008 with the v1.6.0 release, but what I find most interesting is that, looking back at those threads, many of the arguments about how not to do a big change apply in exactly the same way.

Back then what most people complained about was not the change itself (from git-foo to “git foo”) (which they considered to be arbitrary), but mainly the manner in which the change was done.

The main thing is that there was no deprecation period, and no clear warning. This lesson was learned, and the jump to Git 2.0 was much smoother precisely because of the warnings and period of adjustment, along with clear communication from the development team about what to expect.

This is not what is being done for the master branch rename.

I also find what I told Linus Torvalds very relevant:

What other projects do is make very visible when something is deprecated, like a big, annoying, unbearable warning. Next time you deprecated a command it might be a good idea to add the warning each time the command is used, and obsolete it later on.

Also, if it’s a big change like this git- stuff, then do a major version bump.

If you had marked 1.6 as 2.0, and added warnings when you deprecated the git-foo stuff then the users would have no excuse. It would have been obvious and this huge thread would have been avoided.

I doubt anyone listened to my suggestion, but they did this for 2.0, and it worked.

I like to refer to a panel Linus Torvalds participated in regarding the importance of users (educating Lennart Poettering). I consider this an explanation of the first principles of software: the main purpose of software is that it’s useful to users, and that it continues to be useful as it moves forward.

“Any time a program breaks the user experience, to me that is the absolute worst failure that a software project can make.”

Linus Torvalds

Now it’s the same mistake of not warning the users of the upcoming change, except this time it’s much worse, since there’s absolutely no good reason for the change.

The Git project is simply another victim of the parasitic mindset that is infecting our culture. It’s being held hostage by a tiny number of people pushing for a change nobody else wants, which would benefit no one and negatively affect everyone, and they want to do it in a way that maximizes the potential harm.

If I were a betting man, my money would be on the users complaining about this change when it hits them in the face with no previous warning.

The amount fallacy

Finding a new star nobody has found before is rare, but it happens—the same goes for fallacies. Errors in reasoning happen all the time, and most of those times people don’t bother looking up the specific name of that error; identifying it as an error suffices. When an error is too common, somebody eventually bothers to name it and thus a fallacy is born. It’s convenient to name fallacies because it saves time trying to disentangle the logic; you can just google the fallacy, and soon enough you will find examples and explanations.

I believe I have found a new fallacy, but unlike most new fallacies, this one has been under our nose for god knows how long.

I’ve decided to coin it the “amount fallacy”, although a close second was “toxic fallacy”, and also “sweet spot fallacy”. This concept is far from new, but it doesn’t seem to have a name. It has already been spread around in toxicology for at least four centuries with the aphorism “the dose makes the poison”. The idea is simple: everything is toxic. Yes, even water can be toxic, it all depends on the amount.

This concept applies absolutely everywhere, which is perhaps why nobody has bothered to name it. Is hot water good or bad? It depends on what precisely you mean by “hot”; it can be 40°C, 60°C, 1000°C, or just about any amount. Since language is often imprecise, the fallacy can sneak by very inconspicuously.

It can be spotted much more easily by recognizing sweet spots: too little or too much of something is bad. Water above a certain temperature is bad, but so is water below a certain temperature. A similar notion is the Goldilocks principle.

As obvious as this fallacy is, it’s committed all the time, even by very intelligent people.

Take for example income inequality. The debate about inequality is still raging in 2020, perhaps more than ever, and the positions are very clear: one side argues it’s possible for income inequality to be “too high” (and in fact it already is), the other side argues income inequality is inevitable (and in fact desirable). These two positions don’t contradict each other; all you have to do is accept that there is a sweet spot. It’s that simple.

Income inequality for different Gini coefficients

Surely it cannot be that easy. Surely people must have realized this obvious fallacy while discussing income inequality. Of course they have! But they also haven’t named it. This makes it so people fall into the same fallacy over and over, and it has to be explained why it’s a fallacy over and over.

What a piece of work is a man! How noble in reason, how infinite in faculty! In form and moving how express and admirable! In action how like an angel, in apprehension how like a god!

William Shakespeare

People often aggrandize the intellectual capabilities of the human mind, so they assume intelligent people surely can’t be committing fallacies this ludicrous, and if they do, surely they would realize it when somebody points it out, and if they don’t, surely somebody would have recorded this kind of fallacy so others don’t fall for it. But remember that it took thousands of years after the invention of the wheel before humans came up with the brilliant idea of putting wheels on luggage (around 50 years ago). So don’t be too quick to assume the grandioseness of the human mind.

Here’s another example: immigration. One side argues immigration enriches the culture of a country, the other side argues immigration dilutes the national identity. Perhaps there’s an amount of immigration which isn’t too much or too little? Yes, there’s some people that argue precisely this, except without naming the fallacy.

Another example: exposure to germs. Too many germs can certainly overwhelm any immune system, but too little weakens it. The immune system requires constant training, and in fact there’s a theory that the current allergy epidemic is due to children’s underexposure to germs (hygiene hypothesis).

A more recent example: epidemic mitigation measures. Many people argue that masks must be compulsory, because not wearing them “kills people”, and this is of course very likely true. But what part is missing from the argument? The amount. Everything kills people. Just driving a car increases the probability that you will kill somebody. Driving cars kills people; that’s a fact. But how many? Richard Dawkins, a famous evolutionary biologist and author, recently committed precisely this fallacy in a popular tweet.

The same applies to anything antifragile, but the examples are countless: recreation, safety, criticism, politeness, solitude, trust, spending, studying, exercise, thinking, planning, working, management, circumlocution, sun exposure, child play, child discipline, vitamin consumption, etc.

Technically this falls into the category of hasty generalization fallacies; the fact that some rich people are greedy doesn’t mean all rich people are greedy. In particular it’s an imprecision fallacy, similar to the apex/nadir fallacies, except in terms of amounts.

The form is:

  1. Some amounts of X are bad
  2. Some amounts of X don’t represent all amounts of X (ignored)
  ∴ All amounts of X are bad

The amount fallacy happens because premise 2 is ignored. An exception could be cheating: a small amount of cheating is bad, even more so a large amount; the amount of cheating doesn’t change its desirability.

Perhaps this fallacy already has a name, but I don’t think so; I’ve looked extensively. Even if it’s too obvious, it needs to have a name, because people commit it all the time.

So there you have it. Any time somebody says something is bad (or good) because some amount of it is bad, be on your guard; that doesn’t mean any amount of it is bad.

Basics in rational discussion

Lately I’ve been having deep discussions that get so abstract we reach deep into the nature of reality. One might think that in the 21st century we would at least have gotten that right, we could agree on some fundamentals, and move on to more important stuff–after all, philosophy is taught in high school (last I checked)–sadly, that’s not the case.

Nature of reality

The first thing we have to agree on is the nature of reality. For example, it’s possible that there are multiple realities; maybe your reality is different from mine. Maybe I see an apple as red, and you see the apple as yellow; both our perceptions are correct, but the realities are different: you take a picture, and it’s yellow, I take a picture, and it’s red.

If this were the case, it would be useless to discuss reality; what does it matter if I see the apple as red and you as yellow? In fact, it would be pointless to discuss anything; does she love you? Or is she using you? Maybe there are two versions of her, and if that’s the case, the discussion is over. It gets even more hypothetical than that: maybe I’m alive in my reality, but dead in everybody else’s.

This is where philosophy enters the picture, and more specifically epistemology–the study of knowledge. We, rational people, have decided that we need to assume there’s only one reality, which is objective. It makes sense if we want to discuss anything at all. What makes the apple look red to me, and yellow to you, is our subjective experience of the objective reality. Experience can be subjective, but reality is not.

So if somebody tells you there are many realities, or that reality is subjective, end the discussion. Just say: fine, your reality is different; whatever it is, we will never know it. Look for other rational people willing to discuss the real reality we all live in.

Discerning reality

We have agreed that there’s one objective reality, but can we really know it? If I’m thirsty, can I really know that drinking water will help me? We can’t know for sure, but we have to assume reality is discernible. If I don’t drink water and I die, well, we know that drinking water would probably have helped me.

There really is no alternative; if there’s no way to know if water will help, then there’s no point in discussing anything.

But what if the first time a human being drinks water it helps, but the second one it doesn’t? What if reality is constantly changing, including the laws of physics? If that was the case it would be quite tricky to discern reality, and again; there would be no point in discussing.

Here enters science–the method to build and organize knowledge. Science assumes uniformitarianism; the basic laws of nature are the same everywhere, have always been, and would continue to be. It’s only with this assumption that we can even begin to attempt to recognize reality.

Now we have to decide on the method. We can go with dogma, tradition, or even feelings; however, the only method that has reliably produced results throughout history is science. Science has taken us to the Moon, and dramatically improved our way of living–precisely by recognizing reality correctly. Science has proven dogma and tradition wrong many times, and never has any of those methods proven science wrong. To put it simply: science works.

So, again, if somebody tells you reality can’t be known, just move on, and if he tells you he doesn’t believe in science, well, he isn’t interested in the real reality.

Basic tips

After we have aligned all our necessary assumptions, and agreed on a method to discern reality, we can start the real discussion. In the process of doing this for centuries, we have identified a bunch of common mistakes in reasoning, and we call them fallacies.

Our minds are faulty, and what’s even worse: they are bad at recognizing their own faults. Fortunately we have given names to many of our faults in reasoning, in the hope that this will make it easier to recognize them.

However, not many people are interested in their faults in reasoning. Again, if somebody tells you he isn’t interested in fallacies, move on; it will be quite unlikely that you will be able to show him when his reasoning is faulty.

Poster of common fallacies

Summary

So to engage in a rational discussion we need to agree on:

  • Objective reality
  • Reality is knowable
  • Uniformitarianism
  • Science is the best method
  • Fallacies should be avoided

If anybody disagrees, you should be free to end the discussion immediately. Perhaps you can point that person to this post, so you don’t have to explain why yourself 🙂

The vanguard in the war of ideas

Language is interesting; it tells you about what’s going on inside somebody’s mind, but also; it tells you what’s going on inside the minds of a society.

At some point somebody came up with the word “thought”, which changed the way we communicate forever. Same with many other words, like “racism”. There was a point when “racism” wasn’t a thing, and it’s essentially impossible to fight a concept for which you have no word.

“Racism” and “bigotry” are easy enough (although we don’t even have a word for “bigotry” in Spanish), but with them come more complicated notions, like “affirmative action”, and “the soft bigotry of low expectations”, both real things we should worry about.

I like “the soft bigotry of low expectations”, because it gives a name to an idea I adhere to: do not forgive a person that wronged you just because you are “morally superior”; hold other people to the same ethical and moral standards you hold yourself to, and to which you want others to hold you. It’s part of the golden rule, and it’s something the left doesn’t do with other cultures; we give them a free pass in the name of multiculturalism. It’s an issue.

But progressives don’t stop; while society catches on to ideas like “the soft bigotry of low expectations”, there are even newer ones, like “the regressive left” (also a real issue), which was coined only recently.

There’s a constant war of ideas, and it feels good when an issue finally gets identified and named, because all of us who felt the same way can rally and say: “yes! I feel the same way you do: the regressive left is an issue”. It feels good to be in the vanguard of the war of ideas, it feels good to know you are on the right side of history, just as I imagine the first people that said “racism is an issue” must have felt.

Best TV series of all time

After watching a lot of TV series, here is my list of what I consider the best TV series of all time. It’s mostly based on this list by IMDb, but also on my personal preferences.

1. Game of Thrones

This one doesn’t really need an explanation; it’s the best TV series of all time by far. Not only is it based on an amazing series of books, but it has unparalleled production value. Each character is incredibly rich and complex, and there are scores of them, many of whom will die sooner than you would expect.

It’s a huge phenomenon and if you haven’t watched it already, you should be ashamed and do it now.

Yes, it’s fantasy, but only the right amount. Paradoxically, it is more realistic than most shows; there is no such thing as good or evil, just people with different points of view, motivations, and circumstances. Good people die, bad people win, honor can kill you, a sure victory can turn into crap. And just when you think you know what will happen next, your favorite character dies.

2. Breaking Bad

Breaking Bad is the story of a high school teacher going, as the title suggests, bad. Step by step a seemingly average family man starts to secretly change his life. While at first you might think you would do the same morally dubious actions, eventually you will reach a point where you will wonder if the protagonist has gone too far.

It is incredibly rewarding to see how a teacher of chemistry, a man of science, would fare in the underworld of drug cartels. His knowledge and intelligence come in handy in creative ways to find solutions to hard problems.

His arrival on the scene doesn’t go unnoticed, and a host of characters are affected by this new player; the chain reaction that follows is interesting to watch, to say the least.

3. The Wire

The Wire is simply a perfect story. It is local, and although you might not relate with most of the characters; it feels very real. The politics, the drama, the power dynamics, the every day struggles, everything is dealt with masterfully.

The characters are rich: some drug dealers are human, some politicians are monsters, some street soldiers incredibly smart. This show will give you insight into why a clean police detective would choose not to investigate a series of (possible) murders, why breaking the law can sometimes be good, and why in general violence is a much deeper problem that won’t be solved by simply putting some bad people in jail.

4. True Detective

What are Matthew McConaughey and Woody Harrelson doing in a TV series? History. True Detective is anything but a typical show. It might start slow, and if you are not keen on admiring the superb acting that shows in every gesture, you might find it boring, but sooner or later it will hit you like a truck.

This is not CSI, do not expect easy resolutions to multiple cases, in fact do not expect any resolution at all. The show is about the journey of investigation and everything that goes along with it, including the political roadblocks, and the toll it has on the people doing it (officially or unofficially), and their loved ones.

Also, thanks to the beloved character played by McConaughey (Rust); we are greeted with a heavy dose of philosophy, human relations, and in general; life.

5. Last Week Tonight with John Oliver

John Oliver is relatively new to the world of comedy, and like many students of The Daily Show, he graduated to become one of the best. Now he has his own political/comedic show dealing with subjects that actually matter, weekly, and he deals with them masterfully, and at length.

Since the show is on HBO, it is not afraid of reprisals from advertisers, and fiercely attacks commercial companies (as any real news show should) when they do something bad (which is very often).

The first season became an instant hit, and since all the important segments are available on YouTube for free, and are from 10 to 30 minutes in length, you really have no excuse not to watch it. In fact, do it now. Seriously.

6. Sherlock

Imagine the most egotistical asshole you know, add a big dose of raw pure genius, spray on a chunk of autistic disregard for what anybody else thinks, and a disinterest in money, love, or hobbies. Finally add a sidekick who is well mannered, polite, and in general: normal. Use this concoction to solve crimes, and what you have is Sherlock.

Sherlock is a very uncommon show, starting from the fact that each episode feels more like a movie. So if you don’t want to watch a movie, perhaps you shouldn’t watch an episode of Sherlock either.

The show is not without its flaws, and sometimes caricaturesque endings–as I said, it’s different–but it is definitely worthwhile.

7. The Sopranos

Can you ever sympathize with a psychopath? After watching The Sopranos you might. The show follows the life of Tony Soprano, the boss of a New Jersey-based mafia family. As you would expect, there will be violence, betrayals, and a constant supply of lies. However, you will also experience Tony’s human side, including his caring for a family of ducks, and his constant duel with his psychologist.

Can you actually get better if you can’t even tell your psychologist that you killed one of your closest friends? How do you take care of your friend’s family with a straight face? These are the problems Tony faces all the time, not to mention trying to raise a couple of teenagers, and keep a marriage together which is surrounded by mystery.

And can you even blame him for being the way he is after you learn about his mother and father? Can a monster have a conscience?

After watching the show a lot of these questions will have clearer answers.

8. Rick and Morty

Rick and Morty is a cartoon, but it’s deep, funny, witty, and definitely not for children. It centers around an old mad drunk scientist and his grandson companion (who is not so smart). Together they have many ridiculous adventures, so crazy that the mere premise of them will make you laugh.

Yet, despite the overblown adventures they have (due to the impossibly advanced technology the old man has developed), the show is at times deep and will leave you thinking with a renewed perspective about life, family, love, priorities, the human race and its place in the universe, and all the things that could have been, and might be… In a parallel universe.

9. Firefly

Cowboys in space. Star Wars but better. Relatable, warm and interesting characters. Renegades, an empire, the wild outskirts of the galaxy in a distant future that is so different, yet feels so familiar.

Easily the best science fiction series of all time. Unfortunately, there’s only one season, which is why Firefly became such a cult phenomenon. There’s a movie (not as good), and even a documentary about the phenomenon. It is really something else.

There is only one drawback: after watching it, you will become one of us and wonder: why the f*ck did they cancel this wonder?

10. Better Call Saul

Better Call Saul is a spin-off of Breaking Bad. A good honest lawyer in an extremely precarious situation tries his best to succeed with integrity, but it turns out it’s not so easy to achieve that.

The show is very recent, and the first season hasn’t finished yet, so there is really not much more to explain, except that it is dark and intense.

So why is it on the list of the best TV shows of all time? I just know 🙂

The white and gold dress, and the illusion of free will

At first I didn’t really understand what all the fuss was about: the dress was obviously white and gold, and everybody who saw it any other way was wrong, end of story. However, I saw an article on IFLScience that explained why this might be an optical illusion, and I still thought I was seeing it right; the other people were the ones getting it wrong. Then I saw the original dress:

Original dress
#TheDress

Well, maybe it was a different version of the dress, or maybe the colors were washed out, or maybe it was a weird camera filter, or a bug in the lens. Sure, everything is possible, but maybe I was just seeing it wrong.

I’ve read and heard a lot about cognitive science, and the more we learn about the brain, the more faults we find in it. We don’t see the world as it is; we see the world as it is useful for us to see it. In fact, we cannot see the world as it is, in atoms and quarks, because we don’t even fully understand it yet. We see the world in ways that managed to get us where we are. We sometimes get an irrational fear of the dark and run quickly up the stairs of our safe home, even though we know there can’t possibly be any tigers chasing us; but in the past it was better to be safe than sorry, and the ones that didn’t have that fear gene are not with us any more: they got a Darwin Award.

I know what some people might be thinking: my brain is not faulty! I see the world as it truly is! Well, sorry to burst your bubble, but you don’t. Optical illusions are a perfect example, and here is one:

Optical illusion

If you are human, you will see the orange spot at the top as darker than the one at the bottom. Why? Because your brain assumes the one at the bottom is in a shadow, and therefore it should be darker. However, they are exactly the same color (#d18600 in hex notation). Remove the context and you’ll see it; put the context back and you can’t see them as the same. You just can’t, and all of us humans have the same “fault”.
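Don’t take my word for it; you can check the pixels yourself. Here’s a minimal Python sketch using Pillow (the filename and coordinates are placeholders; point them at the actual image and the centers of the two spots):

```python
# Hypothetical check: sample the two orange spots and print their hex colors.
from PIL import Image

img = Image.open("illusion.png").convert("RGB")  # placeholder filename

top_spot = img.getpixel((250, 80))      # the spot that looks darker
bottom_spot = img.getpixel((250, 300))  # the spot that looks lighter

print("top:    #%02x%02x%02x" % top_spot)
print("bottom: #%02x%02x%02x" % bottom_spot)
# Both should print #d18600: identical pixels, different perception.
```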

This phenomenon can be explained by the theory of color constancy, and these faults are not limited to our eyes; they extend to our ears, and even to our rational thinking.

So, could the white and gold vs. blue and black debate be an example of this? The argument is that people who see the dress as white and gold perceive it to be in a shadow behind a brightly lit part of a room, while people who see it as blue and black see it washed in bright light. Some people say they can see it both ways: sometimes white, sometimes blue.

XKCD

I really did try not to see it in a shadow, but I just couldn’t, even after I looked at modified photos; I just saw a white and gold dress with a lot of contrast. I decided they were all wrong: no amount of lighting would turn a royal blue dress into white.

But then I fired up GIMP (the open-source alternative to Photoshop) and played around with filters. Eventually I found what did the trick for me, and here you can see the progress:

So eventually I managed to see it. Does that mean I was wrong? Well, yes, my brain saw something that wasn’t there. However, it happened for a reason: if the context had been different, what my brain saw would have been correct. Perhaps in a parallel universe there’s a photo that looks exactly the same, but the dress actually is white and gold.
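If you’d rather script it than click around in GIMP, a crude version of that kind of adjustment can be sketched in Python with Pillow. This is only an illustration of the general idea (darken the photo and cool the white balance); the filename and factors are placeholders, not my exact GIMP recipe:

```python
from PIL import Image, ImageEnhance

img = Image.open("the_dress.jpg").convert("RGB")  # placeholder filename

# Lower the exposure so the washed-out highlights stop reading as white.
darker = ImageEnhance.Brightness(img).enhance(0.6)

# Nudge the white balance toward blue by scaling the channels (made-up factors).
r, g, b = darker.split()
cooler = Image.merge("RGB", (
    r.point(lambda v: int(v * 0.85)),
    g.point(lambda v: int(v * 0.90)),
    b.point(lambda v: min(255, int(v * 1.10))),
))

cooler.save("the_dress_adjusted.jpg")
```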

At the end of the day our eyes are the windows through which we see reality, and they are imperfect, just like our brains. We can be one hundred percent sure that what we are seeing is actually there, that what we remember is what happened, and that we are being rational in a discussion. Sadly one can be one hundred percent sure of something, and still be wrong.

To me the most perfect example is the illusion that we are in control of our lives. The more science finds out about the brain, the more we realize how little we know of what actually happens in the 1.5 kg meatloaf between our ears. You are not in control of your next thought any more than you are of my next thought, and when people try to explain their decisions, their reasons are usually wrong. Minds can be easily manipulated, and we rarely realize it.

There’s a lot of interesting stuff on the Internet about the subconscious and how the brain really works (as far as we know). Here’s one talk that I find particularly interesting.

So, if you want to believe you are the master of your own will, go ahead, you can also believe the dress was white and gold. Those are illusions, regardless of how useful they might be. Reality, however, is different.

The meaning of success

I once had quite an extensive discussion with a colleague about several topics related to the software industry, and slowly but methodically we reached a fundamental disagreement on what “success” means. Needless to say, without agreeing on what “success” means it’s really hard to reach a conclusion about anything else. I now believe that many problems in society, not only in the software industry, can be traced to mismatches in our understanding of the word “success”. It is like trying to decide whether abortion is moral without agreeing on what “moral” means (and we actually don’t have such an agreement); in fact, some definitions of morality might rely on the definition of “success”.

For example: which is more successful, Android, iPhone, or Maemo? If you think a successful software platform is the one that sells more (as many gadget geeks probably do), you would answer Android; if on the other hand you think success is defined by what is more profitable (as many business people would), you would answer iPhone. But I contend that success is not only relative to a certain context; there’s also an objective success that gives a clear answer to this question, and I hope to reach it by the end of this post.

This is not a meta-philosophical exercise; I believe “success” in the objective sense can be clearly defined and understood, but in order to do that, I need to show some examples and counter-examples from different disciplines. If you don’t believe in the theory of evolution of species by natural selection, you should probably stop reading.

Definition

The Merriam-Webster dictionary defines success as (among other definitions):

  • a : to turn out well
  • b : to attain a desired object or end <students who succeed in college>

From this we can say there are two types of success: one is within a specific context (e.g. college), and the other is general. In this blog post I will talk about generic success, with no specific goal, or rather with the generic ultimate goal. Since it’s highly debatable (and difficult) how to define this “ultimate goal”, I will concentrate on the opposite: trying to define the ultimate failure, in a way that no rational person would deny.

Humans vs. Bacteria

My first example is: which organisms are more successful, humans or bacteria? There are many angles from which we could compare these two organisms, but few would lead to a definite answer. The knee-jerk reaction of most people would be to say: “well, clearly humans are more evolved than bacteria, therefore we win”. I’m not an expert in the theory of evolution, but I believe the word “evolved” is misused here. Both bacteria and humans are the result of billions of years of evolution; in fact, one could say that some species of bacteria are more evolved, because Homo sapiens is a relatively new species that appeared only a few hundred thousand years ago, while many species of bacteria have been evolving for millions of years. “Kids these days with their fancy animal bodies… I have been killing animals since before you got out of the water… Punk”, a species of bacteria might say to the younger generations, if such a thing were possible. At best, humans are as evolved as bacteria. “Primitive” is probably the right word: bacteria are more primitive because they closely resemble their ancestors. But being primitive isn’t necessarily bad.

In order to reach a more definitive answer I will switch the comparison to dinosaurs vs. bacteria, and come back to the original question later. Dinosaurs are less primitive than bacteria, yet dinosaurs died and bacteria survived. How can something dead be considered successful? Strictly speaking not all dinosaurs are dead, some evolved into birds, but that’s beside the point; let’s suppose for the sake of argument that they are all dead (which is in fact how many people think of them). A devil’s advocate might suggest that this comparison is unfair, because in different circumstances dinosaurs might not have died, and in fact they might be thriving today. Maybe luck is an important part of success, maybe not, but it’s pointless to argue about what might have been; what is clear is that they are dead now, and that’s a clear failure. Excuses don’t turn a failure into a success.

Let me be clear about my contention: anything that ceases to exist is a failure; how could it not be? In order to have even the smallest hope of winning the race, whatever the race may be, even if it doesn’t have a clear goal, or has many winners, you have to be in the race. It could not be clearer: what disappears can’t succeed.

Now, being more evolved, or less primitive, is not as much of a trump card as it might appear; nature is a ruthless arena, and there are no favorites. The vast majority of species that have ever lived are gone now, and it doesn’t matter how “unfair” that might seem: to nature only the surviving offspring matter, and everyone else was a failure.

If we accept that dinosaurs failed, then one can try to apply the same metric to humans, but there’s a problem (for our exercise): humans are still alive. How do you compare two species that are not extinct? Strictly speaking, all species alive today are still in the race. So how easy would it be for humans to go extinct? This is a difficult question to answer, but let’s suppose an extreme event turned the average temperature of the Earth 100°C colder; that would quite likely kill all humans (and probably a lot of plants and animals), but most certainly many bacterial species would survive. It has been estimated that there are 5×10³⁰ bacteria on Earth, of countless species, possibly surpassing the biomass of all plants and animals. In fact, human beings could not survive without bacteria, since they are an essential part of the human microbiome, and the bacterial genes in a human body probably outnumber the human genes by a factor of 100 to 1. So humans, like dinosaurs, could disappear rather easily, but bacteria would still be around for a long, long time. From this point of view, bacteria are clearly more successful than humans.

Is there any scenario in which humans would survive and bacteria would not (therefore making humans more successful)? I can think of some, but they lie far in the future, and we are most definitely not there yet. We are only now realizing the importance of our microbiome, and still in the process of running the Human Microbiome Project, so we don’t even know what role our bacteria play; therefore we don’t know how we could replace them with something else (like nanorobots). If bacteria disappeared today, so would we. It follows that bacteria are more successful, and there’s no getting around that.

Fundamentals and Morality

Could we define something more fundamental about success? I believe so: a worse failure than dying is not being able to live in the first place, like a fetus that is spontaneously aborted because of a bad mutation, or, even worse, an impossibility. Suppose “2 + 2 = 5”; this of course is impossible, so it follows that it’s a total failure. The opposite would be “2 + 2 = 4”; this is as true as anything can be, therefore it’s a total success.

There’s a realm of mathematics that is closer to what we consider morality: game theory. But don’t be fooled by its name; game theory is as serious as any other realm of mathematics, and its findings are as conclusive as those of probability theory. An example from game theory is the prisoner’s dilemma; here’s a classic version of it:

Two men are arrested, but the police do not possess enough information for a conviction. Following the separation of the two men, the police offer both a similar deal—if one testifies against his partner (defects/betrays), and the other remains silent (cooperates/assists), the betrayer goes free and the one that remains silent receives the full one-year sentence. If both remain silent, both are sentenced to only one month in jail for a minor charge. If each ‘rats out’ the other, each receives a three-month sentence. Each prisoner must choose either to betray or remain silent; the decision of each is kept quiet. What should they do? If it is supposed here that each player is only concerned with lessening his time in jail, the game becomes a non-zero sum game where the two players may either assist or betray the other. In the game, the sole worry of the prisoners seems to be increasing his own reward. The interesting symmetry of this problem is that the logical decision leads each to betray the other, even though their individual ‘prize’ would be greater if they cooperated.

There are different versions of this scenario; with different rules and more complex agents, game theory arrives at different conclusions as to what rational agents should do to maximize their outcomes, but these strategies are factual and universal. We are not talking about human beings here; the strategies are independent of culture or historicism, and the rules are as true here as they are on the other side of the universe. So if game theory determines that a certain strategy fails in a certain situation, that’s it; it’s as hard a failure as “2 + 2 = 5”.
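To make the “logical decision” in the quoted scenario concrete, here’s a minimal Python sketch of its payoff matrix (sentences in months, taken straight from the version above); it simply checks that betraying is the dominant strategy, even though both prisoners would be better off staying silent:

```python
SILENT, BETRAY = "silent", "betray"

# (my sentence, partner's sentence) in months, for each pair of choices.
PAYOFFS = {
    (SILENT, SILENT): (1, 1),    # both stay silent: one month each
    (SILENT, BETRAY): (12, 0),   # I stay silent, partner betrays: I get the full year
    (BETRAY, SILENT): (0, 12),   # I betray, partner stays silent: I go free
    (BETRAY, BETRAY): (3, 3),    # both betray: three months each
}

def best_response(their_choice):
    """The choice that minimizes my own sentence, given the partner's choice."""
    return min((SILENT, BETRAY), key=lambda mine: PAYOFFS[(mine, their_choice)][0])

# Whatever the partner does, betraying is the better individual move...
assert best_response(SILENT) == BETRAY
assert best_response(BETRAY) == BETRAY
# ...even though mutual betrayal (3, 3) is worse for both than mutual silence (1, 1).
print(PAYOFFS[(BETRAY, BETRAY)], "vs", PAYOFFS[(SILENT, SILENT)])
```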

With this notion we might be able to dive into more realistic and controversial examples, like slavery. Nowadays we consider slavery immoral, but that wasn’t the case in the past. One might say that slavery was a failure (because it no longer exists, at least as a desirable concept), but that is only the case in human society; perhaps there’s a civilization on an alien planet that still has slavery and is still debating it, so one might be tempted to say that slavery’s failure is still contested (perhaps even more so if you live in Texas). But we got rid of slavery for a reason: it’s not good for everybody. It might be good for the slave owners, and good for some slaves, but not good for everybody. It is hard to imagine how another civilization could arrive at a different conclusion. Therefore it is quite safe to say that in all likelihood slavery is a failure, because of its tendency to disappear. Perhaps at some point game theory will advance to the point where we can be sure about this, and the only reason it took us so long to get rid of slavery is that we are not rational beings, and it takes time for our societies to reach that level of rationality.

Objective morality and the moral landscape

Similarly to the objective success I’m proposing, Sam Harris proposes a version of objective morality in his book The Moral Landscape. I must admit I haven’t read the book, but I have watched his online lectures on the topic. Sam Harris asserts that the notion that science shouldn’t deal with morality is a myth, and that advances in neuroscience (his field of expertise) can, and should, enlighten us as to what should be considered moral. Thus, morality is not relative, but objective. The different “peaks” in the landscape of morality are points a society aims for in order to be a good one; historically the methods for finding these “peaks” have been rather rudimentary, but a new field of moral science could be the ultimate tool.

Regardless of the method we use to find these “peaks”, the important notion (for this post) is that there’s also an abyss: the lowest moral point. The worst possible misery for all beings is surely bad:

The worst-possible-misery-for-everyone is ‘bad.’ If the word ‘bad’ is going to mean anything, surely it applies to the worst-possible-misery-for-everyone. Now if you think the worst-possible-misery-for-everyone isn’t bad, or might have a silver lining, or there might be something worse… I don’t know what you’re talking about. What is more, I’m reasonably sure you don’t know what you’re talking about either.

I want to hijack this concept of the worst-possible-misery-for-everyone, which is the basis of (scientific) objective morality, and use it as a comparison to my contention that ceasing-to-exist is the basis for objective success.

Today our society is confronted with moral dilemmas such as gay marriage and legal abortion. Many of these are hijacked by religious bigotry and irrationality, and it’s hard to move forward because many people still define morality through religious dogma, and even people who don’t, and are extremely rational, still cling to the idea that morality comes from “God” (whatever that might mean). Many scientists claim that morality cannot be found through science, and others that morality is relative; yet others, like Sam Harris, disagree and have tried to define morality in universal terms. The jury is still out on this topic, so I cannot say that morality should definitely be defined in terms of what is successful for our worldwide society, merely that it is a possibility, and a rather strong one, in my opinion.

Life

It’s a little trickier to define what constitutes a successful life, because all lives end. The solution must be in terms of transcendence: offspring, books, memes, etc. People living a more hedonistic life might disagree, but let’s be clear: a life can be unsuccessful in the grand scheme of things and still be good, and the other way around. It might be tempting to define success in different terms: “if my goal is to enjoy life, and I do, I’m succeeding”, and while that is true, that’s being successful in relative terms, not in general terms.

Some people might have trouble with this notion, so let me give an example: Grigori Perelman vs. Britney Spears. Most people probably don’t know Grigori, but he solved one of the most difficult problems in mathematics and was awarded one million USD for it. Clearly this would have helped him become famous, but he rejected the interviews and rejected the money. Does this mean he rejected success? Well, let’s try to view this from the vantage point of 500 years into the future: both Britney Spears and Grigori Perelman will be dead by that time, so the only things that remain will be their transcendence. Most likely nobody will remember Britney Spears, or listen to any of her music, while it’s quite likely that people will still be using Grigori Perelman’s mathematics, as he will be one of the giants upon whose shoulders future scientists stand. In this sense Grigori is more successful, and any other sense of success would be relative to something else, not objective.

Test

Hopefully my definition of success is clear enough by now to be applied to the initial example.

iPhone

The iPhone is clearly successful at being profitable, but many products have been profitable in the past and have since gone with the wind. The real question is: what are the chances that the iPhone will not disappear? It is hard to defend the position that the iPhone will remain for a long period of time, because it’s a single product from a single company, especially considering that many technology experts can’t find an explanation for its success other than the Apple cult. While it was clearly superior from an aesthetic point of view when it was introduced, there are many competitors on par with it today. Maybe it won’t disappear in 10 years, but maybe it will. It’s totally unclear.

Android

Compared to the iPhone, Android has the advantage that many companies work on it, directly and indirectly, and it doesn’t live on a single product. So if a single company goes down, that would not kill Android, even if that company is Google. As a platform, it’s much more resilient than iOS. For this reason alone, Android is clearly more successful than the iPhone, according to the aforementioned definition.

Maemo

Maemo is dead (mostly), so that would automatically mean it’s a failure. However, Maemo is not a single organism; it consists of many subsystems that are part of the typical Linux ecosystem: the Linux kernel, X.org, Qt, WebKit, GStreamer, Telepathy, etc. These subsystems remain very much alive; in fact, they existed before Maemo, and will continue to exist and grow. Some of these subsystems are used in other platforms, such as WebOS (also mostly dead), Tizen, MeeGo (also kind of dead), and Mer.

A common saying is that open source projects never die. Although this is not strictly true, the important thing is that they are extremely difficult to kill (just ask Microsoft). Perhaps the best analogy in the animal kingdom is to compare Maemo to a sponge. You can divide a sponge into as many parts as you want, put it in a blender, even make sure the pieces pass through a filter with very minute holes. It doesn’t matter; the sponge will reorganize itself again. It’s hard to imagine a more resilient animal.

If this is the case, one would expect Maemo (or its pieces) to continue as Tizen, or Mer (on Jolla phones), or perhaps some other platform yet to be born, even though today it seems dead. If this happens, then Maemo would be even more successful than Android. Time will tell.

Predictions

Like any scientific theory, the really interesting bit of this idea is its predictive power, so I will make a list of things in order of success; if I’m correct, the less successful ones (or their legacies) will tend to disappear first:

  • Mer > Android > iOS > WP
  • Linux > Windows
  • Bill Gates > Steve Jobs > Carlos Slim (Richest man in the world)
  • Gay marriage > Inequality
  • Rationality > Religious dogma
  • Collaboration > Institutions

To me, this definition of “success” is as true as “2 + 2 = 4” (in fact, it’s kind of based on such fundamental truths). Unfortunately, it seems most people don’t share this point of view, as we still have debates over topics that are, in my opinion, a waste of time. What do you think? Are there examples where this definition of success doesn’t fit?

Unique Mexican music; Son Jarocho, folklore and more

There’s a lot of interesting and unique music in Mexico, both modern and traditional, but there’s one kind that I find particularly unique and beautiful, and that I think is extremely underrated in Mexico, let alone in the rest of the world: Son Jarocho.

This first video is from Cafe Tacuba, IMO the best band from Mexico. I’m not sure what kind of style this is, but it’s certainly awesome 🙂 (I couldn’t find a better quality video.)

The rest of the videos are of what I consider Son Jarocho in the right setting: a small room and three guys playing the jarana jarocha (a small guitar), the requinto jarocho (an even smaller guitar), and, most importantly, the arpa jarocha (a special harp). It’s a mixture of different styles from different continents, and the lyrics are often funny and sometimes improvised to make fun of something, or somebody. BTW, jarocho means “from Veracruz”, one of the 31 states of Mexico.

La Bamba is the most famous song, but I couldn’t find a single video worthy of highlighting, so I just put the best one I could find. And before you ask: yes, the high-pitched and loud voices in the chorus are intentional. Also, wait for the solos 😉

This is what you would most likely experience: a group wandering around restaurants, improvising and making jokes.

This one seems professionally recorded. Just for good measure.

For more about Mexican music and culture, check this previous post.