December 12, 2010

On Reviewing Old Code

I recently committed an old programming project to GitHub.

The project is several years old – older than Java 5 at least, since I had to add some generics to the code to get rid of some compiler nags before committing it.

It was one of my early TDD exercises. It was the last iteration of a quest I was on to develop a realistic world map generator that could be used, for example, in games such as Civilization or for any other purpose that might call for fictional worlds. Over the years I had tried several different algorithmic approaches and used several different programming languages: C, C++, Visual Basic, and finally Java.

When I was about to take a look at the code after all these years, I felt some trepidation. I’ve downloaded many open-source projects and I’m used to the shock that often accompanies the first peek at unfamiliar code. Clean code is not a universal value, and sometimes beautiful code is in the eye of the beholder. This code for the map generator was old enough to have become somewhat unfamiliar, and I was prepared for the worst.

On the whole, I was relieved to find that the code was reasonably clean. My experience of reading clean code is that you feel a kind of disbelief that code that seems so simple could be producing such complex behaviour. Well-factored, clean code looks, paradoxically, unimpressive because it is so easy to follow. To reverse a questionable old adage, it was harder to write so it could be easier to read.

What I saw was certainly not perfect. There were many conventions regarding unit testing that I evidently had not yet established for myself at the time the program was written, and the code could still use more aggressive refactoring. As I recall, I simply ran out of steam toward the end of the project, so it’s possible I knew about these problems at the time but didn’t get around to fixing them.

The biggest thing that struck me upon reviewing this project was how far away it seemed. I had obviously been obsessed with map generation for many years, to the point of learning some GIS techniques and using it as a guinea pig project for new languages and new techniques. Even though I’m pretty self-motivated, in the end I ran out of steam. In the absence of a user community for this project, it became impossible to rationalize the effort anymore.

No matter how independent-minded I am, no matter how internal the art of programming is, in the end it seems that software is all about fulfilling the needs of others.

November 17, 2010

The Usefulness of Philosophy

I came across a comment on a blog recently where the author adjured the other participants to stop “philosophizing” so much and get down to the real matter at hand. This reminded me that, for many people, “philosophy” is a term of abuse. For them, philosophy is pointless blather among elitist twits with no practical consequence – the very opposite of anything practical and useful.

Given the name of this blog, you can guess that I don’t agree with this negative sentiment. I won’t deny there are some philosophers and some philosophies that I think are pointless blather, but to take them as our basic definition is to throw the baby out with the bath water.

I want to propose that philosophy is the study of mental models, and, as I said in my previous post, I think mental models are the basis of our competence. Since our competence determines how well we manage and how effective we are at realizing our goals, there is obvious practical importance in understanding how mental models work, getting used to taking them apart and building new ones.

As I explained in my very first post regarding my chosen name for the blog, I think software development is an eminently philosophical activity. It is all about constructing mental models of systems and manipulating those systems using the mental models. It doesn’t matter whether these systems are machines, protocols, teams, problem domains, programming languages, etc: how effectively you work with them depends on your ability to construct and manipulate good mental models of how they work.

Being unaware of your own mental models is a limit to your own effectiveness. We have all known people who thought they had found the perfect hammer and were busy nailing everything. Likewise, we have probably known someone who repeated the same dysfunctional pattern over and over again in spite of not getting the desired result.

Philosophy as the study of mental models can make you aware of the mental models underlying these behaviours and can give you the skills you need to improve them.

A word of warning though: as Socrates found out the hard way, people often get very upset when you question their cherished mental models, and this can happen even when we question our own. However, if you want to improve effectiveness and grow in competence, there is much to recommend the use of philosophy to take apart and rebuild our mental models.

November 14, 2010

The Importance of Mental Models

Many years ago, my wife and I were in Paris, staying in a quaint Left Bank hotel that did not provide an iron and ironing board. My mental model of a hotel is that it should provide these, free of charge, and preferably one to each room: I like having wrinkle-free clothes, even on holiday. However, as we will see, I’ve learned that my mental models are not always adequate representations of reality, and that in order to solve my problems, I may have to construct a new mental model. We decided that the solution was to buy a compact travel iron and be done with the problem once and for all.

To Find an Iron in Paris: How Hard Can It Be?


At home in Toronto, the obvious place to buy a small electrical appliance would be at a large department store, so we thought that the first place to look for our quarry would be at a well-known Parisian department store a healthy walk from our hotel. When we got there, we wandered around a bit, but couldn’t see an appliances department, so we asked a saleslady where we could find such a thing. (We are both fluent in French, so we had a leg up on most tourists in such a situation.)

The saleslady was polite and helpful, but bewildered that we would be looking for an iron in her store: in her mind this clearly wasn’t the kind of place you shopped for such things. It was as if I had gone into a sporting goods store and asked if they had a fresh produce section.

Another faulty mental model: in spite of being two adults with years of experience fending for ourselves, able to speak the local language, familiar with Paris from previous trips, it dawned on us that we simply lacked the competence to perform the simple task of buying a travel iron in Paris.

What the Heck Is “Darty”?


Luckily, when we asked our friendly saleslady where we might purchase an iron nearby, she offered one word: “Darty.” We weren’t sure if this was the name of a street, a neighbourhood, a local shop-keeper or a store, but she pointed us vaguely up the street, and off we went. Along the way, we found a small shop whose sign indicated that they sold electrical supplies, and we thought this might be what we were looking for, but in fact this store only sold light bulbs of every shape and variety, all stored in rows of wooden drawers mounted like a library’s old card-catalog on the walls. My mental model of the world did not contain the possibility of such a store, so I was mystified and delighted: if I ever need a light bulb in Paris, I will now know where to go.

After wandering around in circles through the figure-eight streets, repeatedly asking reservedly helpful passersby for directions, and being vaguely pointed, sometimes in contradictory directions, we finally found a fairly large store, set back from the street with a sign proclaiming it to be “Darty”. At last!

Making a Purchase: What Could Be Easier?


Once we entered the store, it was clear we had found the right place. There were rows and rows of various kinds of home appliances and electrical gadgets. Not all of the logical groupings were readily apparent to me: for example, there might be electric fans and electric razors in the same shelving island. Each item had a single display model, out of its box, on a shelf with a small card with a number next to it. After some wandering around, we did find a small row of irons, one of which was a compact travel iron.

Now for our next challenge: how to buy one? There were no boxed models to pick up and take to the sales counter, just the floor model and the little number. We stood there looking and feeling clueless for a while, until a saleslady spotted us and asked if we needed help. Saved! Now, we thought, she will get us our item, process our transaction and our quest will be over.

We indicated to her the travel iron we had selected. She took out a little paper form, filled out the number of our item on it, handed it to us, and cheerfully bid us good day. Another perfectly good mental model crushed by a cruel Gallic world! I sheepishly asked where I was supposed to take the form. She pointed vaguely across the store, saying there was a counter.

Sure enough, we crossed the store and found a counter with several clerks standing around. We handed one of them our form, they processed our payment, gave us a new stamped form, and bid us a somewhat perfunctory good day. I waited for a moment, expecting our iron to appear, in spite of the fact that our clerk seemed to have completely lost interest in us. After a few moments, I asked where my iron was. They pointed vaguely in a new direction across the store, and we traipsed off, ending up among rows of television sets.

The sales guy there took pity on us, despite my mild irritation that I had already paid for my iron, but did not yet have it in my hand (another mental model). He explained that we actually had to leave the store, and walk half-way down the hall of the indoor mall it was in and we would be able to get our iron there. I’m starting to feel like a sucker: they’ve taken my money, but they are now telling me to leave the store with just a little stamped form and someone down the way will give me my item? Riiiight! Nonetheless, we followed his instructions, finding a little kiosk down the way, unmarked and unattended. After a few moments standing there, dejected and simmering, an attendant appeared, took our stamped form, and handed us our boxed travel iron. At last!

The Joys of Travelling


Now you could take this story as a mockery of the French way of doing things, but the American tourists I’ve seen loudly berating French workers for their bad customer service have got that angle covered.

In fact, I love these kinds of experiences, though they can be distressing at the time, and they are part of the reason I travel. I want to have my mental models challenged by a different culture.

There is a perfectly good system at work there, one that Parisians navigate every day; I just didn’t understand it. Now I do, and having figured it out, I’m just a little bit more competent at getting needful things done in Paris.

Working with Systems Is Working with Mental Models


Though you could just read this as an amusing anecdote about travel, my real purpose in telling it is to apply it to thinking about useful systems. Doing software development, or managing a team, or running a business are all about navigating, manipulating and improving systems. To work with a system effectively, you need to have a good mental model of that system.

If you want to lead a team, one of your biggest challenges is to communicate your mental model of the undertaking you want the team to pursue. Only if all the members of the team share a mental model which is adequate to the task at hand can they work together to produce the desired end.

In fact, I would go so far as to say that to call yourself competent at some skill is to say that you have acquired an adequate mental model of that skill.

Humility Is the Path to Competence


As the iron-buying story illustrates, it can be frustrating and humiliating to be confronted by a foreign mental model. We are comfortable considering ourselves to be competent adults, knowing how to do things in the world, and being confronted with our own incompetence in the face of an unknown mental model can be painful.

This often leads us to disparage the people that have that mental model. For example, I’ve seen tech teams and sales teams run each other down behind their backs, each side thinking that what they do is complex and valuable, and what the other does is simple or over-valued. The fact is that each has invested a lot into understanding a complex mental model that underlies their respective competence, and it is easier to run down the other’s mental model than to accept their own lack of competence in the other’s domain.

My experience is that if I’m not getting the results I want, or if I’m having trouble communicating with someone else, the challenge is to discover the right mental model to make me competent at that task. The necessary ingredient, sometimes hard to practice, is the humility to abandon the comfortable mental model I’ve already mastered to be able to absorb the new mental model at which I am just a clueless newbie. Only by accepting my incompetence can I find the road to competence.

October 25, 2010

Captain Kirk Was a Lousy Tech Manager

Those of you who remember the original Star Trek series will remember the running trope of Captain Kirk asking Scotty how long something will take, and when Scotty responds something like “two days”, Kirk would respond with “You have two hours”, or some other ludicrously short period of time. Scotty would shake his head exasperatedly and go off to spin straw into gold (always making the deadline), while Kirk would get a smug, self-satisfied “There’s brilliant leadership at work” look on his face to let us know what a genius commander he was.

Years later, on Star Trek: The Next Generation, Scotty made a guest appearance and confided to the ship’s engineer that he should never tell the captain how long it would really take to do something: it turns out that the result of Kirk’s management style was to train Scotty to game the system.

Now sometimes good leadership requires pushing team members to pursue “stretch goals,” so that they continue to grow professionally and stay engaged with their jobs. But to make asking for the impossible into a routine part of every task assignment is just a bad idea. Scotty shows us why: it backfires, since the team member learns that honest estimations are punished.

The commanders in the next-generation Star Trek shows, Captains Picard, Sisko and Janeway, had much better leadership skills in this regard. They encouraged honest estimates and had open and respectful discussions with their team members about priorities and deadlines. It’s hard enough dealing with alien invasions and other calamities without introducing dysfunctional group dynamics into your own team through “heroic” Captain Kirk-style management.

July 26, 2010

The Diabolical Genius of C++

I have been feeling a strange pull to revisit C++ lately. Partly this is because C++ continues to be the language of choice for certain domains such as games and graphics programming that I find interesting. But more importantly, I have been curious to see what I would make of C++ if I took a fresh look at it all these years later, with the benefit of all the practical and theoretical expertise in programming and programming languages that I have acquired in the meantime.

I remember when the first edition of Effective C++ by Scott Meyers came out, and I was very tempted to buy it, but in those days computer books were ridiculously expensive, and I bought Bjarne Stroustrup’s equally fresh-off-the-press The C++ Programming Language 2nd edition instead, on the logic that it was the official reference and so its utility would stand the test of time. (Something you couldn’t count on with most technical books of the time.)

So given that Meyers’ book is still in print (in its 3rd edition) almost 20 years later, I figured it was the best place to go to reacquaint myself with C++.

Overall, I found Effective C++ to be a good book with good advice (though I might quibble here or there) and an excellent reminder of what programming C++ is like. What I rediscovered was that C++ is both a paragon and an abomination of programming language design. It is in a way a poster child for the title of my blog, “Philosophy Made Manifest”, in the sense that it is so exquisitely the logical outcome of its philosophical premises. More particularly, it is the synthesis of two quite different philosophies of software construction, and the extent to which it is a paragon or an abomination is a direct result of the relative compatibility and incompatibility of these two paradigms, in the sense of Thomas Kuhn’s The Structure of Scientific Revolutions.

To give you a sense of what these two philosophies are like, I can use my own early development as a programmer as an example. Like many programmers of my generation, my first language was a flavour of BASIC. BASIC was a good language to get your feet wet with programming, but too much of it was “magic”, in the sense that it buffered you from the real workings of the machine. It was fine for relatively simple programs, but once you got to more complex applications on a machine with memory measured in kilobytes, it didn’t really give you enough awareness or control of your environment to manage your resources.

The next step up from there was often assembly code or even raw machine language, and that looked like alphanumeric gibberish rather than comprehensible language.

So when I discovered C, it was a revelation. Here was a language that had a comprehensible syntax like BASIC but that really allowed you to specify exactly how you were using your resources. By this time, I had 1MB of RAM and clock speeds in the MHz, which seemed like a lot at the time, but for some of the applications I was interested in, you still had to juggle your resources to make this work, and C let you do that. With C, I went from being a dabbler in programming to being a programmer.

Moreover, C helped to give me an entrée into the world of assembly. The C compiler I used allowed me to generate assembly code as output, and I became very familiar with how my C code was translated to the machine. Sometimes I even wrote super-optimized functions in assembly for maximum speed and efficiency and called them from C.

But this was the first sign of trouble in paradise. Two things became apparent to me around this time.

The first was that the “C with assembly” approach wasn’t very portable or maintainable. Different versions of the 80x86 architecture, of DOS (and soon Windows), and of the compiler and associated libraries, made work done this way very fragile.

The second was that the low-level approach of C was very awkward for higher-level tasks such as GUI design, where what you wanted was reusable components that interacted on an event model. And this is the locus of the paradigm shift: from programmer as juggler of resources on a machine to programmer as designer of solutions in the problem space.

I think this dichotomy is a major one in software development, and it is not the exclusive domain of the C/C++ world. I’ve seen it play out in the Java world too.

Many technical people are (reasonably enough) oriented towards the technical side of things. They’re focused on the solution space. They have staked their professional mastery on understanding the arcane details of various languages, platforms, libraries and tools. This leads to a natural tendency to believe that the role of the programmer is to redefine the problem until it fits the bounds of the available solutions. They seek to turn the problem at hand into the proverbial nail for the hammer they have.

This is not a wholly bad phenomenon. In fact, in the kind of resource-poor environment my C-programming self was contending with, this kind of shoe-horning was a necessity for being able to accomplish anything of value. And since resources are never truly unlimited, some of this thinking always ends up being essential on a technical project of any scope.

But the world changed as more computing resources became widely available, and richer, more user-friendly GUIs and applications came to be the norm. Customers for software products and the software teams that built them started to want a more problem-focused approach to software. Instead of reworking the problem to fit the technical solution, there was more value in squeezing the technical solution into the mould of a more natural, conceptual model of the problem space being addressed.

And this is where Object-Oriented Programming (OOP) came in. The classes and objects that were the ++ to C gave developers a new set of abstractions to capture such a model in the source code. In principle, OOP was supposed to remove the focus from the implementation details of the application and place it on the modelling of the entities and activities associated with what the user wanted the application to do. Again, in principle, the programmer of an OOP language was supposed to cede control over some of the low-level resource allocation issues to the language implementation, and focus on the world according to the user. Some OOP languages did go quite a way down this road.

C++, however, made a different choice. It decided to try to fulfill both paradigms. Want to micro-manage resource usage? No problem: C++ includes C. Want to work at the higher-level abstraction of OOP? No problem: C++ has all the OOP features you want.

In some ways, C++ has been wildly successful in its goals. It succeeds in having all the features of both so that the developer is free to choose his approach. But it is exactly this blend that makes it such a nightmare from a design perspective.

As I was reading Meyers’ book, I was struck by how often he says of some C++ feature “there is a very simple rule for this in C++, with a few specific exceptions”, and the exceptions turn out to be mind-melting and highly unintuitive to someone trying to avail himself of the “high-level” approach to his application. The language is designed to “help” you by doing certain things automatically, but it often doesn’t do the thing that seems most reasonable, since it doesn’t want to conflict with the freedom of the programmer to operate at a lower level of control.

I think this illustrates a general principle of design: simple solutions can only exist for focused design philosophies. When you try to be “all things to all people” you necessarily end up with complicated, hard-to-manage solutions.

Having identified the “original sin” of C++, I think we still need to give it its due. Decades later, it is still going strong in application domains such as bleeding-edge games and graphics and in device-embedded controllers – domains where the spirit of scarce resources is still alive and well, and where the need exists for both the large-scale organizing principles of OOP and the “down-to-the-bare metal” optimization of resources.

Given that the pendulum in popular programming languages has swung back to the spirit of BASIC (interpreted languages that are “fun” and where resource allocation is mostly “magic”), I wonder if the end of Moore’s Law spells a return to the spirit of C++. If so, any successor to C++ will have to start where it left off and learn from both the bad and the good of its diabolical genius.

February 16, 2010

Lessons from Confucius for Software Development

In my previous post, I talked about lessons from Confucius that I think are still useful in the modern world, but there is a field of endeavour that I think can particularly benefit from understanding the Confucian worldview: software development.

At first glance, it seems highly unlikely that Confucianism might have some application to software development. The Confucian values of respect for the past, harmonious social relations, decorum and moral leadership don’t have an obvious affinity for an industry that is known for its focus on what is new and shiny, and which is stereotypically populated by raging individualists with a disregard for social standards.

But it is worth considering that the Confucians represent one of the earliest groups of knowledge workers in human history. They were trained in specialized knowledge for specialized tasks in a complex society, and they had a sense that their specialized knowledge made them an elite group in society.

An important concept in Confucian texts that bears on this is junzi. Etymologically, it means “ruler’s son, prince”, but already in Confucius’ time it is used more metaphorically, often translated into English as “superior man”, “gentleman”. The Yiddish term “mensch” has a similar ring. It represents what every Confucian was striving to be.

In spite of its etymology, junzi is one of the earliest-known notions of elite status that is not derived from the happenstance of one’s birth, such as being the literal son of a ruler, or a free man of Athens. The kind of “superior man” we are talking about here is defined entirely by his knowledge and skills, and his savoir faire in using them. Confucianism is a truly meritocratic system of thought, and a modern IT specialist can easily relate to such an ethic.

A second consideration is the fundamentally social nature of software development. If it ever truly existed, the age of the lone genius changing the world with his software is over. Any non-trivial software development these days involves a whole team of people with different specialties and knowledge, and building reliable and maintainable applications requires that these people work together as an effective and harmonious community.

The Confucian junzi has a sense of noblesse oblige, a sense that the status conferred on him by his knowledge and skills requires that he use them for the betterment of his society and to achieve collective goals. He is willing to lead and mentor new members of the fraternity of knowledge workers, and does this not by pontificating, but by providing an example through good practices.

The very fact that these kinds of values are not what most of us think of when we consider software development suggests that there is still much benefit and advantage to be gained by nurturing them in software teams, and a team lead could do worse than to study the Analects of Confucius to prepare for the challenges they face.

After all, Confucius and his followers have several centuries of experience organizing and training knowledge workers to draw on.

January 15, 2010

Lessons from Confucius

When I was in university studying East Asian studies, it was the height of the political correctness era. Of the three major streams of Chinese “religious” thought — Confucianism, Taoism and Buddhism — Confucianism was considered to be the “bad guy”, representing everything that is authoritarian, sexist and hierarchical in Chinese culture.

By contrast, everyone loved the individualistic enthusiasm of Taoism, which extolled the virtues of the feminine, or the egalitarian austerity of Buddhism. Confucians, though, were the “dead white men” of Chinese studies.

So, while I got an early grounding in Taoist and Buddhist thought, it wasn’t until I had been in the work-world for some years that I came back to Confucius, starting by reading the Analects in translation, and later, in the original.

The Confucius I found was quite different from my pre-conceived understanding of him, and I was surprised to find that his outlook and advice were unexpectedly relevant to modern life. I also found an attitude quite different from the stern authoritarian stereotype I had previously accepted.

In fact, my surprise started with the first line of the Analects (translations are my own):

To learn something, and to review it now and again, isn’t it pleasurable?

I had, of course, learned as a student about the traditional Chinese respect for learning, but I had been left with the impression that the kind of learning that was meant was rote memorization in strict conformance to orthodox interpretation. But this primary source of Confucius’ personal thought starts off with a child-like enthusiasm for the simple pleasure of learning, a feeling I knew well. This attitude struck me as even more relevant for our modern world, where there is so much to learn and where knowledge and skills are the currency of our society.

Another traditional value of the Confucians I had learned about in my student days was often translated as “ritual”, but it seems to encompass politeness, observance of correct social forms and social hierarchy, as well as what we would think of as actual rituals. We used to roll our eyes at this, since we tended to think that these things are intended to suppress our sincere, individual feelings – that they are empty formalities intended to ensure conformity. But in another line from the Analects, I got another surprise:

Lin Fang asked about the basics of ritual. Confucius said: A big question! In ritual, prefer modesty to extravagance; in mourning, prefer sincere sadness to formality.

Confucius is genuinely concerned with the observance of social forms (and the social forms of his time and place sometimes seem very foreign to us), but he doesn’t recommend them as a replacement for individual feeling, but rather as a vehicle to allow the free expression of individual feeling in harmony with the functioning of the community.

This sense of communal harmony is underlined by the cardinal virtue of Confucian thought, often translated as “benevolence”. It is related to the Chinese word for “person”, and I can’t help but feel that the best modern equivalent for it is the Yiddish word “mensch”. Etymologically, “mensch” means “person”, but its full meaning is someone who has a strong sense of community, someone who can be relied upon to help others and who has the courage to stand up for what is right. This is a pretty close match with the Confucian principle.

Confucius is also very concerned with good leadership, and a recurring preoccupation of Confucians (and ancient Chinese thinkers in general) is how to be an effective ruler. Here is another representative passage from the Analects:

Ji Kang asked: How can the people be made to respect the ruler, to be loyal and take his advice? Confucius said: Ruling them with solemnity will result in respect; showing respect for elders and being kind will result in loyalty; promoting good and instructing the unskilled will result in persuasion.

Contrary to my early stereotype, Confucius has no time for the “because I said so” school of leadership. This is all the more remarkable when you consider that rulers in ancient China did have absolute power. His prescription for leadership is firmly in the “lead by example” camp. Those millennia ago, Confucius had already recognized that people can tell the difference between a cynical, self-serving leader and one who genuinely has the collective good at heart. He knew that the difference between genuine commitment and mere grudging compliance rests on this distinction.

So, my re-examination of Confucian thought not only changed my mind about its essence but actually showed me that there were values and lessons to be learned that I could apply to my life, personally and professionally.

A genuine love of learning, accepting social forms as ways of collectively expressing individual feeling, cultivating sincere feelings of communality with others, and leading by moral example: all of these ideas are still relevant in the 21st century and, if practiced, can make a real difference in one’s life as a leader and as a human being.