Humility and Better Programming, Part 2
In part 1, I talked about how I grew up in a coding culture that emphasized community. While not the only way, I prefer this approach. It takes some humility. Your code is out there for others to read, debug, explain, and modify—and in the end, laud or complain about.
Truly excellent programmers learn how to work and play well with others. Writing readable code is part of being a team player. The computer probably reads your program as often as other people do, but it’s a lot better at reading poor code than people are. As a readability guideline, keep the person who has to modify your code in mind. Programming is communicating with another programmer first and communicating with the computer second.
The majority of the cost of software is incurred after the software has been first deployed. Thinking about my experience of modifying code, I see that I spend much more time reading the existing code than I do writing new code. If I want to make my code cheap, therefore, I should make it easy to read.
Bringing this back to my remark about spaghetti code that began part 1, let me include this sentence from Edsger W. Dijkstra, one of the giants in the field of computer science:
The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague.
— Edsger W. Dijkstra, The Humble Programmer, ACM Turing Lecture 1972, EWD340
Steve McConnell expands on this idea:
The people who are best at programming are the people who realize how small their brains are. They are humble. The people who are the worst at programming are the people who refuse to accept the fact that their brains aren’t equal to the task. Their egos keep them from being great programmers. The more you learn to compensate for your small brain, the better a programmer you’ll be. The more humble you are, the faster you’ll improve.
— Steve McConnell, Code Complete
Let’s face it, though: our customers work on complex problems that often need complex solutions. But I carefully did not use the word “complicated”. Complex and complicated do not go hand in hand…
“Complex” and “complicated” may sound similar, but they are in fact two very different beasts. Complexity is often essential. Certain topics, issues, activities and missions are inherently complex and there’s nothing wrong with that. But complicatedness involves unnecessary complexity. It’s caused by the addition of non-value-added parts, of gears that turn without reason or grind against other gears.
— Major Dan Ward
The LabVIEW source code is complex; no doubt about it. But, we strive to keep it from being complicated. The parts that are complicated are ones we struggle with—often because we haven’t figured out how to make them simple, yet. Paul Austin, another old-timer on the LabVIEW R&D team, encourages young engineers to find the simplest solution to the problem they are trying to solve. Great advice.
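To make the complex-versus-complicated distinction concrete, here is a small illustrative sketch (my own example, not from the post): two functions that compute the same result, one full of gears that turn without reason, one stating the essential logic directly.

```python
def sum_of_evens_complicated(numbers):
    # Complicated: manual indexing, a redundant flag, and needless state
    # all obscure a simple idea.
    total = 0
    index = 0
    while index < len(numbers):
        value = numbers[index]
        is_even = (value % 2 == 0)
        if is_even is True:
            total = total + value
        index += 1
    return total

def sum_of_evens_simple(numbers):
    # Simple: the essential logic, stated directly.
    return sum(n for n in numbers if n % 2 == 0)

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5]
    print(sum_of_evens_complicated(data))  # 6
    print(sum_of_evens_simple(data))       # 6
```

Both functions are correct, and the problem itself is trivial; the difference is that the first adds parts the problem never asked for. The second is also the one a maintainer can verify at a glance.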
The quote from Major Ward above is from a short manifesto that’s definitely worth a read. The main point is that as a system evolves, it either collapses from its own complexity, or it thrives because of its simplicity.
Two and a half decades earlier, Tony Hoare (another giant of computer science) said the same thing…
I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
Programmers are always surrounded by complexity; we cannot avoid it. Our applications are complex because we are ambitious to use our computers in ever more sophisticated ways.
— C.A.R. Hoare, The Emperor’s Old Clothes, ACM Turing Award Lecture 1980
Some of the code in LabVIEW is over 20 years old. We didn’t know then that it would still be with us after two decades, but we strive to make our code maintainable enough that it could be. If you want to create code that’s built to last, make it as simple as possible.
In part 3, I’ll talk about things you can do to create better, more maintainable code.
Excellent series! I can’t wait for the next part!
I couldn’t agree more. I fully believe the best solution is the simplest solution in 99% of situations (there are always those outliers). KISS.
I once worked with a guy who was to teach me LabVIEW and I told him a very simple way to perform a test. I will never forget the next words out of his mouth: “That’s too simple. It’ll never work.” And that is why he spent months working on a project only to have me delete all he did and write the whole program in a single day.
Brian, I totally agree. NI doesn’t seem to be heading in the direction of increased simplicity, though. It drives me a little crazy. Here’s my current axe, apologies if it’s a little grindy:
Python access to inherited class data:
class A(object):
    member_string = 'Hello world'

class B(A):
    def print_string(self):
        print(self.member_string)

if __name__ == '__main__':
    bob = B()
    bob.print_string()
Save file, run using python. “Hello world”.
LabVIEW access to inherited class data (without even showing code!):
Create a project through Create Project dialog. Right click, new, create class. Name class in dialog. Open private data control and populate with member_string control. Enter Hello world, right click, set as default. Get prompted to save whole project. Right click, new, create class. Another naming dialog. Right click B class, properties. Go to inheritance item. Click change inheritance button, go to another dialog, inherit from A. Right click A class, create a vi for data access. Fill in VI wizard. Save VI. Right click, new… find out the create data access option isn’t available in class B because there’s no data. Right click, new, static access VI. Insert data access VI for A, add and wire outputs. Save. (This entire paragraph is 5 lines of Python!)
Create new VI, drop constant instance of B, wire to data access VI for B, wire out string control, and run. “Hello world”.
In the spirit of offering solutions, this is what I would prefer:
Open “class definition VI” A. Drop cluster constant, populate with member_string constant. Wire to output. Open class definition VI B. Drop instance of A, wire to “inherit from” node (other input is for cluster constant, leave unwired). Wire result of “inherit node” to output. (Not so bad, eh? And it’s in LabVIEW, not wizards and dialogs).
Drop instance of B on a new VI, open output cluster containing data for both A and B. Select member_string, create string indicator, run. “Hello world”
This way, classes are defined in LabVIEW, not wizards and dialogs. Class definition VIs can apply properties (wire types, public/private, etc.) on their block diagrams. Outputs from class definition VIs can be customized as probe front ends. And classes aren’t tied to their XML leashes.
…
Okay, back to reality. LVOOP is what it is, and it took a huge amount of work to get it there. Hopefully the next big shift in LV is… a little simpler.
For a better discussion of using VIs to define classes, see http://lavag.org/topic/16879-why-are-lvoop-classes-not-specified-in-g/.