Thursday, April 20, 2023

The Purpose of Testing

I may not be an active Software Engineer, but I still have the heart of one.  And, as such, there are a couple of directives that I personally find very compelling:

1. Test often.
2. A successful test is one that finds a problem.

As an instructor, I tried to hammer this philosophy home, with varying degrees of success.  

The first hurdle is that everyone has a tendency to grow tired of something they have to do all of the time.  (Oh, geeez, I've got to go give water to the birds AGAIN!)  Many programmers and software designers are much more attracted to building new things.  It is not nearly so glamorous to expend great amounts of time and effort devising and then employing test after test to see if an existing creation is working correctly.

Hey!  I want water - NOW!

You see, I wasn't clueless as an instructor and I am not clueless now.  To most people, it's more fun to write program code than it is to write tests to break that code.  Just like it is more fun to write new blog posts than it is to edit them.  (Edit often and successful editing uncovers writing errors? Now there's a thought!)

But I still had the gall to tell people to test often.  And then I had the gall to tell everyone that it isn't necessarily a good thing if the tests you design fail to uncover any problems!  If you have ever written code to create a computer program or an app (or whatever), you know exactly how difficult it is to get a program to run in the first place.  The suggestion that successful tests cause a program to break is usually not received well!  
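To make that second directive concrete, here is a minimal sketch.  The function and its tests are invented for illustration (not taken from any real codebase): the happy-path test only confirms what we already believed, while the edge-case probes are the ones with a real chance of finding a problem.

```python
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# A "friendly" test only confirms what we already expect to work:
assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

# Tests that hunt for failure probe the edges instead:
assert chunk([], 3) == []               # empty input
assert chunk([1, 2], 5) == [[1, 2]]     # chunk size larger than the list

# And sometimes such a test succeeds -- by finding a problem:
try:
    chunk([1, 2, 3], 0)                 # what should a size of 0 even do?
except ValueError:
    pass  # it blows up with an unhelpful error; the test earned its keep
```

By the "successful test" standard above, that last probe is the most valuable of the four.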

How terribly rude of me.

We continue to test our clothes dryer

Why Testing for Failure is a Good Thing

In the world of software development, it is my belief that the last person you want to discover a problem with the code is the end user who is trying to use the software to do their job.  They are the least likely person to be able to fix the problem once it arises, and they are the most likely to solve it by moving on to a competitor's product, if they can. 

What's worse?  What if they can't move on to another product (or they don't see the problem in the first place)?  What sort of harm might this cause?  

The impact is a bit easier to see if we are talking about programs for air traffic controllers or automated tools that deliver doses of radiation to cancer patients.  The discovery of a software error while these tools are in use can result in injury and death.  But don't discount the ripple effects that occur when a less "critical" system fails because it was not properly tested.

As a case in point, consider an email program that reported emails were sent correctly to customers - when the messages were instead deleted or missent (this is an actual case).  How do we measure the potential damage from this failure?  Perhaps the majority of the lost communications were not terribly critical, but the damage done to the reputations of the senders and recipients of these emails could be significant.  For example, one person thinks they sent something and they've never gotten a reply to their questions - what might they be thinking about the people who did not respond?  Could it negatively impact their interactions in the future?
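The failure mode in that email case - reporting success while silently losing the message - is exactly the kind of bug a failure-injection test is built to find.  Here is a hedged sketch with invented names; this is not the actual program from the case.

```python
def send_email(transport, message):
    """Attempt delivery and report a status to the user."""
    try:
        transport.deliver(message)
        return "sent"
    except Exception:
        return "sent"  # BUG: a failed delivery is still reported as success

class BrokenTransport:
    """A test double whose delivery always fails."""
    def deliver(self, message):
        raise ConnectionError("mail server unreachable")

# Inject the failure and see what the program claims:
status = send_email(BrokenTransport(), "Did you get my question?")
assert status == "sent"  # it lies -- and this test just exposed the lie
```

A test suite that only ever exercised a working transport would pass forever and never reveal that customers were being told their mail went out.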

To the dismay of my students, I often employed the strategy of testing for failure with the exams I gave them to assess their learning.  It would be fair to say that most of my tests were considered to be difficult (at the least).  

Yes, I wanted them to have success.  But, my viewpoint was that they would see more success if we could identify some of what they still did not understand.  Granted, this approach was hard on all of us.  I didn't like that the scores made them unhappy and I really didn't want it to be about that.  But, there was no avoiding the fact that there was value in the exercise.  In the end, it wasn't an issue that solid communication couldn't solve.

The green carts in front could have used some testing.

Real Life Testing Needs

Here is a case for testing that applies to the farm.  The two green carts at the front of the picture shown above are likely familiar to anyone who farms at our scale or to people who have gone to nurseries to buy plants for a garden.  The green payload area is actually pretty sturdy.  The design of the overall cart is pretty good.  I bet the prototypes tested out pretty well (if they were tested at all).  But, they clearly stopped testing once these things went into mass production.

What is wrong with these carts you ask?  After all, these two have been with us for nearly 18 years.  They must be successful, right?

First, you should note the other carts in the picture.  The smaller cart is a leftover from our pre-farm days, so it doesn't really count in the discussion.  But, the black cart is something we have purchased in the time SINCE.  We went and found ourselves a better cart.  

So, why did we move on?

The nice 'pneumatic' wheels that were highly touted for the green carts have a tendency to break on the single weld at the axle.  Of the three carts of this type we have, there are NO original wheels remaining.  We've had to replace them all.  Some of them were replaced more than once until we could find a better manufacturer for replacement wheels.  To top it off, we know we are not the only clients who have dealt with this flaw.

No Test Means No Problem?

Mother Nature is always testing for failure, creating stresses that remove the weak and favor the strong.  Ginkgo trees do not tend to take well to late frosts and freezes, but if the tree is strong and healthy, it can pull on reserves to send out new leaves.  This year, the Ginkgo did not succumb to the temptation to begin budding out prior to the recent cold weather.  But, in 2020, it was sorely tested - and it survived. 

Humans seem to prefer to have their testing done in real time and in real life - just like Mother Nature.  We do this rather than employing some patience and some intelligence to run tests BEFORE the consequences are dire.  It gets even sadder when I consider our tendency to ignore test results that point to a problem and proceed without addressing it.  We could, at the least, determine that the likelihood of that problem occurring is very, very low before proceeding.  But we don't. 

I have a hard time understanding the apparent preference for surprises that we could have prepared for if only we had tested for them.

In the end, I think the true Software Engineers in this world have it right:

1. Test often.
2. A successful test is one that finds a problem.

And, now, I will go test my limits for giving water to the poultry.  Have a good day everyone!
