Various levels of test automation

QA/test managers tend to divide their QA/testing operation into two parts – manual and automated. In such a simple world, automation typically comes later than manual testing. In the manager's mental image, automation starts replacing manpower one test case at a time – till he can "free up" his manpower to "do more productive things" (the quotes are euphemisms for "fire" and "save the budget and get that big promotion", respectively).

This dream is never achieved because of reasons I mentioned in one of my previous posts.

This is what happens in a typical product’s life:

  1. Product starts, everyone is in a rush to deliver features
  2. Features have shallow bugs – test cases proliferate
  3. In a few versions, features stabilize (in other words, the code learns the test cases)
  4. There is more income and time
  5. Someone remembers automation because “juicy” bugs stopped coming
  6. Automation is started by a bunch of enthusiasts
  7. Automation reaches 40%+ of testing; test work is greatly reduced. Managers make heroes of the automators
  8. A few releases go by, the UI technology changes. Automation breaks
  9. Automation is faster than manpower – so it loads system I/O in a manner the system wasn't designed for. Developers come into friction with the automation guys
  10. Automation manpower is dragged into manual testing because the brand-new UI requires intensive testing
  11. Automation plays a catch-up game
  12. The manual testing lead/manager revolts against the blue-eyed boy called automation. He has undeniable arguments: a. automation doesn't find bugs, manual testing does (obviously! The code has learnt the automation!), and b. automation maintenance is expensive (because it broke with the change in the UI). Developers join them because "automation finds irrelevant bugs – the system wasn't designed for 50,000 commands per second."
  13. Automation continues at a slower speed
  14. Back to step #8

The cure for this is to see that automation is neither a goal nor a means to achieve the goal. Automation is no silver bullet.

A realistic strategy is to divorce "test" and "automation" from the phrase "test automation".

TASKS related to testing should be automated at various levels, not test cases.

To give you an example, take an embedded product like a set-top box (STB). An STB can have many bugs initially. Let us take a list of bugs:

  1. The STB does not respond to the remote control
  2. The STB crashes every now and then
  3. The STB does not have the new interface (say, HDMI) working
  4. The STB does not display with a certain type of TV or DVR
  5. The STB fails when the SNR goes down by 20 dB and the channel is flipped within 20 ms of that

Now look at the approaches to automate all the tests:

  1. The STB started responding to the remote control in version 1.2, and now we are in version 11. (Because the developer doesn't check in code drunk) the code NEVER breaks after version 1.2. Still, someone has to make sure the remote control does work with the STB. So automation is (rightfully) verifying this in the build verification test
  2. Finding crashes is the pride of manual testing. The manual testers often take jibes at automation for not being able to find as many crashes as the men do. However, the automation guy smiles wanly and tells them that automation does look at cores with every test it runs (a minimal sketch of such a check follows this list) – cores just don't happen as much during automation runs. In a private meeting with the QA manager, the support manager shows the number of cores that happened in the field and should have been caught by QA. The QA manager realizes that automation for such cases doesn't exist – and junior manpower doesn't look at cores that often
  3. QA finds a lot of bugs; automation doesn't have libraries to use the HDMI interface! Waiting and waiting on GitHub and SourceForge…
  4. Manual and automation – both approaches are at a loss. TVs and DVRs are raining down on the market. Which ones to test compatibility with? The QA manager goes by a market survey to identify the top 5. It takes a quarter to come up with the market survey. By then all the bugs are already in the support database
  5. Oops! Didn’t the picture quality test plan have it? Doesn’t the channel flipping test plan have it?
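
To make item #2 above concrete: below is a minimal sketch of what "looking at cores with every test" can mean. It assumes a Linux-based STB reachable over ssh and a core directory at /var/cores – both are illustrative assumptions, not details of any particular product.

```python
# Illustrative sketch only: check for fresh core dumps after every automated test.
# STB_HOST and CORE_DIR are assumptions made for this example.
import subprocess

STB_HOST = "stb.local"      # hypothetical hostname of the device under test
CORE_DIR = "/var/cores"     # hypothetical location of core files on the STB


def list_cores() -> set:
    """Return the set of core-file names currently present on the STB."""
    out = subprocess.run(
        ["ssh", STB_HOST, f"ls {CORE_DIR} 2>/dev/null || true"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())


def run_test_with_core_check(test_func) -> None:
    """Run one test function and fail loudly if it left a new core dump behind."""
    before = list_cores()
    test_func()
    new_cores = list_cores() - before
    if new_cores:
        raise AssertionError(f"Test produced core dump(s): {sorted(new_cores)}")
```

Wrapping every automated test in such a check is how automation gets to "look at cores with every test it runs" without anyone staring at a console.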

As you can see, the STB testing manager is in a crisis. What happened to all the good work the automation team has done over all these years?

The right way for this team is to split the work into FIVE levels of automation. (What? Five levels? Read on.)

First of all, understand that the goal is to deliver as much quality (that is, the inverse of field-reported bugs) at as little cost (and, most probably, in as little time) as possible. Automation doesn't matter; productivity matters. Not automating at all is a perfectly valid option.

However, not automating isn't the first option in the hierarchy. Understand that testing can be carried out by testers of varying degrees of qualification – the cheaper the better. The tester who can understand a resource lockup isn't needed to test compatibility with 50 TV models.

So our approaches, from the least to the most complex automation code, are:

  1. Nautomation – No Automation, rather anti-automation – deploy a crowd of minimum-wage workers to look at one TV each, and give each of them a bell. Put a large screen in front of them and another behind them. Wire up webcams, passing their outputs through a multiplexer. On the screen in front of them, your expensive test engineer demonstrates how to test a TV with your STB. If a worker's TV gives a different picture, they press the bell. The webcam behind that person activates and projects onto the screen at their back. Your expensive test engineer looks at it and decides whether it is a bug or not. Collective stoppage of the crowd is less expensive than missing that bug on that TV model, which could be in the top 10. Here whatever automation was used – the webcams and the multiplexer – was used to INCREASE the headcount, not decrease it
  2. Manual testing – for new features. Until the interface or the new hardware "sets in", investing in manual testers is cheaper than investing in automation QA. As Krutarth quoted Google's view in the post above, automating too early is detrimental. Also, use manpower for UI-intensive testing, because when the UI changes, you don't want your automation to be brittle. In this approach, there is zero automation and manpower neither increases nor decreases
  3. Semi-automation – for what are called "bull's eye" observations – like watching for cores, process restarts and CPU usage. Give your manpower automated tools that act like an associate – checking a fixed set of criteria and warning when something goes awry (see the watchdog sketch after this list). Yet another area you can automate is challenging the manpower testing a feature by changing "everything else", or by creating unexpected conditions like restarting a process, rebooting a box, failing over a High Availability solution, etc. This will keep your testing safe from the code learning the tests: combinations go out of hand really fast, so the code doesn't get a chance to "learn and saturate". Here automation is small and manpower marginally decreases (because in a large team, you may typically save a man or two by not always watching for those cores).
  4. Test automation – for regression testing, including that of the remote control. Slowly, test automation should cover manual tests as much as possible. Don't use test automation for UI-intensive testing. In other matters – like being aided by observation engines, combinatorial engines or event engines – test automation is identical to manual testing. The code actually learns faster from test automation because it is more predictable. Test automation is almost linear – the more you have, the more manpower you can substitute – once again, subject to the UI limitations
  5. Meta-automation – this is the word most abused by theoreticians. Meta-automation is like "automating the automation". Someone on the web sells pair testing under this label; pair testing is just one of the possible meta-automation approaches. Test automation with a variable "everything else" is an obvious extension of this approach. Another could be "off-by-one", wherein you vary by one the constructor/destructor calls and the counts of all the kinds of classes you can think of. Yet another could be what I would like to call Brahma-Vishnu-Mahesh (BVM) testing, in which three independent loops try to create an object, invoke operations that "use an" object, and destroy an object (a toy sketch follows this list). Given the randomness of such operations, various life stages of an object get tested. There could be as many patterns for testing as there are patterns in the famous Gang of Four (GoF) Design Patterns book. Here it may not be possible for the code to learn all the test scenarios. However, the flip side is that it may not be possible to even test all the scenarios, or to deduce the right behavior of the software under a given scenario – and, at worst, it may not even be possible to recreate a bug at will with 100% confidence. However, such testing will expose the weakest assumptions in the design. Let me tell you, developers hate this testing :-). If the automation libraries are designed carefully, the effort will be proportional to the number of features (or classes), plus the number of cross-cutting concerns or aspects (like logging), times the number of patterns (or templates) of testing. However, the scenarios it covers will keep growing exponentially. There is no point in comparing how much manpower it will save – yet you can safely bet that exponential savings in manpower are possible.
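
To make level 3 above less abstract, here is what such an "associate" might look like – a minimal sketch, again assuming a Linux-based STB reachable over ssh. The process names and thresholds are made up for illustration; the point is only that the tool checks a fixed set of criteria and warns the human tester instead of replacing him.

```python
# Semi-automation sketch: a watchdog that runs next to a manual tester and warns
# when a fixed set of criteria goes awry. Host, process names and thresholds are
# illustrative assumptions, not real product details.
import subprocess
import time

STB_HOST = "stb.local"
WATCHED_PROCESSES = ["epg_daemon", "av_decoder"]   # hypothetical STB processes
CPU_WARN_THRESHOLD = 0.90                          # warn above 90% load per core


def remote(cmd: str) -> str:
    """Run a shell command on the STB over ssh and return its output."""
    out = subprocess.run(["ssh", STB_HOST, cmd],
                         capture_output=True, text=True, check=True)
    return out.stdout


def check_once() -> list:
    """Evaluate the fixed set of criteria once; return human-readable warnings."""
    warnings = []
    load = float(remote("cat /proc/loadavg").split()[0])   # 1-minute load average
    cores = int(remote("nproc").strip())
    if load / cores > CPU_WARN_THRESHOLD:
        warnings.append(f"High CPU load: {load:.2f} across {cores} core(s)")
    running = remote("ps -e -o comm=")                     # names of running processes
    for proc in WATCHED_PROCESSES:
        if proc not in running:                            # crashed or restarting?
            warnings.append(f"Watched process not running: {proc}")
    return warnings


if __name__ == "__main__":
    while True:                            # runs alongside the manual test session
        for warning in check_once():
            print(f"WARNING: {warning}")   # could just as well beep or file a ticket
        time.sleep(10)
```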

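And here is a toy sketch of the BVM pattern from level 5: three independent loops that concurrently create, use and destroy objects, so an object can be exercised at any stage of its life. The Channel class is a hypothetical stand-in for whatever the product's API actually exposes; a real harness would drive the product's own objects and log every failure for later triage.

```python
# Toy Brahma-Vishnu-Mahesh (BVM) sketch: creator, user and destroyer loops run
# concurrently against a shared pool of objects. Channel is a made-up stand-in
# for a real product object.
import random
import threading
import time


class Channel:
    """Hypothetical resource with a create/use/destroy life cycle."""
    def __init__(self, number: int):
        self.number = number
        self.alive = True

    def tune(self) -> None:
        if not self.alive:
            raise RuntimeError("tune() called on a destroyed channel")

    def destroy(self) -> None:
        self.alive = False


pool = []                      # shared pool of live objects
lock = threading.Lock()
STOP_AT = time.time() + 5      # let the chaos run for a few seconds


def brahma():                  # the creator loop
    while time.time() < STOP_AT:
        with lock:
            pool.append(Channel(random.randint(1, 999)))
        time.sleep(random.uniform(0, 0.01))


def vishnu():                  # the user loop
    while time.time() < STOP_AT:
        with lock:
            ch = random.choice(pool) if pool else None
        if ch:
            try:
                ch.tune()      # an operation that "uses an" object
            except RuntimeError:
                pass           # in a real harness: verify the failure was graceful, and log it
        time.sleep(random.uniform(0, 0.01))


def mahesh():                  # the destroyer loop
    while time.time() < STOP_AT:
        with lock:
            ch = pool.pop() if pool else None
        if ch:
            ch.destroy()
        time.sleep(random.uniform(0, 0.01))


if __name__ == "__main__":
    threads = [threading.Thread(target=f) for f in (brahma, vishnu, mahesh)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The interesting bugs show up when the product's own object does not fail as gracefully as this toy Channel does.
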
At various stages of my life, I have tried all five approaches and have succeeded with all five.

Once free from the dogma of “automation must save manpower linearly”, much higher levels of productivity and quality are possible.

What approaches have you seen in your experience? Is this list exhaustive?

Can you suggest more testing patterns? I am planning to undertake a wide survey of some bug databases to find more patterns.

Also, next time, I will highlight how money can be saved by intelligently clubbing administration tools and testing tools. Stay tuned!
