Various levels of test automation

QA/Test managers tend to divide their QA/testing operation into two parts – manual and automated. In such a simple world, automation typically comes later than manual testing. In the manager’s mental image, automation starts replacing manpower one test case at a time – till he can “free up” his manpower to “do more productive things” (the quotes are euphemisms for “fire” and “save the budget and get that big promotion” respectively).

This dream is never achieved because of reasons I mentioned in one of my previous posts.

This is what happens in a typical product’s life:

  1. Product starts, everyone is in a rush to deliver features
  2. Features have shallow bugs – test cases proliferate
  3. In a few versions, features stabilize (in other words, the code learns the test cases)
  4. There is more income and time
  5. Someone remembers automation because “juicy” bugs stopped coming
  6. Automation is started by a bunch of enthusiasts
  7. Automation reaches 40%+ of testing; test work is greatly reduced. Managers make heroes of the automators
  8. A few releases go by, the UI technology changes. Automation breaks
  9. Automation is faster than manpower – so it loads system I/O in a manner the system isn’t designed for. Developers come into friction with the automation guys
  10. Automation manpower is dragged into manual testing because the brand-new UI requires intensive testing
  11. Automation plays a catch-up game
  12. The manual testing lead/manager revolts against the blue-eyed boy called automation. He has undeniable arguments: a. Automation doesn’t find bugs, manual testing does (obviously! The code has learnt the automation!) and b. Automation maintenance is expensive (because it broke with the change in the UI). Developers join them because “Automation finds irrelevant bugs. The system wasn’t designed for 50,000 commands per second.”
  13. Automation continues at a slower speed
  14. Back to step #8

The cure is to see that automation is neither a goal nor a means to achieve the goal. Automation is no silver bullet.

A realistic strategy is to divorce “test” and “automation” from the phrase “test automation”.

TASKS related to testing should be automated at various levels, not test cases.

To give you an example, take an embedded product like a set-top box (STB). An STB can have many bugs initially. Let us take a list of bugs:

  1. The STB does not respond to the remote control
  2. The STB crashes every now and then
  3. The STB’s new interface (say, HDMI) does not work
  4. The STB does not display with a certain type of TV or DVR
  5. The STB fails when the SNR goes down by 20 dB and the channel is flipped within 20 ms of that

Now look at the approaches to automate all the tests:

  1. The STB started responding to the remote control starting version 1.2 and now we are at version 11. (Because the developer doesn’t check in code drunk) the code NEVER breaks after version 1.2. Still, someone has to make sure the remote control does work with the STB. So automation is (rightfully) verifying this in the build verification test
  2. Finding crashes is the pride of manual testing. They often take jibes at automation for not being able to find as many crashes as the humans do. However, the automation guy smiles wanly and tells them that automation does look at cores with every test it runs – cores just don’t happen as much during automation runs. In a private meeting with the QA manager, the support manager shows the number of cores that happened in the field that should have been caught by QA. The QA manager realizes that automation for such cases doesn’t exist – and the junior manpower doesn’t look at cores all that often
  3. QA finds a lot of bugs; automation doesn’t have libraries to drive the HDMI interface! Waiting and waiting on GitHub and SourceForge…
  4. Manual and automation – both approaches are at a loss. TVs and DVRs are raining into the market. Which ones to test compatibility with? The QA manager goes by a market survey to identify the top 5. It takes a quarter to come up with the market survey. By then all the bugs are already in the support database
  5. Oops! Didn’t the picture-quality test plan have it? Doesn’t the channel-flipping test plan have it?

As you can see, the STB testing manager is in a crisis. What happened to all the good work the automation team has done all these years?

The right way for this team is to split the work into FIVE levels of automation. (What? Five levels? Read on.)

First of all, understand that the goal is to deliver as much quality (that is, the inverse of field-reported bugs) at as little cost (and, most probably, in as little time) as possible. Automation doesn’t matter; productivity matters. Not automating at all is a perfectly valid option.

However, not automating isn’t the first option in the hierarchy either. Understand that testing can be carried out by testers of varying degrees of qualification – the cheaper the better. The tester who can understand a resource lockup isn’t needed to test compatibility with 50 TV models.

So our approaches, from the least to the most complex code in automation, are:

  1. Nautomation – no automation, rather anti-automation – deploy a crowd of minimum-wage workers to look at one TV each, and give each of them a bell. Put a large screen in front of them and another behind them. Wire up webcams, passing their outputs through a multiplexer. On the screen in front of them, your expensive test engineer demonstrates how to test a TV with your STB. If someone’s TV gives a different picture, they press the bell. The webcam behind that person activates and projects onto the screen at their back. Your expensive test engineer looks at the error and decides whether it is a bug or not. The collective stoppage of the crowd is less expensive than missing that bug on that TV model, which could be in the top 10. Here whatever automation was used – for the webcams and the multiplexer – was used to INCREASE the headcount, not decrease it
  2. Manual testing – for new features. Until the interface or the new hardware “sets in”, investing in manual testers is cheaper than investing in automation QA. As Krutarth quoted Google’s view in the post above, automating too early is detrimental. Also, use manpower for UI-intensive testing, because when the UI changes, you don’t want your automation to be brittle. In this approach there is zero automation, and manpower neither increases nor decreases
  3. Semi-automation – for what are called “bull’s eye” observations – like watching for cores, process restarts and CPU usage. Give your manpower automated tools that act like an associate – checking a fixed set of criteria and warning when something goes awry (a minimal watcher along these lines is sketched after this list). Yet another area you can automate is to challenge the manpower testing a feature by changing “everything else”, or by creating unexpected conditions like restarting a process, rebooting a box, failing over a High Availability solution, etc. This will keep your testing safe from the code learning the tests. Combinations go out of hand really fast, so the code doesn’t get a chance to “learn and saturate”. Here automation is small and manpower marginally decreases (because in a large team, you may typically save a man or two by not always watching for those cores).
  4. Test automation – for regression testing, including that of the remote control. Slowly, test automation should cover as many of the manual tests as possible. Don’t use test automation for UI-intensive testing. In other matters – like being aided by observation engines, combinatorial engines or event engines – test automation is identical to manual testing. The code actually learns faster from test automation because it is more predictable. Test automation is almost linear – the more you have, the more manpower you can substitute – once again, subject to the UI limitations
  5. Meta-automation – the word most abused by theoreticians. Meta-automation is like “automating the automation”. Someone on the web sells pair testing under this label; pair testing is just one of many possible meta-automation approaches. Test automation with a variable “everything else” is an obvious extension of this approach. Another could be “off-by-one”, wherein you pass the constructor/destructor and the count of all kinds of classes you can think of. Yet another could be what I like to call Brahma-Vishnu-Mahesh (BVM) testing, in which three independent loops try to create an object, invoke operations that “use” an object, and destroy an object (a minimal sketch of this follows the list). Given the randomness of such operations, the various life stages of an object can be tested. There could be as many patterns for testing as there are Design Patterns in the famous GoF book. Here it may not be possible for the code to learn all the test scenarios. However, the flip side is that it may not be possible to even test all the scenarios, or to deduce the right behavior of the software under a given scenario – and, at last, it may not even be possible to recreate a bug at will with 100% confidence. However, such testing will expose the weakest assumptions in the design. Let me tell you, developers hate this testing :-). If the automation libraries are designed carefully, their complexity will be the number of features (or classes), plus the number of cross-cutting concerns or aspects (like logging), times the number of patterns (or templates) of testing. However, it will keep growing your testing exponentially. There is no point in comparing how much manpower it will save – yet you can safely bet that an exponential saving in manpower is possible.
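
To make level 3 concrete, here is a minimal sketch of such an “associate” in C – my own illustration, not a tool from any product mentioned here. The /var/crash path, the core* naming convention, the poll interval and the load threshold are all assumptions to adjust for your target:

#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *dir = "/var/crash";   /* hypothetical core-dump location */

    for (;;) {
        /* Warn the (human) tester about any core file that has appeared. */
        DIR *d = opendir(dir);
        if (d) {
            struct dirent *e;
            while ((e = readdir(d)) != NULL)
                if (strncmp(e->d_name, "core", 4) == 0)
                    fprintf(stderr, "WARNING: core file: %s/%s\n", dir, e->d_name);
            closedir(d);
        }

        /* Warn about runaway CPU usage (Linux-specific /proc read). */
        FILE *f = fopen("/proc/loadavg", "r");
        if (f) {
            double load;
            if (fscanf(f, "%lf", &load) == 1 && load > 4.0)
                fprintf(stderr, "WARNING: load average %.2f\n", load);
            fclose(f);
        }

        sleep(5);   /* poll every few seconds */
    }
}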
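
And here is a minimal sketch of BVM testing in C with pthreads – again my own illustration; widget_t, its magic-number invariant and the run duration are hypothetical stand-ins for whatever class your product actually exposes. Three loops randomly create, use and destroy a shared object, and the “use” loop aborts the run the moment the invariant breaks:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct { int magic; } widget_t;   /* stand-in for the class under test */
#define WIDGET_MAGIC 0xC0FFEE

static widget_t *slot = NULL;             /* the one shared object */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static volatile int running = 1;

static void *brahma(void *arg)            /* the creator */
{
    unsigned seed = 1;
    (void)arg;
    while (running) {
        pthread_mutex_lock(&lock);
        if (slot == NULL) {
            slot = malloc(sizeof *slot);
            if (slot) slot->magic = WIDGET_MAGIC;
        }
        pthread_mutex_unlock(&lock);
        usleep(rand_r(&seed) % 1000);     /* random pacing */
    }
    return NULL;
}

static void *vishnu(void *arg)            /* the user: checks the invariant */
{
    unsigned seed = 2;
    (void)arg;
    while (running) {
        pthread_mutex_lock(&lock);
        if (slot && slot->magic != WIDGET_MAGIC) {
            fprintf(stderr, "BUG: object corrupted mid-life\n");
            abort();
        }
        pthread_mutex_unlock(&lock);
        usleep(rand_r(&seed) % 1000);
    }
    return NULL;
}

static void *mahesh(void *arg)            /* the destroyer */
{
    unsigned seed = 3;
    (void)arg;
    while (running) {
        pthread_mutex_lock(&lock);
        if (slot) { free(slot); slot = NULL; }
        pthread_mutex_unlock(&lock);
        usleep(rand_r(&seed) % 1000);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    pthread_create(&t[0], NULL, brahma, NULL);
    pthread_create(&t[1], NULL, vishnu, NULL);
    pthread_create(&t[2], NULL, mahesh, NULL);
    sleep(5);                             /* let the chaos run for a while */
    running = 0;
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    puts("survived the BVM run");
    return 0;
}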

At various stages of my life, I have tried all five approaches and have succeeded with all five.

Once free from the dogma that “automation must save manpower linearly”, much higher levels of productivity and quality are possible.

What approaches have you seen in your experience? Is this list exhaustive?

Can you suggest more testing patterns? I am planning to embark on a wide survey of some bug databases to find more patterns.

Also, next time, I will highlight how money can be saved by intelligently clubbing administration tools and testing tools. Stay tuned!

First hundred integers in Fibonacci number system

1 1
2 10
3 100
4 101
5 1000
6 1001
7 1010
8 10000
9 10001
10 10010
11 10100
12 10101
13 100000
14 100001
15 100010
16 100100
17 100101
18 101000
19 101001
20 101010
21 1000000
22 1000001
23 1000010
24 1000100
25 1000101
26 1001000
27 1001001
28 1001010
29 1010000
30 1010001
31 1010010
32 1010100
33 1010101
34 10000000
35 10000001
36 10000010
37 10000100
38 10000101
39 10001000
40 10001001
41 10001010
42 10010000
43 10010001
44 10010010
45 10010100
46 10010101
47 10100000
48 10100001
49 10100010
50 10100100
51 10100101
52 10101000
53 10101001
54 10101010
55 100000000
56 100000001
57 100000010
58 100000100
59 100000101
60 100001000
61 100001001
62 100001010
63 100010000
64 100010001
65 100010010
66 100010100
67 100010101
68 100100000
69 100100001
70 100100010
71 100100100
72 100100101
73 100101000
74 100101001
75 100101010
76 101000000
77 101000001
78 101000010
79 101000100
80 101000101
81 101001000
82 101001001
83 101001010
84 101010000
85 101010001
86 101010010
87 101010100
88 101010101
89 1000000000
90 1000000001
91 1000000010
92 1000000100
93 1000000101
94 1000001000
95 1000001001
96 1000001010
97 1000010000
98 1000010001
99 1000010010
100 1000010100
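
These are the Zeckendorf representations: each integer is written as a sum of non-consecutive Fibonacci numbers (1, 2, 3, 5, 8, 13, …), with a 1 marking each Fibonacci number used. A minimal C sketch that regenerates the list – my own illustration of the standard greedy algorithm:

#include <stdio.h>

/* Print n in the Fibonacci (Zeckendorf) number system: greedily subtract
   the largest Fibonacci number <= n, emitting a 1 for each Fibonacci
   number used and a 0 for each one skipped. */
static void print_fib_representation(unsigned n)
{
    unsigned fib[32];
    int count = 0;

    /* Build the Fibonacci numbers 1, 2, 3, 5, 8, ... up to n. */
    unsigned a = 1, b = 2;
    while (a <= n) {
        fib[count++] = a;
        unsigned next = a + b;
        a = b;
        b = next;
    }

    printf("%u ", n);

    /* Walk from the largest Fibonacci number down to 1. */
    for (int i = count - 1; i >= 0; i--) {
        if (fib[i] <= n) {
            putchar('1');
            n -= fib[i];
        } else {
            putchar('0');
        }
    }
    putchar('\n');
}

int main(void)
{
    for (unsigned n = 1; n <= 100; n++)
        print_fib_representation(n);
    return 0;
}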

2-D programming – about function prototypes, definitions and calls

Talking to people about my ideas helps me fine-tune them.

Recently I talked to a bunch of students about implementing the ideas from my earlier post on 2-D programming. That got my brain ticking: I thought about function prototypes, definitions and function calls in 2-D programming.

Function prototyping and declaration:

I had earlier written about improving function prototyping by adding some contract to each argument. That is, rather than writing the prototype of fwrite just as

size_t fwrite ( const void * ptr, size_t size, size_t count, FILE * stream );

we should also mention the hidden assumptions (the contract), for the better understanding of the programmer who is going to use fwrite, as:

size_t fwrite ( const void * ptr != NULL, size_t size > 0, size_t count >= 0, FILE * stream != NULL);

That is, human readability is also important in code maintenance, and an explicit statement of the contract is helpful in avoiding a lot of bugs.

However, when you also add default values to the prototype, the 1-D arrangement becomes very difficult to read, because each argument now has four attributes associated with it – type, argument name, default value and contract. The same exercise has to be repeated when the function is defined:

size_t fwrite ( const void * ptr != NULL, size_t size > 0, size_t count >= 0, FILE * stream != NULL) {
...
}
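
Until a compiler accepts such contract syntax, the closest approximation in plain C is a checked wrapper. Here is a minimal sketch (fwrite_checked is my own hypothetical name); the asserts state exactly the conditions written into the prototype above:

#include <assert.h>
#include <stdio.h>

/* A wrapper that makes fwrite's hidden contract explicit. The asserts
   document the contract and enforce it in debug builds. */
size_t fwrite_checked(const void *ptr, size_t size, size_t count, FILE *stream)
{
    assert(ptr != NULL);
    assert(size > 0);
    /* count >= 0 holds trivially, since size_t is unsigned */
    assert(stream != NULL);
    return fwrite(ptr, size, count, stream);
}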

A 2-D definition may make it very succinct and intuitive. (Here, assume fwrite defaults to stdout as the stream to write to.)

/*type*/      size_t   const void *   size_t   size_t   FILE *
/*name*/      fwrite   ptr            size     count    stream
/*default*/                                    1        stdout
/*contract*/  >= 0     != NULL        > 0      >= 0     != NULL

Now that we have the freedom to redesign the prototype – or the first line of the function – with one more dimension, the following will suit the reader’s psychology best:

/*name*/      fwrite   ptr            size     count    stream
/*type*/      size_t   const void *   size_t   size_t   FILE *
/*contract*/  >= 0     != NULL        > 0      >= 0     != NULL
/*default*/                                    1        stdout

At first glance (row #1), this definition shows that fwrite will need a data pointer, the size of an element, the number of elements and a destination stream to write to. The reader doesn’t get lost in the other details if they are not needed. As such, the human mind thinks about the arguments of a function first and only later about their types, default values or restrictions. Often, these attributes are understood better once the function itself is fully understood. By describing the list of arguments in a “one row view”, the writer communicates to the reader very quickly what the function is all about.

The counterpart of the function call – or statement structure – can also become interesting in 2-D. Suppose you are a prehistoric-era mathematician trying to find the area of an ellipse. You would deduce that AreaOfAnEllipse has to do something with its major axis and minor axis, and nothing more. We programmers all have an intuitive feel for which arguments will be needed to evaluate a function – and only later figure out the necessary functions and operations to fit them in. 2-D programming structures can be thought out to do the same.

Take, for example, the area of an ellipse with its semi-axes being A and B. We will not require function calls here; operators alone will suffice:

/*variables*/                 areaOfAnEllipse =   pi   A   B
/*operations and functions*/                        *
/*operations and functions*/                            *

[As you can see, it is kind of wobbly. Someone with theory background may help me fine tune the presentation.]

For function calls it is easier. For example, if you are trying to determine the value of gobbledegook as cos A*sin B – sin A*cos B, 2-D programming may look like:

/*variables*/                 gobbledegook =   A     B     A     B
/*operations and functions*/                   cos   sin   sin   cos
/*operations and functions*/                      *           *
/*operations and functions*/                            -
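
For comparison, the same computation in today’s 1-D C:

#include <math.h>

/* The 2-D structure above, flattened back into one dimension.
   (By the angle-subtraction identity, this is just sin(B - A).) */
double gobbledegook(double A, double B)
{
    return cos(A) * sin(B) - sin(A) * cos(B);
}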


It is tempting to continue to think about what may happen to control structures and expressions (apart from the 2-D nesting we discussed earlier). However, it is late in the day and I am under allergy medication. Waiting for your comments, folks!

Smarter keyboard

Now you feel I am stretching it a bit too much.

Come on! What is there for a keyboard to get smarter about? A keyboard could be sleeker, fancier, pricier – and even just projected on a screen or a table top. But smarter?

Yes, why not? A keyboard can be made smarter. The bad news is, it is being made redundant by technology.

A keyboard with a built-in dictionary can be built. Why can’t this statement be typed with fewer (and funkier) keystrokes? Like…

User types: ‘W’ – keyboard pushes ‘h’ – user mentally accepts so moves forward to

User types ‘y’ – keyboard pushes a blank space – user mentally accepts so moves forward to

User types ‘c’ – keyboard pushes ‘h’ – user presses an “override” key + ‘a’ – keyboard pushes ‘n’ – user mentally accepts so moves forward

User types an apostrophe – keyboard pushes ‘t’ – user mentally accepts so moves forward to

User types a blank space

User types ‘t’ – keyboard pushes ‘h’ – user mentally accepts so moves forward to …

You can imagine the keyboard as a thought reader with an override key.
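
Here is a minimal sketch of that thought reader in C. The tiny dictionary, ordered by assumed frequency, is of course hypothetical – a real keyboard would carry a full one:

#include <stdio.h>
#include <string.h>

/* Dictionary ordered by (assumed) word frequency; the keyboard guesses
   the continuation of the first word matching what the user has typed. */
static const char *dict[] = { "why", "chant", "can't", "the", "this" };
enum { NWORDS = sizeof dict / sizeof dict[0] };

/* Return the keyboard's guess for the next character, or 0 if no
   dictionary word extends the current prefix. The "override" key on the
   real device simply discards the guess and keeps the user's letter. */
static char predict(const char *prefix)
{
    size_t len = strlen(prefix);
    for (int i = 0; i < NWORDS; i++)
        if (strncmp(dict[i], prefix, len) == 0 && dict[i][len] != '\0')
            return dict[i][len];
    return 0;
}

int main(void)
{
    printf("after \"c\"    the keyboard pushes '%c'\n", predict("c"));     /* 'h' */
    printf("after \"ca\"   the keyboard pushes '%c'\n", predict("ca"));    /* 'n' */
    printf("after \"can'\" the keyboard pushes '%c'\n", predict("can'"));  /* 't' */
    return 0;
}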

Interestingly, this problem is taken care of in application software these days.