We hear complaints like: “Why can’t QA catch this bug in the lab?”
In my line of business, such bugs often arise because of loaded servers. Delayed responses from a loaded server cause the DUTs (devices under test) to queue up data, run short of memory, and start misbehaving. If conventional testing is “stress” testing, this could be thought of as “boredom” testing.
QA labs are generally designed for maximum “test throughput”, that is, the maximum number of test cases executed in a given time. A typical QA manager (including me) would like the broadest sweep in the shortest amount of time. As you can see, slow servers are the antithesis of such thinking!
Also, QA typically has limited network and traffic-generation capabilities. Come on, if your QA lab is your biggest customer, you should be in some other business!
Naturally, QA is likely to miss bugs that arise from delayed DHCP, DNS, or authentication responses.
Then the question is, can a lab be specifically designed to test sluggishness? Yes.
The key lies in proxying.
Say you have a DUT that needs to be tested under delayed responses. Rather than forwarding its requests to a real server (which you have kept in top shape, alas!), forward them to a Linux box. This Linux box receives the queries, waits for a configured amount of time, and then forwards them to the actual server. In this way, most queries can be delayed in a controlled fashion, except where the server's IP address itself plays a key role.
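As a rough illustration of the idea, here is a minimal delaying UDP proxy in Python. This is only a sketch under assumptions not in the original post: it handles UDP protocols such as DNS, the delay and addresses are parameters you would pick for your lab, and the DUT is simply pointed at this box instead of the real server.

```python
# Minimal delaying UDP proxy (sketch): receive a datagram, sleep for a
# configured delay, forward it to the real upstream server, and relay
# the upstream reply back to the original client.
import socket
import threading
import time

def delay_proxy(listen_addr, upstream_addr, delay_s, stop_event):
    """Run until stop_event is set, delaying each request by delay_s seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen_addr)
    sock.settimeout(0.2)  # wake up periodically to check stop_event
    while not stop_event.is_set():
        try:
            data, client = sock.recvfrom(4096)
        except socket.timeout:
            continue

        # Handle each query in its own thread so one slow exchange
        # does not block the others.
        def handle(data=data, client=client):
            time.sleep(delay_s)  # the configured "sluggishness"
            up = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            up.settimeout(5)
            try:
                up.sendto(data, upstream_addr)
                reply, _ = up.recvfrom(4096)
                sock.sendto(reply, client)
            except socket.timeout:
                pass  # upstream never answered; the DUT sees a drop
            finally:
                up.close()

        threading.Thread(target=handle, daemon=True).start()
    sock.close()
```

To use it, you would configure the DUT's DNS (or similar) server address to be this Linux box and tune `delay_s` per test run. As noted above, this trick does not cover cases where the real server's IP address matters to the protocol itself.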
Has anyone tried this?