
Automated Regression Testing (GUI only) May 11, 2010

Posted by stuffilikenet in Publishing Tools.

I have been tasked for some time with automating certain aspects of software testing at my place of employment (the leading maker and seller of PCR devices, but not, alas, the patent-holder for much of the intellectual property we need to sell product). Specifically, the job is testing nightly builds of nearly-stable products with an eye to checking basic functionality: creating new protocols and plates, making a simple RT run, opening several data files, and checking each form in the data analysis package. I had previously examined Vermont Hightest and found it too unstable for routine use, so I began looking at other packages that purport to do the same job. Here are quick notes on that unhappy task:

 

H-P’s Quick Test Professional

This product is the child of the formerly famous WinRunner, and is a complex, many-headed Hydra with 1,656 pages of disjointed, user-unfriendly documentation, a 136-page tutorial (web-centric, and little help with my specific questions), and a mere 108-page installation manual. For the potential tester exploring with a two-week license, there is also the "support" of a very slow and sparsely populated web forum.

In short, docs and help suck.

This does not give the evaluator much confidence in the cleverness of the authoring company’s management. Surely, during the sales-decision period, a seller would try to make the product as appealing as possible and suggest that the company behind it has huge support resources.

Didn’t happen here.

Unsurprisingly, there is an entire ecosystem surrounding this product, centered on teaching people how to use it. I guess many customers didn’t take the hint from their two-week evaluation experience. They learned pretty fast after that, since most of this ecosystem of tutorials, courses and certifications isn’t H-P’s.

This is not to say that the software itself is without value; it does perform several testing tasks which we find useful, such as file manipulation and form "exercising".  It does these, however, in a way that makes manually checking the results of such tests absolutely required (explanation to follow below in the boring part "Things QTP Does Poorly").  It offers apparently very fine scriptable control over some aspects of testing, and no control at all over others.  In short, it is probably not suitable for our purposes.  Annoyingly, it took about two weeks to be pretty sure of this, thanks to the convoluted documentation and non-existent support.  Marketing Documentation has a lot to answer for at H-P.
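File manipulation, at least, is nothing exotic: a QTP script is plain VBScript, so the ordinary Scripting.FileSystemObject works inside a test action. A minimal sketch, using invented placeholder paths rather than our real data files:

' Hypothetical cleanup/setup step inside a QTP action; the paths are placeholders.
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")
If fso.FileExists("C:\TestData\lastrun.dat") Then
    ' archive the previous run's data file, overwriting any older copy
    fso.CopyFile "C:\TestData\lastrun.dat", "C:\TestData\Archive\lastrun.dat", True
    fso.DeleteFile "C:\TestData\lastrun.dat"
End If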

Things QTP Does Poorly
1) Documentation – I believe I have said this before.
2) Automation – If there is a way to automatically start a test or series of tests every night, I’m not sure what it is (see item 1, above).  It may not exist.  (If it does, my best guess at it is sketched just after this list.)
3) Reporting – When QTP records the original (baseline) images of forms, it captures them immediately, usually before the form has finished drawing.  At playback, though, it makes bitmap comparisons only after the form is fully drawn, so the baseline and runtime images never match, and every report marks the bitmap comparison as a Fail if one is requested.  I can adjust the delay before the bitmap is captured during the run, but not during the recording process.
4) The recordings are sometimes inaccurate (especially with regard to mouse dragging and dropping), but unlike Vermont Hightest the scripts can be edited to correct this in a useful way, by trial and error, pretty quickly.  Experience is helpful here.
5) Does not work well with a two-monitor system (don’t ask).
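On item 2: if a hands-off nightly start does exist, the likeliest route is QTP’s COM automation object ("QuickTest.Application") driven from a small VBScript fired by the Windows Task Scheduler. A minimal sketch, assuming that object behaves as advertised; the test path and results folder are made-up placeholders:

' Hypothetical nightly launcher, run as a .vbs from the Windows Task Scheduler.
' The test path and results folder below are placeholders, not our real ones.
Dim qtApp, qtResultsOpt

Set qtApp = CreateObject("QuickTest.Application")    ' QTP's COM automation object
qtApp.Launch
qtApp.Visible = False

qtApp.Open "C:\Tests\NightlySmoke", True             ' open the recorded test read-only

Set qtResultsOpt = CreateObject("QuickTest.RunResultsOptions")
qtResultsOpt.ResultsLocation = "C:\Tests\Results\NightlySmoke"

qtApp.Test.Run qtResultsOpt                          ' blocks until the test finishes
WScript.Echo "Run status: " & qtApp.Test.LastRunResults.Status

qtApp.Test.Close
qtApp.Quit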

Things QTP Does Well
1) Scripts execute promptly, and completely.
2) Programmed delays work well, although they are a (probably) undocumented feature.  I found them through desperate experimentation and a hint in the forums (there is a short fragment just after this list). Unfortunately, they didn’t solve the recording problems I mentioned earlier.
3) Does make bitmap comparisons of what it considers active areas, except that these may not be the areas we consider active.  This may be adjustable, but who can say without a guide to the documentation?
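Concretely, the delay amounts to a bare Wait statement edited into the recorded script by hand; a fragment along these lines, where the window and control names are invented placeholders rather than our real forms:

' Fragment of a QTP action, edited after recording; object names are placeholders.
Window("Data Analysis").WinButton("Open").Click
Wait 10   ' give the form ten seconds to finish drawing before anything is captured
Window("Data Analysis").Check CheckPoint("AnalysisForm")   ' then run the checkpoint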

 

AutomationAnywhere’s Testing Anywhere

Although recording with Testing Anywhere is very easy, the playback leaves much to be desired.  Screen redraws of our application are not completed, possibly because Testing Anywhere itself is consuming too much CPU or memory.  Closing all other applications before running the test makes no difference.  The script itself cannot be modified to insert a delay between a mouse action and the first bitmap comparison, a deeply stupid oversight.

This makes bitmap comparisons useless and leads to false failures.  In any case, bitmap comparisons fail even when identical bitmaps are presented, although there may be some real (test-related) reason for this.

Bitmap comparison failure alone is enough to disqualify this software for our purposes.  I cannot recommend this product at all.

I tried to get their helpdesk guys to give me a hint.  They watched it happen and suggested I use the object-capture method for recording instead, but that method doesn’t recognize many of the objects on our forms, so it’s a complete bust as well.

 

Test Complete 7

Test Complete 7 is also a complete bust.  It really doesn’t understand which items have, or should have, focus.  It has only one setting for delay, so to make it pause and wait for new forms to open or the current one to redraw, one must choose the longest delay that could possibly be needed…multiplying the test time, possibly, by factors of twenty or more.  Not good.

So, when it fails to recognize the Startup Wizard, it closes it as an unexpected window and tries to perform mouse actions on the underlying form…which leads to total inaction and failure of all test checkpoints.  At least, that is what happened the first time I tried it.

The second time, I tried it without any checkpoints, and it just barreled through all the test gestures without checking anything whatsoever.  When the time came to wait for the run to complete, it plowed straight on through the remaining mouse actions and reported the test as a complete failure.

Which I guess it is.

Oh, and free support for prospective customers is non-existent for application testers, but not for web-development testers.  I presume they know where the money is.  Just sayin’.

My cow-orker (hyphenation intentional) Reza was able to get it to recognize the Startup Wizard, but the other problems remained.
