Forum Discussion
AlexKaras wrote:
a) TestComplete provides the call stack in the Call Stack log pane. I think the provided call stack is good enough, and it is bound to the test code, making it possible to navigate to the code from the call stack. At the same time, I don't see any good reason to post a full call stack to the test log itself, because the test log should be compact, readable and understandable for non-programmers (like manual testers or managers). Thus I don't see a reason to get the call stack from the exception.
Rightly pointed out. As far as the call stack for automation script errors is concerned, TestComplete's Call Stack log pane is the best place to see it. What RUDOLF_BOTHMA is trying looks complicated, and I don't think we need logic like this in automation scripts. In my view, most runtime errors can be identified during our own test runs, before running against the AUT. Also, most errors during execution relate to object identification/timing issues, which try...catch can't catch.
At the end of the day, automation should verify the AUT without false positives, and when there is a failure, a good framework and standard scripting practices will help identify the issue. Working out where and how each script failed can be a nightmare, but it's a happy headache to take on and resolve by trying various scenarios.
shankar_r wrote:
Also, most errors during execution relate to object identification/timing issues, which try...catch can't catch.
Yes, absolutely valid and correct point.
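The point generalizes beyond TestComplete: object-identification and timing failures usually surface as "object not found" results rather than thrown exceptions, so a polling existence check works where try...catch does not. A minimal runnable sketch in plain JavaScript (waitForObject and the finder callback are hypothetical names, not TestComplete's API):

```javascript
// Hedged sketch: poll for the object instead of relying on try...catch.
function waitForObject(find, timeoutMs) {
  const deadline = Date.now() + timeoutMs;
  do {
    const obj = find();            // returns the object or null, never throws
    if (obj !== null) return obj;  // found it within the timeout
  } while (Date.now() < deadline); // a real implementation would sleep between polls
  return null;                     // caller checks for null, much like Exists
}

// Simulated AUT: the control only becomes identifiable after ~50 ms.
const appearsAt = Date.now() + 50;
const button = waitForObject(
  () => (Date.now() >= appearsAt ? { name: "OkButton" } : null),
  2000
);
console.log(button ? button.name : "not found"); // "OkButton"
```

The same shape underlies TestComplete's own WaitChild-style methods: the caller tests the result instead of catching an exception.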
- tristaanogre · 6 years ago · Esteemed Contributor
I can't add much more beyond what AlexKaras and shankar_r have added. In my opinion, exception handling for automated testing should have the primary goal of trapping errors in the test. Sure, if the underlying code is complicated enough that there are code bugs, then you need a bit more. However, the truth of the matter is that the automated test failed, indicating a possible failure in the AUT. At this point, the call stack at the failure point provided by TestComplete is sufficient to show where to start the investigation and determine the root cause of the failure. When you make your automation code TOO complicated, you spend all your time debugging the automation code and not enough time actually validating/verifying your AUT.
- AlexKaras · 6 years ago · Champion Level 3
tristaanogre wrote:
When you make your automation code TOO complicated, then you spend all your time debugging the automation code and not enough time actually validating/verifying your AUT.
And I would like to add even more:
-- Unfortunately, it is a pretty rare case that the tested application is designed and documented well enough for test automation to really be done in parallel with development;
-- In most cases, test automation (or its correction to match the application's changed behaviour) is done after the development task is completed;
-- The above means that the more time it takes to put an automated test into production, the more time manual testers will spend verifying things that can and should be verified automatically. And this means an increased load on manual testing and decreased efficiency in manual exploratory verification of complex, corner and non-standard cases.
With the above in mind, I think that sometimes it is better to have less perfect (from the classical development point of view) test code in favor of code that is more easily understood and modified.
The less time it takes the person who supports the test code to put it into production, the more time manual testing has for extended application verification.
- RUDOLF_BOTHMA · 6 years ago · Community Hero
Hi all,
Back from leave. Some good pointers there, thanks. Responses/updates/queries as follows: :smileyhappy:
Just keep in mind that the throw("1") and throw("2") in my code is me literally throwing an exception, not pseudo-code to illustrate the point.
a) TestComplete provides call stack in the Call Stack log pane.
Yup, I'm accepting of that point. The call stack in the log should cover what is required for users. It's just me as a developer that would find it useful and even that I can handle with debugging if required.
Also, most errors during execution relate to object identification/timing issues, which try...catch can't catch.
I think we are all with you on that one. I think object recognition issues fill up more lines of code in my scripts than the actual testing the script does.
In case test code fails because of an unexpected situation (e.g. when attempting to write to a read-only file), then it is perfectly fine if the code fails. This reveals rather than hides the code problem, which can be immediately addressed. If it is possible that the given file can be read-only, then the test code must be improved to make the file writable before writing to it. If the file must not be read-only, then the test code must be improved with a verification block that clearly reports the problem to the test log.
Hmm, my example is a bit rough and not a perfect representation of what my code does, but it's the best I can think of without trying to show you what my code doesn't do yet because I'm still trying to get input on the best way to do it :smileyfrustrated: Wow, what a messy sentence...
What I'm thinking of at the moment is negative confirmation rather than positive confirmation. Perhaps if the file can't be written to because it's read-only, the test is a PASS rather than a fail. (No, I can't think of a case where this would be a test either, but bear with me on this one.) If I write my script to always ensure the file is not read-only, I lose that ability using this script function.
Now, if I wrote a script that's very simple and can easily be used by standard users...
function WriteLineToTextFile(filename, text) {
    // get file
    // open file
    // write line in
    // save file
}
Can't get much simpler than that. A user can call the function from a script or visualiser and it will work. Unless the file is read-only, in which case it will fail.
I could add code to make the file not read-only by finding the file with fileinfo or the like, then changing the read-only flag in the same function. This means I have one function that is entirely dedicated to doing one job and can only be called in one context. If the code was modified slightly, I could use it in more than one context:
function WriteLineToFileInternal(filename, text) {
    try {
        // get file
        // open file
        // write to file
        // save file
    } catch(e) {
        throw e;
    }
}
I can now use it in two contexts.
function PositiveFeedback(filename, text) {
    try {
        MakeFileWriteable(filename);
        WriteLineToFileInternal(filename, text);
        return true;
    } catch(e) {
        Log.Error("Unable to write to file");
        return false;
    }
}

function NegativeFeedback(filename) {
    try {
        WriteLineToFileInternal(filename, "Test Text");
        // I managed to write to the file, which I shouldn't have been able to
        Log.Error("You shouldn't be able to do this");
        return false;
    } catch(e) {
        if ([e somehow gives away that the file is readonly]) {
            Log.Message("The file was readonly");
            return true;
        } else {
            throw(e); // or return false
        }
    }
}

A user no longer calls WriteLineToFileInternal as before - it's now an internal utility function - but they have the PositiveFeedback function that does exactly the same, with better error handling. If any better handling is required, the PositiveFeedback function can be tweaked internally without users needing to know about it. Does this look like a reasonable implementation? None of these scripts individually is particularly complex. As I said, it's a ramble looking for some feedback, coming from an OO rather than a scripting paradigm. Any feedback is welcome.