Forum Discussion

fyk
11 years ago

Guidelines to improve maintainability and reusability of KDT scripts

Could anyone please send me a link to documentation (papers, case studies, a list of customers, etc.) that reports the successful use of Keyword-Driven Testing (KDT) in a long-term test automation solution? I'm looking for tips, best practices, and guidelines on how to improve the maintainability and reusability of KDT scripts. Such scripts would be created during sprints in an Agile/Scrum methodology and then become part of a regression test suite shared among users/teams located at different sites and in different time zones.

I've contacted SmartBear's support regarding this issue, and they suggested that I try this forum.

Thanks in advance.

8 Replies


  • I don't know if I can answer your questions directly. I'm fairly new to TestComplete, but I can give you a quick user story. I would appreciate any feedback on flaws in this approach, as I am developing it now.

    I sort of inherited 2,000 KDT scripts recently. It turned out that the scripts did not do much verification; they just walked through the application without really checking anything. I was tasked with turning these lemons into lemonade.

    Rewriting the KDT scripts as textual scripts was out of the question; there was way too much to translate. Plus, the testers who wrote and maintain the scripts are not very technical, so they would have problems maintaining textual scripts.

    One of the first things I learned about TC's KDT language is that it is too bulky and awkward for heavy coding. It does not play well with complex things like database access, and creating individual form-field verifications is painfully slow.

    The approach I am taking is to wrap each KDT script in a common textual script function that does a standard test setup, runs the KDT script using KeywordTests.ScriptName.Run(), and does a standard teardown (sketched below).

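    A minimal sketch of such a wrapper, written as a TestComplete Python script unit (Log and KeywordTests are supplied by the TestComplete runtime; the helper functions, test name, and flag parameters are hypothetical):

        # Wrapper: standard setup, run the KDT script, standard teardown.
        # "Log" and "KeywordTests" are globals supplied by TestComplete.
        def run_wrapped_test(test_name, diff_enabled=True, update_expected=False):
            last_key = standard_setup()   # hypothetical helper; notes the last transaction key
            try:
                getattr(KeywordTests, test_name).Run()   # e.g., KeywordTests.OrderEntry.Run()
                if diff_enabled:
                    diff_transactions(last_key, update_expected)   # hypothetical helper
            except Exception as e:
                Log.Error("Wrapped test failed: " + str(e))
            finally:
                standard_teardown()   # hypothetical helper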

    Fortunately, my SUT records all of its important database updates in a transaction table with an autonumber key. After every user operation that adds, changes, or deletes DB records, rows are appended to the transaction table in a consistent order.


    For the standard test setup, I note the key number of the last record. I run the KDT script, then diff the "actual" records in the transaction table against a set of "expected" records from a master DB. I give the wrapper script a flag parameter so that diffing can be disabled, and an "update" flag parameter that causes the "expected" values in the master DB to be replaced with the generated "actual" transactions (see the sketch below).

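    A plain-Python sketch of that setup and diff, with sqlite connections standing in for the real SUT and master DBs (all table and column names are assumptions):

        import sqlite3

        sut_conn = sqlite3.connect("sut.db")        # hypothetical DB paths
        master_conn = sqlite3.connect("master.db")

        def last_transaction_key():
            # Standard setup: note the autonumber key of the last transaction.
            row = sut_conn.execute("SELECT MAX(trans_id) FROM qa_transactions").fetchone()
            return row[0] or 0

        def diff_transactions(start_key, update_expected=False):
            actual = [r[0] for r in sut_conn.execute(
                "SELECT row_data FROM qa_transactions WHERE trans_id > ? ORDER BY trans_id",
                (start_key,))]
            if update_expected:
                # Replace the "expected" rows with the freshly generated "actual" ones.
                master_conn.execute("DELETE FROM expected_transactions")
                master_conn.executemany(
                    "INSERT INTO expected_transactions (row_data) VALUES (?)",
                    [(a,) for a in actual])
                master_conn.commit()
                return
            expected = [r[0] for r in master_conn.execute(
                "SELECT row_data FROM expected_transactions ORDER BY seq")]
            if len(expected) != len(actual):
                print("Row count differs: expected %d, actual %d" % (len(expected), len(actual)))
            for i, (exp, act) in enumerate(zip(expected, actual)):
                if exp != act:
                    print("Mismatch at row %d: expected %r, actual %r" % (i, exp, act))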

    I will be running the scripts against a set of static DBs, with the SUT operating as of a given point in time. When running with the static DBs, I will apply the diff to do my data verification. After a run is completed, I can run it a second time with the "update" flag turned on, which will replace the "expected" data with some or all of the "actual" data.

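    In terms of the wrapper sketched earlier, those two passes might look like this (the test name is hypothetical):

        # First pass against the static DBs: verify via the diff.
        run_wrapped_test("OrderEntry", diff_enabled=True)
        # Optional second pass: adopt the generated "actual" transactions
        # as the new "expected" data.
        run_wrapped_test("OrderEntry", diff_enabled=True, update_expected=True)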

    Because I can diff like this, I can generate a lot of data verification quickly and easily. I will be maintaining the KDT scripts as they are, with no verification steps in them; most KDT script maintenance will be updates to the name mappings. When the scripts get too far out of date, the testers will do a quick re-record of some or all of a failing script. KDT scripts are designed to be quick and easy for less technical testers to generate. Where heavy lifting is needed in a KDT script, I am writing textual scripts to handle it. By taking this approach I can do data verification fast and cheap using textual scripts, and I can keep KDT script maintenance fast and cheap by keeping as much complexity out of the KDT scripts as possible.


    I will also be running the scripts against random DBs with different data. When running with a random DB, I will not apply the diff and will not replace the "expected" values with "actuals"; the KDT scripts will just do a walkthrough of the SUT.

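    With the earlier wrapper sketch, such a walkthrough-only run would simply disable the diff:

        # Random-DB run: no diff, no update; just walk through the SUT.
        run_wrapped_test("OrderEntry", diff_enabled=False)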

    To keep the KDT scripts cheap and easy to maintain, the only hand coding I expect to see in them is operations that keep the script from crashing. I'll let the diff determine whether the data is correct, and let the successful completion of the KDT script determine whether the GUI is operating as expected. I could add more detailed testing to the process, but how cost-effective is it to add more than that to automated regression testing? How likely is it that a label will be dropped as a regression? Regression testing should not be turning up a lot of bugs, and when it does find one, the bug tends to be blatant. Checking the data and walking through the GUI should suffice.


    I plan on having the testers do periodic hand testing as a supplement, to look for regressions not caught by the automatic GUI walkthrough. I think the cost of doing that will be less than the cost of creating and maintaining automated checks of everything in the GUI.


    If I were working with a DB that did not have some kind of built-in transaction recording, I would consider writing a SQL script that generates post-add/update/delete triggers for all of the important tables. Each trigger would append all of the fields, probably pipe- or tilde-delimited, into a single string, along with a key containing the table name and operation (add, update, or delete), and store that info in a QA transaction table with an autonumber key. I would use that table to diff against. Some databases have built-in functionality for recording transactions for things like master-slave syncing; those would be candidates for quick and easy diffing too.

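    A sketch of generating that trigger DDL from Python, in a SQL Server-style dialect (the table list, column names, and the qa_transactions schema are all assumptions):

        # Emit one AFTER INSERT/UPDATE/DELETE trigger per table that appends a
        # pipe-delimited copy of each affected row to a QA transaction table,
        # assumed here to be qa_transactions(trans_id IDENTITY, op_key, row_data).
        TABLES = {
            "Customers": ["cust_id", "name", "balance"],
            "Orders":    ["order_id", "cust_id", "total"],
        }

        TEMPLATE = (
            "CREATE TRIGGER trg_qa_{table}_{op} ON {table} AFTER {op}\n"
            "AS INSERT INTO qa_transactions (op_key, row_data)\n"
            "   SELECT '{table}:{op}', {concat} FROM {source};\n"
        )

        def concat_expr(columns):
            # CAST each column to text and join the values with pipes.
            # (A real version would also wrap each cast in ISNULL to guard against NULLs.)
            casts = ["CAST({0} AS NVARCHAR(MAX))".format(c) for c in columns]
            return " + '|' + ".join(casts)

        for table, cols in sorted(TABLES.items()):
            for op, source in (("INSERT", "inserted"),
                               ("UPDATE", "inserted"),
                               ("DELETE", "deleted")):
                print(TEMPLATE.format(table=table, op=op,
                                      concat=concat_expr(cols), source=source))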

    I am also working on ways to get my KDT scripts to work using Alt and Ctrl keys instead of actual objects. A lot of my application's forms use Alt keys to navigate to fields, and Alt and Ctrl keys to navigate through the GUI. If I can take advantage of that, I can get rid of a lot of the frequent script failures caused by changes to the form objects. For this I am trying out GUI mapping to see what might work, though I don't want to use full-on GUI mapping if I can help it, because of the amount of coding and code maintenance involved.

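    For reference, TestComplete's Keys string syntax uses "~" for Alt, "^" for Ctrl, and "!" for Shift, so a form can often be driven without touching its individual mapped objects. A small sketch (the process, window, and key assignments are hypothetical):

        # Navigate by keystrokes instead of clicking individual mapped objects.
        # "Sys" is supplied by the TestComplete runtime; in Keys syntax
        # "~" = Alt, "^" = Ctrl, "!" = Shift.
        def open_customer_record():
            w = Sys.Process("MyApp").Window("TMainForm")   # hypothetical names
            w.Activate()
            w.Keys("~c")   # Alt+C: jump to the Customer field
            w.Keys("^o")   # Ctrl+O: open the selected record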

    HTH

    Any feedback would be appreciated.

  • TanyaYatskovska
    SmartBear Alumni (Retired)

    Hi Community,

    Do you have some documents that you can share with Flavio?

  • Hello Flavio,

    There is no rocket science in it.

    I can tell you some basic steps:

    1) Start by creating small KDT scripts, e.g., one for Login.
    2) Then gradually start reusing these scripts inside other scripts.
    3) Build up common scripts that form a framework relevant to your product's functionality.
    4) Then start creating the actual test scripts based on the test cases available to you (see the sketch after this list).

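    As a minimal sketch of steps 1, 2, and 4, reusable keyword tests can be strung together from a short script (the same composition works inside a keyword test via its Run Keyword Test operation; all test names here are just examples):

        # Reuse small KDT building blocks (Login/Logout) inside larger tests.
        # "KeywordTests" is a global supplied by the TestComplete runtime.
        def test_create_order():
            KeywordTests.Login.Run()         # shared building block (step 1)
            KeywordTests.CreateOrder.Run()   # the actual test body (step 4)
            KeywordTests.Logout.Run()        # shared cleanup block
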
    During this phase, keep reviewing your scripts for optimization opportunities and for changes applicable to your product.

    You will need to strike a balance between maintaining existing scripts and creating new ones.

    This should be a good start for you, assuming you are comfortable with basic programming and manual testing concepts. Hope it helps.

    Thanks,
    Shrirang




  • I would also like some advice on improving reusability within my test scripts. Unfortunately, I made an early decision to create a new project for each functional area of my application in order to make maintenance of the name mapping easier, but this severely limits reusability between projects, because TestComplete does not support referencing another project's NameMapping. I have worked around this by creating a common project that does not rely on any name mapping items but performs generic actions such as launching the application. However, I would still like to improve the reusability of shared functions within a project. Specifically: if I create reusable functions within keyword tests, how do I then differentiate these from the actual keyword tests that call them (stringing them all together into an actual test)? I guess a naming convention could help, something like the sketch below.
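
    For example (all names hypothetical), the reusable building blocks could get a lib_ prefix while the end-to-end tests that string them together get a tc_ prefix:

        # "lib_" = reusable helper keyword tests; "tc_" = actual end-to-end tests.
        # "KeywordTests" is a global supplied by the TestComplete runtime.
        def tc_submit_invoice():
            KeywordTests.lib_Login.Run()
            KeywordTests.lib_OpenInvoiceForm.Run()
            KeywordTests.lib_FillAndSubmitInvoice.Run()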