Incorrect Date Calculation When Adding One Month with AddMonths Function
I'm experiencing unexpected behavior with the AddMonths function in TestComplete. When I take the date 30/01/2028 and add one month with AddMonths(1), the result is 29/02/2028 instead of the 01/03/2028 I expected. Similarly, 29/01/2028 also returns 29/02/2028, which is arguably correct since February 2028 has 29 days, but it is inconsistent with day-overflow expectations. And 31/08/2028 plus one month yields 30/09/2028, whereas I would expect 01/10/2028 under day-overflow logic. It looks as if AddMonths clamps the result to the last day of the target month rather than overflowing into the next month, which matters for dates near the end of the month and in leap years. Please clarify whether this clamping is intended behavior or a bug in the date calculation logic.
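The results the poster reports are consistent with "clamp to the last day of the target month" semantics. A minimal sketch of that rule in plain JavaScript (this is an assumption about how AddMonths behaves, not the TestComplete implementation):

```javascript
// Add `months` to a 1-based year/month/day date, clamping the day to the
// last day of the target month (the behavior the reported results suggest).
function addMonthsClamped(year, month, day, months) {
  var m = month - 1 + months;               // 0-based month index
  var y = year + Math.floor(m / 12);
  m = ((m % 12) + 12) % 12;
  // Day 0 of the following month is the last day of the target month.
  var lastDay = new Date(y, m + 1, 0).getDate();
  return { year: y, month: m + 1, day: Math.min(day, lastDay) };
}

console.log(addMonthsClamped(2028, 1, 30, 1)); // 30/01/2028 -> 29/02/2028
console.log(addMonthsClamped(2028, 1, 29, 1)); // 29/01/2028 -> 29/02/2028
console.log(addMonthsClamped(2028, 8, 31, 1)); // 31/08/2028 -> 30/09/2028
```

Under this rule all three reported results are "correct by design"; overflow into the next month (01/03/2028, 01/10/2028) is a different, equally defensible convention.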
Saving Logs as part of Azure DevOps pipeline

Hello there. We have a TestComplete project suite that runs an execution plan in a test project. At the end of the execution plan we generate an HTML log for the test run, showing the results of all the steps in the plan. This works fine in TestComplete itself and produces an HTML report with every step included. I am now trying to run this project suite from a pipeline in Azure DevOps Server. I have the pipeline up and running, but when the suite runs via the pipeline it only generates a log for the last step in the execution plan (the SavedLogs step). I assume the test runner is executing the test cases as separate processes, so it generates individual logs per test case. Is there any way to get the pipeline to run all the tests in the project suite together, so that a complete HTML log for the run can be produced?
Process "crashed" and test fails when closing the application in test

Hi all. We use TestComplete 15 to perform regression testing on our Delphi desktop application. We have a simple test: open the application, a short wait, then close the application. TestComplete fails the test, stating that the "TestApp.exe" process crashed, and test execution is interrupted. Closing the application via the X button or through Main Menu - Exit results in the same error. The Sys.WaitProcess method also didn't help, and we have tried the .Close() method as well; maybe I'm placing Sys.WaitProcess in the wrong spot. I would be very grateful for a JavaScript code snippet and any help. Thanks.
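One common ordering is: ask the process to close, then wait for it to disappear before the test moves on. A sketch of that flow in plain JavaScript (the object names Sys, Process, WaitProcess, and Exists mirror TestComplete's API, but the stubs below are stand-ins so the control flow is visible outside TestComplete):

```javascript
// Stand-in for TestComplete's Sys object: the "process" exists until Close().
var Sys = {
  _running: true,
  Process: function (name) {
    var self = this;
    return {
      Exists: self._running,
      Close: function () { self._running = false; }
    };
  },
  WaitProcess: function (name, timeoutMs) {
    // The real WaitProcess waits up to timeoutMs; the stub just reports state.
    return { Exists: this._running };
  }
};

function closeTestedApp(processName) {
  var proc = Sys.Process(processName);
  if (proc.Exists) {
    proc.Close(); // ask the application to shut itself down first
  }
  // Wait for the process to actually disappear before the test continues,
  // so a normal exit is not misread as a crash mid-teardown.
  var stillThere = Sys.WaitProcess(processName, 10000).Exists;
  return !stillThere; // true when the process is gone
}

console.log(closeTestedApp("TestApp.exe"));
```

If the process still "crashes" with this ordering, the failure may be in the application's own shutdown path rather than in the test.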
Generate a segregated report identifying error as script failure or an application issue

Hi Team. Problem statement: how can you identify through the logs whether an error is a script failure or an application issue, and generate a segregated report at the end of the test execution?

========================================================================

I'm trying to find out whether anyone has built (or is already using) a framework that attaches logs to the test execution report once the run completes, and then goes a step further by segregating any failures into script issues versus application issues. I understand that failures cannot be fully segregated without analysis, but some errors can be clearly identified as script errors rather than application issues. Does TestComplete provide anything to identify such errors? If yes, what are they? My aim is to reduce analysis time (so the report can be shared earlier), since not all errors in the logs deserve the same priority. Thank you in advance for your thoughts and inputs.
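One way to get a first-pass segregation is to tag failures at the moment they are logged, rather than afterwards: exceptions thrown by the test code itself (missing helpers, bad references) are tagged SCRIPT, while verifications that ran cleanly but found the wrong application state are tagged APP. A sketch of that idea in plain JavaScript (not a built-in TestComplete feature; the checks below are hypothetical):

```javascript
var failures = [];

// Run a check; a false result is an application issue, an exception
// inside the check itself is a scripting issue.
function verify(description, condition) {
  try {
    if (!condition()) {
      failures.push({ type: "APP", description: description });
    }
  } catch (e) {
    failures.push({ type: "SCRIPT", description: description + ": " + e.message });
  }
}

// Hypothetical checks for illustration:
verify("title is present", function () { return "Orders".length > 0; });  // passes
verify("row count matches", function () { return [].length === 5; });     // APP failure
verify("helper exists", function () { return missingHelper(); });        // SCRIPT failure

var summary = { APP: 0, SCRIPT: 0 };
failures.forEach(function (f) { summary[f.type] += 1; });
console.log(summary);
```

At the end of the run, the summary object (or the tagged failures list) can be written into the log, giving a report that is already split by failure type before anyone starts analyzing.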
Test script level summary can save as html when we export execution log file from TestComplete

Currently, after a TestComplete execution we are able to export the log file as HTML, but it contains only test case details such as execution time and execution status; the test-script-level summary is not visible directly on the main HTML page. Only by clicking each individual test script link can we see the script-level summary details. Including a test-script-level HTML page when exporting the execution log would make it possible to validate status down to the detailed test step level.
Add feature to create Bug/Issue in Azure DevOps from TestComplete

I have checked the TestComplete documentation and could not find any feature to create a Bug/Issue in Azure DevOps directly from TestComplete logs. Could you implement this feature in TestComplete? It would be helpful for automating the process of posting a bug on failure. Currently it appears to be possible to create issues only in Jira, Bugzilla, and QAComplete.
Use Variables in the sql connectionstring

Hi there. I'm using two environments with two databases: one to create the test cases and one to execute them. I want to check the results with DBTables custom queries (SSMS), and I use a connection string for this. How can I make this string variable? When I log in to the ALPHA environment I want the string to contain ALPHA and server01; when I log in to the BETA environment I want it to contain BETA and server02. Greetings, Sjef van Irsel
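One approach is to build the connection string from a single environment key instead of hard-coding it. A sketch in plain JavaScript (the environment names and server names come from the post; the database names and the exact connection-string keys are assumptions that depend on your driver):

```javascript
// Map each environment key to its server and database.
var environments = {
  ALPHA: { server: "server01", database: "ALPHA" },
  BETA:  { server: "server02", database: "BETA" }
};

function buildConnectionString(envName) {
  var env = environments[envName];
  if (!env) {
    throw new Error("Unknown environment: " + envName);
  }
  // Typical SQL Server OLE DB keys; adjust to match your actual string.
  return "Provider=SQLOLEDB;Data Source=" + env.server +
         ";Initial Catalog=" + env.database +
         ";Integrated Security=SSPI;";
}

console.log(buildConnectionString("ALPHA"));
console.log(buildConnectionString("BETA"));
```

In TestComplete, `envName` could come from a project variable set per environment, and the resulting string assigned to the DBTables element's connection string before the checkpoint runs.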
How To: Read data from the Windows Registry

Hello all, I have recently learned how to retrieve data from the Windows registry in JavaScript test units. I am using this to return OS information and application path information. It is very useful when added to the EventControl_OnStartTest event code, because it lets you log OS information and other needed data at each test run. Some test management systems may provide this information for you, or it may appear in the data produced by a pipeline run; this approach embeds the information directly into your test log.

SmartBear KB links: Storages Object, Storages Object Methods, Storages.Registry Method, Section Object, GetSubSection Method.

This bit of code returns the Product Name and Current Build from the registry. The location may vary between OS versions, so you will want to check it with RegEdit.

```javascript
let Section = Storages.Registry("SOFTWARE\\Microsoft\\Windows NT", HKEY_LOCAL_MACHINE);
let regKeyString = Section.GetSubSection("CurrentVersion").Name;
let productIdString = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true).GetOption("ProductName", "");
let currentBuildString = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true).GetOption("CurrentBuild", "");
Log.Message("Windows Version: " + productIdString + " Build: " + currentBuildString);
```

I have also found the need to look up and set an application path and work folder in the project's TestedApp when running through a pipeline, because the pipeline deploys the application to a non-standard path.
```javascript
let Section = Storages.Registry("SOFTWARE\\WOW6432Node\\<_yourSectionName>\\", HKEY_LOCAL_MACHINE);
let regKey = Section.GetSubSection(<_yourSubSectionName>).Name;
let Path = Storages.Registry(regKey, HKEY_LOCAL_MACHINE, 0, true).GetOption("", "");
let WorkFolder = Storages.Registry(regKey, HKEY_LOCAL_MACHINE, 0, true).GetOption("Path", "");

let appIndex = TestedApps.Find(<_yourAppName>);
if (appIndex >= 0) {
  if (TestedApps.Items(<_yourAppName>).Path != Path) {
    TestedApps.Items(<_yourAppName>).Path = Path;
  }
  if (TestedApps.Items(<_yourAppName>).WorkFolder != WorkFolder) {
    TestedApps.Items(<_yourAppName>).Params.ActiveParams.WorkFolder = WorkFolder;
  }
} else {
  Log.Error("TestedApp " + <_yourAppName> + " does not exist.");
  Runner.Stop(true);
}
```

I hope you find these links and code examples as useful as I have! Have a great day!
ReadyAPI/SoapUI JUnit report with Maven

I'm trying to implement CI for my test automation project using Maven with Bamboo. I need to generate the JUnit report so it can be parsed in Bamboo. I'm using this POM:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>com.smartbear.soapui</groupId>
      <artifactId>soapui-maven-plugin</artifactId>
      <version>4.6.1</version>
      <executions>
        <execution>
          <phase>test</phase>
          <goals>
            <!-- Do not change. Commands the Maven plugin to run a functional test. -->
            <goal>test</goal>
          </goals>
          <configuration>
            <projectFile>${soapui.projectfile}</projectFile>
            <exportAll>true</exportAll>
            <printReport>true</printReport>
            <testFailIgnore>true</testFailIgnore>
            <outputFolder>${project.build.directory}/Junit</outputFolder>
            <junitReport>true</junitReport>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

But the JUnit report that gets generated is not what I expect, and when I use the JUnit parser after the build, the test section shows only limited information. We need a detailed test report with successful and failed tests. Thank you in advance for your help. Best regards, Smilnik
Mark FAILED file/XML comparison nodes with red instead of a green checkmark

Currently, when a file comparison fails because of a difference higher than the set tolerance, the results are confusing: they are shown with a green checkmark! To say the least, this is counter-intuitive, and it seems easy to fix. Examples (taken from the TC docs):