DevOps: Skillset, but with a new Mindset

1 01 2020

https://itnext.io/do-not-put-devops-in-a-cage-3604a83821e1

Joe McKendrick wrote in an article at ZDNet that DevOps ‘requires multiple teams to work closely with each other, side by side, on a day-to-day basis, to meet the significantly shrunken delivery timelines.’

At my current organization, we have been working towards Agile sprints and a DevOps/CloudOps culture. The attempts have been sincere, but the change in mindset requires a lot of effort from both the management and the people working across the projects.

The management needs to understand that the workflow of a Dev+Ops cycle needs a lot of hand-holding and a certain degree of automation across the development, build, deploy and test phases of the application. On the other hand, the people who work on the projects need to work towards the goal of making tasks automated and easy to build and deploy through the use of scripts. All this entails that the test/QA team be involved with the design/development process from the requirement analysis phase onwards. This is currently missing, and it works against the concept of an Agile DevOps view.
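
To make "automation across the development, build, deploy and test phases" concrete, here is a minimal sketch of a post-deployment smoke test that a CI job (a Jenkins stage, say) could run as a quality gate. The health URL, the system property and the JUnit 4 usage are all illustrative assumptions, not details of any project mentioned here.

```java
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Post-deployment smoke test: a CI stage runs this right after a deploy
// and fails the pipeline if the application did not come up.
public class DeploymentSmokeTest {

    // Hypothetical endpoint; in practice the pipeline would inject this
    // per environment (dev/test/staging).
    private static final String HEALTH_URL =
            System.getProperty("app.health.url", "http://localhost:8080/health");

    @Test
    public void applicationRespondsAfterDeploy() throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(HEALTH_URL).openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        assertEquals("Deployed application is not healthy",
                200, conn.getResponseCode());
    }
}
```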

DevOps, done properly, can be a powerful antidote to the issues that keep Agile from working, with a view to achieving an outcome beneficial to both the Organization and the Individual.

“Automating the testing and the QA aspects can deliver an ROI up to 250% to 300% month over month, according to Chris DeGonia, director of QA at International SOS. In a recent podcast with Kalyan Rao Konda, president and head of the North America East business unit at Cigniti, he credits the ability to automate the flow, across repeatable processes, checks, and balances in the system.”

https://opensource.com/article/19/5/values-devops-mindset

The skillset required for a DevOps project/practice to succeed is already present in most of the team members: developers know how to use scripts and have worked with Puppet, Chef, Ansible and CI/CD tools; the test team similarly has a good grip on C#, Jenkins, shell scripts, automation, performance and CI/CD tools. Most of the team have worked with Cloud and related Docker and Kubernetes systems too. But the mindset is what needs to change:

  • Start small, so debugging becomes easy
  • Break stuff, so that you know where and what is going wrong
  • Embrace your mistakes and rectify them fast
  • Educate each other on tools and fixes
  • Project management needs disruption; don’t get caught up in costs and timelines
  • Promote collaboration with the team members and business stakeholders

All these would result in a DevOps (Agile, Collaborative) culture, where the following would hold true:

  • Collaboration between the development teams and the business
  • Faster and on-time delivery of products/projects
  • Employee engagement and happiness (they get to learn and implement the learnings)
  • Innovation in the form of smaller increments, where direction can be changed with nimbleness and finesse.

Thus, teams need to embrace change and provide more guidance to each other to ensure that DevOps with CI/CD can be implemented successfully. The DevOps practice helps in the fast and improved delivery of the product/application, using tools and scripts to automate the build, deploy and testing of the software code.

In conclusion, there are six basic principles that define a DevOps mindset (as mentioned in the DevOps article on ZDNet):

  • Be about serving the customer: “DevOps organizations require the guts to act as lean startups that innovate continuously, pivot when an individual strategy is not (or no longer) working, and constantly invests in products and services that will receive a maximum level of customer delight.”
  • Create with the end in mind: IT organizations “need to act like product companies that explicitly focus on building working products sold to real customers, and all employees need to share the engineering mindset that is required actually to envision and realize those products.”
  • Encourage end-to-end responsibility: “Where traditional organizations develop IT solutions and then hand them over to operations to deploy and maintain these solutions, in a DevOps environment teams are vertically organized such that they are fully accountable from concept to grave.”
  • Promote cross-functional autonomous teams: DevOps teams “need to be entirely independent throughout the whole lifecycle,” and even “become a hotbed of personal development and growth.”
  • Continuously improve: “Minimize waste, optimize for speed, costs, and ease of delivery, and to continuously improve the products/services offered.”
  • Automate everything you can: “Think of automation of not only the software development process (continuous delivery, including continuous integration and continuous deployment) but also of the whole infrastructure landscape by building next-gen container-based cloud platforms that allow infrastructure to be versioned and treated as code as well.”

To close it all, Calvin & Hobbes is required! 🙂

https://www.slideshare.net/kermisch/shifting-to-a-dev-ops-mindset-lnkd




Automating for the Future

1 01 2018

When we go about discussing automation, we talk about frameworks and tools for automating the application or the user’s product. People who want their applications automated usually start off by taking up an open source (or commercially bought) tool and using it for simple record-and-play script creation. This starts the cycle of making those scripts more robust and making them work over the application. Finally, the scripts are joined together, and the developers of those scripts start calling them frameworks. This is the beginning of the confusion and chaos of test automation.

It is the belief of testing teams that once a “framework” like this is created and can complete a regression cycle for a certain release, it is the best piece of work they have ever created and will work for any and all releases they do from that time onwards. What they forget is the basic rule of any software: it evolves. And the test software has to evolve with it. What they have created is an application-specific and tool-specific “framework”, which might be just a collection of scripts that execute the test cases for their application or product and nothing else, sending out some rudimentary reports which someone may one day see and realize that everything has been failing for the past 2 weeks 🙂

 

There is a plethora of test tools roaming the open source and commercial world of testing these days. They are all good at what they advertise themselves for. But there is an inherent problem with them all: they are generic (catering to out-of-the-box standards) in nature and require a framework to be developed over them, one that takes care of the specific needs of the user’s application and/or product.
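
To show what that framework layer can look like, here is a minimal sketch using Selenium WebDriver purely as an example of a generic tool; the page and locator names are hypothetical. The tool knows only about elements and clicks, while the framework layer speaks the application’s language:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A thin application-specific layer over a generic tool (WebDriver here).
// Tests call business actions; only this class knows the raw locators.
public class LoginPage {

    // Locators named for the application, not the tool (ids are made up).
    private static final By USER_FIELD   = By.id("username");
    private static final By PASS_FIELD   = By.id("password");
    private static final By LOGIN_BUTTON = By.id("login");

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // A business action: what the test case means, not how it is done.
    public void loginAs(String user, String password) {
        driver.findElement(USER_FIELD).sendKeys(user);
        driver.findElement(PASS_FIELD).sendKeys(password);
        driver.findElement(LOGIN_BUTTON).click();
    }
}
```

When the application evolves, only this layer changes; the test cases written against it survive the releases.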





Automating with Agile

31 12 2013

Agile is not a new word in the world of Information Technology. Automation has been said to be one of the key practices that make an Agile project possible, and in many ways this may be considered true. I have been going through some well-established practices of Agile, most of which rest on some basic level of automation that helped make a success of the project. I have penned down some thoughts on what it means to have test automation along with Agile practices.

There are many schools of thought that have gone into the agile way of doing a project and managing the various components that finally lead to its delivery. When we consider Agile and its various derivatives, we come to realize that each Organization has its own way of dealing with the complexities that come with it. There is mention of starting with a session to discuss and elaborate on what the scope of the project is and how it can be broken into smaller pieces, which then become the initial requirements. These can then be distributed to the team, made into story cards, attached with t-shirt sizes and put up on the scrum board to be picked up in batches by the team and worked on. In this fashion, a project progresses with minimal friction and gets completed within the estimates provided by the t-shirt sizes. Most of the time this might not be entirely true, but this is how some Organizations perceive Agile and practice it; in the process giving negligible time to automation, as manual tests take up the majority of the time.

When we talk of automation in Agile, it consists not only of the testing component but of an overall ‘continuous integration’ component: from the check-ins, build, unit tests, defect handling, integration & system tests, to the final deployment on the ‘test’ server for doing the User Acceptance Testing (UAT). Agile shops largely miss out on this flow, which should be the first thing completed for a Project to run smoothly through its life-cycle. There is a multitude of tools available for making these tasks simpler and more robust; to name just a few which I have used: Jenkins/AntHill, Maven/MSBuild/make/Ant, SVN/Git, JIRA/FishEye, Crucible, TOSCA/QTP/pCloudy.com/Selenium.

On the path to Agile, we forget that we need to plan for automating our complete build-deploy process too, and that includes the crucial part of integration & system tests. A thoughtful plan would be to build the initial framework using stubs for the interfaces and, when the real components get built, replace the stubs with the real thing. Often what I see is the perception that automation should be started only when a clear and stable build is provided; in a way this might be true, but not for Agile, where you really need to be agile and think on your feet. Start by implementing a strategy wherein you have stubs ready and a CI platform available, to make sure that testing can be done without code. This was the first lesson I was taught: we were to create test cases based on the ‘pseudo-algorithm’ and the interfaces that we had written. The tests need to be developed in such a way that all fail initially and, as and when the code is delivered, they start to pass according to the requirements provided.
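
As a minimal sketch of that stub-first approach (the interface, the values and the JUnit 4 usage are all hypothetical), the test below is written against an agreed interface and fails until the real implementation replaces the stub:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FareCalculatorTest {

    // Interface agreed with the developers before any real code exists.
    interface FareCalculator {
        double fareFor(int zones);
    }

    // Stub wired into the CI build from day one: it keeps the suite
    // runnable (and failing) before the real implementation arrives.
    static class FareCalculatorStub implements FareCalculator {
        @Override
        public double fareFor(int zones) {
            throw new UnsupportedOperationException("not implemented yet");
        }
    }

    // Derived from the pseudo-algorithm: fails now, passes once the stub
    // is swapped for the delivered code.
    @Test
    public void oneZoneTripCostsBaseFare() {
        FareCalculator calc = new FareCalculatorStub();
        assertEquals(2.50, calc.fareFor(1), 0.001);
    }
}
```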

If you have done this, then you have taken the crucial step towards Agile automation, one that will take you a long way in making the project a success for you and your Team.

 





Working with TOSCA (Part 2)

28 04 2013

This has been a long-overdue post from my end, and as I now have some time at hand, I thought it better to put it down.

TOSCA has been promoted by Tricentis in Australia for the past 3+ years now and has risen from being an unknown tool in the ANZ markets to the 2nd position, behind the ever-prevalent QTP (which, although under HP’s banner, has undergone a lot of iterations and name changes of its own). Tricentis has used MBT principles to create TOSCA as an easy tool to use and implement. It allows the test team to concentrate on creating the actual workflow of the application, from the ‘artifacts’ provided in the initial ‘Requirement’ and ‘Test Case Design’ sections. From then on, it is a simple case of either matching these test workflows with the appropriate screen objects (‘Modules’), or running them manually [yes, you can run a ‘Test Case’ created in TOSCA as a manual or an automated test]. TOSCA provides a ‘Reports’ section, in PDF format, and the ‘Requirement’ tab provides an overview of what has been created, what is automated and what has passed/failed. The ‘Execution List’ tab provides a simple way to define the different ways (and environments) in which you can run your test cases.

As I wrote in my previous post, TOSCA should be started from the Requirements of the application, where the application is broken into workflows and each is assigned a weight-age. This provides the base for creating the test cases in our ‘Test Case Design’ section.

The ‘Test Case Design’ is the interesting part (and is claimed by Tricentis not to be offered by any other tool, as yet). Here you need to dissect the requirements and the application to create each attribute and assign its relevant ‘equivalence partitioning’. Sometimes this may not be necessary, and the TCD acts like a data sheet for the test team.
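
For readers new to the term: equivalence partitioning means picking one representative value per class of inputs that the application should treat the same way. TOSCA captures this in the TCD, but the idea itself is tool-independent; a hypothetical sketch for an ‘age’ attribute:

```java
// Equivalence partitioning for a hypothetical 'age' attribute: instead of
// testing every value, pick one representative per partition (plus the
// boundaries between partitions, covered by separate test cases).
public class AgePartitions {

    enum Partition { INVALID_LOW, MINOR, ADULT, SENIOR, INVALID_HIGH }

    static Partition classify(int age) {
        if (age < 0)    return Partition.INVALID_LOW;
        if (age < 18)   return Partition.MINOR;
        if (age < 65)   return Partition.ADULT;
        if (age <= 120) return Partition.SENIOR;
        return Partition.INVALID_HIGH;
    }

    public static void main(String[] args) {
        // One representative per partition, much like rows in a TCD sheet.
        int[] representatives = {-1, 10, 40, 70, 130};
        for (int age : representatives) {
            System.out.println(age + " -> " + classify(age));
        }
    }
}
```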

For most automation tools, you begin with the application and then match it with the requirements. TOSCA wants you to start from the requirements and build up to the actual tests. Then you add in the actual application, and you are on your way to creating a well-thought-out automation or manual test practice.

Now TOSCA v7.6.x has come out with a new cross-browser testing concept called TBox. This allows you to create a ‘Module’ in one of the main browsers and use it across IE, Chrome and FF.





Working with TOSCA

23 07 2012

For the past few months, I have been working on a new paradigm in Automation, with a “Model Based” tool from Tricentis – TOSCA. Overall, it is quite a different experience to use. It does not contain any code, and it builds, from the requirements, a model of what the actual application will contain. The catch is that initially you do not need to define your test cases from the application end, and things might not even be in the sequence of what the actual final application would look like.

I have an analogy for this: a human body is composed of a head, a body, hands and legs. Each one has its own “attributes”, which in turn have “instances”. This is what is called the ‘Model-based approach’. Each hand will have attributes such as fingers, nails, elbow, forearm, wrist, etc. Then, all these attributes will have instances – long fingers, short fingers, thick fingers, etc. Now to build a body, you need to join all these “attributes” into a seamless body with the various parts working in tandem. This is what a test case looks like in TOSCA: the initial parts of the body are the Test Case Design part; the joining together of the parts is the test case; and the final infusion of blood is the execution and reporting [I have not used Frankenstein here, as TOSCA tends to create a human rather than its alternate :-)]

TOSCA takes its roots in Object Oriented Modelling, employing concepts such as separation of concerns and encapsulation. In TOSCA, you can create classes, attributes and instances (objects). This modular breakdown makes the understanding and management of the actual requirements fairly simple, without going into what the final system under test would look like. I find this a very cool thing, although it took me some time to understand the concept amid the current bombardment of existing Test Frameworks and Tools.
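
In plain OO terms, the analogy above maps to something like the following sketch (purely illustrative, not TOSCA’s internal representation):

```java
import java.util.Arrays;
import java.util.List;

// The class/attribute/instance breakdown from the analogy, in plain OO terms.
public class HandModel {

    // A 'class' in the model: Hand, with two of its attributes.
    static class Hand {
        final String fingers; // attribute
        final String wrist;   // attribute

        Hand(String fingers, String wrist) {
            this.fingers = fingers;
            this.wrist = wrist;
        }
    }

    public static void main(String[] args) {
        // 'Instances' of the attributes: the concrete values a test picks.
        List<Hand> instances = Arrays.asList(
                new Hand("long fingers", "narrow wrist"),
                new Hand("short fingers", "broad wrist"));

        // A test case is the assembly of chosen instances into one flow.
        for (Hand h : instances) {
            System.out.println(h.fingers + " + " + h.wrist);
        }
    }
}
```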

Again, the interface has a very intuitive design, which can be modelled according to the needs and quirks of the person working with it. People might argue here that it is the same with Eclipse and other such tools like MS Visual Studio Test Professional, but the concept is totally different with TOSCA. You have drag & drop capabilities, combined with good integration across all the functionality provided, from putting in the requirements to the final reporting; all in a single interface and tool, with support from a dedicated technical team to get over the initial hiccups of using it.

The next good part, I found, was its capability to extend its technology adaptors (adaptors are used to automate tests against systems developed in various technologies, such as HTML, Java, .NET, Mainframe, Web Services, etc.) using the ubiquitous and simple VBScript and VBA, which are prevalent as the development languages of choice in the Testing Community. I found this quite interesting, as we can now easily use TOSCA with almost any system that we can code the underlying adaptor to understand. For example, we had a hybrid mainframe green-screen application to test (a rich Java GUI with an embedded mainframe emulator), which after a week’s work was ready to be tested with TOSCA; I have not come across such quick development cycles with the other tools I have worked with/on. That said, TOSCA has the capability to extend itself to different backend databases with the ease of just creating a simple module, using that module throughout your test cases to create a connection, and then running your customized SQL queries.

If you start from the Requirement Definitions part, you can easily put in your current requirements and provide a measure of weight-age for each.

Then comes the part where you can very easily define the actions you can perform on the objects which form your test cases. TOSCA by default defines 6 such actions – Do Nothing, Input, Output, Buffer, Verify and WaitOn – which take care of how a particular attribute defined earlier in the Test Case Design is acted upon.
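
Conceptually, each test step pairs an attribute with one of those six action modes, like a small dispatch table. A rough sketch of the idea (this is not how TOSCA is implemented, just an illustration):

```java
// The six TOSCA action modes as a concept: every test step pairs a screen
// attribute with one of these actions.
public class ActionModes {

    enum ActionMode { DO_NOTHING, INPUT, OUTPUT, BUFFER, VERIFY, WAIT_ON }

    static void apply(ActionMode mode, String attribute, String value) {
        switch (mode) {
            case DO_NOTHING: /* attribute ignored in this step */ break;
            case INPUT:   System.out.println("type '" + value + "' into " + attribute); break;
            case OUTPUT:  System.out.println("read " + attribute + " into the results"); break;
            case BUFFER:  System.out.println("store " + attribute + " for a later step"); break;
            case VERIFY:  System.out.println("assert " + attribute + " equals '" + value + "'"); break;
            case WAIT_ON: System.out.println("wait until " + attribute + " appears"); break;
        }
    }

    public static void main(String[] args) {
        apply(ActionMode.INPUT, "username field", "jdoe");
        apply(ActionMode.VERIFY, "welcome banner", "Hello jdoe");
    }
}
```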

More on this coming up soon…





Automation Tool across Web, Mobile and Web Services!

26 03 2012

Earlier in the week, I was sent a request from one of our Senior Management on what would be the best tool for the automation of tests across the spectrum of Web (HTML & Flash), Mobile (iPhone, Android, Windows, etc.) and Web Services. What I could come up with is the following. People may disagree with these options and may have different opinions and views on them… please feel free to comment and put them through, to improve on the content 🙂

Looking at the problem from the requirements viewpoint, I believe Selenium would be the tool best suited for the above automation work. The issues which might go against it are that its Mobile product is still in Beta, and that it is not the best for Web Services Testing, Watir being the frontrunner among the Open Source (i.e., Free) tools in that category. There are also Commercial Tools available with good support and good interfaces, making the Automation easier to maintain; which is somewhat of a problem with the Open Source tools if they are not properly designed initially. Commercial products also have a big following and hence are cost-effective in the long run, although they might be expensive to procure; getting a resource who is great in an Open Source product can sometimes be a big recruitment headache.

That said, Flash/Flex is a group which, with almost all tools, requires a debug/special build to be provided for testing. Each tool has its own quirks and libraries with which the Flash/Flex application needs to be compiled. So, you might wish to look more into each tool’s individual ability and reviews of its Flash library functionality, especially for Web-based applications.

Coming to mobile applications, the market is a very fragmented field to test successfully, with Android Browser, iPhone Safari, IE Mobile and Firefox being the major browser contenders for the Automation tools available, along with testing of the Apps within iOS, Android, Windows Phone and the various other vendors out there. I have seen many people refer to the Experitest SeeTestMobile tool, which might be becoming the tool of choice for many these days.

I plan to go over some of the tools which might help out in each group, including some which cover multiple categories below. These opinions are my own, formed through what I have experienced with the tools, and all are free to criticize and cajole me into making the changes they see as “great” for them…

Selenium

Advantages: Good for Web GUI Testing. Great tools available for the Firefox browser, and the new WebDriver combined with the PageObjects concept makes it a great cross-browser test tool for the HTML/JavaScript Web. It even has a Flex/Flash plug-in for compatibility with [debug/developer] flash applications. Can be coded in multiple languages (Java [most popular], Perl, PHP, Python, C#, etc.). This is a Free Open Source Tool.

Disadvantages: Not very intuitive; depends on coding skills and good design. The new WebDriver is good, but there are not many in the market who can create really good frameworks with it and know how to use it properly. Requires knowledge of XPath and JUnit-style coding to do anything great with the tool. The Mobile product is still in Beta. Not many people are available, and fees for consultants and resources can be high.
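
For context, the flavour of WebDriver/XPath/JUnit coding referred to above looks roughly like this (the site and locators are made up):

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

// A bare-bones WebDriver test: XPath locators wrapped in a JUnit test.
public class SearchSmokeTest {

    @Test
    public void searchReturnsResults() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://www.example.com/search");
            driver.findElement(By.xpath("//input[@name='q']"))
                  .sendKeys("selenium");
            driver.findElement(By.xpath("//button[@type='submit']"))
                  .click();
            assertTrue("Expected a non-empty results panel",
                    driver.findElement(By.id("results")).getText().length() > 0);
        } finally {
            driver.quit(); // always release the browser
        }
    }
}
```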

HP Quick Test Pro

Advantages: Well supported, with lots of certified resources available, though mostly used in Financial Institutions. Integrated add-ons for Flex, Web Services, Silverlight, and Web HTML. Framework issues can be easily taken care of with the Odin AXE framework, which uses XML and a simple interface.

Disadvantages: Limited ability to recognize complex UI and dynamic content hinders the tool. Mostly used in data-driven web testing, which makes use of Excel sheets; easy for the user, but may cause issues in maintainability. Windows-focused only; not suitable for Unix clones and Mac OS. High deployment costs.

MicroFocus / Borland SilkTest

Advantages: Good tool for Web and Flash. (MicroFocus has recently bought it after Borland failed, not sure of its development path going into the future). Has support for other platforms and operating systems.

Disadvantages: Learning curve, due to its test coding language. Not many people available with the tool knowledge.

Watir

Advantages: Good Open Source Tool for Web Services and Web Testing. Used with FitNesse, it produces easy-to-create and easy-to-support web tests and web services tests. Not too good with Flash and Mobile.

Disadvantages: Uses Ruby as the language of choice, which is a skill that is getting hard to find in Testing.

 

SAHI

Advantages: Great tool for Web testing. Has a good variety of plug-ins for various other technologies. Available as a Free version and a supported paid version. Support for the tool is great; the Developer is quite helpful in working out issues with the Test Team. Good for complex websites, where other tools may sometimes fail. Unlike Selenium, it does not make use of XPath to identify objects, and it can be used across browsers for recording tests.

Disadvantages: Only used for Web Testing for now [not sure if it has been updated with plug-ins for others]. Limited use, thus not many people know about it.

 

SmartBear SoapUI

Advantages: Great tool for Web Services Testing from Smart Bear.

Disadvantages: Only useful for Web Services Testing (but this might be an advantage, as they plan to make this a separate activity).

TestComplete

Advantages: Good tool, very similar to HP QTP, with a good interface and price. Overall good for Flash/Flex, with the included Libraries. SmartBear has a full stable of tools which, if bought together, may be helpful in pricing and overall deployment and support. Uses VBScript/VBA for coding; people with QTP experience may find it easy.

Disadvantages: Flash/Flex testing is still not very stable, sometimes fails to recognize the separate objects.

Microsoft Visual Studio Test Professional

Advantages: Is natively attached to the Visual Studio product line. Great for Cloud and .NET application testing. Good if you have Windows Phone applications. “CodedUI” is an excellent tool for cross-browser and web HTML testing. MS does deals to get the testing community to start using their tools 🙂

Disadvantages: Mostly for MS Technologies only. Not good for Firefox and Android. Only uses C# (or VB.NET).

Odin AXE Framework

Advantages: Great tool for building a wrapper over existing tools’ scripts; in effect, it converts the tool-identified objects into an XML-recognizable format and has a great, easily understandable format for Automation testers.

Disadvantages: None that I can think of for now, except that the use of an underlying tool is more or less compulsory for the framework created in AXE to work. Odin has done a good job of making the tool robust for Web Testing tools, and it is compatible with almost all other commercial tools available.

Tricentis TOSCA

Advantages: Combines the best of Requirements, Test Case Design and Test Case execution, all in one single application. Good when there are business testers who know what the application does and there is good documentation available for it.

Disadvantages: Not very flexible when it comes to the handling of unexpected behaviour within the application. Likes to have a clean interface to run through test cases and to be offered a “happy” path.

I can provide some more research into the new tools (and some less known but good ones), but the above are some of the common ones in use.

I am not advocating the use of any one tool above; to each his own, depending on what he has worked with and would be comfortable using.





Lessons from GUI Testing

11 01 2012

I recently started working in the GUI testing space again. This is an interesting space, with loads of commercial and open source tools available. Although all the tools might have their own unique features to bring to the fore, I realized that there are some basic fundamental steps that need to be taken to get things moving in the right direction. I have tried to put these steps as succinctly as possible in this post.

The initial step is to realize that although GUI web-based applications may vastly differ from each other, they have one common element which needs to be looked into: the ‘objects’ which create the page. Each web page (or, for that matter, GUI-based application) has these. Each tool has its own unique way of looking at and identifying these objects on the web page. The basic assumption is that the Devs have done a spot of good coding and provided meaningful and unique names to all the visible objects of the Web-based application 🙂

Map these objects to the web application’s pages, and half the work of automating the web-based app is complete. The crucial part is that the automation engineer should realize that he has to use the names provided by him during this initial setup and mapping stage. We cannot rely on names provided by the Dev Team, as these may be generic and/or not properly worded, and so may not provide correct identification of the object on the web page.

So, from my viewpoint, you need to start any GUI Automation by first mapping all the objects and providing proper names for them. With this work done, arrange them into a proper flow, so that you create the required test scenario as provided by either the Business or the Customer. Having the initial mapping of the objects is the biggest help that can be obtained. I will post further on the different tools and how to build this great library of objects with each tool.
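
As a sketch of this "map first, then flow" idea (Selenium is used purely for illustration; the ids and names are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Step 1: a central map of screen objects with names WE chose, independent
// of whatever generic ids the Dev team generated.
public class CheckoutScreen {

    static final By CARD_NUMBER_FIELD = By.id("fld_1023");        // dev id says nothing,
    static final By PAY_NOW_BUTTON    = By.xpath("//button[1]");  // so we name it ourselves

    private final WebDriver driver;

    public CheckoutScreen(WebDriver driver) {
        this.driver = driver;
    }

    // Step 2: arrange the mapped objects into the business flow.
    public void payWithCard(String cardNumber) {
        driver.findElement(CARD_NUMBER_FIELD).sendKeys(cardNumber);
        driver.findElement(PAY_NOW_BUTTON).click();
    }
}
```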





Test Coverage – A Concept!

24 10 2011

These days I am trying to work on a concept known as Test Coverage. I call this a concept, as it starts off as something in the mind of the Management, filters down to the Manager and is finally handed down to the Tester to carry out the said instructions. Without our actually realizing it, a graphical representation of our work soon comes out, in something which people call Business Intelligence (another much-hyped term these days, but I will come to it later). The graphical representation goes on to show that the current set of tests which have been implemented/created cover either “X” lines of code or “Y” number of Business Screens.

Is this a true representation of the complete scenario? Not what a Test Manager or a Dev Manager with enough thought would like to think. The above is a misrepresentation of how we should treat an important issue like Test Coverage. Let me take you through a typical “Software Test Life Cycle” (don’t even start me off on that one). The requirements come out in the form of a BIG bunch of documentation, which has gone through various iterations and reviews with the Business people and the other Stakeholders involved (but rarely the Test Team). This neatly typed bundle is handed over to the Test Team in an official ceremony, which we call the “Beginning of the Test Cycle”. The Test Manager goes over this vast bundle of joyous documentation and then, based on his “past” experiences, provides an estimate of what will need testing and what test cases can broadly be done. This is called the “Estimation Period”, as usually a rough time period is provided for when the Test Team will finish – including Automation, Manual, Performance, Security and the jig-bang.

Once this “Estimation Period” is through, the task is handed over to the Leads to break down and estimate, based on what the Test Manager has already provided. Till this time, the actual team members are usually not consulted; the seniors of the Team are the confidants who decide on what the underlings do. Finally, a document starts taking shape, which for the sake of convenience we call the “Test Plan” or the “Test Strategy”, for want of a better name. This soon becomes the golden Bible/Vedas for the Test Team, and they have to adhere to what has been said in it. Thereby the official STLC starts!

Once you have converted the BRD (Business Requirement Document) or the PRD (Product Requirement Document) into your test cases, you need to start actually implementing those test cases. This is where you start bringing in concepts like the Test Matrix and Test Vectors, which in layman parlance (developer speak) mean the way your tests are structured across the various data points for a particular view of the application. Now comes the really good part! This is also where the above-mentioned superior tester comes out and says that we have a Test Coverage of “X” lines of code, or “Y” number of business screens (for GUI applications, which usually are 90% of tested applications). But does he actually know what he has covered with his test cases? Some do, while some have just made assumptions, after reading blogs such as this one, or learned from their superiors, who again might have obtained their knowledge from such places. The test cases are sorted out, and some go over to the Automation Team to put into their regression suite, while others are manually vetted and put through the paces of the “Bug Life Cycle”! (What this means to globally scattered teams depends on how much the management has spent on procuring a good issue-reporting tool. My recommendation would be to look into Joel Spolsky’s FogBugz: http://www.fogcreek.com/fogbugz/). But to each his own …

Once the task of creating test cases and shoving them into the Automated Test Suite is completed, the Test Manager will jump in and click a variety of buttons on his console (something which has been created by his Team to make life a brisk walk for him, or which the Management has spent some more Money on procuring, in the form of another one of those efficient tools out there). Thus, voila, a beautifully colored report of what passed and what failed, and especially of “How much of the Code/Screens was covered by our Testing”. Definitely a piece of Beauty for the Management!

But what is the real usefulness of such a report? In my honest opinion (IMHO), zilch… NIL! We did a good job of covering all the lines of code which were there, but did we cover the paths through which the code would be executed? I don’t think that is thought of even 25% of the time. Did we make sure that boundary values are covered? It might be that we have a few test cases making sure of this, but do they map to our coverage? Did we take care of the definite values that a few fields on our screen work on? No, this would be a definite gap most of the time… What we did do was this – a) Ensure that at least 85-90% of the code lines are covered by our test cases, executed using the Automated Scripts (Good! This might be an issue when done through Manual tests, so no offence to Manual Testing here). b) Made sure that all the GUI screens are covered.

But did we make sure that all the fields on those screens are covered? Usually not. These are the places where we get issues. Also, most of the time Negative testing is not given enough importance in such cases. The usual rant being – a) we did not have time; b) it is not that important, as such a case would not happen in Production. But these are important things, and they convey the coverage of our tests. I will try to bring out more facets of this testing type in my next few posts, and hopefully those are more helpful than this one, which just rants about what is not being tested and/or how badly we test things …
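
To make the gap concrete, here is a minimal sketch of the field-level boundary and negative cases argued for above, against a hypothetical ‘quantity’ field whose valid range is 1..99 (JUnit 4 assumed):

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Covering the field, not just the screen: boundary and negative cases
// for one hypothetical input (quantity, valid range 1..99).
public class QuantityFieldTest {

    static boolean isValidQuantity(int qty) {
        return qty >= 1 && qty <= 99;
    }

    @Test
    public void boundariesAreCovered() {
        assertFalse(isValidQuantity(0));   // just below the lower bound
        assertTrue(isValidQuantity(1));    // the lower bound itself
        assertTrue(isValidQuantity(99));   // the upper bound itself
        assertFalse(isValidQuantity(100)); // just above the upper bound
    }

    @Test
    public void negativeCasesAreNotSkipped() {
        // The case that "would not happen in Production", until it does.
        assertFalse(isValidQuantity(-5));
    }
}
```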





What to Automate?

19 09 2011

I had an interesting conversation on Automation the other day with my colleagues at my new job. It started off fairly innocently, on how the automation should be thought out and what needs to be done to automate. We already have an existing framework and test scripts in place, which work efficiently and report issues. The problem comes when we need to provide data on what automation actually does. How do you prove the worth of the effort spent on automation? You could easily say that it saves time and resources by checking for faults early in the development cycle, but how can we be sure that it actually covers the scenarios that check the application? This is where a business requirement document becomes a necessity.

The issue which we increasingly face today is how to relate the business requirements to what we test. There are a few pointers given in textbooks and by certification organizations like ISTQB, which provide information on this. But ultimately it depends on the person who is sitting and working on the application (which most of the Program/Product Managers miss out on) and the person who is writing the scripts to automate the application testing. For a restricted-zone (proprietary) application, the best way to figure this out is to go and sit with the users and find out what they use the most (or run keystroke-capture software and see where it all goes; the limitations of this later). With public/global-reach software, it is best to give out Beta versions, as most of the Big Organizations do, and see what is reported back.

The other end of the spectrum is proprietary software which might not have a user interface (system tools like compilers are one example). For this, the technique is basically to read through the Software Requirement Document/Specification, have confidence in your ability to decipher the jargon written there, and convert it into simple English. I got my initial training in writing code on these, and the golden rule for us was [in summary]:

  • Read through the document and write what you have understood for each function
  • Create the Algorithm for each in plain English pseudo code
  • Convert these into test cases, and run those test cases

I think the above rules, going through the steps of Why, What and How, have helped us a lot to become what we are today 🙂
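
As a tiny illustration of those three steps, take a hypothetical specification line such as "the function shall return the absolute value of its input":

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AbsSpecTest {

    // Step 1 (what we understood): abs(x) returns x if x >= 0, else -x.
    // Step 2 (pseudo code):        if x < 0 then negate x; return x.
    // Step 3 (test cases), using java.lang.Math as the unit under test:

    @Test
    public void positiveInputIsUnchanged() {
        assertEquals(5, Math.abs(5));
    }

    @Test
    public void negativeInputIsNegated() {
        assertEquals(5, Math.abs(-5));
    }

    @Test
    public void zeroStaysZero() {
        assertEquals(0, Math.abs(0));
    }
}
```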




Comparing Commercial Test Tools

1 03 2011

This post is about the comparison of 2 tools which finally made the cut for an application/product I have to Automate for my current Organization. They are a renowned name in the Smart Grid domain and manufacture their own Smart Meters. The application is the software API on top of these meters and their firmware, which allows the readings from the multiple meters (mostly in the thousands) to be collected, and provides a Business Intelligence abstraction layer over the actual Hardware and Firmware. There were certain criteria which needed to be implemented and taken into consideration before the final tool choice was made.

The team went through many .NET-enabled Software Test tools, both commercial and open source, before finalizing on the two below, due to their long-term stability and robustness. Also, we had to cater for emulators being used to test things which were critical to the business. All these points might not be mentioned below in the actual comparison of the tools, which has been made more generic for the purposes of posting on the blog.

Criteria on which the tools have been analysed, for HP QuickTest Professional v11 versus Visual Studio Test Professional 2010:

Actual end user simulation: Is the test conducted using this tool equivalent to an end user action?
  • QTP v11: QTP claims to perform end user simulation; in other words, executing QTP scripts is equivalent to a person performing those steps manually on the application.
  • VS Test Professional 2010: Using Coded UI tests, we can create UI test cases as if they had been performed through actual user interaction. You can also execute tests with the browser minimized, like Selenium, as it can use XPath and the DOM.

Support for UI Components
  • QTP v11: QTP requires extra add-ins (plug-ins, not free) to work with .NET and other components, like Java, JavaScript, etc.
  • VS Test Professional 2010: Visual Studio natively supports .NET components. Support for JavaScript and other web scripting languages is also present, without additional plug-ins.

Object Management & Storage
  • QTP v11: QTP comes with a built-in Object Repository. Object Repository management is quite easy in QTP; objects are recorded and added automatically to the repository.
  • VS Test Professional 2010: The Visual Studio Coded UI interface provides a limited form of Object Repository. It records the user interaction internally in XML format, which can be used in conjunction with screen position or the object name and ID.

Support for Dialog Boxes
  • QTP v11: QTP supports all kinds of IE dialog boxes. These are helpful when parsing error messages in the application under test, especially when we expect a popup dialog to appear.
  • VS Test Professional 2010: Good support for embedded and IFrame dialog boxes. It has better support for IE browsers, being a Microsoft product.

Support for web browsers
  • QTP v11: Cross-browser support is lacking in QTP. Scripts created for one browser may not run on another.
  • VS Test Professional 2010: Has cross-browser support for IE, Safari and Firefox, built by the specific vendors themselves.

Object Oriented Language Support & Scalability (as in integration with external tools, utilities and libraries)
  • QTP v11: VBScript has limited OO support, and QTP has limitations with using any other language for framework development.
  • VS Test Professional 2010: Supports C# as the major language. It is very similar to Java and has full OO support. Also, there is a large base of resources who are working with C# and .NET.

Integration with Test Management tools
  • QTP v11: With HP Quality Center and Test Director.
  • VS Test Professional 2010: Integrated with Visual Studio Test Manager and Team Foundation Server.

Types of application supported
  • QTP v11: Web, Windows (.NET, VB, PowerBuilder, TCL/TK), Terminal Emulation, Command Prompt, Windows Desktop Native.
  • VS Test Professional 2010: .NET, Command Prompt, Windows Desktop Native, Web Applications.

Support for different Operating Systems / Platforms
  • QTP v11: QTP only supports Windows.
  • VS Test Professional 2010: The current implementation of Coded UI can support test cases on Windows and Linux boxes, as the application creates XML-based code.

Technical Support
  • QTP v11: QTP offers technical support by phone and mail, and HP also has a web forum. The QTP user community is vast, and questions posted on online forums get answered quickly.
  • VS Test Professional 2010: Although technical support is available through phone and mail, the forums are not that intuitive for now. But Microsoft has made efforts to have multiple Evangelists create blogs and forums to discuss user issues.

Cost
  • QTP v11: Costly. $9,000 per seat license. Separate costs for Quality Center and other development and SCM-related tools from HP.
  • VS Test Professional 2010: Costs $11,000 when bought with the Visual Studio Ultimate edition, but has other products bundled with it: TFS, Test Manager, Visual Studio, etc.

Test Development Environment
  • QTP v11: Reasonable but not the best. QTP tests can only be developed using QTP or a Notepad-like application.
  • VS Test Professional 2010: Best in the world (my opinion; after Eclipse, it is the best, and those who use it love the interface) 🙂

Integration with development process
  • QTP v11: No real integration possible. Now has plug-ins which can integrate with other tools and development processes.
  • VS Test Professional 2010: Tests developed using VS TP can easily be part of the development project. Using tools like TFS and VS Build, Continuous Integration is easier.

Data Driven Testing
  • QTP v11: Support for Data-driven and Keyword-driven testing, implemented using MS Excel.
  • VS Test Professional 2010: Good support for both Data-driven and Keyword-driven testing, with XML as the base for Keyword-driven.

Database Testing
  • QTP v11: SQL (Structured Query Language) is integrated with QTP; you can make use of SQL statements from within QTP.
  • VS Test Professional 2010: Native SQL Server DB APIs are present. Also allows command-line-driven testing for validation and verification of DB Integrity. Support for Oracle is present as well. (Will need to investigate this further.)

Update: Integration of HP QTP scripts with Microsoft TFS is now also possible, as has been given in this excellent Lecture Series: StickyMinds.com Lecture Series

http://testingcircus.com