Working with TOSCA

23 07 2012

For the past few months, I have been working on a new paradigm in automation, with a “Model Based” tool from Tricentis – TOSCA. Overall, using it is quite a different experience. It does not involve any code, and builds a model from the requirements of what the actual application will contain. The catch is that initially you do not need to define your test cases from the application end, and things might not even be in the sequence that the final application will follow.

I have an analogy for this – a human body is composed of a head, torso, hands and legs. Each one has its own “attributes”, which in turn have “instances”. This is what is called the ‘Model-based approach’. Each hand will have attributes such as fingers, nails, elbow, forearm, wrist, etc. All these attributes will then have instances – long fingers, short fingers, thick fingers, etc. Now, to build a body, you need to join all these “attributes” into a seamless whole, with the various parts working in tandem. This is what a test case looks like in TOSCA: the initial parts of the body are the Test Case Design, the joining together of the parts is the test case, and the final infusion of blood is the execution and reporting [I have not used Frankenstein here, as TOSCA tends to create a human rather than its alternative :-)]

TOSCA has its roots in Object Oriented Modelling, employing concepts such as separation of concerns and encapsulation. In TOSCA, you can create classes, attributes and instances (objects). This modular breakdown makes understanding and managing the actual requirements fairly simple, without going into what the final system under test will look like. I find this very cool, although it took me some time to grasp the concept, coming from the current bombardment of existing test frameworks and tools.
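To make the analogy a bit more concrete, here is a minimal sketch – in Python, purely for illustration, since TOSCA itself is codeless and these names are mine rather than TOSCA artifacts – of how I picture the class/attribute/instance breakdown:

```python
# Illustrative only: the names below are my own, not TOSCA constructs.
from dataclasses import dataclass


@dataclass
class Attribute:
    """A property of a modelled part, e.g. 'fingers' on a hand."""
    name: str
    instances: list  # concrete variations, e.g. long, short, thick


@dataclass
class ModelClass:
    """A modelled part of the system, e.g. 'Hand' in the body analogy."""
    name: str
    attributes: list


# Build the 'Hand' part from the analogy: attributes with their instances.
hand = ModelClass("Hand", [
    Attribute("fingers", ["long", "short", "thick"]),
    Attribute("nails", ["long", "short"]),
    Attribute("wrist", ["left", "right"]),
])

# A test case then stitches such parts together into a working whole.
for attribute in hand.attributes:
    print(attribute.name, "->", attribute.instances)
```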

Again, the interface has a very intuitive design, which can be tailored according to the needs and quirks of the person working with it. People might argue here that it is the same with Eclipse and other such tools like MS Visual Studio Test Professional, but the concept is totally different with TOSCA. You have drag-and-drop capabilities, combined with good integration across all the functionality provided, from putting in the requirements to the final reporting; all in a single interface and tool, with support from a dedicated technical team to get over the initial hiccups of using it.

The next good part, I found, was its capability to extend its technology adaptors (adaptors are used to automate tests against systems built with various technologies, such as HTML, Java, .NET, Mainframe, Web Services, etc.) using the ubiquitous and simple VBScript and VBA, which are prevalent as the development languages of choice in the testing community. I found this quite interesting, as we can now use TOSCA with almost any system for which we can write code that the underlying adaptor understands. For example, we had a hybrid mainframe green-screen application to test (a rich Java GUI with an embedded mainframe emulator), which after a week’s work was ready to be tested with TOSCA; I have not come across such quick turnaround with the other tools I have worked with/on. In addition, TOSCA can extend itself to different backend databases with the ease of just creating a simple module, using that module throughout your test cases to create a connection, and then running your customized SQL queries.
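To give a feel for what such a database module encapsulates, here is a rough sketch – in Python with SQLite, purely for illustration, since the real extension work described above happens in VBScript/VBA against your actual backend – of the “open one connection, reuse it for customized SQL queries” idea:

```python
import sqlite3


class DatabaseModule:
    """One module, created once, reused across test cases (illustrative only)."""

    def __init__(self, connection_string: str):
        # In a real setup this would point at the backend database under test.
        self.connection = sqlite3.connect(connection_string)

    def run_query(self, sql: str, params: tuple = ()) -> list:
        """Execute a customized SQL query and return all result rows."""
        cursor = self.connection.execute(sql, params)
        return cursor.fetchall()


# Reused throughout the test cases: one connection, many customized queries.
db = DatabaseModule(":memory:")
db.run_query("CREATE TABLE accounts (id INTEGER, balance REAL)")
db.run_query("INSERT INTO accounts VALUES (?, ?)", (1, 250.0))
print(db.run_query("SELECT balance FROM accounts WHERE id = ?", (1,)))
```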

If you start from the Requirement Definitions part, you can easily put in your current requirements and assign a weighting to each.

Then comes the part where you can very easily define the actions to perform on the objects which form your test cases. TOSCA by default defines six such actions – Do Nothing, Input, Output, Buffer, Verify and WaitOn – which determine how a particular attribute defined earlier in the Test Case Design is acted upon.
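For my own reference, here is a shorthand summary of those six actions as I understand them (the one-line descriptions are my wording, not official Tricentis definitions):

```python
# My shorthand for the six default actions; descriptions are my own reading.
DEFAULT_ACTIONS = {
    "Do Nothing": "leave the attribute untouched in this test step",
    "Input":      "write a value into the attribute (e.g. type into a field)",
    "Output":     "read the attribute's current value out of the application",
    "Buffer":     "store the attribute's value for reuse in later steps",
    "Verify":     "compare the attribute's value against an expected value",
    "WaitOn":     "pause execution until the attribute reaches a given state",
}

for action, meaning in DEFAULT_ACTIONS.items():
    print(f"{action:<10} -> {meaning}")
```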

More on this coming up soon…





Test Coverage – A Concept!

24 10 2011

These days I am trying to work on a concept known as Test Coverage. I call this a concept, as it starts off as something in the mind of the Management, filters down to the Manager and is finally handed down to the Tester to carry out the said instructions. Before we realize it, a graphical representation of our work comes out, in something which people call Business Intelligence (another much-hyped phrase these days, but I will come to that later). The graphical representation goes on to show that the current set of tests which have been implemented/created cover either “X” lines of code or “Y” number of business screens.

Is this a true representation of the complete scenario? Not really; a Test Manager or a Dev Manager who has given it enough thought would not like to think so. The above is a misrepresentation of how we should treat an important issue like Test Coverage. Let me take you through a typical “Software Test Life Cycle” (don’t even start me off on that one). The requirements come out in the form of a BIG bunch of documentation, which has gone through various iterations and reviews with the Business people and the other stakeholders involved (but rarely the Test Team). This neatly typed bundle is handed over to the Test Team in an official ceremony, which we call the “Beginning of the Test Cycle”. The Test Manager goes over this vast bundle of joyous documentation and then, based on his “past” experiences, provides an estimate of what will need testing and what test cases can broadly be written. This is called the “Estimation Period”, as usually a rough time period is provided for when the Test Team will finish – including Automation, Manual, Performance, Security and the jig-bang.

Once this “Estimation Period” is through, the task is handed over to the Leads to break down and estimate further, based on what the Test Manager has already provided. Till this time, the actual team members are usually not consulted; the seniors of the team are the confidants who decide on what the underlings do. Finally a document starts taking shape, which for the sake of convenience we call the “Test Plan” or the “Test Strategy”, for want of a better name. This soon becomes the golden Bible/Vedas for the Test Team, and they have to adhere to what has been said in it. Thereby the official STLC starts!

Once you have converted the BRD (Business Requirement Document) or the PRD (Product Requirement Document) into your test cases, you need to start actually implementing those test cases. This is the place where you start bringing in concepts like the Test Matrix and Test Vectors, which in layman’s parlance (developer speak) describe the way your tests are structured across the various data points for a particular view of the application. Now comes the really good part! This is also the place where the above-mentioned superior tester comes out and says that we have a Test Coverage of “X” lines of code, or “Y” number of business screens (for GUI applications, which usually make up 90% of tested applications). But does he actually know what he has covered with his test cases? Some do, while some have just made assumptions, after reading blogs such as this one or listening to their superiors, who again might have obtained their knowledge from such places. The test cases are sorted out and some go over to the Automation Team to put into their regression suite, while others are manually vetted and put through the paces of the “Bug Life Cycle”! (What this means to the globally scattered teams depends on how much the Management has spent on procuring a good issue-reporting tool. My recommendation would be to look into Joel Spolsky’s FogBugz: http://www.fogcreek.com/fogbugz/.) But to each his own …
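As a side note, here is a toy sketch (Python, with invented field names and values) of what I mean by a test matrix and its test vectors – simply the combinations of data points for one view of the application:

```python
from itertools import product

# Data points for one view of the application (all names/values invented).
account_type = ["savings", "current"]
currency = ["USD", "EUR", "INR"]
overdraft_allowed = [True, False]

# The test matrix: every combination of the data points (2 x 3 x 2 = 12 vectors).
test_vectors = list(product(account_type, currency, overdraft_allowed))

print(len(test_vectors))   # 12
print(test_vectors[0])     # ('savings', 'USD', True)
```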

Once the task of creating test cases and shoving them into the Automated Test Suite is completed, the Test Manager will jump in and click a variety of buttons on his console (something which has been created by his team to make life a brisk walk for him, or the Management has spent some more money on procuring another one of those efficient tools out there). And voila, out comes a beautifully coloured report of what passed and what failed, and especially “How much of the Code/Screens were covered by our Testing”. Definitely a piece of beauty for the Management!

But what is the real usefulness of such a report? In my honest opinion (IMHO), zilch… NIL! We did a good job of covering all the lines of code that were there, but did we cover the paths through which the code would be executed? I don’t think that is thought of even 25% of the time. Did we make sure that boundary values are covered? It might be that we have a few test cases making sure of this, but do they map to our coverage? Did we take care of the specific values that a few fields on our screen work on? No, this would be a definite gap most of the time… What we did do was this – a) ensure that at least 85-90% of the code lines are covered by our test cases, executed using the automated scripts (good! this might be an issue when done through manual tests, so no offence to Manual Testing here); b) make sure that all the GUI screens are covered.
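A tiny, made-up example of why “all lines covered” says very little about paths and boundary values:

```python
def shipping_fee(order_total: float) -> float:
    """Invented rule: orders of 100 or more ship for free."""
    fee = 5.0
    if order_total >= 100:
        fee = 0.0
    return fee


# These two checks execute every line of the function: 100% line coverage.
assert shipping_fee(50) == 5.0
assert shipping_fee(150) == 0.0

# Yet the boundary value 100 is never exercised. If the business actually
# wanted free shipping only for orders *over* 100, the off-by-one at exactly
# 100 sails straight past a report that proudly shows full line coverage.
print(shipping_fee(100))   # 0.0 -- free; is that really what was asked for?
```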

But did we make sure that all the fields on those screens are covered? Usually not. These are the places where we get issues. Also, most of the time Negative Testing is not given enough importance in such cases. The usual rant being – a) We did not have time. b) It is not that important, as such a case would not happen in Production. But these are important things, and they are what truly convey the coverage of our tests. I will try to bring out more facets of this in my next few posts, and hopefully those will be more helpful than this one, which just rants about what is not being tested and/or how badly we test things …