Sunday, 28 June 2015

Top 3 differences between Retesting and Regression testing

The difference between Retesting and Regression testing is a confusing concept for many testers.

Retesting:
It is a type of testing in which an already tested functionality is tested again to check whether a defect is still reproducible. It is usually performed once the defect has been fixed and marked as ready for retest by the development team. If the tester is able to reproduce the defect, the tester changes the defect status from Ready for Retest to Re-Open. If the tester is not able to reproduce the defect, the tester changes the defect status from Ready for Retest to Closed, with appropriate comments.
                            Retesting
Regression Testing:
It is a type of testing in which existing, already tested functionality is tested once again, to check that it is still bug free and that it has not been affected by a new change or a defect fix.
In simple terms, it is testing to check the cross impact of a defect fix.
During execution, if the tester comes across a defect, the tester reports it to the development team, and once the development team has fixed the defect, the tester retests the fix. While fixing the defect, the developer might have changed other functionality. Regression testing is done to ensure these changes do not affect other functionality.
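The cross-impact idea above can be sketched with a tiny regression suite (the `apply_discount` function and its rules are hypothetical, purely for illustration): existing, already-passing checks are re-run after every defect fix to catch unintended side effects.

```python
# Hypothetical example: a defect fix in one code path can break another,
# which is exactly what a regression suite is meant to catch.

def apply_discount(price, percent):
    """Return price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Regression suite: existing checks that are re-run after every fix.
def run_regression_suite():
    assert apply_discount(100, 10) == 90.0   # existing functionality
    assert apply_discount(100, 0) == 100.0   # boundary: no discount
    assert apply_discount(100, 100) == 0.0   # boundary: full discount
    return "all regression checks passed"

print(run_regression_suite())
```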
Retesting and Regression Testing


Wednesday, 24 June 2015

Integration Testing

Integration testing is also called the logical extension of unit testing. In other words, two components or modules that have been tested in isolation are combined, and the interface between them is tested. As testing progresses, further modules are added to the previously tested modules and the interfaces between them are tested. In the software testing life cycle, integration testing comes after unit testing and before system testing.
The intention of integration testing is to test combinations of functions, modules, or pieces of code. The components are tested in pairs.
Integration testing identifies problems that occur when components or modules are combined. Before starting integration testing, all the units or components must be tested separately.

Purpose of Integration Testing:
1.       To check the correct functionality of the individual modules 
2.       To check whether the integrated functionality/behaviour is achieved 
3.       To check that the modules communicate with each other as per the Data Flow Diagram (DFD) 
4.       To check navigation among the several modules/screens

Integration testing can be done in a variety of ways. In general, there are the following four common integration testing strategies.

1.       Top down approach: The highest level modules are tested and integrated first 
2.       Bottom up approach: The lowest level modules are tested and integrated first 
3.       Sandwich or Hybrid approach: A combination of Top down and Bottom up integration testing 
4.       Big Bang approach: All highest level and lower level modules are integrated at once 

1.       Top Down approach: In this approach testing starts with the highest level modules; as lower level components are developed, they are integrated with the higher level components. It is known as the top down, or top to bottom, approach. Top-down testing requires the testing team to identify which modules are most important and which are least important; the most important modules are worked on first. In this approach there is a possibility that a lower level module is missing or not yet developed, which affects the integration between the other modules. The Top Down approach is like traversing a binary tree, where testing starts at the root and moves down to the leaves. 
In this situation, to make integration possible, the developer writes a temporary dummy program, known as a 'STUB', that is used in place of the missing module for the smooth running of integration testing. 
In the top down approach, decision making occurs in the upper levels of the hierarchy and is therefore encountered first. The main advantage of the Top Down approach is that it is easier to find a missing branch link. 
The problem with this approach is exercising the lower level components adequately while testing the highest levels. Because stubs replace lower level components at the beginning of the top down approach, no significant data can flow between the modules, and lower level modules may not be tested as thoroughly as the upper level modules.  
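A stub can be sketched as follows (the invoice and tax modules are hypothetical names, not from a real system): the high-level module is under test while the lower-level tax module is not yet developed, so a stub returns a canned, predictable value in its place.

```python
# Top-down integration sketch: the high-level module is ready,
# the low-level tax module is not, so a STUB stands in for it.

def tax_stub(amount):
    """Stub for the missing lower-level tax module: returns a fixed,
    predictable value instead of performing the real calculation."""
    return 10.0

def generate_invoice_total(amount, tax_calculator):
    """High-level module under test: combines amount and tax."""
    return amount + tax_calculator(amount)

# Integration test of the high-level module using the stub.
total = generate_invoice_total(100.0, tax_calculator=tax_stub)
assert total == 110.0
print(total)
```

Because the stub's return value is fixed, the test result depends only on the high-level module's own logic.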
Top Down Integration Testing


2.       Bottom Up approach: The Bottom Up approach is the opposite of top down: lower level components are tested first, and testing proceeds upwards to the higher level components. In this approach, as higher level components are developed, they are integrated with the lower level components. It is known as the bottom up, or lowest to highest, approach. There is a possibility that a higher level component is missing, which affects the integration process. The main advantage of the Bottom Up approach is that defects are found more easily.  
In this situation, to make integration possible, the developer writes a temporary dummy program, known as a 'DRIVER', that is used in place of the missing higher level module.
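Conversely, a driver can be sketched like this (again with hypothetical names): the lower-level module is ready, and a driver simulates the missing higher-level caller by invoking it with representative test data.

```python
# Bottom-up integration sketch: the low-level module exists,
# the high-level caller does not, so a DRIVER invokes it with test data.

def calculate_tax(amount):
    """Lower-level module under test: flat 10% tax."""
    return amount / 10

def driver():
    """Driver: simulates the missing higher-level module by calling
    the lower-level module with representative inputs."""
    return [calculate_tax(a) for a in (0.0, 100.0, 250.0)]

print(driver())
```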
Bottom Up Integration Testing
3.       Sandwich or Hybrid approach: In this approach a combination of Top Down and Bottom Up is used. If some of the higher level components and some of the lower level components are not yet developed, the developer writes temporary 'STUBS' and 'DRIVERS' for the smooth running of the integration testing.
Sandwich Integration Testing
4.       Big Bang approach: This approach is applicable when all the higher and lower level components have been developed; all the modules are integrated at once into a single application. The Big Bang method is very effective for saving time in the integration testing process. Big Bang integration testing is also called Usage Model testing, which can be used in both hardware and software integration testing. The idea behind this type of integration testing is to run user-like workloads in integrated, user-like environments. 
The testing team has to wait until all the modules are developed before integration can begin, so there is a long delay before testing starts. The cost of fixing errors identified here is high, as they are found at a late stage, and it is difficult to isolate the faulty module.


Monday, 22 June 2015

Unit Testing

Testing the smallest piece of testable module/unit/software is called unit testing. As unit testing requires coding knowledge, it is normally done by developers.
The objective of unit testing is to test the smallest piece of testable software in the product or application and isolate it from the remainder of the code, to determine whether it behaves exactly as expected.
Each unit of code is tested in isolation and then integrated into the module to test the module interfaces. Unit testing has proven its value in finding a large percentage of defects.
Drivers and stubs are written for unit testing. The driver simulates a calling unit and the stub simulates a called unit. Finding errors in an isolated module is much cheaper than in a complex integrated module.
E.g. when testing a banking website, testing the login functionality, the page submit functionality, and the page navigation functionality in isolation are a few possible unit testing scenarios.
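As a minimal sketch of the banking-site example (the `login` function and its validation rules are hypothetical, invented for illustration), a unit test exercises one function in isolation with plain asserts:

```python
# Hypothetical unit under test: login validation for a banking site.
def login(username, password):
    """Return True only for a known username/password pair."""
    valid_users = {"alice": "s3cret"}
    return valid_users.get(username) == password

# Unit tests: the function is exercised in isolation, without the
# rest of the site (no pages, no navigation, no database).
def test_login():
    assert login("alice", "s3cret") is True    # happy path
    assert login("alice", "wrong") is False    # bad password
    assert login("bob", "s3cret") is False     # unknown user
    return "login unit tests passed"

print(test_login())
```

Note that nothing outside the one function is needed, which is what makes failures cheap to diagnose.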
Unit Testing


Thursday, 18 June 2015

Top 10 differences between Smoke and Sanity testing

The difference between Smoke and Sanity testing is a confusing concept for many testers. Before looking at the differences between Smoke and Sanity testing, you should know the basics of each.

Smoke Testing:
Smoke testing checks the basic functionality of a newly released build. Smoke testing is also known as a Build Verification Test (BVT) and is performed to check the general health of the build, to make sure it is stable enough for the testing team to continue testing.
It is done at the beginning of the software testing cycle. It is the first level of testing performed by the testing team on a newly released build, to check the main functionality of the application, and it is performed before the detailed functional or regression tests are executed.
It is usually preferred when there are some minor issues with the application or software and a new build is obtained for testing after the issues are fixed; instead of full regression, a subset of the most basic and important test cases is selected and used to find issues in particular functionality.

                               Smoke Testing
 It is non-detailed, shallow testing. It checks whether the crucial functions of a program are working. A smoke test is scripted, i.e. there are either manual test cases or automated scripts for it.
Some basic test cases for Smoke Testing:
1.       Application launches successfully
2.       GUI response is proper
3.       Database connectivity is in place
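The three basic checks above can be sketched as an automated smoke suite (the check bodies are placeholders standing in for real probes against a build): a short list of shallow checks is run in order, and the build is accepted or rejected.

```python
# Sketch of an automated smoke suite: a few shallow checks run against
# a new build; if any fails, the build is rejected for further testing.

def app_launches():
    return True   # placeholder: e.g. process starts, home page loads

def gui_responds():
    return True   # placeholder: e.g. main screen renders correctly

def db_connects():
    return True   # placeholder: e.g. a trivial query succeeds

def run_smoke_suite(checks):
    for check in checks:
        if not check():
            return "REJECT build: " + check.__name__ + " failed"
    return "ACCEPT build for detailed testing"

print(run_smoke_suite([app_launches, gui_responds, db_connects]))
```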

Advantage:
Smoke testing of the software application is performed to validate whether the build can be accepted for thorough software testing.

Sanity Testing:
Sanity testing checks the main functionality as well as optional functionality of the application. It is detailed testing.
When a new build is obtained after some minor issues have been fixed, sanity testing is performed on that build instead of complete regression testing.
Sanity testing is performed after thorough regression testing is completed, to ensure that any defect fixes do not break the core functionality of the product.

                                  Sanity Testing
 It is done towards the end of the product release phase. Sanity testing is performed with the objective of verifying whether the requirements are met. A sanity test is normally unscripted.
Sanity testing is a subset of Acceptance testing.
Examples:
  1. Basic GUI functionality
  2. Database connectivity
Sanity testing factors:
·         Environment issues 
e.g. application closing, application hanging, unable to launch the URL
·         Exceptional errors
e.g. java.io.IOException (some source code will be displayed)
·         Urgent severity defects

Advantage: 
Sanity testing of the software application is performed to validate whether the requirements are met.

                                                    Smoke Testing Vs Sanity Testing                 
                                             
Difference between Smoke and Sanity testing:

Smoke Testing | Sanity Testing
Performed to ensure that the critical functionality of the application is working fine | Performed to ensure that defect fixes do not break the core functionality of the product
Like a normal health check-up of a build of an application | Like a specialised health check-up of a build of an application
Performed towards the start of the product release phase | Performed towards the end of the product release phase
A subset of Regression testing | A subset of Acceptance testing
Also called a Build Verification Test | Also called tester acceptance testing
Performed with the objective of verifying whether the build is acceptable | Performed with the objective of verifying whether the requirements are met
Normally scripted, i.e. manual test cases or automated scripts are used | Usually unscripted
Non-detailed, shallow testing | Detailed, deep testing
Performed by developers or testers | Usually performed by testers
Validates the entire system end to end | Validates a particular component of the system
Performed before the detailed functional or regression tests are executed | Performed after the detailed functional or regression tests are executed

Monday, 15 June 2015

How to calculate Automation Coverage ?

The Automation Coverage metric is used to calculate the percentage of manual test cases that have been automated.

How to calculate Automation Coverage:
Automation Coverage = (Total number of manual test cases automated / Total number of manual test cases) * 100

Example:
No. of Manual Test Cases | No. of Test Cases Automated | Automation Coverage
100 | 50 | 50%

Automation Coverage = 50 / 100 * 100 = 50%
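The calculation above can be written as a small helper (the function name and the zero-total guard are illustrative choices, not part of the original formula):

```python
def automation_coverage(automated, total):
    """Percentage of manual test cases that have been automated."""
    if total == 0:
        raise ValueError("total number of manual test cases must be > 0")
    return automated / total * 100

# Matches the worked example: 50 automated out of 100 manual cases.
print(automation_coverage(50, 100))  # 50.0
```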