Friday, June 4, 2010
Software Development Life Cycle Models (SDLC)
Waterfall Model
The waterfall model is one of the earliest structured models for software development. It consists of the following sequential phases through which the development life cycle progresses:
System feasibility: In this phase, you consider the various aspects of the targeted business process, find out which aspects are worth incorporating into a system, and evaluate various approaches to building the required software.
Requirement analysis: In this phase, you capture software requirements in such a way that they can be translated into actual use cases for the system. The requirements can be derived from use cases, performance goals, target deployment, and so on.
System design: In this phase, you identify the interacting components that make up the system. You define the exposed interfaces, the communication between the interfaces, key algorithms used, and the sequence of interaction. An architecture and design review is conducted at the end of this phase to ensure that the design conforms to the previously defined requirements.
Coding and unit testing: In this phase, you write code for the modules that make up the system. You also review the code and individually test the functionality of each module.
Integration and system testing: In this phase, you integrate all of the modules in the system and test them as a single system for all of the use cases, making sure that the modules meet the requirements.
Deployment and maintenance: In this phase, you deploy the software system in the production environment. You then correct any errors that are identified in this phase, and add or modify functionality based on the updated requirements.
Advantages
o Simple and easy to use.
o Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
o Phases are processed and completed one at a time.
o Works well for smaller projects where requirements are very well understood/stable.
Disadvantages
o It’s difficult to respond to changing customer requirements.
o Adjusting scope during the life cycle can kill a project.
o No working software is produced until late during the life cycle.
o High amounts of risk and uncertainty.
o Poor model for complex and object-oriented projects.
o Poor model for long-running and ongoing projects.
Prototype Model
The prototyping model assumes that you do not have clear requirements at the beginning of the project. Often, customers have a vague idea of the requirements in the form of objectives that they want the system to address. With the prototyping model, you build a simplified version of the system and seek feedback from the parties who have a stake in the project. The next iteration incorporates the feedback and improves on the requirements specification.
The prototypes that are built during the iterations can be any of the following:
o A simple user interface without any actual data processing logic
o A few subsystems with functionality that is partially or completely implemented
o Existing components that demonstrate the functionality that will be incorporated into the system.
The prototyping model consists of the following steps.
o Capture requirements. This step involves collecting the requirements over a period of time as they become available.
o Design the system. After capturing the requirements, a new design is made or an existing one is modified to address the new requirements.
o Create or modify the prototype. A prototype is created or an existing prototype is modified based on the design from the previous step.
o Assess based on feedback. The prototype is sent to the stakeholders for review. Based on their feedback, an impact analysis is conducted for the requirements, the design, and the prototype. The role of testing at this step is to ensure that customer feedback is incorporated in the next version of the prototype.
o Refine the prototype. The prototype is refined based on the impact analysis conducted in the previous step.
o Implement the system. After the requirements are understood, the system is rewritten either from scratch or by reusing the prototypes.
The main advantage of the prototyping model is that it allows you to start with requirements that are not clearly defined.
The main disadvantage of the prototyping model is that it can lead to poorly designed systems. The prototypes are usually built without regard to how they might be used later, so attempts to reuse them may result in inefficient systems. This model emphasizes refining the requirements based on customer feedback, rather than ensuring a better product through quick change based on test feedback.
Incremental or Iterative Model
The incremental, or iterative, development model breaks the project into small parts. Each part is subjected to multiple iterations of the waterfall model. At the end of each iteration, a new module is completed or an existing one is improved on, the module is integrated into the structure, and the structure is then tested as a whole.
For example, using the iterative development model, a project can be divided into 12 one- to four-week iterations. The system is tested at the end of each iteration, and the test feedback is immediately incorporated at the end of each test cycle. The time required for successive iterations can be reduced based on the experience gained from past iterations. The system grows by adding new functions during the development portion of each iteration. Each cycle tackles a relatively small set of requirements; therefore, testing evolves as the system evolves. In contrast, in a classic waterfall life cycle, each phase (requirement analysis, system design, and so on) occurs once in the development cycle for the entire set of system requirements.
The main advantage of the iterative development model is that corrective actions can be taken at the end of each iteration. The corrective actions can be changes to the specification because of incorrect interpretation of the requirements, changes to the requirements themselves, and other design or code-related changes based on the system testing conducted at the end of each cycle.
The main disadvantages of the iterative development model are as follows:
o The communication overhead for the project team is significant, because each iteration involves giving feedback about deliverables, effort, timelines, and so on.
o It is difficult to freeze requirements, and they may continue to change in later iterations because of increasing customer demands. As a result, more iterations may be added to the project, leading to project delays and cost overruns.
o The project requires a very efficient change control mechanism to manage changes made to the system during each iteration.
Agile Model
Most software development life cycle methodologies are either iterative or follow a sequential model (as the waterfall model does). As software development becomes more complex, these models cannot efficiently adapt to the continuous and numerous changes that occur. Agile methodology was developed to respond to changes quickly and smoothly. Although the iterative methodologies tend to remove the disadvantage of sequential models, they still are based on the traditional waterfall approach. Agile methodology is a collection of values, principles, and practices that incorporates iterative development, test, and feedback into a new style of development.
The key differences between agile and traditional methodologies are as follows:
o Development is incremental rather than sequential. Software is developed in incremental, rapid cycles. This results in small, incremental releases, with each release building on previous functionality. Each release is thoroughly tested, which ensures that all issues are addressed in the next iteration.
o People and interactions are emphasized, rather than processes and tools. Customers, developers, and testers constantly interact with each other. This interaction ensures that the tester is aware of the requirements for the features being developed during a particular iteration and can easily identify any discrepancy between the system and the requirements.
o Working software is the priority rather than detailed documentation. Agile methodologies rely on face-to-face communication and collaboration, with people working in pairs. Because of the extensive communication with customers and among team members, the project does not need a comprehensive requirements document.
o Customer collaboration is used, rather than contract negotiation. All agile projects include customers as a part of the team. When developers have questions about a requirement, they immediately get clarification from customers.
o Responding to change is emphasized, rather than extensive planning. Extreme Programming does not preclude planning your project. However, it suggests changing the plan to accommodate any changes in assumptions for the plan, rather than stubbornly trying to follow the original plan.
Thursday, June 3, 2010
Bug Reporting and Tracking
What is a bug?
Wikipedia definition - A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth.
ISTQB definition - A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
Other definition - A problem that causes a program to produce invalid output or to crash (lock up). The problem is either insufficient logic or erroneous logic. For example, a program can crash if there are not enough validity checks performed on the input or on the calculations themselves, and the computer attempts to divide by zero. Bad instruction logic misdirects the computer to a place in the program where an instruction does not exist, and it crashes. A program with bad logic may produce bad output without crashing, which is the reason extensive testing is required. For example, if the program is supposed to add an amount, but subtracts it instead, bad output results, although the computer keeps running. See abend, bug and buggy.
What is the difference between Error, Bug and Defect?
One answer I read somewhere was: they are one and the same; the issue is only whether to call them bugs or defects. Here are some comments from experts:
Comment 1: Bug: Any discrepancy found during testing of the software product.
Defect: Any discrepancy found by the customer in the software product after the release into production.
Error: Any discrepancy in the coding.
Comment 2: I always thought of "bug" as slang for "defect". ISTQB calls an error: A human action that produces an incorrect result. [After IEEE 610]
ISTQB calls a defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
The error could be introduced in coding, testing, requirements or any phase of the development process.
Comment 3: Once I stopped bothering about the differences between them, or about questions like sanity versus smoke testing, I gained two years of time that I invested in learning to test better.
Comment 4: I agree to some extent for bug and defect: you can call them the same; the reporter and the time of reporting make the difference. But an error is really different. An error has nothing to do with your application flow, specification, or requirements; it is the code that produces the error. For example:
1. Error handling in code. 2. Sometimes exceptions appear in the application, such as a Java null pointer exception, "Error 111", "file missing", and so on. These are dependent on the input data and on other dependencies in the code during compilation, as well as on interactions with other tools such as Tomcat, libraries, DLLs, etc. These kinds of issues are referred to as errors. They are not bugs, because they have nothing to do with the application flow, specification, or requirements, or with flaws in the application.
Comment 5: The ISEB definition is: an error leads to a defect, which leads to a bug (not verbatim, but it's along those lines). However, it will differ from organisation to organisation; just use whatever works.
Bug life cycle
An important point that is often left out concerns duplicate bugs. When reported, duplicate bugs also start in the Unconfirmed state; from Unconfirmed they are marked as Duplicate and closed. If the tester convinces the team that the bug is not a duplicate, it follows the same cycle as shown in the figure.
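As a rough companion to the life-cycle description, here is a minimal sketch of one common bug life cycle modelled as a state machine in Python. The states and allowed transitions (Unconfirmed, Confirmed, Assigned, Resolved, Verified, Closed, Duplicate) are assumptions based on typical trackers and may differ from the figure referenced above.

```python
from enum import Enum

class BugState(Enum):
    UNCONFIRMED = "Unconfirmed"
    CONFIRMED = "Confirmed"
    ASSIGNED = "Assigned"
    RESOLVED = "Resolved"
    VERIFIED = "Verified"
    CLOSED = "Closed"
    DUPLICATE = "Duplicate"

# Allowed transitions (illustrative only; a real tracker defines its own).
TRANSITIONS = {
    BugState.UNCONFIRMED: {BugState.CONFIRMED, BugState.DUPLICATE},
    BugState.CONFIRMED: {BugState.ASSIGNED},
    BugState.ASSIGNED: {BugState.RESOLVED},
    BugState.RESOLVED: {BugState.VERIFIED, BugState.ASSIGNED},   # reopened if the fix fails
    BugState.VERIFIED: {BugState.CLOSED},
    BugState.DUPLICATE: {BugState.CLOSED, BugState.CONFIRMED},   # reopened if not a duplicate
    BugState.CLOSED: set(),
}

class Bug:
    def __init__(self, title: str):
        self.title = title
        self.state = BugState.UNCONFIRMED

    def move_to(self, new_state: BugState) -> None:
        """Move the bug to a new state, enforcing the allowed transitions."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"Cannot move from {self.state.value} to {new_state.value}")
        self.state = new_state

# A duplicate report: Unconfirmed -> Duplicate -> Closed.
dup = Bug("Login button unresponsive (already reported)")
dup.move_to(BugState.DUPLICATE)
dup.move_to(BugState.CLOSED)
print(dup.title, "->", dup.state.value)
```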
Bug Priority and Severity
Priority describes the importance of a defect, i.e., how urgently it needs to be fixed. It is usually given as:
P0 (Basic functionality)
P1 (Compliance functionality)
P2 (Cosmetic functionality)
Severity describes the seriousness of a defect, i.e., its impact on the application. It is usually given as:
Critical – Unable to continue test execution on that functionality until the defect is resolved.
Major – Able to continue test execution, but the defect must be resolved before release.
Minor – Able to continue test execution; the defect may or may not be resolved before release.
Types of bugs/defects:
1. User Interface defects (Minor)
2. Input domain defects (Major)
3. Calculation defects (Major)
4. Hardware defects (Critical/Show stopper)
5. Error Handling defects (Major)
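As an illustration of how the priority, severity, and defect types above might be captured in a bug report, here is a minimal sketch using a Python dataclass; the field names are assumptions and are not tied to any particular bug-tracking tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BugReport:
    summary: str
    steps_to_reproduce: list
    priority: str          # e.g. "P0", "P1", "P2"
    severity: str          # e.g. "Critical", "Major", "Minor"
    defect_type: str       # e.g. "User Interface", "Calculation", "Error Handling"
    reported_on: date = field(default_factory=date.today)

# A calculation defect would typically be Major severity, as noted above.
report = BugReport(
    summary="Order total ignores the discount field",
    steps_to_reproduce=["Add an item to the cart", "Apply a 10% discount", "Check the total"],
    priority="P0",
    severity="Major",
    defect_type="Calculation",
)
print(report)
```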
Software Testing Life Cycle (STLC) - Testing process
Step 1: Requirements
Step 2: Test planning
Step 3: Test case writing
Step 4: Test case execution
Step 5: Test sign-off
Requirements
The first and most important input for testing is the requirements. Requirements are mostly gathered by the business development team. These are not technical requirements; they are written in language that non-technical users can easily understand. Later, the Project Manager/Program Manager puts the requirements into a proper format, commonly known as functional specifications or business requirement specifications. These documents are generally in document format.
Tester's role: It is really important for the test team to participate in the requirement review process. This helps the entire product team start on the right track from the beginning. The output expected from a tester in these review meetings is to:
• See if the requirements are testable.
Example: If a requirement says "Web pages must look beautiful", how will you test whether the web pages are beautiful? Can you answer this? The test team points out these kinds of issues in requirement reviews.
• Make sure the requirements are clear, concise, and not contradictory.
• Understand the requirements clearly and ask questions, since later in the cycle testers need to test the application against these requirements.
This is the first stage where the tester tests the requirements.
Test Planning
Once you are done with the requirement reviews, the PM/PO will come up with the final version of the requirement specs.
Based on the final requirement specs, the test team then plans its testing activities, such as coming up with a test strategy, writing the test plan, and so on.
The test plan is mostly developed by the test lead or a senior test team member. Test planning mainly answers the following:
• What to cover: functional testing, performance testing, load testing, accessibility testing, regression testing, etc.
• The timeline for the above.
• Who will do what?
• Only manual testing, or automation as well?
• If automation, then what percentage?
• Dependencies?
• Test activities and timeline: test case writing, test execution, bug bash, test sign-off, etc.
• Test sign-off criteria.
Test case writing
Once the test plan is written, it goes through a review process and the final document is published to the team. Testers then start writing test cases for the features/functionality. A good way to write test cases is to first write test case outlines (TCOs) and then convert those TCOs into detailed test cases.
TCOs are basically one-line statements about the test scenarios that the tester intends to test.
The tester then breaks each TCO into multiple test cases with all the details, such as "Steps", "Expected results", "Priority", etc.
The test cases themselves are then reviewed:
• To see if there is good test coverage for the scenarios.
• To see if the steps provided in the test cases are clear and concise.
Tools used for test case writing: some companies use Word or Excel formats, while others use test management tools.
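To make the TCO-to-test-case step concrete, here is a small sketch of how a one-line TCO might be expanded into a detailed test case record. The fields follow the text above ("Steps", "Expected results", "Priority"); everything else is illustrative.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    tco: str                 # the one-line test case outline
    steps: list              # detailed steps a tester follows
    expected_result: str
    priority: str            # e.g. "P1"

# One TCO expanded into a detailed test case.
login_case = TestCase(
    tco="Verify login with a valid username and password",
    steps=[
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    expected_result="The user is taken to the home page",
    priority="P1",
)
print(login_case.tco, "-", login_case.expected_result)
```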
Another activity that always goes along with test case writing is test case updating, where testers add, edit, or delete test cases for the following reasons:
o A tester wrote an incorrect test case.
o A change in an existing requirement requires editing an existing test case or adding new ones.
o Testers missed some scenarios.
Test Execution/Test Pass/Test Run
Once the developers are done with coding, the test team goes into action:
o Developers release a build for testing.
o The test team runs the BVT (build verification test) suite.
o Once the BVT tests pass, the test team accepts the build.
o The actual test pass then starts.
o The test team runs the test cases they have written.
o Testers mark the test cases Pass/Fail accordingly.
o Testers log bugs for the failed test cases.
o Testers verify the fixes for resolved bugs and close the bugs accordingly.
o The test team does a complete test pass again if required, or runs a selective set of test cases to make sure there are no regression bugs.
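The BVT step above can be sketched with Python's standard unittest module: a small suite of critical-path checks runs first, and the build is accepted only if that suite passes. The application hooks used here (launch_app, login) are hypothetical placeholders, not part of any real product.

```python
import unittest

# Hypothetical application hooks; a real BVT would import the product's own modules.
def launch_app():
    return True

def login(user, password):
    return user == "demo" and password == "demo"

class BuildVerificationTests(unittest.TestCase):
    """A minimal BVT suite: only the critical paths that decide build acceptance."""

    def test_application_launches(self):
        self.assertTrue(launch_app())

    def test_demo_user_can_log_in(self):
        self.assertTrue(login("demo", "demo"))

if __name__ == "__main__":
    # If any BVT test fails, the build is rejected and returned to the developers.
    result = unittest.main(exit=False).result
    print("Build accepted" if result.wasSuccessful() else "Build rejected")
```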
These activities are part of every milestone or iteration. Once the test passes are complete, the next step is test sign-off.
Test Sign-Off
Now the time comes for the test team to do the test sign-off. Normally, the test sign-off criteria are mentioned in the test plan. Sign-off is done by the QA head/test manager/lead.
Test Metrics And Measurements
Measurement: Quantifying the quality of an application.
Metric: A combination of measurements.
Some important testing metrics (a small calculation sketch for a few of these follows the list):
1) Schedule variance = (Actual time taken - Planned time) / Planned time * 100
2) Effort variance = (Actual effort - Planned effort) / Planned effort * 100
3) Test case coverage = (Total test cases - Requirements that cannot be mapped to test cases) / Total test cases * 100
4) Customer satisfaction = Number of complaints / Period of time
5) Test case effectiveness = The extent to which test cases are able to find defects
6) Time to find a defect = The effort required to find a defect
7) Time to solve a defect = The effort required to resolve a defect (diagnosis and correction)
8) Defect severity = The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user)
9) Defect severity index = An index representing the average severity of the defects
10) Test coverage = The extent to which testing covers the product's complete functionality
11) Number of defects = The total number of defects found in a given period of time
12) Defects/KLOC = The number of defects per 1000 lines of code
13) Defect age = Fixed date - Reported date
14) Defect density = The number of defects in a module
15) Defect cost = Cost to analyze the defect + Cost to fix it + Cost of failures already incurred due to it
16) Bug clearance ratio = The ratio between valid and invalid bugs
17) DRE (Defect Removal Efficiency) = A / (A + B), where
A = defects found by the testing team (fixed defects)
B = defects found by customers (missed defects)
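A minimal sketch of a few of the calculations above (schedule variance, defects/KLOC, and DRE), using made-up numbers purely for illustration:

```python
def schedule_variance(actual_days, planned_days):
    """Schedule variance = (actual - planned) / planned * 100."""
    return (actual_days - planned_days) / planned_days * 100

def defects_per_kloc(defects, lines_of_code):
    """Defects/KLOC = defects per 1000 lines of code."""
    return defects / (lines_of_code / 1000)

def defect_removal_efficiency(found_by_testing, found_by_customers):
    """DRE = A / (A + B), where A is found by the test team and B by customers."""
    return found_by_testing / (found_by_testing + found_by_customers)

# Illustrative numbers only.
print(f"Schedule variance: {schedule_variance(55, 50):.1f}%")        # 10.0%
print(f"Defects/KLOC:      {defects_per_kloc(42, 28000):.2f}")        # 1.50
print(f"DRE:               {defect_removal_efficiency(90, 10):.0%}")  # 90%
```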
Quality standards
ISO (International Organization for Standardization)
SEI-CMM (Capability Maturity Model)
SEI-CMMI (Capability Maturity Model Integration)
Six Sigma
ISO (International Organization for Standardization)
o ISO 9001:2000: ISO is a generic model, applicable to all types of organizations; it contains 20 clauses. The certification audit is like an examination, and the result is that the certification is either passed or failed.
o It is based on the “PDCA Cycle” and the “8 Quality management Principles”
o PDCA (Plan Do Check Act):
o Define a plan (DEFINE)
o Execute the plan (IMPLEMENT)
o Check the results (CHECK)
o Take the necessary action (CORRECT)
Eight Quality Management principles in ISO Standard
1. Customer Focus
2. Leadership
3. Involvement of People
4. Process Approach
5. System Approach to Management
6. Continual Improvement
7. Factual Approach to Decision Making
8. Mutually Beneficial Supplier Relationships
CMM (Capability Maturity Model)
o CMM Certification is given only to IT-based companies, such as:
o Software development companies
o Call Centers
o BPO’s
o Medical transcription etc.
o CMM Certification is given based on the process followed by the company.
o If the company develops system software, it will get CMMI Certification.
o CMM/CMMI Certification is given in different levels or stages.
o CMM/CMMI is called staged Model.
o Each Level has several KPA’s (Key Process Areas)
Six Sigma
o Six Sigma Certification is given to any type of company.
o Six Sigma Certification is given based on Quality produced by the company.
o According to Six Sigma, for 1 million transactions (opportunities), 3.4 defects are acceptable.
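The 3.4-defects-per-million figure is usually expressed as DPMO (defects per million opportunities). A small sketch of that calculation, with made-up inputs:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Illustrative numbers: 7 defects across 50,000 transactions, 4 opportunities each.
score = dpmo(defects=7, units=50_000, opportunities_per_unit=4)
print(f"DPMO = {score:.1f}")                              # 35.0
print("Meets Six Sigma (<= 3.4 DPMO)?", score <= 3.4)     # False
```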
Testing terminology
Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses.
Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing – Black box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
System testing - Black box type testing that is based on overall requirements specifications; covers all combined parts of a system.
End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
User acceptance testing - determining if software is satisfactory to an end-user or customer.
Comparison testing - comparing software weaknesses and strengths to competing products.
Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
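A toy illustration of the mutation testing idea: a deliberately injected code change (the + turned into a -) should be caught by the existing test data; if the tests still pass against the mutant, the test set is too weak. This is a hand-rolled sketch, not a real mutation testing tool.

```python
def add(a, b):
    return a + b          # original code

def add_mutant(a, b):
    return a - b          # deliberately injected change (a "mutant")

# Existing test data: (arguments, expected result).
test_data = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

def kills_mutant(mutant, cases):
    """A test set is useful if at least one case fails against the mutant."""
    return any(mutant(*args) != expected for args, expected in cases)

print("Mutant detected:", kills_mutant(add_mutant, test_data))   # True -> the test data is useful
```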
Types of software testing
o User Interface Testing
o Functional Testing
o Non Functional Testing
o User support Testing
User interface testing
During this testing, test engineers validate the user interface of the application with respect to the following aspects:
o Look & Feel
o Ease of use
o Navigations & shortcut keys
Functional testing
o Object properties coverage
o Input domain Testing
o Database testing/Backend coverage
o Error Handling
o Calculations/Manipulations coverage
o Links Existence & Links Execution
o Cookies & Sessions
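For the input domain and calculations coverage items above, here is a minimal sketch of a boundary-value style check on a hypothetical discount rule; the function and its threshold are assumptions made purely for illustration.

```python
import unittest

def discount(order_total):
    """Hypothetical business rule: 10% off for orders of 100 or more, otherwise none."""
    return order_total * 0.10 if order_total >= 100 else 0.0

class InputDomainTests(unittest.TestCase):
    def test_boundary_values_around_100(self):
        # Classic boundary values: just below, on, and just above the boundary.
        self.assertEqual(discount(99.99), 0.0)
        self.assertAlmostEqual(discount(100.00), 10.0)
        self.assertAlmostEqual(discount(100.01), 10.001)

    def test_zero_order(self):
        self.assertEqual(discount(0), 0.0)

if __name__ == "__main__":
    unittest.main()
```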
Non-Functional testing
o Performance Testing
o Load Testing
o Stress Testing
o Memory Testing
o Security Testing
o Recovery Testing
o Compatibility Testing
o Configuration Testing
o Installation Testing
o Sanitation Testing
User support testing
o During this testing, test engineers validate whether the application provides help for the user or not.
o This help is also called context-sensitive help.
Levels of software testing
o Unit Testing
o Integration Testing
o System Testing
o User Acceptance Testing
Unit testing
o Testing a single program, module, or unit is called unit testing.
o Unit testing is a white box testing technique.
o This testing is conducted by developers.
o Unit testing techniques:
o Basis path testing
o Control structure testing
o Conditional coverage
o Loops Coverage
o Mutation Testing
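A small sketch of what unit testing with conditional and loop coverage in mind can look like, using the standard unittest module; the function under test is a hypothetical example.

```python
import unittest

def classify_marks(marks):
    """Hypothetical function under test: classify a list of exam marks."""
    grades = []
    for m in marks:                    # loop coverage: zero, one, and many iterations
        if m >= 60:                    # conditional coverage: both branches
            grades.append("pass")
        else:
            grades.append("fail")
    return grades

class ClassifyMarksTests(unittest.TestCase):
    def test_empty_list(self):                       # loop executes zero times
        self.assertEqual(classify_marks([]), [])

    def test_single_pass(self):                      # true branch only
        self.assertEqual(classify_marks([75]), ["pass"])

    def test_mixed_marks(self):                      # both branches, many iterations
        self.assertEqual(classify_marks([59, 60, 90]), ["fail", "pass", "pass"])

if __name__ == "__main__":
    unittest.main()
```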
Integration testing
o Combining two or more modules is called coupling.
o Testing conducted on the combined modules is called integration testing.
o It is a white box testing technique.
o This testing is conducted by developers or white box test engineers.
o Integration testing Approaches:
o Top-down approach
o Bottom-up approach
o Hybrid approach
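A hedged sketch of the top-down approach: a higher-level module is integrated and tested while a lower-level module that is not yet ready is replaced by a stub. All module names here are hypothetical.

```python
# Lower-level module (not yet implemented) is replaced by a stub in top-down integration.
class PaymentGatewayStub:
    def charge(self, amount):
        # A stub returns a fixed, predictable answer instead of real behaviour.
        return {"status": "approved", "amount": amount}

# Higher-level module under integration test.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return "confirmed" if result["status"] == "approved" else "rejected"

# Integration test of OrderService with the stubbed lower-level module.
service = OrderService(PaymentGatewayStub())
assert service.place_order(250) == "confirmed"
print("Top-down integration test with stub passed")
```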
System testing
o Testing the overall functionality of the application against the requirements is called system testing.
o It is a black box testing technique.
o This testing is conducted by the testing team.
o Before conducting system testing, we should know the requirements.
o In this phase, the testing team conducts different types of testing covering the following aspects:
o User interface testing
o Functional testing
o Non-functional testing
o User support testing
User acceptance testing
o After completion of system testing, the UAT team conducts acceptance testing at two levels:
o Alpha testing
o Beta testing
o Alpha testing is conducted by customer-like people at the development site.
o Beta testing is conducted by end users at the client site with real data.
Software testing techniques
White Box Testing
In this testing we test the internal logic of the program.
To conduct this testing, we need knowledge of programming.
Ex: Unit Testing
Black Box Testing
Without knowing the internal logic of the program, we test the overall functionality of the application to check whether it works according to the client's requirements or not.
Ex: System Testing
Grey Box Testing
It is a combination of both white box and black box testing.
Ex: Database Testing
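A minimal sketch of the grey box idea behind database testing: the tester drives the application through its public function but verifies the result directly in the database, which requires some knowledge of the schema. The schema and function are assumptions, and sqlite3 from the standard library stands in for a real database.

```python
import sqlite3

# Application-side function (the "black box" part the tester drives).
def register_user(conn, username):
    conn.execute("INSERT INTO users (username) VALUES (?)", (username,))
    conn.commit()

# Grey box check: knowing the schema, verify the row actually landed in the table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT NOT NULL)")

register_user(conn, "alice")

row = conn.execute("SELECT username FROM users WHERE username = ?", ("alice",)).fetchone()
assert row == ("alice",)
print("Grey box database check passed")
```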
Verification And Validation
Verification checks whether we are building the system right (that is, whether the work products conform to the specification).
Verification Techniques:
o Reviews
o Walkthroughs
o Inspections
Validation checks whether we are building the right system (that is, whether the product actually meets the user's needs).
Validation Techniques:
o Black Box testing methodologies
Quality Assurance (QA) Vs Quality Control (QC)
Quality Assurance
Focuses on building in quality
Preventing defects
Process oriented
For entire life cycle
Meant for developing and organizing the best quality process
Makes sure you are doing the right things, the right way
Quality Control
Is the actual testing of the software
Focuses on testing for quality
Detecting defects
Product oriented
Testing part in SDLC
Meant for implementing the process developed by the QA team
Makes sure the results of what you have done are what you expected
Why does software have bugs?
o Miscommunication or no communication
o Software complexity
o Programming errors
o Changing requirements
o Time pressures
o Poorly documented code
Software Quality
Quality software is reasonably:
o Bug-free.
o Delivered on time.
o Within budget.
o Meets requirements and/or expectations.
o Maintainable.
What is software testing?
Software is an application developed for a specific customer or client based on their requirements.
Testing is a part of software development process.
Testing is an activity to detect and identify the defects in the software.
The objective of testing is to release quality product to the client.