In pure analytical requirements-based test strategies, the risk reduction throughout test execution is neither predictable nor measurable. Therefore, with analytical requirements-based test strategies, we cannot easily express the remaining level of risk if project stakeholders ask us whether we can safely curtail or compress testing.
On the contrary, we use requirements specifications, design specifications, marketing claims, technical support or help desk data, and myriad other inputs to inform our risk identification and analysis process if they are available. This also makes an analytical risk-based testing strategy more robust than an analytical requirements-based strategy, because we reduce our dependency on upstream processes we may not control, like requirements gathering and design.
ISTQB Glossary risk identification: The process of identifying risks using techniques such as brainstorming, checklists, and failure history. All that said, an analytical risk-based testing strategy is not perfect. As with any analytical testing strategy, we will not have all of the information we need for a perfect risk assessment at the beginning of the project. Even with periodic reassessment of risk—which I will also discuss later in this section—we will miss some important risks.
Therefore, an analytical risk-based testing strategy, like any analytical testing strategy, should blend in reactive strategies during test implementation and execution so that we can detect risks that we missed during our risk assessment. Let me be more specific and concise about the testing problems we often face and how analytical risk-based testing can help solve them.
First, as testers, we often face significant time pressures. Ultimately, all testing is time-boxed. Risk-based testing provides a way to prioritize and triage tests at any point in the lifecycle. When I say that all testing is time-boxed, I mean that we face a challenge in determining appropriate test coverage. If we measure test coverage as a percentage of what could be tested, any amount of testing yields a coverage metric of 0 percent because the set of tests that could be run is infinite for any real-sized system.
So, risk-based testing provides a means to choose a smart subset from among the infinite number of comparatively small subsets of tests we could run. Further, we often have to deal with poor or missing specifications. By involving stakeholders in the decision about what to test, what not to test, and how much to test it, risk-based testing allows us to identify and fill gaps in documents like requirements specifications that might otherwise result in big holes in our testing.
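To make the idea concrete, here is a minimal sketch of risk-ordered test selection under a time budget. The test names, risk scores, and durations are invented for illustration, and the greedy selection rule is just one plausible way to do this:

```python
# Sketch: choose the highest-risk tests that fit a fixed time budget.
# All test names, risk scores, and durations are hypothetical examples.

def select_tests(candidates, budget_minutes):
    """Greedily pick tests in descending risk order until the budget is spent."""
    selected = []
    remaining = budget_minutes
    # Highest risk first; ties broken in favor of shorter tests.
    for test in sorted(candidates, key=lambda t: (-t["risk"], t["minutes"])):
        if test["minutes"] <= remaining:
            selected.append(test["name"])
            remaining -= test["minutes"]
    return selected

candidates = [
    {"name": "login_security", "risk": 25, "minutes": 30},
    {"name": "report_layout", "risk": 4, "minutes": 20},
    {"name": "payment_flow", "risk": 20, "minutes": 45},
    {"name": "help_text", "risk": 2, "minutes": 10},
]

print(select_tests(candidates, budget_minutes=60))
# → ['login_security', 'report_layout', 'help_text']
```

Note that a pure greedy pass can skip a high-risk test that is too long (payment_flow above) in favor of cheaper, lower-risk ones; in practice we would raise exactly that trade-off with project stakeholders rather than let an algorithm decide silently.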
It also helps sensitize the other stakeholders to the difficult problem of determining what to test, how much to test it, and what not to test. To return to the issue of time pressures, not only are they significant, they tend to escalate during the test execution period. We are often asked to compress the test schedule at the start of or even midway through test execution.
Risk-based testing provides a means to drop tests intelligently while also providing a way to discuss with project stakeholders the risks inherent in doing so. Finally, as we reach the end of our test execution period, we need to be able to help project stakeholders make smart release decisions. Risk-based testing allows us to work with stakeholders to determine an acceptable level of residual risk. Analytical risk-based testing strategies also have a history, and understanding this history can help you understand where we are and where these strategies might evolve.
In the early 1980s, Barry Boehm and Boris Beizer each separately examined the idea of risk as it relates to software development. Boehm advanced the idea of a risk-driven spiral development lifecycle, which we covered in the Foundation syllabus. The idea of this approach is to develop the architecture and design in risk order to reduce the risk of development catastrophes and blind alleys later in the project.
Beizer advanced the idea of risk-driven integration and integration testing. Then, in the mid-1980s, Beizer and Bill Hetzel each separately declared that risk should be a primary driver of testing.
By this, they meant both in terms of effort and in terms of order. However, while giving some general ideas on this, they did not elaborate any specific mechanisms or methodologies for making this happen. At that point, it perhaps seemed that just ensuring awareness of risk among the testers was enough to ensure risk-based testing. So, more structure was needed to ensure a systematic exploration of the risks. This brings us to the 1990s. Separately, Rick Craig, Paul Gerrard, Felix Redmill, and I were all looking for ways to systematize this concept of risk-based testing.
So in parallel and with very little apparent cross-pollination, the four of us—and perhaps others—developed similar approaches for quality risk analysis and risk-based testing. Now, in the mid- to late 2000s, test practitioners widely use analytical risk-based testing strategies in various forms. Some still practice misguided, reactive, tester-focused bug hunts.
Van Veenendaal discusses informal techniques in The Testing Practitioner. We still have much to learn, and by putting the ideas in this section into practice, you can join us in this endeavor. However, that does not mean that analytical risk-based testing strategies are at all experimental. They are well-proven practice. I am unaware of any other test strategies that adapt as well to the myriad realities and constraints of software projects.
They are the best thing going, especially when blended with reactive strategies. Anotherformofblendingthatrequiresattention and work is blending of analytical risk-based testing strategies with all the existing lifecycle models.
My associates have used analytical risk-based testing strategies with sequential lifecycles, iterative lifecycles, and spiral lifecycles. These strategies work regardless of lifecycle.
However, the strategies must be adapted to the lifecycle. Beyond learning more through practice, another important next step is for test management tools to catch up and start to advance the use of analytical risk-based testing strategies.
Some test management tools now incorporate the state of the practice in risk-based testing. Some still do not support risk-based testing directly at all. I encourage those of you who are working on test management tools to build support for this strategy into your tools and look for ways to improve it.
Risk management comprises three primary activities: risk identification, risk analysis, and risk control. They are staged such that risk identification starts first. Risk analysis comes next. Risk control starts once we have determined the level of risk through risk analysis. However, since we should continuously manage risk in a project, risk identification, risk analysis, and risk control are all recurring activities. ISTQB Glossary risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
Everyone has their own perspective on how to manage risks on a project, including what the risks are, the level of risk, and the appropriate controls to put in place for risks.
So risk management should include all project stakeholders. Test analysts bring particular expertise to risk management due to their defect-focused outlook, so they should participate whenever possible. In fact, in many cases, the test manager will lead the quality risk analysis effort with test analysts providing key support in the process. For proper risk-based testing, we need to identify both product and project risks. We can identify both kinds of risks using techniques like these: expert interviews, independent assessments, use of risk templates, project retrospectives, risk workshops and brainstorming, checklists, and calling on past experience. Conceivably, you can use a single integrated process to identify both project and product risks.
I usually separate them into two processes since they have separate deliverables and often separate stakeholders. I include the project risk identification process in the test planning process.
In parallel, the quality risk identification process occurs early in the project. That said, project risks—and not just for testing but also for the project as a whole—are often identified as by-products of quality risk analysis.
In addition, if you use a requirements specification, design specification, use cases, and the like as inputs into your quality risk analysis process, you should expect to find defects in those documents as another set of by-products. These are valuable by-products, which you should plan to capture and escalate to the proper person.
Previously, I encouraged you to include representatives of all possible stakeholder groups in the risk management process. For the risk identification activities, the broadest range of stakeholders will yield the most complete, accurate, and precise risk identification.
The more stakeholder group representatives you omit from the process, the more risk items and even whole risk categories will be missing.
How far should you take this process? Well, it depends on the technique. With informal techniques, which I frequently use, risk identification stops at the risk items. The risk items must be specific enough to allow for analysis and assessment of each one to yield an unambiguous likelihood rating and an unambiguous impact rating. More formal techniques go further, identifying the potential effects of each risk item. These effects include effects on the system—or the system of systems if applicable—as well as on potential users, customers, stakeholders, and even society in general.
Failure Mode and Effect Analysis, commonly used on safety-critical and embedded systems, is one example of such a formal risk management technique; Hazard Analysis is another. The Advanced syllabus refers to the next step in the risk management process as risk analysis. I prefer to call it risk assessment, because to me analysis would seem to include both identification and assessment of risk. Regardless of what we call it, risk analysis or risk assessment involves the study of the identified risks.
We typically want to categorize each risk item appropriately and assign each risk item an appropriate level of risk.
We can use the ISO quality characteristics or other quality categories to organize the risk items. In addition, in complex projects and for large organizations, the category of risk can determine who has to deal with the risk.
A practical implication like this makes getting the categorization right important.

The Level of Risk

The other part of risk assessment or risk analysis is determining the level of risk. This often involves likelihood and impact as the two key factors. Likelihood arises from technical considerations, typically, while impact arises from business considerations.
However, in some formalized approaches you use three factors, such as severity, priority, and likelihood of detection, or even subfactors underlying likelihood and impact. So, how do we combine the factors we consider?
When determining the level of risk, we can try to work quantitatively or qualitatively. In quantitative risk analysis, we have numerical values for both factors: likelihood is a percentage, and impact is often a monetary quantity. If we multiply the two values together, we can calculate the cost of exposure, which is called—in the insurance business—the expected payout or expected loss. While it would be nice someday in the future of software engineering to be able to do this routinely, typically the level of risk is determined qualitatively.
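For illustration, the quantitative calculation can be sketched as follows; the likelihood and impact figures are invented, not drawn from any real project:

```python
# Cost of exposure = likelihood (a probability) x impact (a monetary loss).
# The figures below are invented purely for illustration.

def cost_of_exposure(likelihood, impact_dollars):
    """Expected loss for one risk item, as an insurer would compute it."""
    return likelihood * impact_dollars

# A failure mode with a 10% chance of occurring and a $50,000 impact:
print(cost_of_exposure(0.10, 50_000))  # → 5000.0
```

In practice, few projects have data good enough to estimate these numbers credibly, which is why the level of risk is usually rated qualitatively instead.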
This is not to say—by any means—that a qualitative approach should be seen as inferior or useless. In fact, given the data most of us have to work with, use of a quantitative approach is almost certainly inappropriate on most projects.
The illusory precision thus produced misleads the stakeholders about the extent to which you actually understand and can manage risk. Unless your risk analysis is based on extensive and statistically valid risk data, it will reflect perceived likelihood and impact. In other words, personal perceptions and opinions held by the stakeholders will determine the level of risk.
The key point is that project managers, programmers, users, business analysts, architects, and testers typically have different perceptions and thus possibly different opinions on the level of risk for each risk item. By including all these perceptions, we distill the collective wisdom of the team. However, we do have a strong possibility of disagreements between stakeholders, so the risk analysis process should include some way of reaching consensus. In the worst case, if we cannot obtain consensus, we should be able to escalate the disagreement to some level of management to resolve it.
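One possible way to structure that consensus step is sketched below; the 1-to-5 rating scale, the stakeholder roles, and the escalation threshold are all assumptions for illustration:

```python
# Sketch: combine per-stakeholder likelihood ratings (1 = very unlikely,
# 5 = very likely) and flag items whose ratings diverge too much for
# escalation. The scale, roles, and threshold are illustrative choices.
from statistics import median

def assess(ratings, max_spread=2):
    """Return (consensus_rating, needs_escalation) for one risk item."""
    spread = max(ratings.values()) - min(ratings.values())
    return median(ratings.values()), spread > max_spread

ratings = {"tester": 4, "developer": 2, "business_analyst": 5}
consensus, escalate = assess(ratings)
print(consensus, escalate)  # → 4 True
```

The point is only that the process makes disagreement visible for discussion or escalation rather than averaging it away.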
Otherwise, risk levels will be ambiguous and conflicted and thus not useful as a guide for risk mitigation activities—including testing.

Controlling the Risks

Part of any management role, including test management, is controlling risks that affect your area of interest.
How can we control risks? We have four main options. Mitigation, where we take preventive measures to reduce the likelihood or the impact of a risk. Contingency, where we have a plan or perhaps multiple plans to reduce the impact if the risk becomes an actuality. Transference, where we get another party to accept the consequences of a risk. Finally, we can ignore or accept the risk and its consequences.
For any given risk item, selecting one or more of these options creates its own set of benefits and opportunities as well as costs and, potentially, additional risks associated with each option. Analytical risk-based testing is focused on creating risk mitigation opportunities for the test team, especially for quality risks. Risk-based testing mitigates quality risks through testing throughout the entire lifecycle.
In some cases, there are standards that can apply.

Project Risks

While much of this section deals with product risks, test managers often identify project risks, and sometimes they have to manage them. A complete list of all possible test-related project risks would be huge, but it includes issues like these: test environment and tool readiness; test staff availability and qualification; low quality of test deliverables; too much change in scope or product definition; and sloppy, ad hoc testing effort. Test-related project risks can often be mitigated, or at least one or more contingency plans can be put in place to respond to the unhappy event if it occurs.
A test manager can manage risk to the test effort in a number of ways. We can accelerate the moment of test involvement and ensure early preparation of testware. By doing this, we can make sure we are ready to start testing when the product is ready.
In addition, as mentioned in the Foundation syllabus and elsewhere in this course, early involvement of the test team allows our test analysis, design, and implementation activities to serve as a form of static testing for the project, which can serve to prevent bugs from showing up later during dynamic testing, such as during system test. Detecting an unexpectedly large number of bugs during high-level testing like system test, system integration test, and acceptance test creates a significant risk of project delay, so this bug-preventing activity is a key project risk-reducing benefit of testing.
We can make sure that we check out the test environment before test execution starts. This can be paired with another risk-mitigation activity, that of testing early versions of the product before formal test execution begins.
If we do this in the test environment, we can test the testware, the test environment, the test release and test object installation process, and many other test execution processes in advance, before the first day of testing. We can also define tougher entry criteria to testing. That can be an effective approach if the project manager is willing to slip the end date of testing when the start date slips. We can try to institute requirements for testability. For example, getting the user interface design team to change editable fields into non-editable pull-down fields wherever possible—such as on date and time fields—can reduce the size of the potential user input validation test set dramatically and help automation efforts.
To reduce the likelihood of being caught unaware by really bad test objects, and to help reduce bugs in those test objects, test team members can participate in reviews of earlier project work. We can also have the test team participate in problem and change management. Finally, during the test execution effort—hopefully starting with unit testing and perhaps even before, but if not, at least from day one of formal testing—we can monitor the project progress and quality.
If we see alarming trends developing, we can try to manage them before they turn into end-game disasters. In Figure 1, you see the test-related project risks for an Internet appliance project that serves as a recurring case study in this book. These risks were identified in the test plan and steps were taken throughout the project to manage them through mitigation or respond to them through contingency.
We were worried, given the initial aggressive schedules, that we might not be able to staff the test team on time. Our contingency plan was to reduce the scope of the test effort in reverse-priority order. Our mitigation plan was to ensure a well-defined, crisp release management process. We have sometimes had to deal with test environment system administration support that was either unavailable at key times or simply unable to carry out the tasks required. Our mitigation plan was to identify system administration resources with pager and cell phone availability and appropriate Unix, QNX, and network skills.
As consultants, my associates and I often encounter situations in which test environments are shared with development, which can introduce tremendous delays and unpredictable interruptions into the test execution schedule.
Figure 1: Test-related project risks example

In fact, more often than not, the determining factor in test cycle duration for new applications, as opposed to maintenance releases, is the number of bugs in the product and how long it takes to grind them out.
We asked for complete unit testing and adherence to test entry and exit criteria as mitigation plans for the software component. For the hardware component, we wanted to mitigate this risk through early auditing of vendor test and reliability plans and results.
As a contingency plan to manage this should it occur, we wanted a change management or change control board to be established.

The standard requires risk analysis. It considers two primary factors to determine the level of risk: likelihood and impact. During a project, the standard directs us to reduce the residual level of risk to a tolerable level, specifically through the application of electrical, electronic, or software improvements to the system.
The standard has an inherent philosophy about risk. It says that we have to build quality, especially safety, in from the beginning, not try to add it at the end, and thus must take defect-preventing actions like requirements, design, and code reviews.
The standard also insists that we know what constitutes tolerable and intolerable risks and that we take steps to reduce intolerable risks. When those steps are testing steps, we must document them, including a software safety validation plan, software test specification, software test results, software safety validation, verification report, and software functional safety report.
The standard is concerned with the author-bias problem, which, as you should recall from the Foundation syllabus, is the problem with self-testing, so it calls for tester independence, indeed insisting on it for those performing any safety-related tests. The standard has a concept of a safety integrity level (SIL), which is based on the likelihood of failure for a particular component or subsystem.
The safety integrity level influences a number of risk-related decisions, including the choice of testing and QA techniques. Some of the techniques are ones I discuss in the companion volume on Advanced Test Analyst, such as various functional techniques. Many of the techniques are ones I discuss in the companion volume on Advanced Technical Test Analyst, including probabilistic testing, dynamic analysis, data recording and analysis, performance testing, interface testing, static analysis, and complexity metrics.
Additionally, since thorough coverage, including during regression testing, is important to reduce the likelihood of missed bugs, the standard mandates the use of applicable automated test tools. Again, depending on the safety integrity level, the standard might require various levels of testing. These levels include module testing, integration testing, hardware-software integration testing, safety requirements testing, and system testing. If a level is required, the standard states that it should be documented and independently verified.
In other words, the standard can require auditing or outside reviews of testing activities. The standard requires structural testing as a test design technique.