- ISTQB overview
- ISTQB CTFL knowledge system
- Software testing basics
- Testing in the software life cycle
- Static techniques
- Test design techniques
- Test management
- Software testing tools
- Summary
1. Overview of ISTQB
ISTQB stands for the International Software Testing Qualifications Board. Its training and certification system is divided into three levels:
- Foundation Level (CTFL): candidates with 6 months of software testing or development experience may apply
- Advanced Level (CTAL): requires CTFL plus more than 3 years of testing experience. There are three examination modules: test management, test analysis, and technical test analysis
- Expert Level (CTEL): requires CTAL plus more than 5 years of testing experience. There are four examination modules: test process improvement, test management, test automation, and security testing
Currently, only the first two levels are open for examination.
2. ISTQB CTFL knowledge system
The structure of the ISTQB certification system gives a rough picture of the knowledge domains (in both breadth and depth) involved in software testing. For those of us working in software testing, it is therefore a valuable reference for building our own knowledge system and planning our career development. The syllabus touches on many of the problems anyone doing software testing will run into in actual work, and for many of them it offers practical suggestions. The advantage of such a system is that it provides a systematic, global perspective: by comparing the work you actually do and the knowledge you have mastered against it, you can see what level you are at, where your gaps are, and where to direct your next efforts, which helps us avoid detours.
CTFL consists of six parts:
- Software testing basics
- Testing in the software life cycle
- Static techniques
- Test design techniques
- Test management
- Software testing tools
As for CTAL and CTEL, since I have not taken those exams yet, I plan to add a summary of that material after taking the exam next year.
3. Basics of software testing
1. Error (error, mistake), defect (defect, bug, fault) and failure
- Error: an incorrect result caused by a human, including mistakes in human thinking and behavior. The focus is on the human factor (with an underlying assumption: everyone makes mistakes).
- Defect: the concrete manifestation of an error. Defects are introduced when code or documents designed by humans are produced incorrectly. The focus is on the internal manifestation of the error, i.e., its direct cause; it is also what the developer needs to locate in order to fix a bug.
- Failure: the external manifestation of a defect; an incorrect result produced when a defective piece of code runs (actual behavior is inconsistent with expectations). The focus is on the external manifestation of defects; failures are the bugs the tester's work needs to uncover.
In summary: people make mistakes, which introduce defects into programs. Through test activities, testers find failures and report bugs; developers locate and fix the defects based on those bug reports. The essence of testing is to discover defects.
2. Overall objectives of software testing
Software development has a life cycle, and at different stages of the life cycle, testing will have different goals:
- Early stages: preventing defects (static testing)
- Development stage: finding defects (component testing, integration testing, system testing)
- Acceptance stage: gaining confidence about the level of quality (acceptance testing)
- Operation phase: providing information for decision-making (non-functional testing, maintenance testing)
3. Seven basic principles of software testing
- Testing shows the presence of defects: testing can show that defects are present, but it cannot prove that the system has no defects.
  - This helps managers form the right expectations about testing
- Exhaustive testing is impossible
  - In actual work, we decide the focus of testing through risk analysis and the priorities of the different system functions.
- Early testing
  - The earlier a defect is detected, the lower the cost of fixing it
  - Testing can get involved as early as the requirements stage
- Defect clustering
  - A small number of modules contain most of the defects (the 80/20 rule)
  - Focus testing where defects are found, especially where they cluster, and increase test coverage there
- Pesticide paradox
  - Running the same test cases over and over eventually stops finding new defects
  - Implication: test cases should be reviewed and revised regularly, and new, different test cases should be continuously added
- Testing is context dependent: testing activities depend on the testing context
  - Carry out different testing activities for different contexts
  - Run different types of tests according to the test environment and objectives, such as security testing, performance testing, stress testing, etc.
- Absence-of-errors fallacy
  - If the system is unusable, or does not meet the customers' needs, then finding and fixing defects is meaningless.
  - First make sure the system is usable and meets its users' needs
4. Basic testing process
The basic testing process is mainly composed of the following activities:
- Test planning and control
- Test analysis and design
- Test implementation and execution
- Evaluating exit criteria and reporting
- Test closure activities
4.1 Test planning and control
Test planning: 1) identify the test tasks; 2) define the test objectives; 3) determine the test activities needed to achieve the objectives and tasks. Test planning generally begins at the end of the requirements analysis phase.
Test control: an ongoing activity; report the status of testing by comparing actual progress against the test plan, and take corrective measures or change the original plan as needed.
Output: Test plan, daily test report
4.2 Test analysis and design
A series of activities that translate the general test objectives into concrete test conditions (i.e., test items) and test cases:
- Review the test basis (documents such as the requirements analysis, system architecture, design, and interface descriptions)
- Evaluate the testability of the test basis and the test objects
- Identify test conditions and prioritize them
- Determine the test data
- Identify the infrastructure and tools required for testing (the test environment)
- Create bidirectional traceability between the test basis and the test cases
Output: the test conditions, test data, test environment, and the mapping between test conditions and requirements; at this stage only the general direction is needed, not the concrete implementation. A toy sketch of the traceability mapping follows.
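As a toy illustration of bidirectional traceability (all requirement and test case IDs here are invented), the mapping can be maintained in one direction and derived in the other, which also makes requirement-coverage gaps visible:

```python
# Hypothetical traceability data: many test cases may cover one requirement.
req_to_cases = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],                  # a coverage gap
}

# Derive the reverse direction so both views stay consistent.
case_to_reqs = {}
for req, cases in req_to_cases.items():
    for case in cases:
        case_to_reqs.setdefault(case, []).append(req)

# Requirement-coverage analysis: which requirements have no test case?
uncovered = [req for req, cases in req_to_cases.items() if not cases]
print("requirements without test cases:", uncovered)  # ['REQ-003']
```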
4.3 Test implementation and execution
Mainly includes: designing test procedures and scripts, setting up the test environment, and running the tests (i.e., implementation, preparation, and execution)
Implementation includes:
- Test cases: develop test cases, prioritize them, and identify the test data
- Test procedures: develop test procedures (i.e., ordered sets of test cases) and prioritize them; create the test data. (Optionally, prepare the test harness and develop automated test scripts)
- Test suites: create test suites from the test procedures
Preparations include:
- Test environment: build the test environment (server setup, client environment setup, environments for managing test assets (test cases, bugs, etc.), automated test environment setup)
- Confirm and update the bidirectional traceability between the test basis and the test cases. (Test cases cover test requirements, often many to one; the mapping can be maintained through tools)
Execution includes:
- Execute the test procedures (run the cases, manually or automatically)
- Record the results of test execution (for manual tests run through a tool, the tool records the results; automated test reports record the execution results automatically)
- Compare actual results with expected results
- Where actual and expected results differ, report the difference as an incident and analyze and determine its cause (analyze the problem, report the bug)
- After the defect is fixed, retest and run regression tests
Output: test cases, test environment, test execution records, bug records
4.4 Evaluating exit criteria and reporting
Evaluating exit criteria is the activity of comparing test execution results against the defined test objectives; it needs to be performed at every test level.
The main tasks of evaluating exit criteria:
- Check the test logs against the exit criteria defined in the test plan (whether all test cases have been executed; whether the coverage targets for planned functions, statements, etc. have been met; whether continued testing shows the number of new defects falling below the agreed threshold; whether the exit criteria are satisfied)
- Assess whether more testing is needed, or whether the exit criteria need to be changed (decided based on the defects found)
- Provide a test summary report for stakeholders (summarizing test execution, defects, and test status)
Output: Test report
4.5 Test closure activities
Collect and consolidate useful data from completed test activities (test experience, testware, factors that affected testing, etc.). In essence, this is the summarizing and accumulation of the preceding activities.
The main tasks of test closure activities:
- Check whether all planned deliverables have been produced and delivered
- Check whether all incident reports are closed, and submit change requests for any that remain open
- Document the acceptance of the system
- Document and archive the testware, test environment, and test infrastructure for later reuse
- Hand over the testware to the maintenance organization
- Analyze and record lessons learned
- Use the accumulated information to improve test maturity
In actual work, the test closure activities defined here may be tailored.
5. Psychology of Testing
Independent testing means that development and testing are separated and carried out by different people. Independent testing can be applied at any test level. Levels of independence, from low to high:
- Developers write their own tests
- Tested by other developers in the same group
- Tested by a dedicated testing team (within the same organization)
- Outsourced testing (an external organization)
Development is constructive thinking, while testing is destructive thinking. How can conflict between the two be avoided?
- Testers communicate with developers about the defects or failures they find on a constructive basis
- Establish a common goal (pursuing high-quality products) and run the project in a collaborative rather than combative way
- Don't blame the group or individual who introduced the problem
- Think from the other person's perspective and try to understand how other members feel and why they react the way they do
- Make sure other members have understood your descriptions, and that you have understood theirs correctly
4. Testing in the software life cycle
1. Software development model
A software development model defines the methods and process that software development follows. Software testing is not an isolated activity; it exists within the software development life cycle. It is therefore necessary to understand the development models and to choose different testing approaches for different models.
V model (sequential development model)
- Includes: user requirements, requirements analysis and system design, high-level design, detailed design, coding, module testing, integration testing, system testing, and acceptance testing
- This development model corresponds to four test levels: component/unit testing, integration testing, system testing, acceptance testing
Iterative-incremental development model
- Consists of a series of relatively short development cycles, each covering requirements definition, design, construction, and testing
- Common models: prototyping, Rapid Application Development (RAD), the Rational Unified Process (RUP), and agile development models (agile being the most commonly used)
Testing in the life cycle model
- Each development activity (not only the development of code, but also the requirements document, system design, etc.) has corresponding testing activities.
- Each test level has unique test objectives
- For each test level, corresponding test analysis and design must be carried out
- Testing should participate in document reviews (as early as the first-draft stage of a document)
2. Test levels
What needs to be clarified for each test level: test objectives, test basis, test objects, typical defects and failures, test harness requirements, tool support, specialized methods, and responsibilities
1) Component test/unit test
- Test objectives: Check whether the code complies with the design and specifications and find defects
- Test basis: component requirements description, detailed design documents, code
- Test objects: code (components, programs, data conversion/migration programs, database models)
- Typical defects and failures: TBD
- Test harness requirements: stub modules, driver modules, and simulators are used (see the sketch after this list)
- Testing tools: unit testing frameworks, such as JUnit
- Specialized methods: TDD (test-driven development: write the test cases first), test coverage
- Responsibilities: performed by the developers
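A minimal sketch of a component test with a stub and a driver, written in Python for brevity (the syllabus is language-agnostic; JUnit plays the same role in Java). The function `gross_price` and its collaborator are invented for illustration:

```python
import unittest
from unittest.mock import Mock

def gross_price(net: float, get_rate) -> float:
    """Component under test: applies a tax rate obtained from a collaborator."""
    return round(net * (1 + get_rate()), 2)

class GrossPriceTest(unittest.TestCase):  # the test class acts as the driver
    def test_applies_stubbed_rate(self):
        stub = Mock(return_value=0.19)    # stub module: canned answer, no real service
        self.assertEqual(gross_price(100.0, stub), 119.0)

if __name__ == "__main__":
    unittest.main()
```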
2) Integration Test
- Test objectives: check the interfaces between components and the interactions between different parts of a system, and find defects
- Test basis: software and system design documents, system architecture, workflow, use cases
- Test objects: subsystem, database implementation, infrastructure, interface, system configuration and configuration data
- Typical defects and failures: TBD
- Levels of integration: there are multiple levels: component integration (testing the interactions between components) and system integration (testing the interactions between different subsystems, or between software and hardware)
- Note: the larger the scale of the integration, the harder it is to locate defects
- Specialized methods: integrate incrementally (top-down, bottom-up), avoiding big-bang integration
- Responsibilities: performed by testers
3) System testing
- Test objectives: operate the complete system in its actual operating environment, verify whether all components of the system work correctly and meet the software requirements specification, and find defects
- Test basis: system and software requirements specifications, use cases, functional specifications, risk analysis reports
- Test objects: system, user manual and operating manual, system configuration and configuration data
- Typical defects and failures: TBD
- Tool requirements: automated testing support
- Testing tools: tools for functional testing, non-functional testing (stress, performance, volume), user interface testing, compatibility testing, internationalization and localization testing, etc.
- Specialized methods: requirements-based testing, business process-based testing, use case-based testing, risk assessment-based testing
- Responsibilities: Completed by an independent testing team
4) Acceptance test
- Test objectives: build confidence in the system
- Test basis: user requirements, system requirements, use cases, business processes, risk analysis reports
- Test objects: business processes, operation and maintenance processes, user procedures, forms, reports, and configuration data, on top of the fully integrated system
- Typical defects and failures: TBD
- Note: acceptance testing can be carried out at multiple test levels (component testing, integration testing, system testing)
- Testing tools: depend on the requirements and the actual situation
- Specialized methods: user acceptance testing, operational (acceptance) testing, contract and regulation acceptance testing, alpha and beta (field) testing
- Responsibilities: performed by users or customers; other stakeholders may participate
3. Test types
1) Functional testing
- Functional testing is based on the system's functions and features and their interoperability with specific systems, and can be performed at every test level
- Functional testing mainly considers the externally visible behavior of the software, without considering the program's internal execution paths
- Black box testing techniques are usually used (black box testing is also known as functional testing or data-driven testing)
- Security testing is a type of functional testing, focusing on security-related functions (such as firewalls)
- Interoperability testing is also a kind of functional testing
At present, functional testing is a basic part of testing at virtually every company
2) Non-functional testing
- Including but not limited to: performance testing, load testing, stress testing, availability testing, maintainability testing, reliability testing, portability testing
- Non-functional testing tests how well the system works, and can be performed at any test level
- Non-functional testing focuses on the external behavior of the software, and black box test design techniques are usually used to derive the test cases
The most common non-functional test in companies is performance testing
3) Software structure/architecture testing (structural testing)
- Structural testing is also called white box testing and logic-driven testing.
- Structural testing can be performed at any test level (not limited to component testing/unit testing, but can also be used for system testing, integration testing, acceptance testing)
- Coverage: the degree to which a structure has been exercised by a test suite, expressed as the percentage of items covered. If coverage is below 100%, more test cases may be needed to exercise the missed items and increase test coverage.
- The main white box methods: statement coverage, decision coverage, condition coverage, decision/condition coverage, multiple-condition (condition combination) coverage, path coverage, and basis path testing
4) Change-related testing (retesting and regression testing)
- Retesting (confirmation testing): after a defect is discovered and fixed, the software is retested to confirm that the original defect has been successfully removed
- Regression testing: repeated testing of an already tested program after defect fixes or other changes, to discover whether the changes have introduced new defects or uncovered previously masked ones
- The extent of regression testing can be decided based on the risk that the changes have introduced defects into previously working software
- Regression testing can be performed at all test levels and applies to functional, non-functional, and structural testing
4. Maintenance testing
- After release, a system typically runs for years or even decades. Testing performed during the maintenance stage of the software life cycle, triggered by modifications, extensions, migration, or the retirement of parts of the software due to business changes, is called maintenance testing.
- Maintenance testing is performed on a running system, and is required whenever the software or system is modified, migrated, or retired.
5. Static techniques
1. Static techniques and the test process
- Static testing techniques: examine code or other project documents without executing the code, either through manual examination (reviews) or automated analysis (static analysis)
- Review: a way of testing software work products (including code), performed before dynamic testing
- Detecting and fixing defects early in the life cycle costs less than detecting and fixing them during dynamic testing
- Reviews can be conducted entirely manually or with tool support; a manual review means examining and evaluating the work product
- The purpose of a review is to find defects and possible improvements in the work product
- Comparison with dynamic testing
- Reviews, static analysis, and dynamic testing share a common goal: identifying defects. They are complementary, and different techniques find different types of defects.
- Static techniques find the causes of failures (the defects), while dynamic testing finds the failures themselves
- Typical defects found in reviews: deviations from standards, requirement defects, design defects, insufficient maintainability, and incorrect interface specifications
- Classification of static testing techniques: reviews (informal reviews; formal reviews (walkthroughs, technical reviews, inspections)), and static analysis (lexical and syntactic analysis, static error analysis)
2. Review process
- Types of reviews:
- Informal review: No need to follow a well-defined process
- Formal review: follows a well-defined process; participants have clearly defined responsibilities and checklists, and there are well-defined criteria for entering and completing the review (structured and standardized)
- Formal reviews include: walkthrough (led by the author), technical review (led by a dedicated moderator, without managers), inspection (management may participate; metrics are introduced)
- Benefits of reviews: finding defects, improving understanding, training testers and new team members, and reaching consensus through discussion and decision-making
- Phases of a formal review: a typical formal review consists of six phases.
- Planning: define the review criteria, select personnel, assign roles, define the entry and exit criteria, select which parts of the documents to review, and check the entry criteria
- Kick-off: distribute the documents, and explain the objectives, process, and documents to the participants
- Individual preparation: study the documents before the review meeting; note potential defects, questions, and comments
- Examination/evaluation/recording of results (the review meeting): discuss and log results; note defects, make recommendations for handling them, and make decisions about them
- Rework: fix the defects found and record the updated status of each defect
- Follow-up: check that the defects have been addressed, gather metrics, and check the exit criteria
- Roles and responsibilities: manager (decides), moderator (coordinates), author (owns the work product), reviewers (find issues and make suggestions), scribe (records the meeting minutes)
- Factors for successful reviews: clear objectives, appropriate reviewers, an atmosphere where finding defects is welcomed, no evaluation of participants, appropriate checklists, training in review techniques, management support, and an emphasis on learning and process improvement
3. Tool support for static analysis
- The purpose of static analysis: to discover defects in software source code and software models
- Static analysis usually finds defects rather than failures
- Static analysis tools can analyze program code (e.g., control flow and data flow) and produce output such as HTML and XML
- Typical defects found by static analysis tools:
- Referencing a variable with an undefined value
- Inconsistent interfaces between modules and components
- Variables that are never used
- Unreachable code
- Logical omissions and errors (e.g., potential infinite loops)
- Overly complex constructs
- Violations of programming standards
- Security vulnerabilities
- Syntax errors in code and software models
- Strategy for static analysis: developers use static analysis tools before or during component testing and integration testing, or when code is checked into the configuration management tool; designers use static analysis tools during software modeling. A toy example of what such tools catch follows.
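For illustration, here is an invented snippet containing two of the defect types listed above; a static analysis tool (linters such as pylint) reports both without ever running the code:

```python
def discount(price: float) -> float:
    rate = 0.1                   # defect: variable assigned but never used
    if price < 0:
        return 0.0
        print("negative price")  # defect: unreachable code after return
    return price * 0.9
```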
6. Test design techniques
1. Test development process
1) Test development process
- This was touched on in the earlier description of the test process; here we focus on the test development process, which is generally divided into an analysis phase, a design phase, and an implementation phase.
- Analysis phase: determine what to test (i.e., identify the test conditions); define each test condition as an item or event that can be verified by test cases; establish traceability from the test conditions back to the requirements
- Design phase: define and record the test cases and test data; complete the test design specification and test case specification.
- A test case includes: a set of input values, execution preconditions, expected results, execution postconditions, outputs, data and state changes, etc.
- Implementation phase: develop, implement, prioritize, and organize the test cases in the test procedure specification. The test procedure specifies the order in which the test cases are executed.
- Summary: the test development process is the process of determining what to test, designing how to test it, and deciding how the tests will be executed
2) Test design specification, test case specification, and test procedure specification: these document, respectively, the test conditions (test items), the test cases, and the prioritized, ordered sets of test cases
3) How to evaluate the quality of test cases: establish traceability between test conditions and requirements (to support impact analysis of requirement changes and requirement-coverage analysis of the test case set), and specify expected results
2. Types of test design techniques
The purpose of using test design techniques is to identify test conditions and develop test cases (i.e., to determine what to test and how to test it). They are classified as follows:
- Specification-based (black box) techniques: select test conditions, test cases, and test data based on documents; they cover both functional and non-functional testing
- Structure-based (white box) techniques: design tests by analyzing the structure of the component or system under test
- Experience-based techniques: design tests based on the testers' experience with similar applications or technologies, and on their knowledge and intuition
3. Specification-based or black box testing techniques
Equivalence partitioning: based on the assumption that a software system should behave in the same way for every input in the same group, the inputs are divided into different groups (partitions), which are further divided into valid and invalid equivalence classes
- Partitions can be based on inputs, outputs, time-related values (such as before or after an event), interface parameters, etc.
- Steps for determining the equivalence classes:
- Classify: decompose the inputs according to shared characteristics or similar functionality
- Abstract: abstract the common characteristic of each subclass and represent it with a concrete example
- Determine the valid equivalence classes (testing that the program implements the specified functions) and the invalid equivalence classes (testing the program's fault tolerance and exception handling)
- Design test cases based on the equivalence classes, as sketched below
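A minimal sketch with an invented rule: suppose a registration form accepts integer ages from 18 to 60. That gives one valid class (18..60) and several invalid classes (below 18, above 60, non-numeric), and one representative test per class:

```python
import unittest

def can_register(age) -> bool:
    """Hypothetical function under test: accepts integer ages 18..60."""
    return isinstance(age, int) and 18 <= age <= 60

class EquivalencePartitionTest(unittest.TestCase):
    def test_valid_class_18_to_60(self):
        self.assertTrue(can_register(35))      # one representative of the valid class
    def test_invalid_class_below_18(self):
        self.assertFalse(can_register(10))
    def test_invalid_class_above_60(self):
        self.assertFalse(can_register(75))
    def test_invalid_class_non_numeric(self):
        self.assertFalse(can_register("abc"))  # exercises fault tolerance

if __name__ == "__main__":
    unittest.main()
```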
Boundary value analysis: an effective complement to equivalence partitioning. Incorrect behavior is more likely to occur at the boundaries of the equivalence classes, so the boundaries are where testing is most likely to find defects.
- The maximum and minimum values of each partition are its boundaries. The boundaries of a valid partition are valid boundary values; the boundaries of an invalid partition are invalid boundary values.
- Boundary value analysis can be applied at all test levels
- Selecting boundary values: if an input condition specifies a range of values, use the values right at the boundaries of the range, and the values just beyond them, as test inputs; see the sketch below
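Continuing the invented 18-to-60 age rule from the previous sketch, the boundary values are 18 and 60 (valid) and 17 and 61 (invalid):

```python
import unittest

def can_register(age) -> bool:
    """Same hypothetical 18..60 rule as in the previous sketch."""
    return isinstance(age, int) and 18 <= age <= 60

class BoundaryValueTest(unittest.TestCase):
    def test_boundary_values(self):
        self.assertTrue(can_register(18))   # valid boundary: minimum
        self.assertTrue(can_register(60))   # valid boundary: maximum
        self.assertFalse(can_register(17))  # invalid boundary: just below the range
        self.assertFalse(can_register(61))  # invalid boundary: just above the range

if __name__ == "__main__":
    unittest.main()
```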
Decision table testing: a decision table (also called a judgment table) is used to represent and analyze complex logical relationships; it is suited to describing the actions taken under different combinations of conditions.
- When building a decision table, analyze the specification and identify the system's conditions and actions.
- A decision table usually consists of four parts: condition stubs (listing all conditions of the problem; the order of the conditions is not significant), action stubs (listing the possible actions), condition entries (listing the values of the conditions: true or false), and action entries (listing the actions to take under each combination of condition values)
- Any combination of condition values together with its corresponding actions constitutes a rule, and each rule can be designed as a test case.
- When building the table, rules with identical action entries can be merged to reduce redundancy; see the sketch below
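A toy example with an invented shipping rule (two conditions, one action): each row of the table below is one rule and therefore one test case:

```python
import unittest

def shipping_fee(is_member: bool, order_total: float) -> float:
    """Hypothetical rule: members ship free; non-members ship free over 100."""
    if is_member or order_total > 100:
        return 0.0
    return 9.9

# Decision table: one row per rule -- (condition: is_member, condition:
# order total, expected action: fee). Rules 1 and 2 share the same action
# and could be merged to reduce redundancy.
RULES = [
    (True,  50.0,  0.0),   # rule 1: member, small order
    (True,  150.0, 0.0),   # rule 2: member, large order
    (False, 50.0,  9.9),   # rule 3: non-member, small order
    (False, 150.0, 0.0),   # rule 4: non-member, large order
]

class DecisionTableTest(unittest.TestCase):
    def test_every_rule(self):
        for is_member, total, expected in RULES:
            self.assertEqual(shipping_fee(is_member, total), expected)

if __name__ == "__main__":
    unittest.main()
```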
State transition testing: the operation of software is viewed as a process of continuous change between states. Tests can be based on the software's states, the transitions between states, the inputs or events that trigger state changes, and the actions the transitions may cause.
- The states of the system or object under test are distinct, identifiable, and finite in number
- A state table depicts the relationships between states and inputs, and can expose possible invalid state transitions
- Designed tests can cover a typical sequence of states, cover every state, exercise every transition, exercise specific sequences of transitions, or exercise invalid transitions; see the sketch below
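A minimal sketch with an invented order life cycle. The transition table doubles as the state table: any combination absent from it is an invalid transition:

```python
import unittest

# Hypothetical order life cycle: {current_state: {event: next_state}}.
TRANSITIONS = {
    "created": {"pay": "paid", "cancel": "cancelled"},
    "paid":    {"ship": "shipped"},
}

def fire(state: str, event: str) -> str:
    """Return the next state, or raise on an invalid transition."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"invalid transition: {state} -> {event}")

class StateTransitionTest(unittest.TestCase):
    def test_typical_state_sequence(self):  # covers a typical sequence of states
        self.assertEqual(fire("created", "pay"), "paid")
        self.assertEqual(fire("paid", "ship"), "shipped")

    def test_invalid_transition(self):      # exercises an invalid transition
        with self.assertRaises(ValueError):
            fire("created", "ship")         # cannot ship an unpaid order

if __name__ == "__main__":
    unittest.main()
```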
Use case testing: a use case describes the interactions between actors and the system; the process flows are described in terms of the scenarios in which the system is most likely to be used, and test cases can be derived from the use cases.
- A use case has preconditions (required for the use case to execute successfully) and postconditions (the observable results and the end state of the system after the use case has executed)
- Use case testing helps discover integration defects caused by the interaction between different components, and can be used for acceptance testing
- A use case usually has a mainline scenario (the most likely flow) plus optional branches
4. Structure-based or white box techniques
- Levels of structural testing: structure-based/white box testing can be applied at all test levels, not only at the component level (what we usually call white box testing typically refers to component-level unit testing)
- Component level: the structure is that of the software component, e.g., statements, decisions, branches, or individual paths
- Integration level: the structure may be a call tree (a diagram of module-call relationships)
- System level: the structure may be the menu structure, a business process, or the web page structure
Statement coverage: design test cases so that every executable statement in the program under test is executed at least once
- Statement coverage rate: the number of executable statements covered by the (designed or executed) test cases, divided by the total number of executable statements in the code under test
- Statement coverage cannot detect problems in the logic of decisions; it is the weakest form of logic coverage, as the sketch below shows
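A minimal invented example of both points. With the three executable statements below, the single input -5 executes all of them, so statement coverage is 3/3 = 100%, yet the False outcome of the decision is never exercised:

```python
def absolute(x: int) -> int:
    if x < 0:     # statement 1 (and the program's only decision)
        x = -x    # statement 2
    return x      # statement 3

# One test case executes all three statements: 3/3 = 100% statement coverage.
assert absolute(-5) == 5
# But the decision's False outcome (x >= 0) was never taken, so a fault
# hiding on that path would go unnoticed -- the weakness of statement coverage.
```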
Decision coverage: design test cases so that every decision in the program takes its true branch and its false branch at least once
- Decision coverage rate: the number of decision outcomes covered by the (designed or executed) test cases, divided by the total number of possible decision outcomes in the code under test
- Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not the other way around; see the sketch below
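Continuing the `absolute` sketch above, one more test case covers the remaining decision outcome:

```python
def absolute(x: int) -> int:   # same sketch function as above
    if x < 0:
        x = -x
    return x

assert absolute(-5) == 5  # decision `x < 0` evaluates to True
assert absolute(5) == 5   # decision `x < 0` evaluates to False
# Both outcomes of the only decision are covered: 2/2 = 100% decision
# coverage, which here also implies 100% statement coverage. The previous
# sketch (one test, 100% statement coverage, 50% decision coverage) shows
# that the reverse implication does not hold.
```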
Condition coverage: within each decision, ensure that every atomic condition takes the value true and the value false at least once.
- Condition coverage is an effective complement to decision coverage, and can cover the condition outcomes that decision coverage misses.
- White box testing should pursue both decision coverage and condition coverage; see the sketch below.
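A small invented example of why the two criteria complement each other: a compound decision can reach 100% decision coverage while one of its atomic conditions never varies:

```python
def access_allowed(is_admin: bool, has_token: bool) -> bool:
    """Hypothetical check: one decision made of two atomic conditions."""
    return is_admin or has_token

# These two tests achieve 100% decision coverage (outcomes True and False)...
assert access_allowed(True, False) is True
assert access_allowed(False, False) is False
# ...but the atomic condition `has_token` was never True, so condition
# coverage is incomplete. One more case makes every condition take both values:
assert access_allowed(False, True) is True
```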
5. Experience-based techniques
- Concept: testing based on the testers' experience, knowledge, and intuition about similar applications or technologies. It is an effective complement to systematic test case design, and its effect depends on the testers' experience.
Error guessing: anticipate defects, list the possible errors, and design tests that attack them; this is also called defect attacking
- Attacks can be designed based on experience, existing defect and failure data, common knowledge about how software fails, etc.
Exploratory testing: test design and test execution are carried out at the same time, guided by the test objectives contained in the test plan. It is an informal test approach in which the testers actively control the test design, using the information gained while testing to design new, better, and more complete tests. (Exploratory testing has recently been a much-discussed topic online.)
- Often used when useful test documentation is lacking and test time is severely limited
- Can supplement or complement other, more formal testing
- Can serve as a check on the test process and helps detect serious defects
6. Selecting test techniques
- When actually designing test cases, the techniques above need to be combined according to the circumstances, to ensure adequate coverage of the test object.
- Considerations given in the ISTQB syllabus include: the type of system, legal and regulatory standards, customer or contractual requirements, the level and types of risk, the test objectives, available documentation, the testers' skill level, the time and cost budget, the development life cycle, use case models, and previous experience with the types of defects found, etc.
7. Test management
1. Test organization
- Two roles are involved: the test leader and the tester
- Test leader: mainly responsible for test planning, monitoring and control, and coordination
- Possible tasks: test strategy, test policy, test plans, test monitoring and control, test progress, selection of automation, test reports, and coordinating all kinds of issues
- Tester: mainly responsible for test analysis, design, and execution
- Possible tasks: test specifications, test environment, test data, test execution and logging, and implementing automation
2. Test planning and estimation
- Test planning
- Test planning is affected by the following factors: the organization's test policy, the scope of testing, the objectives, risks, constraints, criticality, testability, resource availability, etc.
- Test planning is a continuous activity: feedback from test activities must be gathered throughout the life cycle to recognize risks, so that the plan can be adjusted accordingly.
- Test planning activities include:
- Determining the scope and risks of testing, and clarifying the test objectives
- Determining the overall test approach, including the definition of the test levels and the entry and exit criteria
- Deciding what to test, who will test it, how to test it, and how the test results will be evaluated
- Scheduling the test analysis, design, implementation, execution, and evaluation activities
- Assigning resources to the defined test activities
- Defining the number, level of detail, structure, and templates of the test documentation (determining the outputs)
- Selecting metrics for monitoring test preparation and execution, defect resolution, and risk issues
- Entry criteria: define when testing can start; they mainly include:
- The test environment is ready and available
- The test tools are ready
- The test objects are available
- The test data are available
- Exit criteria: define when testing can stop; they mainly include:
- Thoroughness measures, such as coverage of code, functionality, or risk
- Estimates of defect density or reliability measures
- Cost
- Residual risks, such as defects that have not been fixed or a lack of test coverage in certain areas
- Schedule, such as the time to market
Note: in actual work, the exit criteria we use are:
- Test execution covers 100% of the functions (full coverage is designed up front when the test cases are written, with some tailoring per round: for example, the first round covers 100%, while regression rounds only cover the modules with new functions and bug fixes)
- Zero bugs of critical severity and above (we divide bug severity into 5 levels: 5 critical (system crash), 4 very serious (a functional module is unusable), 3 serious (a function is only partly usable), 2 moderate (does not prevent use but hurts the user experience), 1 minor (everything else))
- All remaining bugs have been submitted as change requests
- Test estimation: two estimation approaches are given: the metrics-based approach (based on similar projects and typical data) and the expert-based approach (relying on experts)
- The actual estimation process: decompose the testing work into a WBS, break it down into defined tasks, determine the workload of each task from previous projects' testing experience and historical data, and estimate the cost from the workload combined with the organization's productivity; a toy calculation follows
- Once the testing effort has been estimated, resources can be identified and a schedule drawn up
- Typical components of the testing effort: test case design, test environment setup, test case execution, and test defect reporting
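A toy metrics-based calculation (every number here is invented) showing how historical productivity data turns task sizes into effort and cost:

```python
# Hypothetical historical productivity figures from similar projects.
cases_to_design = 200
design_per_person_day = 20   # cases designed per person-day
exec_per_person_day = 50     # cases executed per person-day
cost_per_person_day = 400

effort = (cases_to_design / design_per_person_day
          + cases_to_design / exec_per_person_day)   # 10 + 4 person-days
print(f"effort: {effort} person-days, cost: {effort * cost_per_person_day}")
# -> effort: 14.0 person-days, cost: 5600.0
```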
- Test strategies and test approaches
- A test strategy describes the overall approach to and objectives of testing the software, covering the test environment, stages, types, methods, and techniques.
- The purpose of formulating a test strategy: identify a reasonable test plan, identify the main tasks and risks of testing, and make the testing more effective
- Test approaches are the concrete implementation of the test strategy; they are defined and refined during test planning and test design, and usually depend on the test objectives and the risk assessment.
- Typical test approaches: analytical approaches, model-based approaches, methodical approaches, process-compliant or standard-compliant approaches, dynamic and heuristic approaches, consultative approaches, and regression-averse (reuse-oriented) approaches.
3. Test process monitoring
- The purpose of test monitoring: provide feedback and visibility about test activities. Test monitoring covers supervision (collecting data) and control (correcting deviations)
- Objects to monitor: test coverage (requirements, risk, code), progress (test case execution), defects (counts, trends)
- Form of monitoring: test reports
4. Configuration management
- Purpose of configuration management: establish and maintain the integrity of the software or system products (components, data, and documents) throughout the project and product life cycle.
- Effect on testing: ensures that the testware is identified and version-controlled, and that changes are tracked and traceable.
5. Risk and testing
- Risk: the possibility of an event, hazard, threat, or situation occurring and producing undesirable consequences; in other words, a potential problem (one that has not actually happened)
- Risk level: determined by the likelihood of the uncertain event occurring and its impact (the adverse consequences it would cause)
- Project risk: risks to the project's ability to deliver according to its objectives
- Considered in terms of organizational factors, technical factors, and supplier factors
- Product risk: the parts of the software or system that might fail
- The relationship between risk and testing: risk can often be used to decide where to start testing and where more testing is needed; testing can reduce risk, or reduce the impact of an adverse event. The toy prioritization below illustrates the idea.
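A toy illustration (risks and scales invented) of using risk level = likelihood × impact to decide where to test first and most:

```python
# (product risk, likelihood 1-5, impact 1-5)
risks = [
    ("payment fails under load", 4, 5),
    ("report layout is wrong",   3, 2),
    ("typo in help text",        3, 1),
]

# Rank by risk level; test the highest-level risks first and hardest.
for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"risk level {likelihood * impact:2d}: {name}")
```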
6. Incident management
- Incident: any difference between actual and expected results needs to be logged as an incident. (This is what we usually call a bug or defect.)
- Incidents and defects: an incident may or may not be caused by a defect; a defect is a confirmed problem that needs to be fixed. (In our actual work, a bug that is accepted counts as a defect.)
- Incidents can occur anywhere (requirements documents, development documents, test documents, user documents, code, etc.) and are not limited to inconsistencies found during test execution.
- Purpose of incident reports: provide feedback about problems, track system quality and test progress, and provide input for test process improvement
- The contents of an incident report include: expected and actual results, identification of the test item and environment, the life cycle phase of the software, a description of the incident (logs, database dumps, or screenshots), the scope of impact, severity, urgency/priority of the fix, the incident status, conclusions, global impact, change history, and references
8. Software testing tools
1. Types of test tools
- The significance of test tools: they support test activities
- Different test activities call for different tools. Broadly, these include tools used directly in testing, tools that manage the testing process, tools for exploration and observation, and any other tool that assists testing.
- Test framework: the term is widely used in the industry and carries at least three meanings:
- A reusable and extensible testing library that can be used to build testing tools (then also called a test harness)
- A type of design for test automation (e.g., data-driven, keyword-driven)
- The overall process of test execution
- Classification of test tools: they can be divided into functional testing tools (white box and black box testing tools), performance testing tools, and test management tools (test process management, defect tracking, test case management)
- Test management tools: test management, requirements management, incident management (defect tracking), and configuration management tools. In practice: TD, JIRA, RTC, etc.
- Static testing tools: review tools, static analysis tools, modeling tools
- Tools for test specification: test design tools, test data preparation tools. QTP, WinRunner, Robot, RFT, Selenium
- Test execution and logging tools: test execution tools, test harness/unit test framework tools, test comparators, coverage measurement tools, security testing tools
- Performance and monitoring tools: dynamic analysis tools, performance/load/stress testing tools, monitoring tools. LoadRunner, JMeter, Nmon
2. Effective use of tools: potential benefits and risks
- Benefits:
- Reduced repetitive work (such as running regression tests, re-entering the same test data, and checking coding standards)
- Better consistency and repeatability
- Objective assessment (e.g., static measures, coverage)
- Easier access to information about tests and testing (e.g., statistics and charts about test progress, incident rates, and performance)
- Risks:
- Unrealistic expectations of the tool (both its functionality and its ease of use)
- Underestimating the time, cost, and effort needed for the initial introduction (including training and external expertise)
- Over-reliance on the tool (using automation for tasks better suited to manual testing)
- Neglecting version control of the test objects within the tool
3. Introducing tools into the organization
- Key points when selecting a tool for an organization:
- The organization's maturity, the pros and cons of introducing a tool, and the potential for the tool to improve the test process
- Evaluation against clear requirements and objective criteria
- Whether the infrastructure needs to change, and how
- Evaluating the vendor
- Collecting internal requirements
- The test team's automation skills
- The cost-benefit ratio
- Introducing the selected tool into the organization should start with a pilot project (to build awareness, collect feedback for improvement, define usage conventions, and assess costs versus benefits)
- Factors for successfully deploying a tool within the organization: incremental rollout, adapting and improving the process to fit the tool, providing training, defining usage guidelines, monitoring the tool's use and benefits, providing support, and gathering lessons learned
9. Summary
The ISTQB FL knowledge system essentially covers every aspect of our testing work: software testing basics, testing in the software life cycle, static techniques, test design techniques, test management, and testing tools. This gives us the breadth of testing. As for the depth of testing, I believe that as work experience and knowledge accumulate, ISTQB's AL and EL will continue to deepen it. The process of writing this summary has deepened my understanding of the testing knowledge system; at the same time, I feel that many points are still not thorough enough, and I need to keep experiencing and summarizing them in actual work. I will strive to pass the AL next year.