Manual testing

From Wikipedia, the free encyclopedia

Compare with Test automation.

Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user, whereby they use most of the application's features to ensure correct behaviour. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.

Overview

A key step in the process is testing the software for correct behavior prior to release to end users.

For small scale engineering efforts (including prototypes), ad hoc testing may be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure and simply performs testing without planning or documentation. Conversely, exploratory testing, which involves simultaneous learning, test design and test execution, explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is the intuitive insight it gives into how it feels to use the application.

Large scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps.[1]

  1. Choose a high-level test plan, in which a general methodology is selected and resources such as people, computers, and software licenses are identified and acquired.
  2. Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes.
  3. Assign the test cases to testers, who manually follow the steps and record the results.
  4. Author a test report, detailing the findings of the testers. The report is used by managers to determine whether the software can be released, and if not, it is used by engineers to identify and correct the problems.
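
The workflow above can be sketched as simple data structures in Python. This is a hypothetical illustration of the described steps, not a standard format; the `TestCase` and `TestResult` types and the `summarize` report function are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    steps: list       # clear, concise steps to be taken by the tester
    expected: str     # expected outcome

@dataclass
class TestResult:
    case_id: str
    passed: bool
    notes: str = ""

def summarize(results):
    """Author a simple test report: pass/fail counts for managers."""
    passed = sum(1 for r in results if r.passed)
    return {"total": len(results), "passed": passed, "failed": len(results) - passed}

# A tester manually follows the steps of two assigned cases and records the results.
cases = [
    TestCase("TC-1", ["Open login page", "Enter valid credentials"], "Dashboard shown"),
    TestCase("TC-2", ["Enter an incorrect password"], "Error message shown"),
]
results = [TestResult("TC-1", True), TestResult("TC-2", False, "No error shown")]
report = summarize(results)
```

A manager reading `report` can see at a glance how many cases failed; the `notes` field carries the detail engineers need to reproduce each problem.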

A rigorous test case based approach is often traditional for large software engineering projects that follow a Waterfall model.[2] However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test case based testing.[3]

Testing can be through black-, white- or grey-box testing. In white-box testing the tester is concerned with the execution of the statements through the source code. In black-box testing the software is run to check for defects, with less concern for how the input is processed internally. Black-box testers do not have access to the source code. Grey-box testing is concerned with running the software while having an understanding of the source code and algorithms.[4]
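
The difference in how test cases are chosen can be sketched in Python. The `discount` function is a hypothetical example: black-box cases are derived from the specification alone, while white-box cases are derived from the branches visible in the source.

```python
def discount(price, is_member):
    # Function under test: members get 10% off orders over 100.
    if is_member and price > 100:
        return price * 0.9
    return price

# Black-box: chosen from the specification, without reading the code.
black_box_cases = [((50, False), 50), ((200, True), 180.0)]

# White-box: chosen from the source, aiming to execute every branch.
white_box_cases = [
    ((200, True), 180.0),   # discount branch taken
    ((200, False), 200),    # is_member is false
    ((100, True), 100),     # boundary: price not greater than 100
]

for args, expected in black_box_cases + white_box_cases:
    assert abs(discount(*args) - expected) < 1e-9
```

Note how the white-box list includes the boundary case `price == 100`, which a tester sees only by reading the `price > 100` condition in the source.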

Static and dynamic testing approaches may also be used. Dynamic testing involves running the software. Static testing includes verifying requirements, checking the syntax of code, and any other review activities that do not involve actually running the program's code.
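
The distinction can be sketched in Python with a hypothetical code snippet: the static check parses the source without executing it, while the dynamic check actually runs the code and observes its behaviour.

```python
import ast

source = "def add(a, b):\n    return a + b\n"

# Static testing: inspect the source for well-formedness without executing it.
tree = ast.parse(source)                      # raises SyntaxError if malformed
static_ok = isinstance(tree.body[0], ast.FunctionDef)

# Dynamic testing: execute the code and check its behaviour on an input.
namespace = {}
exec(compile(source, "<snippet>", "exec"), namespace)
dynamic_ok = namespace["add"](2, 3) == 5
```
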

Testing can be further divided into functional and non-functional testing. In functional testing the tester checks calculations, links on a page, or any other element where, for a given input, a particular output is expected. Non-functional testing includes testing the performance, compatibility and fitness of the system under test, as well as its security and usability, among other things.
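
The two kinds of check can be sketched in Python. The `search` function and the one-second time budget are hypothetical: the functional test asserts that the output is correct for a given input, while the non-functional (performance) test asserts that the call completes within a budget.

```python
import time

def search(items, target):
    # Function under test: report whether target appears in items.
    return target in items

items = list(range(100_000))

# Functional test: for a given input, is the output correct?
functional_ok = search(items, 42) is True and search(items, -1) is False

# Non-functional test (performance): does the call finish within a budget?
start = time.perf_counter()
search(items, 99_999)
elapsed = time.perf_counter() - start
performance_ok = elapsed < 1.0   # generous budget for this sketch
```
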

Stages

Manual testing[5] plays a crucial role in software development by applying human insight and adaptability alongside automated methods. It ensures test coverage and helps identify complex problems. Its key stages are as follows:

Use tests with the broadest possible coverage: Techniques such as black-box testing, exploratory testing, and white-box testing are combined to ensure comprehensive coverage of the software's functionality and requirements.

Developing test plans: Test planning keeps track of testing activities and ensures that all functional and design criteria are met as documented. A well-defined test plan serves as a roadmap for testing throughout the development process.

Employ test-oriented development techniques: Testing is integrated early in the development process through techniques such as test-driven development (TDD), pair programming, and unit testing.
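
A minimal TDD-style sketch in Python (the `slugify` function and its test are hypothetical): the failing test is written first, then just enough code is written to make it pass.

```python
# Step 1 (TDD): write the test first; it fails until slugify exists and is correct.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: write the minimal implementation that satisfies the test.
def slugify(text):
    return "-".join(text.lower().split())

# Step 3: run the test to confirm the implementation passes.
test_slugify()
```
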

Evaluate code quality: The quality of both the software product and its source code is assessed, with quantifiable metrics used to measure quality-assurance goals effectively.

Think creatively: Manual testers consider a wide range of scenarios, including edge cases, to ensure thorough testing of the software.

Keeping track of and recording test cases: Well-designed test cases with clear entry and exit criteria, input-output specifications, and execution sequences are created and recorded to assess software usability, reliability, and performance.

Simplify security: Manual security testing techniques, including examining server access controls, static code analysis, penetration testing, and access-control management, help ensure the security of the application.

Create a bug report that is precise, detailed, and unambiguous: Bug reports should describe the issue, its impact, and potential remedies, so that problems are communicated effectively to other teams and stakeholders.
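
The fields of such a report can be sketched as a Python data structure. The field names and the example values are hypothetical illustrations of the elements described above, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    title: str
    steps_to_reproduce: list     # unambiguous steps any reader can follow
    expected: str                # what should have happened
    actual: str                  # what actually happened
    severity: str                # impact on users, e.g. "critical", "major", "minor"
    suggested_fix: str = ""      # optional potential remedy

report = BugReport(
    title="Login button unresponsive on second click",
    steps_to_reproduce=["Open login page", "Click 'Log in' twice quickly"],
    expected="A single login request is sent",
    actual="Two requests are sent; the second returns an error",
    severity="major",
    suggested_fix="Disable the button while a request is in flight",
)
```

Separating expected from actual behaviour, and listing exact reproduction steps, is what makes a report unambiguous for the team that has to fix it.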

Together, these stages span manual testing from test planning through security testing and bug reporting, reflecting its comprehensive role in software development.

Advantages

  • Low operating cost, as no automation tools or licenses are required
  • Many defects, particularly usability and visual issues, are easier to find manually
  • Human testers can observe and judge behaviour in ways automated tools cannot

Comparison to automated testing

Test automation may be able to reduce or eliminate the cost of actual testing.[6] A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning. However, the labor that is saved in actual testing must be spent instead authoring the test program. Depending on the type of application to be tested, and the automation tools that are chosen, this may require more labor than a manual approach. In addition, some testing tools present a very large amount of data, potentially creating a time-consuming task of interpreting the results.

Things such as device drivers and software libraries must be tested using test programs. In addition, testing of large numbers of users (performance testing and load testing) is typically simulated in software rather than performed in practice.

Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces. They rely on recording of sequences of keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same way every time. Unfortunately, these recordings may not work properly when a button is moved or relabeled in a subsequent release. An automatic regression test may also be fooled if the program output varies significantly.

References

  1. ^ ANSI/IEEE 829-1983 IEEE Standard for Software Test Documentation
  2. ^ Craig, Rick David; Stefan P. Jaskiel (2002). Systematic Software Testing. Artech House. p. 7. ISBN 1-58053-508-9.
  3. ^ Itkonen, Juha; Mika V. Mäntylä; Casper Lassenius (2007). "Defect Detection Efficiency: Test Case Based vs. Exploratory Testing" (PDF). First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007). pp. 61–70. doi:10.1109/ESEM.2007.56. ISBN 978-0-7695-2886-1. S2CID 5178731. Archived from the original (PDF) on October 13, 2016. Retrieved January 17, 2009.
  4. ^ Hamilton, Thomas (May 23, 2020). "What is Grey Box Testing? Techniques, Example". www.guru99.com. Retrieved August 7, 2022.
  5. ^ KiwiQA Services. "Eight Key Enablers For Manual Testing". www.kiwiqa.com. Retrieved March 21, 2024.
  6. ^ Atlassian. "Test Automation". Atlassian. Retrieved August 7, 2022.