Top Manual Testing Interview Questions & Answers
1. What is Manual Testing?
Manual testing is a software testing process where testers execute test cases manually, without using automation tools. Testers rely on intuition, experience, and hands-on interaction to identify bugs, usability issues and unexpected behaviors in an application.
Here’s why manual testing is important:
- It helps find UI/UX issues that automation might miss.
- It’s useful for exploratory testing, where creativity plays a key role.
- It ensures new features work as expected before automation scripts are created.
- It’s essential for usability testing, where real users evaluate the experience.
2. What is a Test Scenario?
A Test Scenario is a high-level description of what needs to be tested in a software application. It ensures different functionalities work as expected under various conditions and helps testers understand what to test without detailing exact steps. For example, “Verify the login functionality” is a test scenario that can later be expanded into several detailed test cases.
3. Advantages and Disadvantages of Manual Testing
Here are some advantages and disadvantages of manual testing:
Advantages of Manual Testing
- Human Intuition & Observation – Helps detect UI/UX issues that automation might miss.
- Flexible & Adaptable – Easily adjust test cases based on real-time observations.
- Cost-Effective for Small Projects – No need for expensive automation tools.
- Exploratory Testing – Allows testers to creatively uncover defects.
- Suitable for Complex Scenarios – Some tests (like usability testing) require human judgment.
- Quick Testing for Minor Changes – Faster than automation when testing small updates.
Disadvantages of Manual Testing
- Time-Consuming – Running tests manually takes more time compared to automation.
- Prone to Human Errors – Mistakes can occur due to fatigue or oversight.
- Lack of Reusability – Manual test cases aren’t reusable like automation scripts.
- Limited Scalability – Difficult to execute large-scale tests quickly.
- Higher Maintenance Costs – Requires continuous efforts from testers for each release.
- Cannot Ensure High-Speed Execution – Slower than automated tests, especially for regression testing.
4. How is Manual Testing different from Automated Testing?
Here’s a concise comparison between Manual Testing and Automated Testing:
Aspect | Manual Testing | Automated Testing |
---|---|---|
Execution | Performed by humans without automation tools | Uses test scripts and automation tools |
Speed | Slower, as each test is executed manually | Faster, executes multiple tests simultaneously |
Accuracy | Prone to human errors | More reliable, minimizes human mistakes |
Repetitive Tasks | Tedious and time-consuming | Efficient for repetitive and regression testing |
Initial Cost | Low, requires fewer resources initially | High due to tool purchase and setup costs |
Long-Term Cost | Higher due to manual effort over time | Lower, saves time and resources in the long run |
Test Coverage | Limited, depends on tester’s availability | Broader coverage, can test complex scenarios |
Flexibility | Ideal for exploratory and usability testing | Best for performance, load, and stress testing |
Scalability | Hard to scale for large applications | Scales efficiently for extensive test scenarios |
Maintenance | No scripts to maintain, but test cases must be updated manually for each release | Requires script updates whenever the application changes |
5. What are the different levels of manual testing?
Manual testing is performed at different levels to ensure software quality. Here are the key levels:
- Unit Testing – Focuses on testing individual components or functions to verify they work correctly. Typically performed by developers before integration.
- Integration Testing – Ensures multiple units or modules work together as expected. Identifies defects in module interactions.
- System Testing – Validates the complete application against requirements to check overall functionality and performance.
- Acceptance Testing – Determines if the software meets business requirements and is ready for deployment. Includes User Acceptance Testing (UAT) where end-users evaluate the application.
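To make the unit-testing level concrete, here is a minimal sketch using Python’s built-in unittest module; apply_discount is a hypothetical function used only for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount (hypothetical)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Running the file executes all three tests and reports any failures, which is the kind of focused, component-level check unit testing is meant to provide.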
6. What skills are required to become a manual tester?
- Analytical Thinking helps testers break down complex requirements and design precise test cases.
- Communication Skills ensure test results, defects, and insights are clearly conveyed to stakeholders.
- Domain Knowledge enables testers to understand business logic, leading to effective validation.
- Agile Methodology Familiarity allows adaptability in iterative development, ensuring continuous feedback and testing.
- Test Planning & Tracking ensures systematic execution, helping maintain comprehensive test coverage.
- Knowledge of SDLC, STLC, SQL, and manual testing concepts facilitates a structured approach to testing and database validation.
- Proficiency in Test Management & Tracking Tools streamlines documentation, defect tracking, and reporting.
- Expertise in Testing Techniques strengthens validation through proven methodologies, enhancing software reliability.
7. What is Test Bed?
A test bed is a controlled environment where software applications, systems, or components are tested. It includes all necessary hardware, software, configurations, and test data required to execute test cases effectively.
8. What is Test Data?
Test data refers to the input values used during testing to verify how a software application behaves under different conditions. It helps ensure accuracy, reliability, and functionality. Effective test data ensures comprehensive testing and uncovers defects before deployment.
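As a small sketch of how test data is often organized, the snippet below groups valid, boundary, and invalid inputs for a hypothetical age field that accepts values from 18 to 60:

```python
# Hypothetical test data for an "age" input field that accepts values 18-60.
test_data = {
    "valid":    [18, 35, 60],          # values inside the accepted range
    "boundary": [17, 18, 60, 61],      # values at and just outside the limits
    "invalid":  [-5, 0, "abc", None],  # values the application should reject
}

for category, values in test_data.items():
    print(f"{category}: {values}")
```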
9. What is a Test Case?
A test case is a set of conditions or variables used to determine whether a software application, system, or feature works correctly. It includes inputs, expected results, and procedures for execution. Test cases help identify bugs, verify functionality, and ensure that the system meets requirements.
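A test case is frequently captured as a structured record. The sketch below shows one possible shape in Python; the field names are illustrative, and real templates vary by team and tool:

```python
# A manual test case captured as a structured record (field names are
# illustrative; real templates vary by team and tool).
login_test_case = {
    "id": "TC_LOGIN_001",
    "title": "Login with valid credentials",
    "preconditions": "A registered user account exists",
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the 'Log in' button",
    ],
    "test_data": {"username": "demo_user", "password": "Valid@123"},
    "expected_result": "User is redirected to the dashboard",
    "actual_result": None,   # filled in during execution
    "status": "Not Run",     # Pass / Fail / Blocked after execution
}
```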
10. What is a Test Plan?
A test plan is a detailed document that outlines the approach, objectives, scope, and schedule for testing a software system. It acts as a blueprint for testing activities and ensures that the testing process is well-organized and effective. A well-structured test plan helps teams detect defects early, improve software quality, and ensure all requirements are met before deployment.
11. What is a Test Script?
A test script is a set of instructions written for automated or manual testing that verifies whether a software application is functioning correctly. It outlines specific actions testers should take, expected results, and conditions to check.
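The following is a minimal sketch of a test script in Python, assuming a hypothetical login(username, password) function; a stand-in implementation is included only so the script runs end to end:

```python
def login(username, password):
    # Stand-in implementation used only so the script runs end to end.
    return username == "demo_user" and password == "Valid@123"

def run_login_test_script():
    # Step 1: valid credentials should succeed.
    assert login("demo_user", "Valid@123") is True, "Valid login failed"
    # Step 2: a wrong password should be rejected.
    assert login("demo_user", "wrong") is False, "Invalid password accepted"
    # Step 3: an unknown user should be rejected.
    assert login("ghost", "Valid@123") is False, "Unknown user accepted"
    print("All login test steps passed")

if __name__ == "__main__":
    run_login_test_script()
```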
12. What is Traceability Matrix?
A Traceability Matrix is a document that maps and tracks the relationship between requirements and test cases. It ensures all requirements are covered during testing and helps identify missing test scenarios.
Benefits:
- Improves transparency and accountability in testing.
- Ensures complete test coverage.
- Helps track requirement changes.
- Reduces gaps between development and testing.
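As an illustration, a traceability matrix can be modeled as a mapping from requirement IDs to the test cases that cover them; an empty list immediately exposes a coverage gap (all IDs below are hypothetical):

```python
# A requirements-to-test-case traceability matrix as a simple mapping.
traceability_matrix = {
    "REQ-001 User can log in":         ["TC_LOGIN_001", "TC_LOGIN_002"],
    "REQ-002 User can reset password": ["TC_RESET_001"],
    "REQ-003 User can update profile": [],   # no test cases yet
}

# Flag requirements that have no covering test case.
uncovered = [req for req, cases in traceability_matrix.items() if not cases]
print("Requirements without test coverage:", uncovered)
```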
13. What’s the difference between verification and validation in software testing?
In software testing, verification and validation serve distinct roles in ensuring the quality of the software:
Verification is about evaluating whether the software is being developed correctly, according to specifications and requirements. This happens before actual execution and includes activities like reviews, inspections, walkthroughs, and static testing.
Validation is about checking whether the final software meets user expectations and performs its intended function correctly. This occurs after development, through dynamic testing, such as unit tests, integration tests, system tests, and user acceptance tests.
In short: Verification = “Are we building it right?” (process-oriented); Validation = “Did we build the right thing?” (product-oriented).
14. When should testing end?
- Completion of Test Cases – When all planned test cases have been executed and passed successfully.
- Bug Fixing Threshold – When critical defects have been addressed and remaining issues are minor or acceptable risks.
- Meeting Requirements – When the software meets functional, performance, security, and usability requirements.
- User Acceptance – When stakeholders and users confirm the software behaves as expected.
- Budget & Time Constraints – When testing has been conducted within the allocated budget and timeline.
15. What are the principles of software testing?
Software testing follows fundamental principles to ensure quality, efficiency, and effectiveness. Here are the core principles:
- Testing Shows Presence of Defects, Not Their Absence – Testing helps identify bugs but never guarantees the software is completely defect-free.
- Exhaustive Testing is Impossible – It’s impractical to test all possible inputs and scenarios, so we focus on high-risk areas.
- Early Testing Saves Costs – Detecting issues earlier in development reduces the cost and effort needed for fixing defects later.
- Defect Clustering – A small portion of the software typically contains the majority of defects, so targeted testing is beneficial.
- Pesticide Paradox – Running the same tests repeatedly won’t find new bugs; test cases need to be updated regularly.
- Testing is Context-Dependent – Different applications require different testing approaches (e.g., banking software vs. video games).
- Absence-of-Errors Fallacy – Even if a system is bug-free, it might still fail to meet user needs, meaning functional validation is essential.
16. What is the Difference Between Regression Testing and Sanity Testing?
Here is a comparison between Regression Testing and Sanity Testing:
Aspect | Regression Testing | Sanity Testing |
---|---|---|
Purpose | Ensures new changes do not break existing functionality | Quickly verifies that critical functionalities are working after a minor change |
Scope | Covers the entire application | Focuses only on specific modules or functionalities affected by recent changes |
Depth of Testing | Thorough and comprehensive | Shallow, basic validation |
When It’s Conducted | After modifications, updates, or bug fixes | After quick fixes or minor enhancements |
Automation | Often automated due to its broad scope | Usually manual due to its quick execution |
Time Required | More time-consuming | Faster compared to regression testing |
Example | Testing all features after a software update | Checking if login functionality works after a password reset fix |
17. What are the phases involved in Software Testing Life Cycle?
The Software Testing Life Cycle (STLC) consists of multiple phases that ensure a systematic approach to testing. Here are the key phases:
1. Requirement Analysis
- Understanding testing requirements from software specifications.
- Identifying which features need to be tested.
- Defining acceptance criteria and risk analysis.
2. Test Planning
- Developing a test strategy and scope.
- Allocating resources, tools, and timelines.
- Identifying test deliverables.
3. Test Case Development
- Writing detailed test cases based on requirements.
- Preparing test data for execution.
- Reviewing and validating test cases.
4. Test Environment Setup
- Configuring hardware, software, and network settings.
- Ensuring required tools and applications are available.
- Setting up test servers, databases, and test accounts.
5. Test Execution
- Running test cases and logging defects.
- Validating actual results against expected outcomes.
- Re-testing and performing regression testing for bug fixes.
6. Test Closure
- Preparing test summary reports.
- Evaluating test coverage and software quality.
- Documenting lessons learned for future projects.
18. What is quality control, and how does it differ from quality assurance?
Quality Control (QC) and Quality Assurance (QA) are both essential for maintaining software quality, but they differ in their approach and purpose.
Quality Assurance (QA) is a process-oriented approach focused on preventing defects before they occur. It ensures that software development follows established standards, methodologies, and best practices to build high-quality software. It improves the development process to minimize errors.
Quality Control (QC), on the other hand, is a product-oriented approach that focuses on detecting defects after the software is developed. It involves executing test cases, performing validations, and identifying bugs to ensure the final product meets quality standards. QC is responsible for verifying that the software functions correctly by running functional, performance, and security tests.
The key difference is that QA ensures the software is built correctly, while QC ensures the software works correctly after development.
19. What is black box testing?
Black box testing is a software testing method where the tester evaluates the system’s functionality without knowing its internal structure or code. The focus is purely on inputs and expected outputs, making it ideal for assessing whether an application behaves correctly from a user’s perspective.
20. What is white box testing?
White Box Testing is a software testing technique that focuses on examining an application’s internal code structure. It requires knowledge of the underlying implementation and programming skills to design test cases.
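For example, white box test design starts from the code’s structure: the hypothetical function below has two branches, so branch coverage requires at least one test per branch:

```python
# White box testing designs tests from the code's internal structure.
def shipping_fee(order_total):
    if order_total >= 50:
        return 0.0        # branch 1: free shipping
    return 4.99           # branch 2: flat fee

# One test per branch gives full branch coverage of shipping_fee.
assert shipping_fee(75) == 0.0    # exercises branch 1
assert shipping_fee(20) == 4.99   # exercises branch 2
print("Both branches covered")
```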
21. What is grey box testing?
Grey Box Testing is a software testing method that combines elements of Black Box Testing and White Box Testing to create a balanced approach. The tester has partial knowledge of the internal structure, such as access to design documents or the database, which allows more effective test case design while the testing itself still focuses on external functionality.
22. What is test coverage?
Test Coverage is a metric that measures how much of a software application’s code and functionality has been tested. It helps assess the effectiveness of testing efforts and ensures that critical areas of the application are not overlooked.
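Coverage can be measured in several ways (requirements covered, code lines or branches executed, test cases run). As one simple illustration with made-up numbers, execution coverage is just the ratio of executed to planned test cases:

```python
# A simple coverage-style metric: percentage of planned test cases executed
# (numbers are illustrative).
total_test_cases = 120
executed_test_cases = 102

coverage_percent = executed_test_cases / total_test_cases * 100
print(f"Test execution coverage: {coverage_percent:.1f}%")  # -> 85.0%
```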
23. Explain the Waterfall model.
The Waterfall Model is a linear and sequential software development approach where progress flows in one direction—like a waterfall—through different phases. Each phase is completed before moving to the next, and once a phase is finished, going back to modify it is difficult.
Phases of the Waterfall Model:
- Requirement Analysis – Gather and document all software requirements.
- System Design – Define architecture, components, and technical specifications.
- Implementation (Coding) – Developers write code based on design documents.
- Testing – Verify the software against requirements, identifying and fixing defects.
- Deployment – Release the software to users.
- Maintenance – Provide support, updates, and fixes after deployment.
Advantages:
- Simple and easy to follow.
- Works well for small or clearly defined projects.
- Well-documented, making future modifications easier.
Disadvantages:
- Changes are difficult to incorporate once a phase is completed.
- Not suitable for projects requiring flexibility or continuous updates.
- Testing happens late in the process, increasing risks of finding major issues.
24. Explain the V-Model.
The V-Model (Verification and Validation Model) is a software development and testing approach that follows a sequential process. It emphasizes early testing by aligning each development phase with a corresponding testing phase, ensuring that defects are detected as early as possible.
Structure of the V-Model
The V-Model consists of two main sides:
- Left Side – Verification (Development Phases), each mapped to the testing phase that validates it:
- Requirement Analysis → Acceptance Testing
- System Design → System Testing
- High-Level Design → Integration Testing
- Low-Level Design → Unit Testing
- Coding → Development phase where the actual application is built.
- Right Side – Validation (Testing Phases), where each testing phase corresponds to a development phase:
- Unit Testing – Validates individual components.
- Integration Testing – Ensures modules work together.
- System Testing – Verifies the entire system meets requirements.
- Acceptance Testing – Confirms software aligns with business needs.
Advantages of the V-Model
- High Reliability – Improves software quality by integrating testing early.
- Reduces Risk – Avoids late-stage defects that are harder to fix.
- Ideal for Well-Defined Projects – Works best when requirements are clear upfront.
Disadvantages of the V-Model
- Rigid and Inflexible – Changes after requirements are defined can be challenging.
- Time-Consuming – Requires thorough documentation at each stage.
- Not Suitable for Agile – Doesn’t accommodate continuous iterations or updates.
25. What is a bug?
In software testing, a bug refers to an error, flaw, or failure in a program that causes it to behave unexpectedly or incorrectly.
26. List the differences between Regression and Retesting.
Here’s a comparison of Regression Testing and Retesting in software testing:
Aspect | Regression Testing | Retesting |
---|---|---|
Purpose | Ensures that new code changes don’t adversely affect existing functionality. | Verifies that defects previously identified have been fixed. |
Focus | Focuses on the unchanged parts of the application to detect unintended issues. | Focuses only on the specific functionality where defects were found. |
Test Cases | Executes a predefined suite of test cases. | Re-executes the failed test cases after fixing the defect. |
Automation | Typically automated due to repetitive execution. | Often performed manually (though automation is possible). |
Scope | Broad, covering areas affected directly or indirectly by code changes. | Narrow, limited to the specific defect or issue. |
Goal | Maintains overall application stability. | Confirms the resolution of reported bugs. |
27. Differentiate between bug leakage and bug release.
Bug Leakage occurs when a defect is missed during the testing phase and is only discovered after the product has been released to the end-users or moved to production. It highlights gaps in the testing process or test coverage.
Bug Release, on the other hand, happens when a known defect is intentionally left in the application and the product is released with this defect. This decision is typically made because the bug is deemed low-priority, has minimal impact, or can be fixed in a later release.
28. What is Integration Testing? What are its types?
Integration Testing is a level of software testing where individual units or components of an application are combined and tested as a group. The goal is to ensure that these integrated modules work correctly together and communicate effectively. It focuses on detecting interface errors, data flow issues, and interaction problems between integrated modules.
Types of Integration Testing:
- Big Bang Integration Testing: All modules are integrated at once, and testing is performed collectively after integration. This approach is quick but can make identifying the root cause of issues challenging.
- Incremental Integration Testing: Modules are integrated and tested step by step. It has two subtypes:
- Top-Down Testing: Integration starts from the top-level modules, proceeding to lower levels using stubs (temporary substitutes for missing modules).
- Bottom-Up Testing: Integration begins from lower-level modules, moving upwards using drivers (temporary substitutes for higher-level modules).
- Hybrid Integration Testing: A combination of both top-down and bottom-up approaches, where integration occurs in both directions simultaneously.
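A small sketch of top-down integration testing: the high-level order module is exercised while the real payment module is replaced by a stub (all module and function names are hypothetical):

```python
def payment_service_stub(amount):
    """Stub: always approves the payment so the order flow can be exercised."""
    return {"status": "approved", "amount": amount}

def place_order(cart_total, payment_service):
    """High-level module under test; it depends on a payment service."""
    result = payment_service(cart_total)
    if result["status"] == "approved":
        return "ORDER CONFIRMED"
    return "ORDER FAILED"

# Integration test of place_order using the stub in place of the real service.
assert place_order(59.99, payment_service_stub) == "ORDER CONFIRMED"
print("Top-down integration test with a stub passed")
```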
29. How is monkey testing different from ad-hoc testing?
Monkey Testing and Ad-hoc Testing are both informal testing techniques, but they differ in approach and purpose:
- Monkey Testing is a chaotic and random testing method where testers, or even automated tools, interact with the application unpredictably, without specific goals or knowledge of the functionality. The aim is to stress the system and identify unexpected crashes or failures. It’s like throwing random inputs and observing how the software reacts.
- Ad-hoc Testing, on the other hand, is more intentional and relies on the tester’s understanding of the application. Although unstructured, it uses the tester’s experience and intuition to explore specific areas where defects are more likely to occur, making it less random compared to Monkey Testing.
30. Explain the defect life cycle.
The defect life cycle, also known as the bug life cycle, outlines the process of managing and resolving defects during software testing. Here’s a breakdown of its stages:
- New: When a defect is first identified and reported, it is marked as “New” and awaits review.
- Assigned: The defect is assigned to a developer or team for investigation and resolution.
- Open: The developer acknowledges the defect and begins working on a fix.
- Fixed: After the issue has been addressed, the developer marks it as “Fixed.”
- Retest: The tester re-tests the defect to confirm the fix and ensure no side effects.
- Verified: If the defect no longer exists during re-testing, it is marked as “Verified.”
- Closed: Once the fix is confirmed, the defect is marked as “Closed,” indicating it is resolved.
- Reopened (optional): If the defect persists after re-testing, it is reopened for further investigation.
- Deferred (optional): If fixing the defect is postponed due to low priority or other reasons, it is marked as “Deferred.”
- Rejected (optional): If the reported issue is not a valid defect or cannot be reproduced, it is marked as “Rejected.”
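One way to picture the life cycle is as a state machine. The sketch below maps each state to the states it commonly moves to; actual workflows differ between teams and defect-tracking tools:

```python
# A simplified model of the defect life cycle: each state maps to the states
# it is commonly allowed to move to (workflows differ between teams/tools).
defect_workflow = {
    "New":      ["Assigned", "Rejected", "Deferred"],
    "Assigned": ["Open"],
    "Open":     ["Fixed", "Deferred", "Rejected"],
    "Fixed":    ["Retest"],
    "Retest":   ["Verified", "Reopened"],
    "Verified": ["Closed"],
    "Reopened": ["Assigned"],
    "Closed":   [],
    "Deferred": ["Assigned"],
    "Rejected": [],
}

def can_transition(current, target):
    """Return True if moving a defect from current to target is allowed."""
    return target in defect_workflow.get(current, [])

print(can_transition("Retest", "Reopened"))  # True
print(can_transition("New", "Closed"))       # False
```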
31. What is the role of documentation in manual testing?
Documentation plays a crucial role in manual testing by ensuring clarity, consistency, and traceability throughout the testing process. Here’s how:
- Knowledge Sharing: When testers leave or new members join, documentation ensures seamless transfer of knowledge and continuity.
- Test Planning: Documents like test plans outline the scope, objectives, and approach, keeping everyone aligned.
- Test Cases: Well-written test cases act as a step-by-step guide for testers, ensuring that testing is thorough and repeatable.
- Bug Reporting: Defect or bug reports document issues found during testing, providing details for developers to address them.
- Progress Tracking: Documentation helps track testing efforts, milestones, and results.
32. What is the Agile methodology and why is it important?
Agile methodology is an approach to project management and software development that emphasizes flexibility, collaboration, and customer satisfaction. Instead of delivering a complete product at the end of the project, Agile focuses on building and delivering smaller, functional increments in short cycles called “sprints” or “iterations.” Agile methodology is highly valued in modern software development and project management for several reasons:
- Flexibility and Adaptability: Agile allows teams to respond to changes quickly, whether in requirements, technology, or market needs, ensuring the product stays relevant.
- Customer-Centric Approach: Continuous feedback from stakeholders and end-users ensures that the final product meets their expectations and requirements.
- Frequent Deliverables: Agile promotes delivering working increments of the product regularly, which helps in early detection of issues and provides ongoing value to customers.
- Team Collaboration: Agile fosters a culture of teamwork, where developers, testers, and business stakeholders collaborate closely for better outcomes.
- Transparency and Visibility: Regular meetings like daily stand-ups and sprint reviews keep everyone informed and aligned on progress and challenges.
- Risk Management: By working in small, iterative cycles, teams can identify and address risks early, reducing the likelihood of major failures.
33. Difference between Alpha testing and beta testing.
Alpha testing and beta testing are crucial stages of software testing, but they serve different purposes and occur at different times in the development process. Here’s a comparison:
Feature | Alpha Testing | Beta Testing |
---|---|---|
Definition | Conducted internally by developers and testers to identify and fix bugs before public release. | Conducted externally by a group of end-users or customers to gather feedback on product performance. |
Environment | Controlled environment, often a lab setting. | Real-world environment where users interact naturally with the software. |
Participants | In-house team of developers, designers, and testers. | Selected end-users or customers. |
Objective | Ensure the software functions correctly and identify critical bugs. | Gather user feedback to improve usability, reliability, and overall experience. |
Stage in Development | Toward the end of development, before beta testing, once the product is largely feature-complete. | Later phase, closer to the product’s final release. |
Access | Restricted to internal teams. | Open to external users (sometimes by invitation). |
34. How is Load Testing different from Stress Testing?
Load testing and stress testing are two distinct performance testing techniques, each serving a unique purpose in evaluating a system’s capabilities.
Load Testing is conducted to assess how a system performs under expected or normal levels of user load. For instance, it checks if a website can handle the typical number of daily visitors without slowing down or crashing. The primary goal is to verify that the system can meet performance standards during routine operations.
On the other hand, Stress Testing examines how the system behaves under extreme or abnormal conditions, such as a sudden surge in traffic far beyond its usual capacity. It pushes the system to its breaking point to identify vulnerabilities, failure thresholds, and recovery mechanisms.
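A rough sketch of the difference: the same workload is driven at an expected level (load test) and at an extreme level (stress test); process_request is a hypothetical stand-in for hitting the real system:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_request(i):
    time.sleep(0.01)          # simulate the system handling one request
    return f"request {i} ok"

def run_test(concurrent_users, label):
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(process_request, range(concurrent_users)))
    print(f"{label}: {len(results)} requests in {time.time() - start:.2f}s")

run_test(concurrent_users=50, label="Load test (expected traffic)")
run_test(concurrent_users=500, label="Stress test (beyond capacity)")
```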
35. What are an epic, a user story, and a task?
In Agile methodology, epic, user story, and task are ways to break down and organize work, starting from broad goals to specific actions:
- Epic:
- An epic is a large, overarching goal or feature that cannot be completed in one sprint or iteration.
- It represents a big piece of functionality or work.
- User Story:
- A user story is a specific, smaller requirement that contributes to the epic.
- It is written from the perspective of the end-user, focusing on what they need and why.
- Task:
- A task is the smallest unit of work and details the specific actions required to complete a user story.