Possible Interview Questions For BT (Nexxt Gen)
● What is a HashMap?
HashMap is a Map-based collection class used for storing key-value pairs, denoted as
HashMap<Key, Value> or HashMap<K, V>. It is not an ordered collection, which means it
does not return the keys and values in the same order in which they were inserted into
the HashMap.
○ HashMap
■ Is not synchronized and not thread safe; different threads can access the
HashMap simultaneously
■ Allows one null key and any number of null values
■ Uses an Iterator to iterate
○ Hashtable
■ Is synchronized and thread safe; an operation is performed only after the
other thread completes its operation
■ Does not allow null keys or null values
■ Uses an Enumeration to iterate
○ If we will be using multithreading we use Hashtable, otherwise HashMap
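A minimal Java sketch of these differences (the map contents are only illustrative):

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class MapDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "value for null key"); // HashMap allows one null key
        hashMap.put("colour", null);             // ...and any number of null values

        Map<String, String> hashtable = new Hashtable<>();
        hashtable.put("colour", "blue");
        // hashtable.put(null, "x");     // would throw NullPointerException
        // hashtable.put("size", null);  // would throw NullPointerException

        // Iterating a HashMap via its entry set (backed by an Iterator)
        for (Map.Entry<String, String> entry : hashMap.entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}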
● How many kinds of testing exist? Explain one scenario for one of them.
- Regression Testing
Is re-running functional and non-functional tests to ensure that previously developed
and tested software still performs correctly after a change.
- Integration Testing
Is the phase in software testing in which individual software modules are combined
and tested as a group.
- Exploratory Testing
Is a type of software testing where test cases are not created in advance but testers
check the system on the fly. They may note down ideas about what to test before
test execution. The focus of exploratory testing is more on testing as a "thinking"
activity.
- Acceptance Testing
Is a test conducted to determine if the requirements of a specification or contract are
met.
- System Testing
Is testing conducted on a complete, integrated system to evaluate the system's
compliance with its specified requirements.
- Smoke Testing
Smoke Testing verifies the critical functionalities of the system, whereas Sanity
Testing verifies new functionality such as bug fixes (a small smoke-test sketch in JUnit
follows this list).
- Sanity Testing
Sanity testing is a kind of software testing performed after receiving a software
build with minor changes in code or functionality, to ascertain that the bugs have
been fixed and that no further issues have been introduced by these changes. The goal
is to determine that the proposed functionality works roughly as expected. If a sanity
test fails, the build is rejected to save the time and cost involved in more rigorous
testing.
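As a rough illustration only, a smoke test for one critical function could look like the
JUnit sketch below; AuthService and its login method are hypothetical names used here
for the example, not part of any real build.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginSmokeTest {

    @Test
    void validUserCanLogIn() {
        // AuthService is a hypothetical application class used only for illustration.
        AuthService auth = new AuthService();
        boolean loggedIn = auth.login("test.user@example.com", "correct-password");
        // If this critical path fails, the build is rejected before deeper testing.
        assertTrue(loggedIn, "Smoke test failed: a valid user cannot log in");
    }
}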
● Test name. A title that describes the functionality or feature that the test is verifying.
● Test ID. Typically a numeric or alphanumeric identifier that QA engineers and testers
use to group test cases into test suites.
● Objective. Also called the description, this important component describes what the test
intends to verify in one to two sentences.
● References. Links to user stories, design specifications or requirements that the test is
expected to verify.
● Prerequisites. Any conditions that are necessary for the tester or QA engineer to
perform the test.
● Test setup. This component identifies what the test case needs to run correctly, such as
app version, operating system, date and time requirements, and security specifications.
● Test steps. Detailed descriptions of the sequential actions that must be taken to
complete the test.
● Expected results. An outline of how the system should respond to each test step.
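For illustration, a test case following this structure might look as follows (the feature,
ID, and data are hypothetical):

Test name: Login - valid credentials
Test ID: TC-LOGIN-001
Objective: Verify that a registered user can log in with a valid e-mail and password.
References: Hypothetical user story US-12.
Prerequisites: A registered user account exists.
Test setup: App version 2.3, Android 13, stable network connection.
Test steps:
1. Open the app and go to the login screen.
2. Enter the registered e-mail and the correct password.
3. Tap "Log in".
Expected results: The user is taken to the home screen and a welcome message is shown.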
TITLE
An ideal title is clear, short, and gives the developer a summary of what the bug is. It should
contain the category of the bug or the component of the app where the bug occurred (e.g.
Cart, UI, etc.) and the action or circumstances under which it occurred. A clear title makes
the report easy for the developer to find, helps identify duplicate reports, and makes triaging
bugs a whole lot easier.
SEVERITY
Severity is a measure of how serious an issue is. The levels and definitions of severity vary
between developers of different applications, and even more so between developers, testers,
and end users who are unaware of these details. The usual classification is:
● Critical/Blocker: This is reserved for issues that make the application unusable or cause
serious data loss.
● High: The bug affects a major feature and there is no workaround, or the available
workaround is very complex.
● Medium: The bug affects a minor feature, or affects a major feature but has an easy enough
workaround not to cause any major inconvenience.
● Low: This is used for bugs that don’t significantly affect the user experience, like minor visual
bugs.
DESCRIPTION
This is an overview of the bug and how and when it happened, written in shorthand. This
part should include more details than the title, like how frequently the bug appears if it is an
intermittent error and the circumstances that seem to trigger it.
ENVIRONMENT
Apps can have completely different behavior depending on their environment. This part
should include all the details about the environment setup and configuration on which the
app is running. If you require specific info about the environment, make sure that is clear to
your users and testers.
REPRO STEPS
This should include the minimum steps needed to reproduce the bug. Ideally, the steps
should be short, simple, and able to be followed by anybody. That being said, encourage
your testers and users to submit too many details rather than too few. The goal is to allow
the developer to reproduce the bug on their side and get a better idea of what might be going
wrong. A bug report without repro steps is minimally useful and only serves to waste time
and effort that could be dedicated to resolving more detailed reports; be sure to communicate
that to your testers, and in a way that makes sense to your end users.
Example:
ACTUAL RESULT
Example:
EXPECTED RESULT
Example:
.jpeg format supported and message is sent with empty “No Subject”
ATTACHMENTS
Attachments can be very helpful for the developer to pinpoint the issue quicker; a screenshot
of the issue does a lot of explaining, especially when the issue is visual. Other extremely
useful attachments like logs can, at the very least, point the developer in the right direction.
CONTACT DETAILS
An e-mail address where you can reach the user submitting the bug should be provided in
case any further details are needed about the issue. Getting the user to respond to e-mails
can be a challenge, so you should consider providing other communication channels that are
less of a hassle for the user, to maximize their responsiveness.
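For illustration, a short bug report following this structure might look as follows (the app,
versions, and details are hypothetical):

Title: [Cart] App crashes when removing the last item from the cart
Severity: High (crash, no workaround)
Description: The app closes every time the only remaining item is removed from the cart;
reproduces on every attempt.
Environment: App 2.3.1, iPhone 12, iOS 16.4, Wi-Fi.
Repro steps:
1. Add a single item to the cart.
2. Open the cart and tap "Remove".
Actual result: The app closes with no error message.
Expected result: The item is removed and the empty-cart screen is shown.
Attachments: Crash log, screen recording.
Contact details: reporter's e-mail address.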
1. Analyze Requirements
It costs more to fix a bug that has been detected during testing than to prevent it at the
requirements design stage. QA professionals should be involved in
the analysis and definition of software requirements, both functional and non-functional. QAs
must be offered requirements that are consistent, comprehensive, traceable, and clearly
marked. This helps the QA team design tests specifically tailored to the software being
tested.
2. Plan Tests
The information gained during the requirements analysis phase is used as the basis for
planning the necessary tests. The test plan should define the software testing strategy, the
scope of testing, the project budget, and the deadlines. It should also outline the types and
levels of testing required, the methods and tools for tracking bugs, and the allocation of
resources and responsibilities to individual testers.
3. Design Test Cases
At this stage, QA teams have to craft test cases and checklists that encompass the software
requirements. Each test case must contain conditions, data, and the steps needed to
validate each functionality. Every test must also define the expected test result so that
testers know what to compare actual results to.
4. Set Up the Test Environment
This is also the stage for preparing the staging environment for execution. This environment
should closely mirror the production environment with regard to the specifics of hardware,
software, and network configurations. Other characteristics, such as databases and system
settings, should also be closely mimicked.
5. Execute Tests
Tests start at the unit level with developers performing unit tests. Then, the QA team runs
tests at API and UI levels. Manual tests are run in accordance with previously designed test
cases. All bugs detected are submitted in a defect tracking system. Additionally, test
automation engineers can use an automated test framework such as Selenium, Cypress, or
Appium to execute test scripts and generate reports.
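As a hedged sketch of what such an automated script can look like, here is a minimal
Selenium example in Java; the URL, element locators, and expected page title are
placeholders, not taken from any real project.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginUiCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Placeholder URL and element ids: adjust to the application under test.
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("test.user@example.com");
            driver.findElement(By.id("password")).sendKeys("correct-password");
            driver.findElement(By.id("login-button")).click();

            // A real suite would assert this through JUnit/TestNG and feed a report.
            if (driver.getTitle().contains("Dashboard")) {
                System.out.println("PASS: user reached the dashboard");
            } else {
                System.out.println("FAIL: unexpected page title: " + driver.getTitle());
            }
        } finally {
            driver.quit();
        }
    }
}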
6. Run Re-tests and Regression Tests
Once bugs have been found, submitted, and fixed, QAs test the functions again to ensure
that they didn’t miss any anomalies. They also run regression tests to verify that the fixes
have not affected the existing functions.
7. Run Release Tests
Once developers issue a release notification that details a list of already implemented
features, fixed bugs, recurring issues, and limitations, the QA team must identify the
functionalities being affected by these changes. Then, the team must design modified test
suites that cover the scope of the new build.
The QA team must also perform smoke tests to ensure each build is stable. If the test
passes, then modified test suites are run, and a report is generated at the end.
The main purpose of End-to-end (E2E) testing is to test from the end user’s experience by
simulating the real user scenario and validating the system under test and its components for
integration and data integrity. It defines the product’s system dependencies and ensures the
application flow behaves as expected.