QA Tester
Tell me about your experience in QA
My gaming career started in 2019 when I became a streamer and professional Apex Legends player. I played professionally for about two years, competing at the highest level for different gaming organizations. After retiring from competitive play I moved into content creation on Apex and built up a large enough audience to be noticed by EA, becoming a partner through their EA Game Changers program. I built relationships during that experience and was recommended to look into the process of making Apex, so I applied for the QA Tester role and joined the Quality Assurance team.
The Tester role was feedback-oriented QA, but it also carried the traditional QA responsibilities of finding and reporting bugs, escalation, and cross-functional communication with devs and designers. We used TestRail, Confluence, and JIRA, and tested on PC, Xbox, PlayStation, and Switch, using an SDK program on PC to put builds onto the consoles.
After nine months I was promoted to QA Test Lead. I believe I earned that position because of my attention to detail, my consistency with bug reports, and my ability to lead and communicate well. My role was to manage the QA testers, review bug reports, and bridge the gap between testers and developers to maintain good communication.
-Live-service game like Apex
-Upcoming DICE title; different areas need to be looked at.
Currently I am developing my C++ skills and learning Unreal Engine because I really enjoy the process of creating and developing games.
What happens if a dev or a manager doesn’t think something is important but you think that it is?
First, have clear, direct communication with the person you disagree with; don't come without a reason.
Have supporting documentation for your position and for why it affects the end user and stakeholders.
Give an example: I went into a world systems meeting with the director of world systems to explain why changing a core feature was going to impact the game a lot more than they thought it would, completely shifting the way players would play at a certain skill level and higher. I came with evidence gathered from professional players and core players, and showed how it would shift the mindset of gameplay. I was respectful and presented my case, and it was weighed heavily in their decision on the feature. You can't be scared to push back on certain topics if you feel strongly about them.
Developers sometimes don't play the game or use the program themselves, which in turn means they aren't always thinking about the end user's perspective, so QA is the last line of defense on certain things being added, removed, or changed in the product.
High Level Test Case vs Low Level Test Case
A way to approach test cases.
Low level test cases will have detailed and defined input values. EX: Login with “XXX” username and “XXX” password.
High Level leaves room for creativity for the tester. EX. Login with valid credentials
High Level Test Case: Login with a username that has special characters.
Username: $#^%@#(
Low Level Test Case: Login with “SureThing%$^”.
“I tend to like high-level test cases more, as they give the tester more creativity and allow me to explore, giving me a better chance to break something.”
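As a rough sketch (not from any real project), the same login check could look like this as a low-level versus a high-level automated test; the login() helper and the credential values are hypothetical.

    # Hypothetical sketch: low-level vs. high-level login tests (pytest-style).
    def login(username, password):
        # Stand-in for the real login flow; returns True on success.
        return bool(username) and bool(password)

    def test_login_low_level():
        # Low level: exact, predefined inputs spelled out in the test case.
        assert login("SureThing%$^", "Password123!") is True

    def test_login_high_level():
        # High level: "login with valid credentials" - the tester (or generated
        # data) picks the inputs, leaving room to explore edge cases.
        for username in ["player_one", "$#^%@#(", "user.name+tag"]:
            assert login(username, "AnyValidPassword1!") is True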
Automation Testing
Usually created by engineers (SDETs). The tests are run by programs, not testers.
“Never did this at EA, but I understand it.”
Allows tests that run on a weekly cadence to be executed without any testers: consistent coverage for the feature, more time for testers to work on other things, and an easy way to detect issues and monitor status.
Indicate that you’re aware that automation is a huge part of QA and is necessary - but not all the time.
The more we can automate manual test cases, the more efficiently we can maximize coverage and free up time to work on new features.
Considerations for automation: we need to know how difficult it will be to automate the test cases, and how much we stand to gain from automating said test cases. Finding the right test cases to automate is very important. Try to gauge how hard it would be to automate vs. how much time/money you save or earn by automating.
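A back-of-the-envelope sketch of that gauge; the numbers and the simple hours-saved formula are illustrative assumptions, not a real costing model.

    # Rough sketch of the "worth automating?" gauge described above.
    def automation_value(runs_per_year, minutes_per_manual_run, hours_to_automate):
        # Hours of manual effort saved per year once the test no longer
        # has to be run by hand.
        saved_hours = runs_per_year * minutes_per_manual_run / 60
        # Compare against the one-time cost of writing the automation.
        return saved_hours - hours_to_automate

    # A weekly check that takes 30 manual minutes and roughly 8 hours to automate:
    print(automation_value(runs_per_year=52, minutes_per_manual_run=30,
                           hours_to_automate=8))  # ~18 hours saved in the first year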
SDLC
Software Development Life Cycle
Cadence Testing
How often you are testing something
Weeklies, Monthlies, OPR (Once Per Release), RC (Release Candidate), BTR (Build Test Request).
(OPR) Once Per Release
Series of test cases that are run once per new release.
(RC) Release Candidate
Final Release build that we plan to push to the Live Product
(BTR) Build Test Request
When a dev requests test coverage for a specific feature or change they worked on. Test coverage and cases will be specified by the analyst.
Pencils Down
No adding to features.
Hard Lock
Strictly nothing can be added. The build is considered final and is being prepared to become the release build.
Types of Builds
Source and Binary
Source
When you download the raw source code and compile it yourself.
Binary
Build is already compiled - this can be from the launcher or just a compressed file that you download.
Ad-Hoc
Going into testing with no set of directions.
Smoke Testing
Checking basic functionality of the app to make sure major systems are working
Load Testing
Putting high pressure on a system to monitor performance (e.g., when we would get 60 testers into a BR match and have everyone die at the same time, everyone spam their abilities at the same time, etc.).
What types of testing have you done?
Mostly Exploratory, Ad-Hoc, and Smoke testing with some Load Testing.
When you were monitoring bug reports that came in from testers, what was a common error that you saw or what were you looking for?
SUFFICIENT INFO FOR BUG
Something that was very common with testers was running into a bug and not properly querying for it first. Making sure a bug doesn't already exist is very important. It could be a bug that was fixed two releases ago, which gives you more info on it and on what the fix was, or there could already be an open bug report for it, which would make writing a new one a waste of the tester's and the dev's time.
In terms of the actual bug report, I would review the whole thing, but the main things I looked at were the Summary, Repro Steps, and Description. Oftentimes the testers I oversaw wouldn't fully understand the bug they ran into and the summary would be misleading, so I would ask them follow-up questions to get more info for an accurate summary. Repro Steps and Description are really important; making sure the repro steps were accurate and easy for developers to understand was one of my main goals. At the end of the day I tried to avoid NMIs (Needs More Info) as much as possible.
It's also super important to include the Version, CL, and Testing Environment.
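A hypothetical example of a report with sufficient info (all values made up, reusing the infinite-ammo bug mentioned later in these notes):

    Summary: [New Weapon] Using a tactical ability while reloading grants infinite ammo
    Version / CL: 2.3.0 / <changelist of the build under test>
    Testing Environment: PC, internal build
    Repro Steps:
    1. Equip the new weapon with a partially empty magazine.
    2. Start a reload.
    3. Activate a tactical ability before the reload finishes.
    Expected Result: The reload completes and ammo is consumed normally afterward.
    Actual Result: The ammo count never decreases again (infinite ammo).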
What is the role of a QA tester in software development?
The role of a QA tester is to ensure that the software meets the expected quality standards before it is released. QA testers play a key role in preventing defects and improving the overall quality of the software.
How do you prioritize bugs?
Impact on the system and how frequently they occur. Critical bugs that affect core functionality or cause system crashes get the highest priority. Bugs that affect the user experience but don’t block essential features come next. I also consider deadlines and business impact when prioritizing. Communication with the devs is also something to consider.
How do you ensure that you fully understand the requirements for a feature before testing it?
First look at project documentation, like requirement specifications. If I need further clarification, I reach out to product managers or developers. I may also attend requirement review meetings to get a better understanding. It's crucial to ask questions early to avoid misunderstandings and ensure the tests I create cover all scenarios.
Describe a situation where you had to deal with a difficult colleague. How did you handle it?
While I was a QA Test Lead, there were multiple days where the bugs being submitted were not written correctly. That was frustrating, but I saw an opportunity to connect personally with the testers having issues and go over the confusion and the steps to make their bug reports better. In the end, they became very efficient.
What tools do you use for testing, and why do you prefer them?
JIRA: For tracking bugs and managing test cases, which makes it easier to communicate with the dev team about bugs and to run the bug verification process.
Confluence: For documentation to get details and important information about the feature being tested.
TestRail: For test case management; creating, organizing, and managing test cases.
Can you give an example of a time when you found a critical bug late in the development cycle?
In a past project, I found a critical bug during final regression testing for a new weapon that was going to be released: when you used a tactical ability while reloading, it gave you infinite ammo. I immediately communicated the issue to the development team, outlining the severity and the steps to reproduce it. While it was late in the cycle, the team prioritized fixing it, and I worked overtime to ensure the patch was thoroughly tested. The release was delayed slightly, but catching the bug prevented a significant problem for end users.
I realized it would be a blocker and would stop the release, so I escalated it to the devs and the release manager.
How do you manage tight deadlines or high-pressure situations?
In high-pressure situations, I remain organized and focus on prioritization. I break down tasks into smaller, manageable pieces and prioritize the most critical tests or bugs that need attention. I also communicate effectively with my team about what can realistically be achieved within the time frame. If needed, I escalate concerns early to avoid bottlenecks later.
CONTEXT SWITCHING.
How do you handle feedback on your work?
I view feedback as an opportunity to improve. Whether it's positive or constructive, I take time to reflect on it and look for areas where I can grow. For example, if I receive feedback that a test case wasn’t detailed enough, I revise my approach to ensure clarity and thoroughness next time. I appreciate open communication and always aim to learn from feedback.
How do you approach writing a test case? (Maybe have to know)
Understand the Requirement: I first review the requirement or user story thoroughly to understand what the expected behavior is.
Define the Objective: I define what I’m trying to validate with this test.
Set Pre-conditions: I determine the environment setup and any prerequisites that need to be met before running the test.
Steps to Execute: I list detailed steps for how to execute the test, ensuring they are clear and repeatable.
Expected Result: I define what the expected outcome should be after each step.
Post-conditions: I include any clean-up steps, if necessary, to return the system to its initial state.
Traceability: I make sure each test case can be traced back to a specific requirement to ensure coverage. (A worked example follows below.)
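A hypothetical example following that structure, reusing the login scenario from earlier in these notes (the account name and details are made up):

    Objective: Verify a registered user can log in with valid credentials.
    Pre-conditions: The test account "SureThing%$^" exists; the app is on the login screen.
    Steps to Execute:
    1. Enter the username "SureThing%$^".
    2. Enter the matching password.
    3. Press the Login button.
    Expected Result: The user is logged in and lands on the main menu.
    Post-conditions: Log the account out so the environment is clean for the next run.
    Traceability: Maps back to the login requirement/user story being covered.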
What is the difference between severity and priority in bug tracking?
Severity refers to the impact a bug has on the system’s functionality. For example, a crash would be classified as high severity, while a minor UI issue might be low severity. Priority is related to how soon the bug needs to be fixed, which is more about business impact. A low severity issue could be high priority if it's on a customer-facing page, while a high severity bug in a less-used feature might have a lower priority.
What would you do if you couldn’t consistently repro a bug?
Waiting for Bobby to tell me.
What is exploratory testing, and when do you use it?
Exploratory testing is an unscripted approach where testers actively explore the application to find defects without predefined test cases. I use exploratory testing when:
The requirements are not fully detailed.
There’s a need to check for real-world usage issues.
I'm testing a new feature that hasn’t been fully documented yet. It’s a good way to discover unexpected issues that structured testing might not cover.
TEST THE GOLDEN PATH.
TEST EVERYTHING BUT THE GOLDEN PATH.
How do you test an application without any documentation?
If there’s no documentation, I start by exploring the application myself to understand its functionality. I communicate with developers, product managers, or end-users to gather informal requirements or understand the expected behavior. I then create basic test scenarios and progressively refine them as I learn more about the application. Using exploratory testing techniques in such cases is essential.
What’s the biggest challenge you’ve faced in QA, and how did you overcome it?
Bridging the communication gap between tester and developer. At EA, it was almost frowned upon to talk with developers about the features/bugs we were testing.
How do you handle repetitive tasks in testing?
I would say that it is important to break up your tasks to avoid burnout. I had to review bug reports, help with documentation, and then do our normal exploratory testing. I split my time up based on priority and deadline, which helped me handle the repetitive tasks in testing.
Can you explain what regression testing is and why it’s important?
Regression testing is checking whether the build you are testing has regressed, meaning functionality that worked before is now broken. For example, if we are testing version 2.0 and we find a bug, we go back to version 1.0 and test whether the bug exists there. If it does, it is not a regression (the issue was already present). If it does not, then we have regressed, because the current version has a bug that the previous one did not.
This is important because we never want the quality of our product to go down, when the whole goal of the newer version is to be an improvement on the previous version.
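A small sketch of that check, with made-up build data standing in for actually running the repro steps on each build:

    # Sketch of the "is this a regression?" triage described above.
    KNOWN_BUGS = {
        "1.0": {"door-clips-through-wall"},
        "2.0": {"door-clips-through-wall", "infinite-ammo-on-reload"},
    }

    def bug_reproduces(bug, build):
        # Stand-in for running the repro steps against that build.
        return bug in KNOWN_BUGS[build]

    def is_regression(bug, current="2.0", previous="1.0"):
        # A bug found in the current build is a regression only if it did NOT
        # exist in the previous build.
        return bug_reproduces(bug, current) and not bug_reproduces(bug, previous)

    print(is_regression("infinite-ammo-on-reload"))   # True  - new in 2.0
    print(is_regression("door-clips-through-wall"))   # False - already in 1.0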
How would you handle a situation where a bug you found is marked as ‘Not a Bug’ by the development team?
In my experience, if the developer does not think that it is a bug, they will write a comment on my JIRA ticket explaining why it is not a bug.
If I have any doubts or disagree, I would go to my analyst to discuss before moving forward with anything regarding the bug.
What is load testing, and how is it different from stress testing?
Load testing involves testing the application’s performance under expected, normal usage conditions. For example, simulating hundreds of users accessing the system simultaneously to check if it can handle the load without issues. Stress testing goes a step further by pushing the system beyond its limits to see how it behaves under extreme conditions, such as very high traffic or low system resources. The goal is to identify the system's breaking point and how gracefully it handles failure.
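A toy sketch of the difference, with a hypothetical handle_request() standing in for one user hitting the system; the user counts are illustrative only:

    # Toy contrast between load and stress testing.
    import concurrent.futures
    import time

    def handle_request(user_id):
        # Stand-in for one user's action (e.g., joining a match, firing an ability).
        time.sleep(0.01)
        return True

    def run_concurrent_users(count):
        start = time.time()
        with concurrent.futures.ThreadPoolExecutor(max_workers=count) as pool:
            results = list(pool.map(handle_request, range(count)))
        return all(results), round(time.time() - start, 2)

    # Load test: expected peak usage - does the system hold up?
    print(run_concurrent_users(60))
    # Stress test: push far past the expected limit to find the breaking point.
    print(run_concurrent_users(600))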
How would you explain the importance of QA to a non-technical stakeholder?
Overall, QA exists to ensure the quality of the product. We have a lot of people in a variety of positions who work to ensure quality in different ways.
The engineering side of QA does things like automating testing and providing tools.
We have analysts, who are the subject matter experts of certain features. They try to maximize test coverage of their feature.
We have testers, who execute the manual testing of said features.
All these positions work together to provide a product of the highest quality.
What do you do if you discover a requirement or feature is untestable?
First you want to know why the feature is untestable.
Next you want to communicate to your analyst that the feature is not testable.
How do you ensure effective communication with remote teams?
Effective communication with remote teams requires:
Daily Check-ins: Holding daily standups to discuss progress and the goals for the day.
Clear Documentation: Keeping test cases, bugs, and reports well-documented and available for all team members.
Use of Collaboration Tools: Leveraging tools like Slack, JIRA, and Zoom to stay connected and ensure all communication is clear and documented.
Flexibility: Being flexible with meeting times to accommodate different time zones and ensuring everyone has a voice during discussions.
What do you think are the key qualities of a good QA tester?
A good QA tester should have:
Attention to Detail: The ability to spot even minor issues and discrepancies.
Curiosity: A willingness to explore different scenarios and edge cases.
Communication Skills: The ability to effectively communicate findings with both technical and non-technical stakeholders.
Analytical Thinking: The skill to break down complex functionality and understand the root cause of problems.
Teamwork: The ability to collaborate well with developers, product managers, and other stakeholders.
Patience and Perseverance: Testing can be repetitive, so it’s important to remain patient and persistent in finding bugs.