Endpoint ft. Ijeoma Okereafor: On the human element of testing and ecosystem quality

Ijeoma Okereafor is a Quality Assurance Engineer who views testing through the lens of holistic system reliability. With a background rooted in Network Engineering, she brings a unique infrastructure-first mindset to API and mobile testing. Currently driving product quality across SaaS, Telecom, and AI platforms, Ijeoma combines technical depth—spanning web, mobile, and machine learning environments—with a sharp focus on user behavior. She champions the idea that quality is not just about finding bugs, but about understanding the architecture that powers the product.
We spoke with Ijeoma to get her perspective on how network fundamentals shape API testing and why the “human eye” remains irreplaceable in an automated world.
A Q&A with Ijeoma Okereafor
1. How does your previous experience in network engineering influence the way you now design API test cases and identify potential failure points?
One thing my background in network engineering really helped me with is that I don’t see APIs as just endpoints. I see them as part of a full communication flow.
Before moving fully into API testing, I already understood how systems talk to each other behind the scenes. Things like latency, packet loss, routing delays, handshake failures, timeouts, and unstable connections were not new concepts to me. So when I started designing API test cases, I naturally began thinking beyond the happy path.
For example, I don’t just test whether an endpoint returns a 200 response. I think about what happens when the network is slow, when there is intermittent connectivity, when a request is partially received, or when retries happen due to a timeout. These are real-world conditions users face, and many failures actually come from these situations, not from the application logic itself.
My network background also helps me identify failure points that are not obvious at first glance. Things like dependency delays between services, DNS resolution issues, load balancer behavior, or how APIs respond under unstable connections.
I tend to ask questions like: “What happens if the request reaches the server but the response cannot return in time?” “What happens during handshake delays?” “How does the API behave under packet drops or retries?”
This mindset influences how I design my negative test cases and resilience tests. I focus a lot on timeout handling, retry mechanisms, idempotency, and how the system behaves under degraded network conditions.
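Resilience checks like these can be exercised even without a live network. The minimal Python sketch below (the `send` transport, stub server, and header name are illustrative assumptions, not part of any real service) shows why retries should reuse a single idempotency key, so the server can deduplicate repeated attempts of the same request:

```python
import uuid

def call_with_retries(send, payload, max_attempts=3):
    """Retry an unreliable call, reusing ONE idempotency key across
    attempts so the server can safely deduplicate repeats."""
    idempotency_key = str(uuid.uuid4())
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            # `send` stands in for a real HTTP client call; it raises
            # TimeoutError when the simulated request times out.
            return send(payload, headers={"Idempotency-Key": idempotency_key})
        except TimeoutError as exc:
            last_error = exc  # retry with the SAME key, never a new one
    raise last_error

# A stub transport that times out twice, then succeeds, while recording
# which idempotency keys it saw.
seen_keys = []
calls = {"n": 0}

def flaky_send(payload, headers):
    seen_keys.append(headers["Idempotency-Key"])
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated network timeout")
    return {"status": 200, "echo": payload}

result = call_with_retries(flaky_send, {"order": 42})
assert result["status"] == 200
assert len(set(seen_keys)) == 1  # all retries shared one idempotency key
```

If a new key were generated per attempt, a server that processed the first, timed-out request would treat each retry as a fresh operation, which is exactly the kind of duplicate-side-effect defect this style of negative test is meant to catch.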
So instead of testing APIs only from an application perspective, I test them from a communication perspective as well. That has helped me catch issues that are not immediately visible during functional testing but would have shown up in production environments.
In short, my network engineering experience trained me to think in terms of reliability and failure behavior, not just correctness. And that has significantly shaped the way I approach API testing today.
2. As someone proficient in manual testing, where do you believe the “human eye” is still superior to a script in API testing?
Automation is powerful, no doubt. But there are still areas in API testing where the human eye sees what a script simply cannot.
Scripts are built to check what we tell them to check. Humans, on the other hand, notice what was never specified.
For instance, a script can validate that the response status is 200 and that certain fields exist. But as a manual tester, I look deeper into whether the response actually makes sense in context.
I question things like: “Does this data logically align with the request?” “Does this response create any downstream risk?” “Is there a silent data inconsistency that may not break the system now but could cause issues later?”
Sometimes everything passes technically but still feels wrong functionally.
Another area where the human eye is superior is in identifying behavioral gaps. APIs may respond correctly according to documentation but still expose usability or integration risks. For example, responses that are technically valid but poorly structured for consumers, inconsistent naming conventions, unnecessary payload weight, or error messages that are not actionable.
Manual testing also plays a big role in exploratory scenarios. When I am interacting with APIs, I often try unexpected sequences, unusual combinations of inputs, or borderline data conditions that were never originally captured in test scripts. These kinds of scenarios are usually where hidden defects live.
There is also the aspect of intuition built from experience. After years of testing, you start recognizing patterns. You can sense when something is fragile even if it has not failed yet. A script waits for failure. A human can predict it.
So while automation ensures speed and coverage, the human eye brings judgment, context, and curiosity. It helps us validate not just correctness but reliability and real-world readiness.
That combination is what truly strengthens API quality.
3. What is your personal workflow for reporting API defects to developers to minimize back-and-forth and ensure faster fixes?
Over time, I’ve learned that the way you report an API defect can either speed things up or slow the whole team down.
My personal workflow is built around clarity and anticipation. I try to answer the developer’s next question before they even ask it.
First, I make sure the issue is truly isolated. Before raising a defect, I validate that it is not coming from test data, environment instability, authentication issues, or request misconfiguration. This helps avoid noise and builds trust with the dev team.
Once confirmed, I document the issue in a way that makes it immediately reproducible. I include the exact endpoint, request payload, headers, environment, and timestamp. I also attach the response body and status code so there is no ambiguity.
I always describe the expected behavior versus the actual behavior in simple terms. Not from a testing perspective but from a system or business perspective. This helps developers quickly understand the impact.
Where possible, I also highlight patterns, for example if the issue occurs only under certain conditions such as specific data types, edge values, or a particular sequence of calls. This reduces the need for back-and-forth investigation.
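A defect write-up along these lines can even be templated so nothing is forgotten. Here is a small, hypothetical Python sketch (the field names and example endpoint are invented for illustration, not a real tool or standard) that assembles the details above into one reproducible report:

```python
def format_defect_report(endpoint, method, request, response,
                         expected, actual, conditions=None):
    """Assemble a reproducible API defect report as plain text.
    All field names here are illustrative, not a real template standard."""
    lines = [
        f"Endpoint: {method} {endpoint}",
        f"Request payload: {request['payload']}",
        f"Request headers: {request['headers']}",
        f"Response status: {response['status']}",
        f"Response body: {response['body']}",
        f"Expected: {expected}",
        f"Actual: {actual}",
    ]
    if conditions:
        # Capture the pattern under which the bug reproduces.
        lines.append("Occurs only when: " + "; ".join(conditions))
    return "\n".join(lines)

report = format_defect_report(
    endpoint="/v1/orders",
    method="POST",
    request={"payload": {"qty": 0},
             "headers": {"Content-Type": "application/json"}},
    response={"status": 200, "body": {"orderId": None}},
    expected="400 with a validation error for qty=0",
    actual="200 with a null orderId",
    conditions=["qty is zero", "order placed via mobile client"],
)
print(report)
```

The point is less the helper itself than the checklist it encodes: endpoint, payload, headers, status, body, expected versus actual, and the reproduction conditions, all in one place.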
I also attach logs or traces when available because API issues are often easier to debug with visibility into the flow.
Another important part of my workflow is context. I mention if the defect affects integration, performance, or downstream services. This helps the developer prioritize correctly.
Finally, I keep communication open but concise. If it’s a critical issue, I usually follow up with a quick discussion instead of relying only on the ticket. This avoids long threads and speeds up resolution.
The goal is always to move from reporting to fixing as quickly as possible. So I focus on making the defect actionable, not just visible.
4. With the rise of complex microservices and distributed systems, what do you believe is the biggest challenge API testers will face in the next 1–2 years?
In the next one to two years, I believe the biggest challenge for API testers will not be testing individual services but understanding system behavior across service boundaries.
With microservices becoming more distributed, failures are no longer always obvious. A single request may pass through multiple services, queues, gateways, and third party integrations before a response is returned. So when something goes wrong, it is rarely a single point failure.
The real challenge will be visibility.
As testers, we will need to move beyond validating responses and start understanding how services interact under different conditions. Issues like cascading failures, timeout chains, partial service degradation, and inconsistent data synchronization will become more common.
For example, an API may return a valid response while another dependent service silently fails in the background. The failure may not show immediately but can surface later as data inconsistency or performance degradation.
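This "valid response, silent downstream failure" pattern is exactly why test assertions should cover dependent state, not just the immediate response. The sketch below uses a stub client with invented endpoints and field names to illustrate the idea:

```python
class StubClient:
    """Simulates an API whose order endpoint returns 201 while the
    dependent inventory service may silently fail to reserve stock."""
    def __init__(self, inventory_healthy):
        self.inventory_healthy = inventory_healthy
        self.reserved = {}

    def post_order(self, sku, qty):
        if self.inventory_healthy:
            self.reserved[sku] = self.reserved.get(sku, 0) + qty
        return {"status": 201, "sku": sku, "qty": qty}  # 201 either way

    def get_reserved(self, sku):
        return self.reserved.get(sku, 0)

def order_is_consistent(client, sku, qty):
    """Pass only if the response AND the downstream state agree."""
    resp = client.post_order(sku, qty)
    if resp["status"] != 201:
        return False
    # The extra step: verify the dependent service actually recorded it.
    return client.get_reserved(sku) >= qty

# A response-only check would pass in both cases below; the
# consistency check catches the silent downstream failure.
assert order_is_consistent(StubClient(inventory_healthy=True), "ABC", 2)
assert not order_is_consistent(StubClient(inventory_healthy=False), "ABC", 2)
```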
Another challenge will be environment parity. With systems becoming more cloud native and dynamic, ensuring that test environments truly reflect production behavior will be harder. Things like network variability, scaling behavior, and service dependencies will influence API reliability in ways traditional testing does not always capture.
Testers will also need stronger collaboration with observability tools such as logging, tracing, and monitoring platforms. Understanding how to interpret system signals will become just as important as validating request and response.
So the shift will be from endpoint testing to ecosystem testing.
API testers will need to think more like system reliability engineers, focusing not just on whether something works, but whether it continues to work under stress, scale, and failure conditions.
That mindset will define the next phase of API quality.
5. Beyond the technical stack, what is the most underrated skill for developers and testers today that significantly impacts product quality?
Beyond technical skills, I strongly believe that clear thinking is the most underrated skill that impacts product quality today.
Not just communication in the usual sense, but the ability to truly understand the problem before jumping into solutions.
Many product issues don’t happen because people lack technical knowledge. They happen because assumptions were made. A developer assumes how something will be used. A tester assumes how something should behave. A product owner assumes how something was understood.
And in distributed teams especially, small misunderstandings can grow into real production risks.
For developers, this shows up in how requirements are interpreted. Writing clean code is important, but understanding the intent behind the feature is what prevents logical gaps.
For testers, it influences how we validate systems. If we only test what is written, we may miss what was actually meant.
The ability to ask the right questions, challenge politely, and clarify expectations early can prevent weeks of rework later.
Another underrated part of this skill is empathy. Understanding how your work affects the next person in the chain. When developers think about how their APIs will be consumed, and testers think about how failures affect users, quality naturally improves.
Technical tools help us detect issues. But clear thinking and shared understanding help us prevent them.
And prevention will always be more valuable than detection when it comes to product quality.
6. What is one piece of career advice you wish you had received early on that you now share with others entering the QA space?
One piece of advice I wish I had received early in my QA journey is: Do not focus only on how to test. Focus on how the system works.
When I started out, like many people, I thought being a good tester meant mastering test cases, tools, and bug reporting. Those things are important, but real growth started when I became curious about the architecture behind what I was testing.
Understanding how services communicate, how data flows, how failures happen, and how different components depend on each other changed my approach completely.
It helped me move from simply finding defects to understanding why they happen.
That shift is what took me from executing tests to designing meaningful ones. It also made collaboration with developers easier because conversations became more about solutions than symptoms.
So when I speak to people entering QA today, I encourage them to go beyond the surface. Learn the basics of how systems are built. Understand APIs, environments, integrations, and infrastructure.
Tools will change over time. But system thinking stays relevant.
The earlier you develop that mindset, the faster you grow from a tester into a quality professional.
(Responses may have been edited for clarity.)
What was your biggest takeaway from Ijeoma’s insights? Share your thoughts and join the conversation on LinkedIn!
Join us in celebrating Ijeoma Okereafor and the incredible work of all the developers, builders, and product leaders who are pushing the boundaries of technology. Stay tuned as we continue to spotlight more leaders in our Endpoint series.