How to Run Failed Test Cases in TestNG: Methods and Best Practices

Azma Banu
Learn how to rerun failed test cases in TestNG using testng-failed.xml, IRetryAnalyzer, and listeners. Explore challenges, best practices, and why retries should be tested on real browsers.

In large-scale automation testing, test case failures are inevitable. Failures may arise due to application issues, unstable test scripts, or environmental factors. In TestNG, one of the most widely used testing frameworks for Java, rerunning failed test cases ensures that intermittent failures do not affect the stability of the entire test suite.

With built-in features and customizable retry mechanisms, TestNG provides multiple ways to handle failed tests effectively.

Why Do Test Cases Fail in TestNG?

Test case failures in TestNG can stem from different sources. Identifying the root cause is critical before deciding on reruns.

  • Application-side issues: Bugs in the application under test can cause consistent failures.
  • Script errors: Incorrect locators, synchronization issues, or poor test logic often lead to unstable scripts.
  • Environment instability: Network issues, slow servers, or unavailable test data may cause temporary failures.
  • Cross-browser inconsistencies: Tests may pass on one browser but fail on another due to rendering or compatibility differences.

Not all failures require immediate fixing—some can be resolved through controlled reruns.

Understanding TestNG Reports for Failed Tests

After executing a test suite, TestNG generates detailed reports. One of the key outputs is testng-failed.xml, which lists only the failed test cases from the last run. This XML file helps testers rerun only the failed tests instead of re-executing the entire suite.

The report provides:

  • Names of failed test cases.
  • Failure stack traces and reasons.
  • Execution time details.
  • Links to rerun scripts (testng-failed.xml).

This mechanism allows efficient debugging and re-execution of problem tests.
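A generated testng-failed.xml typically looks like the sketch below; the suite, class, and method names are illustrative, and the actual file mirrors whatever failed in your last run:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Failed suite [RegressionSuite]">
  <test name="LoginTests(failed)">
    <classes>
      <class name="com.example.tests.LoginTest">
        <methods>
          <include name="loginTest"/>
        </methods>
      </class>
    </classes>
  </test>
</suite>
```

Because only the failed methods are included, executing this file skips every test that already passed.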

Methods to Re-run Failed Test Cases in TestNG

TestNG provides multiple approaches for rerunning failed test cases, from built-in rerun files to custom retry logic.

Using testng-failed.xml File

TestNG automatically creates a testng-failed.xml file inside the test-output folder after execution.

Steps:

  1. Navigate to the test-output directory.
  2. Locate the testng-failed.xml file.
  3. Execute it as a TestNG suite.

This reruns only the failed tests, making the process efficient. However, it does not provide retry logic for tests within the same execution.
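Besides running it from an IDE, the file can be executed with TestNG's command-line runner; the classpath entries below are illustrative and depend on your project layout:

```shell
java -cp "bin:lib/*" org.testng.TestNG test-output/testng-failed.xml
```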

Using IRetryAnalyzer Interface

TestNG’s IRetryAnalyzer interface allows testers to automatically retry failed tests a defined number of times.

Example in Java:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyzer implements IRetryAnalyzer {
    private int count = 0;
    private static final int MAX_RETRY = 2;

    @Override
    public boolean retry(ITestResult result) {
        if (count < MAX_RETRY) {
            count++;
            return true; // Retry the test
        }
        return false; // Stop retrying
    }
}

To apply this, add the retryAnalyzer attribute to your test method:

@Test(retryAnalyzer = RetryAnalyzer.class)
public void testLogin() {
    // test steps
}

This ensures that a failed test automatically retries before being marked as failed in the final report.
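The counting behavior behind a retry analyzer can be illustrated in plain Java, with no TestNG dependency. RetryPolicy below is a hypothetical stand-in used only to show the logic, not a TestNG class:

```java
// Sketch of the counting logic a retry analyzer applies per test.
// RetryPolicy is illustrative; TestNG's real hook is IRetryAnalyzer.retry().
public class RetryPolicy {
    private int count = 0;
    private final int maxRetry;

    public RetryPolicy(int maxRetry) {
        this.maxRetry = maxRetry;
    }

    // Mirrors IRetryAnalyzer.retry(): true means "run the test again".
    public boolean shouldRetry() {
        if (count < maxRetry) {
            count++;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        RetryPolicy policy = new RetryPolicy(2);
        System.out.println(policy.shouldRetry()); // first failure  -> true
        System.out.println(policy.shouldRetry()); // second failure -> true
        System.out.println(policy.shouldRetry()); // retries exhausted -> false
    }
}
```

Because the counter lives in the analyzer instance, each test method gets its own retry budget, which is why MAX_RETRY is typically kept small.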

Configuring Retry Logic with Listeners

For broader control, TestNG listeners can be used in conjunction with retry analyzers. Implementing ITestListener allows testers to capture failures and trigger retries dynamically.

Example:

import org.testng.ITestListener;
import org.testng.ITestResult;

public class TestListener implements ITestListener {
    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("Test failed: " + result.getName());
        // Custom logging or retry logic
    }
}

Listeners can be added in the testng.xml file to apply globally across tests.
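A listener is registered in testng.xml with a listeners block; the class and package names below are illustrative:

```xml
<suite name="RegressionSuite">
  <listeners>
    <listener class-name="com.example.listeners.TestListener"/>
  </listeners>
  <test name="LoginTests">
    <classes>
      <class name="com.example.tests.LoginTest"/>
    </classes>
  </test>
</suite>
```

Registering the listener at suite level applies it to every test class in the suite, so failure handling does not need to be repeated per class.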

Practical Example of Re-running Failed Test Cases

Consider a login test that fails intermittently due to network delays. By integrating RetryAnalyzer, the test will automatically rerun up to the configured limit before being marked as failed.

@Test(retryAnalyzer = RetryAnalyzer.class)
public void loginTest() {
    // driver is assumed to be an initialized WebDriver instance
    driver.get("https://example.com/login");
    driver.findElement(By.id("username")).sendKeys("user");
    driver.findElement(By.id("password")).sendKeys("pass");
    driver.findElement(By.id("loginBtn")).click();
    Assert.assertTrue(driver.findElement(By.id("dashboard")).isDisplayed());
}

If the page load fails once, TestNG reruns the test without manual intervention, ensuring false negatives are minimized.
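The rerun behavior can be sketched as a generic retry loop in plain Java. This is a simplified stand-in for what TestNG does internally, not TestNG's actual implementation; names here are illustrative:

```java
import java.util.function.Supplier;

// Minimal sketch of retry-on-failure semantics, with no TestNG dependency.
public class RetryRunner {
    // Runs the check up to maxAttempts times; returns true on the first success.
    public static boolean runWithRetries(Supplier<Boolean> check, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (check.get()) {
                return true; // passed on this attempt
            }
        }
        return false; // still failing after all attempts
    }

    public static void main(String[] args) {
        // Simulate a flaky check that fails once, then passes.
        int[] calls = {0};
        boolean passed = runWithRetries(() -> ++calls[0] >= 2, 3);
        System.out.println(passed); // true: succeeded on the second attempt
    }
}
```

A test that fails once due to a transient delay but passes on the second attempt ends up green, which is exactly the false-negative reduction described above.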

Common Challenges When Re-running Failed Tests

While rerunning failed tests improves efficiency, it can also introduce challenges:

  • Masking real issues: Continuous retries may hide genuine application bugs.
  • Increased execution time: Excessive retries can prolong test cycles.
  • Inconsistent failures: Flaky tests may pass in some runs and fail in others, leading to unreliable results.
  • Parallel execution conflicts: Retried tests running in parallel may lead to resource or data conflicts.

Mitigation requires striking the right balance between retries and debugging.

Best Practices for Handling Failed Test Cases in TestNG

To achieve stable automation runs, testers should follow structured practices:

  • Limit retries to 1–2 attempts to avoid masking real defects.
  • Analyze failure reports before rerunning—understand whether the issue is script or application-related.
  • Maintain separate logs for retried tests to differentiate flaky failures from actual defects.
  • Use deleteAllCookies() or environment resets to ensure clean states before reruns.
  • Run reruns on different browsers and environments to identify cross-browser inconsistencies.

Why Execute Retry Scenarios on Real Browsers and Devices

Local reruns can confirm basic script stability, but they don’t guarantee consistency across real-world environments. Browser differences, device-specific behaviors, and network conditions can influence test results.

BrowserStack Automate enables rerunning failed TestNG cases on thousands of real browsers and devices hosted on the cloud. Benefits include:

  • Validating flaky failures across multiple environments.
  • Ensuring compatibility in Chrome, Firefox, Safari, Edge, and mobile browsers.
  • Running retries in parallel on real infrastructure to reduce execution time.
  • Eliminating maintenance overhead of managing in-house device labs.

This ensures that rerun tests deliver results that truly reflect production-level scenarios.

Conclusion

Rerunning failed test cases in TestNG is essential for stabilizing automation suites and minimizing false negatives. By leveraging built-in reports, retry analyzers, and listeners, testers can automate retries effectively.

However, best practices must be followed to prevent masking real issues. Running reruns on real browsers and devices through platforms like BrowserStack ensures accurate, environment-specific validation and reliable test outcomes.
