High-Performance .NET: Async, Multithreading, and Parallel Programming
Parallel Loops in .NET
Created: 19 Jan 2026 | Updated: 19 Jan 2026

Implementing Retry Logic and Thread-Safe Logging

In high-performance applications, transient faults—such as temporary network glitches or database timeouts—are inevitable. When these occur within a parallel loop, a single failure shouldn't necessarily crash the entire operation. Instead, a resilient system should be able to log the error and retry the task until it succeeds.

As explored in Jeff McNamara’s Ultimate C# for High-Performance Applications, combining the Task Parallel Library (TPL) with thread-safe collections allows us to build loops that are both fast and fault-tolerant.

The Strategy: "Retry Until Success"

To handle transient errors in a parallel environment, we use three key components:

  1. Thread-Safe Storage: Using ConcurrentBag<T> or ConcurrentDictionary<K,V> ensures that multiple threads can log errors or save results simultaneously without data corruption.
  2. Internal Loop: A while loop inside the parallel delegate allows a specific thread to keep attempting its assigned task if an exception occurs.
  3. Local Exception Handling: Catching exceptions inside the loop body allows for immediate logging and triggers the retry logic.

Practical Example: Resilient Parallel Data Processing

In this example, we simulate processing a batch of sensor data. Some readings might fail due to "simulated sensor noise," but our loop will log those failures and retry until the data is correctly recorded.

using System.Collections.Concurrent;

Console.WriteLine("Parallel Processing Example");

ResilientProcessor processor = new ResilientProcessor();
processor.RunParallelProcess();

class ResilientProcessor
{
    public void RunParallelProcess()
    {
        var errorLog = new ConcurrentBag<string>();
        var processedData = new ConcurrentDictionary<int, string>();
        var retryCounters = new ConcurrentDictionary<int, int>();

        var sensorIds = Enumerable.Range(100, 50).ToList();

        Parallel.ForEach(sensorIds, id =>
        {
            bool success = false;
            int retryCount = 0;

            while (!success)
            {
                try
                {
                    string result = ReadSensorData(id);
                    processedData.TryAdd(id, result);
                    success = true;

                    if (retryCount > 0)
                    {
                        retryCounters.TryAdd(id, retryCount);
                        Console.WriteLine($"✅ Sensor {id} succeeded after {retryCount} retry(ies)");
                    }
                    else
                    {
                        Console.WriteLine($"✅ Sensor {id} succeeded on first attempt");
                    }
                }
                catch (TimeoutException ex)
                {
                    retryCount++;
                    errorLog.Add($"[Sensor {id}] Retry #{retryCount}: {ex.Message}");
                    Console.WriteLine($"🔄 Sensor {id} failed, retrying... (Attempt #{retryCount})");
                }
            }
        });

        Console.WriteLine("\n--- Processing Complete ---");
        Console.WriteLine($"Total Successful Readings: {processedData.Count}");
        Console.WriteLine($"Total Transient Failures: {errorLog.Count}");
        Console.WriteLine($"Sensors that needed retry: {retryCounters.Count}");
        Console.WriteLine($"Total retry attempts: {retryCounters.Values.Sum()}");

        if (retryCounters.Any())
        {
            Console.WriteLine("\n--- Retry Details ---");
            foreach (var kvp in retryCounters.OrderByDescending(x => x.Value))
            {
                Console.WriteLine($"Sensor {kvp.Key}: {kvp.Value} retry(ies)");
            }
        }
    }

    private string ReadSensorData(int id)
    {
        // Simulate transient sensor noise: roughly 1 in 5 reads times out.
        if (Random.Shared.Next(0, 5) == 0)
        {
            throw new TimeoutException("Sensor timed out.");
        }

        return $"DataValue_{id * 2}";
    }
}

Why This Works

1. Concurrent Collections

Standard List<T> or Dictionary<K,V> classes are not thread-safe. If two threads call .Add() at the same time, you can lose data, corrupt internal state, or crash the application. The concurrent collections avoid this: ConcurrentDictionary uses fine-grained locking with lock-free reads, and ConcurrentBag keeps per-thread local storage, so contention stays low even under heavy concurrency.
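A minimal sketch of the difference: ten thousand parallel adds into a ConcurrentBag<int> never lose an item, whereas the same code against a plain List<int> could miss items or throw.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Many threads adding to a thread-safe collection at once.
var bag = new ConcurrentBag<int>();

Parallel.For(0, 10_000, i => bag.Add(i));

// Every add is preserved: prints 10000.
Console.WriteLine(bag.Count);
```

Swapping `bag` for a `List<int>` makes the result nondeterministic, which is exactly the corruption the concurrent collections are designed to prevent.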

2. Isolation of Failure

Because the while loop and try-catch live inside the parallel delegate, a failure for "Sensor 101" does not slow down "Sensor 102." Each worker thread manages its own retries independently.

3. Reduced System Stress

By logging errors to a concurrent collection instead of writing directly to the Console or a database inside the loop, we minimize "contention": the situation where threads block waiting for a single shared resource. (The sample above still writes progress to the Console on every attempt for demonstration purposes; in production you would typically batch that output.)
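This pattern can be sketched as follows: threads only add to the in-memory collection during the loop, and a single batched write happens afterwards. The fault condition here (every tenth item) is invented purely for illustration.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var errorLog = new ConcurrentBag<string>();

// During the loop, threads only touch the concurrent collection,
// which is cheap; no thread blocks on the console or a database.
Parallel.For(0, 100, i =>
{
    if (i % 10 == 0) // simulated transient fault on every tenth item
    {
        errorLog.Add($"[Item {i}] transient fault logged");
    }
});

// One batched write after all threads have finished.
Console.WriteLine($"Faults recorded: {errorLog.Count}");
Console.WriteLine(string.Join(Environment.NewLine, errorLog));
```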

Best Practices for Retries

  1. Avoid Infinite Loops: In production, add a maxRetries counter to prevent a permanent error from running forever.
  2. Exponential Backoff: If the error is network-related, add a growing delay between retries (Thread.Sleep in a synchronous Parallel.ForEach delegate, or await Task.Delay in async code) to let the system recover.
  3. Monitor Thread Count: Too many retries can keep CPU cores busy, potentially delaying other parts of your application.
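The first two practices can be sketched together in one bounded retry loop. MaxRetries, BaseDelayMs, and DoWork are illustrative names chosen for this sketch, not part of the example above; here DoWork is rigged to fail twice before succeeding.

```csharp
using System;
using System.Threading;

// Illustrative bounds: give up after 5 attempts, start with a 100 ms delay.
const int MaxRetries = 5;
const int BaseDelayMs = 100;

int calls = 0;
// Hypothetical operation that fails transiently on its first two calls.
void DoWork()
{
    if (++calls < 3) throw new TimeoutException("simulated transient fault");
}

bool success = false;
int attempt = 0;

while (!success && attempt < MaxRetries)
{
    try
    {
        DoWork();
        success = true;
    }
    catch (TimeoutException)
    {
        attempt++;
        // Exponential backoff: 100 ms, 200 ms, 400 ms, ... up to MaxRetries.
        Thread.Sleep(BaseDelayMs * (1 << (attempt - 1)));
    }
}

Console.WriteLine(success
    ? $"Succeeded after {attempt} retry(ies)."
    : $"Gave up after {MaxRetries} attempts.");
```

The `1 << (attempt - 1)` doubling is the simplest form of backoff; production systems often add random jitter so that many failing workers do not all retry at the same instant.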
