How to fix `MaxAttemptsExceededException` in Laravel Jobs

If you’ve been working with Laravel and queued jobs for a while, you’ve probably stumbled on a “Job has been attempted too many times or run too long” error.

This error surfaces as a MaxAttemptsExceededException.

I had one of these on an app I’m working on and I really didn’t know how to fix it.

Here was the initial code for my job.

// FetchMetrics.php@handle
// $timeout = 180;
// $retryAfter = 210;

Redis::throttle('fetch-metrics')->allow(60)->every(100)->then(function () {
    // Fetch the Google API
}, function () {
    // Could not obtain the lock: release the job back onto
    // the queue and retry it in 20 seconds.
    return $this->release(20);
});

This piece of code works, but with a big enough queue it will eventually break and throw thousands of MaxAttemptsExceededException errors. In my case, that happened once I reached 12K jobs.

I tried changing the timeout and retryAfter values maybe ten times. It always ended the same way: lots of exceptions.
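For context, those values live as public properties on the job class. Here is a minimal sketch of what that looks like (the $tries value of 5 is an assumption for illustration; only $timeout and $retryAfter come from my job):

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class FetchMetrics implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // Seconds the job may run before the worker kills it.
    public $timeout = 180;

    // Seconds to wait before retrying a released or stuck job.
    public $retryAfter = 210;

    // Maximum attempts before MaxAttemptsExceededException is thrown.
    public $tries = 5;

    public function handle()
    {
        // ... throttled API call goes here
    }
}
```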

I also tried implementing the failed method on my job, but that only lets you react after the job has permanently failed; it doesn’t prevent the failure.
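In case it’s useful, here’s a minimal sketch of that failed method (the logging call is just an example of what you might do there, not part of my original job):

```php
use Illuminate\Queue\MaxAttemptsExceededException;
use Illuminate\Support\Facades\Log;

// Inside the FetchMetrics job class:
public function failed(\Exception $exception)
{
    // Runs once the job has exhausted all of its attempts.
    if ($exception instanceof MaxAttemptsExceededException) {
        Log::warning('FetchMetrics hit its max attempts');
    }
}
```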

So finally, after weeks of research, I found the solution: ->block().

This method makes the worker wait (block) up to a given number of seconds to obtain the throttle lock. It turns out that without it, my throttled jobs were released and retried immediately, and each release consumed one of the job’s attempts. So because of the throttling, after a while all the jobs reached their maximum number of tries.

// FetchMetrics.php@handle
// $timeout = 180;
// $retryAfter = 210;

Redis::throttle('fetch-metrics')->allow(60)->every(100)->block(120)->then(function () {
    // Fetch the Google API
}, function () {
    // Could not obtain the lock even after blocking: release the job
    // back onto the queue and retry it in 20 seconds.
    return $this->release(20);
});

By adding ->block(120), the worker waits up to 120 seconds for a lock once we reach the threshold of 60 jobs per 100 seconds, instead of releasing and retrying the job right away.
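One thing to keep in mind (my own reading of the numbers, so treat it as a rule of thumb): the block time should stay below your timeout and retryAfter values, otherwise the worker can kill the job or re-dispatch it while it is still waiting for the lock. With the values above:

```php
// allow(60)->every(100)  => at most 60 jobs per 100 seconds
// block(120)             => wait up to 120s for a lock slot
// $timeout = 180         => 120s of blocking still fits within the timeout
// $retryAfter = 210      => the job is not re-dispatched while it blocks
```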
