
yuyo.backoff#

Utility used for handling automatic back-off.

This can be used to cover cases such as hitting rate-limits and failed requests.

Backoff #

Used to exponentially back off asynchronously.

This class acts as an asynchronous iterator and can be iterated over to provide implicit backoff: on every iteration other than the first it will back off for either the time passed to Backoff.set_next_backoff, if one was set, or a time calculated exponentially.

Each iteration yields the current retry count (starting at 0).

Examples:

An example of using this class as an asynchronous iterator may look like the following

# While we can directly do `async for _ in Backoff()`, by assigning it to a
# variable we allow ourselves to provide a specific backoff time in some cases.
backoff = Backoff()
async for _ in backoff:
    response = await client.fetch(f"https://example.com/{resource_id}")
    if response.status_code == 403:  # Ratelimited
        # If we have a specific backoff time then set it for the next iteration
        retry_after = response.headers.get("Retry-After")
        backoff.set_next_backoff(float(retry_after) if retry_after else None)

    elif response.status_code >= 500:  # Internal server error
        # Else let the iterator calculate an exponential backoff before the next loop.
        pass

    else:
        response.raise_for_status()
        resource = response.json()
        # We need to break out of the iterator to make sure it doesn't backoff again.
        # Alternatively `Backoff.finish()` can be called to break out of the loop.
        break

Alternatively you may want to explicitly call Backoff.backoff. An alternative implementation of the previous example which uses Backoff.backoff may look like the following:

backoff = Backoff()
resource = None
while not resource:
    response = await client.fetch(f"https://example.com/{resource_id}")
    if response.status_code == 403:  # Ratelimited
        # If we have a specific backoff time then set it for the next iteration.
        retry_after = response.headers.get("Retry-After")
        backoff.set_next_backoff(float(retry_after) if retry_after else None)
        await backoff.backoff()  # We must explicitly backoff in this flow.

    elif response.status_code >= 500:  # Internal server error
        # Else let the iterator calculate an exponential backoff and explicitly backoff.
        await backoff.backoff()

    else:
        response.raise_for_status()
        resource = response.json()

is_depleted property #

is_depleted

Whether "max_retries" has been reached.

This can be used to work out whether the loop was explicitly broken out of using Backoff.finish/break or whether it ended because it hit "max_retries".
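
For instance, a sketch of checking this after a retry loop; try_request here is a hypothetical helper which reports whether the request succeeded:

backoff = Backoff(max_retries=5)
async for _ in backoff:
    if await try_request():  # Hypothetical helper which reports success.
        # Breaking out early leaves the backoff undepleted.
        break

if backoff.is_depleted:
    # The loop ended because it hit "max_retries", not because we broke out.
    raise RuntimeError("Request failed after 5 retries")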

__init__ #

__init__(max_retries=None, *, base=2.0, maximum=64.0, jitter_multiplier=1.0, initial_increment=0)

Initialise a backoff instance.

Parameters:

  • max_retries (Optional[int], default: None ) –

    The maximum number of times this should iterate for between resets.

    If left as None then this iterator will be unlimited. This must be greater than or equal to 1.

  • base (float, default: 2.0 ) –

    The base to use.

  • maximum (float, default: 64.0 ) –

    The maximum value the backoff can be for a single iteration. Anything above this will be capped to this value plus random jitter.

  • jitter_multiplier (float, default: 1.0 ) –

    The multiplier for the random jitter.

    Set to 0 to disable jitter.

  • initial_increment (int, default: 0 ) –

    The initial increment to start at.

Raises:

  • ValueError

    If max_retries is less than 1, or if an int that's too big to be represented as a float or a non-finite value is passed for a field annotated as float.
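
As a brief sketch using only the parameters documented above, a bounded backoff without jitter might be built like the following:

# Give up after 5 retries, start one increment in, cap any single sleep at
# 32 seconds and disable the random jitter entirely.
backoff = Backoff(5, base=2.0, maximum=32.0, jitter_multiplier=0.0, initial_increment=1)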

backoff async #

backoff()

Sleep for the provided backoff or for the next exponent.

This provides an alternative to iterating over this class.

Returns:

  • int | None

    Whether this has reached the end of its iteration.

    If this returns None then that call didn't sleep as this has been marked as finished or has reached the max retries.
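
Assuming the None-at-the-end return contract described above, and reusing the illustrative client.fetch call from the earlier examples, driving this manually might look like the following:

backoff = Backoff(max_retries=3)
while True:
    response = await client.fetch(f"https://example.com/{resource_id}")
    if response.status_code == 200:
        break

    if await backoff.backoff() is None:
        # The call didn't sleep: the backoff is finished or out of retries.
        raise RuntimeError("Request failed after 3 retries")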

finish #

finish()

Mark the iterator as finished to break out of the current loop.
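
As a small sketch (again reusing the illustrative client.fetch call), this can be used in place of break so the loop stops once the current iteration completes:

backoff = Backoff()
async for _ in backoff:
    response = await client.fetch(f"https://example.com/{resource_id}")
    if response.status_code == 200:
        # Mark the iterator as finished; the loop ends after this iteration
        # without backing off again.
        backoff.finish()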

reset #

reset()

Reset the backoff to its original state to reuse it.
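
For example, a single instance can be reused across requests by resetting it between retry loops; resource_ids and client.fetch here are illustrative stand-ins:

backoff = Backoff(max_retries=5)
for resource_id in resource_ids:
    # Restore the retry count and exponent before retrying the next resource.
    backoff.reset()
    async for _ in backoff:
        response = await client.fetch(f"https://example.com/{resource_id}")
        if response.status_code == 200:
            break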

set_next_backoff #

set_next_backoff(backoff_)

Specify a backoff time for the next iteration or Backoff.backoff call.

If this is called then the exponent won't be increased for this iteration.

Note

Calling this multiple times in a single iteration will overwrite any previously set next backoff.

Parameters:

  • backoff_ (Union[float, int, None]) –

    The amount of time to backoff for in seconds.

    If this is None then any previously set next backoff will be unset.
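
For example, per the overwrite and unset behaviour described above:

backoff = Backoff()
# The most recent call wins, so 5.0 overwrites 2.5 for the next iteration.
backoff.set_next_backoff(2.5)
backoff.set_next_backoff(5.0)
# Passing None unsets the value again, letting the next iteration fall back
# to the exponential calculation.
backoff.set_next_backoff(None)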

ErrorManager #

A context manager provided to allow for more concise error handling with Backoff.

Examples:

The following is an example of using ErrorManager alongside Backoff in order to handle the exceptions which may be raised while trying to reply to a message.

retry = Backoff()
# Rules can either be passed to `ErrorManager.__init__` as variable arguments
# or one at a time to `ErrorManager.add_rule` through possibly chained calls.
error_handler = (
    # For the 1st rule we catch two errors which would indicate the bot
    # no longer has access to the target channel and break out of the
    # retry loop using `Backoff.finish`.
    ErrorManager(((NotFoundError, ForbiddenError), lambda _: retry.finish()))
        # For the 2nd rule we catch rate limited errors and set their
        # `retry` value as the next backoff time before suppressing the
        # error to allow this to retry the request.
        .add_rule((RateLimitedError,), lambda exc: retry.set_next_backoff(exc.retry_after))
        # For the 3rd rule we suppress the internal server error to allow
        # backoff to reach the next retry and exponentially backoff as we
        # don't have any specific retry time for this error.
        .add_rule((InternalServerError,), lambda _: False)
)
async for _ in retry:
    # We enter this context manager each iteration to catch errors before
    # they cause us to break out of the `Backoff` loop.
    with error_handler:
        await post(f"https://example.com/{resource_id}", json={"content": "General Kenobi"})
        # We need to break out of `retry` if this request succeeds.
        break

__init__ #

__init__(*rules)

Initialise an error manager instance.

Parameters:

  • *rules (tuple[Iterable[type[BaseException]], Callable[[Any], Optional[bool]]], default: () ) –

    Rules to initiate this error context manager with.

    Each rule is a 2-length tuple where tuple[0] is an iterable of the exception types the rule should apply to and tuple[1] is the rule's callback function.

    The callback function will be called with the raised exception when it matches one of that rule's exception types. It may raise, return True to indicate that the current error should be re-raised outside of the context manager, or return False/None to suppress the current error.
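
As a sketch of passing rules straight to the initialiser rather than chaining add_rule calls; the exception types here are illustrative stand-ins:

retry = Backoff()
error_handler = ErrorManager(
    # Returning False suppresses the error so a surrounding `Backoff` loop can retry.
    ((TimeoutError,), lambda _: False),
    # Returning True re-raises the error outside of the context manager.
    ((ValueError,), lambda _: True),
)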

add_rule #

add_rule(exceptions, result)

Add a rule to this exception context manager.

Parameters:

  • exceptions (Iterable[type[BaseException]]) –

    An iterable of types of the exceptions this rule should apply to.

  • result (Callable[[Any], Optional[bool]]) –

    The function called with the raised exception when it matches one of the passed exceptions. This may raise, return True to indicate that the current error should be re-raised outside of the context manager, or return False/None to suppress the current error.

Returns:

  • Self

    This returns the handler a rule was being added to in order to allow for chained calls.

clear_rules #

clear_rules()

Clear the rules registered with this handler.