Configuration and defaults

This document describes the configuration options available.

If you're using the default loader, you must create the celeryconfig.py module and make sure it's available on the Python path.
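
For example, a minimal sketch of loading such a module (assuming your application instance is called app):

from celery import Celery

app = Celery('myapp')
# Read configuration from the celeryconfig.py module found on the Python path.
app.config_from_object('celeryconfig')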

Example configuration file

This is an example configuration file to get you started. It should contain all you need to run a basic Celery set-up.

## Broker settings.
broker_url = 'amqp://guest:guest@localhost:5672//'

# List of modules to import when the Celery worker starts.
imports = ('myapp.tasks',)

## Using the database to store task state and results.
result_backend = 'db+sqlite:///results.db'

task_annotations = {'tasks.add': {'rate_limit': '10/s'}}

New lowercase settings

Version 4.0 introduced new lower case settings and setting organization.

The major difference between previous versions, apart from the lower case names, are the renaming of some prefixes, like celery_beat_ to beat_, celeryd_ to worker_, and most of the top level celery_ settings have been moved into a new task_ prefix.

Warning

Celery will still be able to read old configuration files until Celery 6.0. Afterwards, support for the old configuration files will be removed. We provide the celery upgrade command that should handle plenty of cases (including Django).

Please migrate to the new configuration scheme as soon as possible.
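
As an illustration, the same option spelled in the old and the new style (using CELERY_TASK_SERIALIZER from the table below as the example):

# Old style (pre-4.0), still readable until Celery 6.0:
#   CELERY_TASK_SERIALIZER = 'json'

# New lowercase style:
task_serializer = 'json'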

Setting name → Replace with

CELERY_ACCEPT_CONTENT → accept_content
CELERY_ENABLE_UTC → enable_utc
CELERY_IMPORTS → imports
CELERY_INCLUDE → include
CELERY_TIMEZONE → timezone
CELERYBEAT_MAX_LOOP_INTERVAL → beat_max_loop_interval
CELERYBEAT_SCHEDULE → beat_schedule
CELERYBEAT_SCHEDULER → beat_scheduler
CELERYBEAT_SCHEDULE_FILENAME → beat_schedule_filename
CELERYBEAT_SYNC_EVERY → beat_sync_every
BROKER_URL → broker_url
BROKER_TRANSPORT → broker_transport
BROKER_TRANSPORT_OPTIONS → broker_transport_options
BROKER_CONNECTION_TIMEOUT → broker_connection_timeout
BROKER_CONNECTION_RETRY → broker_connection_retry
BROKER_CONNECTION_MAX_RETRIES → broker_connection_max_retries
BROKER_FAILOVER_STRATEGY → broker_failover_strategy
BROKER_HEARTBEAT → broker_heartbeat
BROKER_LOGIN_METHOD → broker_login_method
BROKER_NATIVE_DELAYED_DELIVERY_QUEUE_TYPE → broker_native_delayed_delivery_queue_type
BROKER_POOL_LIMIT → broker_pool_limit
BROKER_USE_SSL → broker_use_ssl
CELERY_CACHE_BACKEND → cache_backend
CELERY_CACHE_BACKEND_OPTIONS → cache_backend_options
CASSANDRA_COLUMN_FAMILY → cassandra_table
CASSANDRA_ENTRY_TTL → cassandra_entry_ttl
CASSANDRA_KEYSPACE → cassandra_keyspace
CASSANDRA_PORT → cassandra_port
CASSANDRA_READ_CONSISTENCY → cassandra_read_consistency
CASSANDRA_SERVERS → cassandra_servers
CASSANDRA_WRITE_CONSISTENCY → cassandra_write_consistency
CASSANDRA_OPTIONS → cassandra_options
S3_ACCESS_KEY_ID → s3_access_key_id
S3_SECRET_ACCESS_KEY → s3_secret_access_key
S3_BUCKET → s3_bucket
S3_BASE_PATH → s3_base_path
S3_ENDPOINT_URL → s3_endpoint_url
S3_REGION → s3_region
CELERY_COUCHBASE_BACKEND_SETTINGS → couchbase_backend_settings
CELERY_ARANGODB_BACKEND_SETTINGS → arangodb_backend_settings
CELERY_MONGODB_BACKEND_SETTINGS → mongodb_backend_settings
CELERY_EVENT_QUEUE_EXPIRES → event_queue_expires
CELERY_EVENT_QUEUE_TTL → event_queue_ttl
CELERY_EVENT_QUEUE_PREFIX → event_queue_prefix
CELERY_EVENT_SERIALIZER → event_serializer
CELERY_REDIS_DB → redis_db
CELERY_REDIS_HOST → redis_host
CELERY_REDIS_MAX_CONNECTIONS → redis_max_connections
CELERY_REDIS_USERNAME → redis_username
CELERY_REDIS_PASSWORD → redis_password
CELERY_REDIS_PORT → redis_port
CELERY_REDIS_BACKEND_USE_SSL → redis_backend_use_ssl
CELERY_RESULT_BACKEND → result_backend
CELERY_MAX_CACHED_RESULTS → result_cache_max
CELERY_MESSAGE_COMPRESSION → result_compression
CELERY_RESULT_EXCHANGE → result_exchange
CELERY_RESULT_EXCHANGE_TYPE → result_exchange_type
CELERY_RESULT_EXPIRES → result_expires
CELERY_RESULT_PERSISTENT → result_persistent
CELERY_RESULT_SERIALIZER → result_serializer
CELERY_RESULT_DBURI → Use result_backend instead.
CELERY_RESULT_ENGINE_OPTIONS → database_engine_options
[...]_DB_SHORT_LIVED_SESSIONS → database_short_lived_sessions
CELERY_RESULT_DB_TABLE_NAMES → database_db_names
CELERY_SECURITY_CERTIFICATE → security_certificate
CELERY_SECURITY_CERT_STORE → security_cert_store
CELERY_SECURITY_KEY → security_key
CELERY_SECURITY_KEY_PASSWORD → security_key_password
CELERY_ACKS_LATE → task_acks_late
CELERY_ACKS_ON_FAILURE_OR_TIMEOUT → task_acks_on_failure_or_timeout
CELERY_TASK_ALWAYS_EAGER → task_always_eager
CELERY_ANNOTATIONS → task_annotations
CELERY_COMPRESSION → task_compression
CELERY_CREATE_MISSING_QUEUES → task_create_missing_queues
CELERY_DEFAULT_DELIVERY_MODE → task_default_delivery_mode
CELERY_DEFAULT_EXCHANGE → task_default_exchange
CELERY_DEFAULT_EXCHANGE_TYPE → task_default_exchange_type
CELERY_DEFAULT_QUEUE → task_default_queue
CELERY_DEFAULT_QUEUE_TYPE → task_default_queue_type
CELERY_DEFAULT_RATE_LIMIT → task_default_rate_limit
CELERY_DEFAULT_ROUTING_KEY → task_default_routing_key
CELERY_EAGER_PROPAGATES → task_eager_propagates
CELERY_IGNORE_RESULT → task_ignore_result
CELERY_PUBLISH_RETRY → task_publish_retry
CELERY_PUBLISH_RETRY_POLICY → task_publish_retry_policy
CELERY_QUEUES → task_queues
CELERY_ROUTES → task_routes
CELERY_SEND_SENT_EVENT → task_send_sent_event
CELERY_TASK_SERIALIZER → task_serializer
CELERYD_SOFT_TIME_LIMIT → task_soft_time_limit
CELERY_TASK_TRACK_STARTED → task_track_started
CELERY_TASK_REJECT_ON_WORKER_LOST → task_reject_on_worker_lost
CELERYD_TIME_LIMIT → task_time_limit
CELERY_ALLOW_ERROR_CB_ON_CHORD_HEADER → task_allow_error_cb_on_chord_header
CELERYD_AGENT → worker_agent
CELERYD_AUTOSCALER → worker_autoscaler
CELERYD_CONCURRENCY → worker_concurrency
CELERYD_CONSUMER → worker_consumer
CELERY_WORKER_DIRECT → worker_direct
CELERY_DISABLE_RATE_LIMITS → worker_disable_rate_limits
CELERY_ENABLE_REMOTE_CONTROL → worker_enable_remote_control
CELERYD_HIJACK_ROOT_LOGGER → worker_hijack_root_logger
CELERYD_LOG_COLOR → worker_log_color
CELERY_WORKER_LOG_FORMAT → worker_log_format
CELERYD_WORKER_LOST_WAIT → worker_lost_wait
CELERYD_MAX_TASKS_PER_CHILD → worker_max_tasks_per_child
CELERYD_POOL → worker_pool
CELERYD_POOL_PUTLOCKS → worker_pool_putlocks
CELERYD_POOL_RESTARTS → worker_pool_restarts
CELERYD_PREFETCH_MULTIPLIER → worker_prefetch_multiplier
CELERYD_ENABLE_PREFETCH_COUNT_REDUCTION → worker_enable_prefetch_count_reduction
CELERYD_REDIRECT_STDOUTS → worker_redirect_stdouts
CELERYD_REDIRECT_STDOUTS_LEVEL → worker_redirect_stdouts_level
CELERY_SEND_EVENTS → worker_send_task_events
CELERYD_STATE_DB → worker_state_db
CELERY_WORKER_TASK_LOG_FORMAT → worker_task_log_format
CELERYD_TIMER → worker_timer
CELERYD_TIMER_PRECISION → worker_timer_precision
CELERYD_DETECT_QUORUM_QUEUES → worker_detect_quorum_queues

Configuration Directives

General settings

accept_content

Default: {'json'} (set, list, or tuple).

A white-list of content-types/serializers to allow.

If a message is received that's not in this list then the message will be discarded with an error.

By default only json is enabled but any content type can be added, including pickle and yaml; when this is the case make sure untrusted parties don't have access to your broker. See Security for more.

Example:

# using serializer name
accept_content = ['json']

# or the actual content-type (MIME)
accept_content = ['application/json']

result_accept_content

Default: None (can be set, list or tuple).

Added in version 4.3.

A white-list of content-types/serializers to allow for the result backend.

If a message is received that's not in this list then the message will be discarded with an error.

By default it is the same serializer as accept_content. However, a different serializer for accepted content of the result backend can be specified. Usually this is needed if signed messaging is used and the result is stored unsigned in the result backend. See Security for more.

Example:

# using serializer name
result_accept_content = ['json']

# or the actual content-type (MIME)
result_accept_content = ['application/json']

Time and date settings

enable_utc

Added in version 2.5.

Default: Enabled by default since version 3.0.

If enabled dates and times in messages will be converted to use the UTC timezone.

Note that workers running Celery versions below 2.5 will assume a local timezone for all messages, so only enable if all workers have been upgraded.

timezone

Added in version 2.5.

Default: "UTC".

Configure Celery to use a custom time zone. The timezone value can be any time zone supported by the ZoneInfo library.

If not set the UTC timezone is used. For backwards compatibility there's also an enable_utc setting, and when this is set to false the system local timezone is used instead.
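
For example, a sketch that keeps UTC enabled and sets a custom time zone (the zone name is only illustrative):

enable_utc = True
timezone = 'Europe/London'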

Task settings

task_annotations

Added in version 2.5.

Default: None.

This setting can be used to rewrite any task attribute from the configuration. The setting can be a dict, or a list of annotation objects that filter for tasks and return a map of attributes to change.

This will change the rate_limit attribute for the tasks.add task:

task_annotations = {'tasks.add': {'rate_limit': '10/s'}}

or change the same for all tasks:

task_annotations = {'*': {'rate_limit': '10/s'}}

You can change methods too, for example the on_failure handler:

def my_on_failure(self, exc, task_id, args, kwargs, einfo):
    print('Oh no! Task failed: {0!r}'.format(exc))

task_annotations = {'*': {'on_failure': my_on_failure}}

If you need more flexibility then you can use objects instead of a dict to choose the tasks to annotate:

class MyAnnotate:

    def annotate(self, task):
        if task.name.startswith('tasks.'):
            return {'rate_limit': '10/s'}

task_annotations = (MyAnnotate(), {other,})

task_compression

Default: None

Default compression used for task messages. Can be gzip, bzip2 (if available), or any custom compression schemes registered in the Kombu compression registry.

The default is to send uncompressed messages.

task_protocol

Added in version 4.0.

Default: 2 (since 4.0).

Set the default task message protocol version used to send tasks. Supports protocols: 1 and 2.

Protocol 2 is supported by 3.1.24 and 4.x+.

task_serializer

Default: "json" (since 4.0, earlier: pickle).

A string identifying the default serialization method to use. Can be json (default), pickle, yaml, msgpack, or any custom serialization methods that have been registered with kombu.serialization.registry.

See also

Serializers.

task_publish_retry

Added in version 2.2.

Default: Enabled.

Decides if publishing task messages will be retried in the case of connection loss or other connection errors. See also task_publish_retry_policy.

task_publish_retry_policy

Added in version 2.2.

Default: See Message Sending Retry.

Defines the default policy when retrying publishing a task message in the case of connection loss or other connection errors.
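
A sketch of a custom policy, using the retry-policy keys documented for message sending retries (max_retries, interval_start, interval_step, interval_max); the values are only illustrative:

task_publish_retry_policy = {
    'max_retries': 3,       # give up after three attempts
    'interval_start': 0,    # retry immediately the first time
    'interval_step': 0.2,   # add 0.2 seconds to the delay for each retry
    'interval_max': 0.2,    # never wait more than 0.2 seconds between retries
}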

任务执行设置

Task execution settings

task_always_eager

Default: Disabled.

If this is True, all tasks will be executed locally by blocking until the task returns. apply_async() and Task.delay() will return an EagerResult instance, that emulates the API and behavior of AsyncResult, except the result is already evaluated.

That is, tasks will be executed locally instead of being sent to the queue.
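
A common use is running tasks synchronously in tests; a sketch, assuming a task named add is defined on your app:

app.conf.task_always_eager = True
app.conf.task_eager_propagates = True  # re-raise exceptions raised by the task

result = add.delay(2, 2)   # executed locally, returns an EagerResult
assert result.get() == 4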

task_eager_propagates

Default: Disabled.

If this is True, eagerly executed tasks (applied by task.apply(), or when the task_always_eager setting is enabled), will propagate exceptions.

It's the same as always running apply() with throw=True.

task_store_eager_result

Added in version 5.1.

Default: Disabled.

If this is True and task_always_eager is True and task_ignore_result is False, the results of eagerly executed tasks will be saved to the backend.

By default, even with task_always_eager set to True and task_ignore_result set to False, the result will not be saved.

task_remote_tracebacks

Default: Disabled.

If enabled task results will include the workers stack when re-raising task errors.

This requires the https://pypi.org/project/tblib/ library, that can be installed using pip:

$ pip install celery[tblib]

See Bundles for information on combining multiple extension requirements.

task_ignore_result

Default: Disabled.

Whether to store the task return values or not (tombstones). If you still want to store errors, just not successful return values, you can set task_store_errors_even_if_ignored.

task_store_errors_even_if_ignored

Default: Disabled.

If set, the worker stores all task errors in the result store even if Task.ignore_result is on.

task_track_started

Default: Disabled.

If True the task will report its status as 'started' when the task is executed by a worker. The default value is False as the normal behavior is to not report that level of granularity. Tasks are either pending, finished, or waiting to be retried. Having a 'started' state can be useful for when there are long running tasks and there's a need to report what task is currently running.

task_time_limit

Default: No time limit.

Task hard time limit in seconds. The worker processing the task will be killed and replaced with a new one when this is exceeded.
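
A hard limit is often combined with a slightly lower soft limit (see task_soft_time_limit below); the numbers here are only illustrative:

task_time_limit = 600        # hard limit: the worker processing the task is killed after 10 minutes
task_soft_time_limit = 540   # soft limit: SoftTimeLimitExceeded is raised after 9 minutes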

task_allow_error_cb_on_chord_header

Added in version 5.3.

Default: Disabled.

Enabling this flag will allow linking an error callback to a chord header; by default, link_error() does not link the callback to the header tasks, and a failure in any of the header tasks prevents the chord's body from executing.

Consider the following canvas with the flag disabled (default behavior):

header = group([t1, t2])
body = t3
c = chord(header, body)
c.link_error(error_callback_sig)

If any of the header tasks failed (t1 or t2), by default, the chord body (t3) would not execute, and error_callback_sig will be called once (for the body).

Enabling this flag will change the above behavior by:

  1. error_callback_sig will be linked to t1 and t2 (as well as t3).

  2. If any of the header tasks failed, error_callback_sig will be called for each failed header task and the body (even if the body didn't run).

Consider now the following canvas with the flag enabled:

header = group([failingT1, failingT2])
body = t3
c = chord(header, body)
c.link_error(error_callback_sig)

If all of the header tasks failed (failingT1 and failingT2), then the chord body (t3) would not execute, and error_callback_sig will be called 3 times (two times for the header and one time for the body).

Lastly, consider the following canvas with the flag enabled:

header = group([failingT1, failingT2])
body = t3
upgraded_chord = chain(header, body)
upgraded_chord.link_error(error_callback_sig)

This canvas will behave exactly the same as the previous one, since the chain will be upgraded to a chord internally.

task_soft_time_limit

Default: No soft time limit.

Task soft time limit in seconds.

The SoftTimeLimitExceeded exception will be raised when this is exceeded. For example, the task can catch this to clean up before the hard time limit comes:

from celery.exceptions import SoftTimeLimitExceeded

@app.task
def mytask():
    try:
        return do_work()
    except SoftTimeLimitExceeded:
        cleanup_in_a_hurry()

task_acks_late

Default: Disabled.

Late ack means the task messages will be acknowledged after the task has been executed, not right before (the default behavior).
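
The behavior can be enabled globally or per task; a sketch, where the task name is hypothetical:

# Globally:
task_acks_late = True

# Or per task:
@app.task(acks_late=True)
def process_order(order_id):
    ...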

task_acks_on_failure_or_timeout

Default: Enabled

When enabled messages for all tasks will be acknowledged even if they fail or time out.

Configuring this setting only applies to tasks that are acknowledged after they have been executed and only if task_acks_late is enabled.

task_reject_on_worker_lost

Default: Disabled.

Even if task_acks_late is enabled, the worker will acknowledge tasks when the worker process executing them abruptly exits or is signaled (e.g., KILL/INT, etc).

Setting this to true allows the message to be re-queued instead, so that the task will execute again by the same worker, or another worker.

Warning

Enabling this can cause message loops; make sure you know what you're doing.

task_default_rate_limit

Default: No rate limit.

The global default rate limit for tasks.

This value is used for tasks that don't have a custom rate limit.

See also

The worker_disable_rate_limits setting can disable all rate limits.
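
The value uses the same rate-limit string format shown for task_annotations above, for example:

task_default_rate_limit = '100/m'   # at most 100 tasks of each type per minute, per worker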

Task result backend settings

result_backend

Default: No result backend enabled by default.

The backend used to store task results (tombstones). Can be one of the backend types described in the sections below.

Warning

While the AMQP result backend is very efficient, you must make sure you only receive the same result once. See the Calling Tasks guide.

result_backend_always_retry

Default: False

If enabled, the backend will try to retry in the event of recoverable exceptions instead of propagating the exception. It will use an exponential backoff sleep time between retries.

result_backend_max_sleep_between_retries_ms

Default: 10000

This specifies the maximum sleep time between two backend operation retries.

result_backend_base_sleep_between_retries_ms

Default: 10

This specifies the base amount of sleep time between two backend operation retries.

result_backend_max_retries

Default: Inf

This is the maximum number of retries in case of recoverable exceptions.
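
Taken together, a sketch of an explicit retry configuration for the result backend (the values are only illustrative):

result_backend_always_retry = True
result_backend_base_sleep_between_retries_ms = 10
result_backend_max_sleep_between_retries_ms = 10000
result_backend_max_retries = 10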

result_backend_thread_safe

Default: False

If True, then the backend object is shared across threads. This may be useful for using a shared connection pool instead of creating a connection for every thread.

result_backend_transport_options

Default: {} (empty mapping).

A dict of additional options passed to the underlying transport.

See your transport user manual for supported options (if any).

Example setting the visibility timeout (supported by Redis and SQS transports):

result_backend_transport_options = {'visibility_timeout': 18000}  # 5 hours

result_serializer

Default: json since 4.0 (earlier: pickle).

Result serialization format.

See Serializers for information about supported serialization formats.

result_compression

Default: No compression.

Optional compression method used for task results. Supports the same options as the task_compression setting.

result_extended

Default: False

Enables extended task result attributes (name, args, kwargs, worker, retries, queue, delivery_info) to be written to backend.
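
For example (the attribute access is a sketch assuming a task named add and result_extended enabled):

result_extended = True

res = add.delay(2, 2)
res.get()
print(res.name, res.args, res.queue)   # extended attributes stored with the result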

result_expires

Default: Expire after 1 day.

Time (in seconds, or a timedelta object) for when after stored task tombstones will be deleted.

A built-in periodic task will delete the results after this time (celery.backend_cleanup), assuming that celery beat is enabled. The task runs daily at 4am.

A value of None or 0 means results will never expire (depending on backend specifications).

Note

For the moment this only works with the AMQP, database, cache, Couchbase, and Redis backends.

When using the database backend, celery beat must be running for the results to be expired.
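
The value can be given either in seconds or as a timedelta, for example:

from datetime import timedelta

result_expires = timedelta(hours=4)   # equivalent to 14400 seconds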

result_cache_max

Default: Disabled by default.

Enables client caching of results.

This can be useful for the old deprecated 'amqp' backend where the result is unavailable as soon as one result instance consumes it.

This is the total number of results to cache before older results are evicted. A value of 0 or None means no limit, and a value of -1 will disable the cache.

Disabled by default.

result_chord_join_timeout

Default: 3.0.

The timeout in seconds (int/float) when joining a group's results within a chord.

result_chord_retry_interval

Default: 1.0.

Default interval for retrying chord tasks.

override_backends

Default: Disabled by default.

Path to class that implements backend.

Allows overriding the backend implementation. This can be useful if you need to store additional metadata about executed tasks, override retry policies, etc.

Example:

override_backends = {"db": "custom_module.backend.class"}

Database backend settings

Database URL Examples

To use the database backend you have to configure the result_backend setting with a connection URL and the db+ prefix:

result_backend = 'db+scheme://user:password@host:port/dbname'

Examples:

# sqlite (filename)
result_backend = 'db+sqlite:///results.sqlite'

# mysql
result_backend = 'db+mysql://scott:tiger@localhost/foo'

# postgresql
result_backend = 'db+postgresql://scott:tiger@localhost/mydatabase'

# oracle
result_backend = 'db+oracle://scott:tiger@127.0.0.1:1521/sidname'

Please see Supported Databases for a table of supported databases, and Connection String for more information about connection strings (this is the part of the URI that comes after the db+ prefix).

database_create_tables_at_setup

Added in version 5.5.0.

Default: True by default.

  • If True, Celery will create the tables in the database during setup.

  • If False, Celery will create the tables lazily, i.e. wait for the first task to be executed before creating the tables.

Note

Before Celery 5.5, the tables were created lazily, i.e. it was equivalent to database_create_tables_at_setup set to False.

database_engine_options

Default: {} (empty mapping).

To specify additional SQLAlchemy database engine options you can use the database_engine_options setting:

# echo enables verbose logging from SQLAlchemy.
app.conf.database_engine_options = {'echo': True}

database_short_lived_sessions

Default: Disabled by default.

Short lived sessions are disabled by default. If enabled they can drastically reduce performance, especially on systems processing lots of tasks. This option is useful on low-traffic workers that experience errors as a result of cached database connections going stale through inactivity. For example, intermittent errors like (OperationalError) (2006, 'MySQL server has gone away') can be fixed by enabling short lived sessions. This option only affects the database backend.

database_table_schemas

Default: {} (empty mapping).

When SQLAlchemy is configured as the result backend, Celery automatically creates two tables to store result meta-data for tasks. This setting allows you to customize the schema of the tables:

# use custom schema for the database result backend.
database_table_schemas = {
    'task': 'celery',
    'group': 'celery',
}

database_table_names

Default: {} (empty mapping).

When SQLAlchemy is configured as the result backend, Celery automatically creates two tables to store result meta-data for tasks. This setting allows you to customize the table names:

# use custom table names for the database result backend.
database_table_names = {
    'task': 'myapp_taskmeta',
    'group': 'myapp_groupmeta',
}

RPC backend settings

result_persistent

Default: Disabled by default (transient messages).

If set to True, result messages will be persistent. This means the messages won't be lost after a broker restart.

Example configuration

result_backend = 'rpc://'
result_persistent = False

Please note: using this backend could trigger the raise of celery.backends.rpc.BacklogLimitExceeded if the task tombstone is too old.

E.g.

for i in range(10000):
    r = debug_task.delay()

print(r.state)  # this would raise celery.backends.rpc.BacklogLimitExceeded

Cache backend settings

Note

The cache backend supports the https://pypi.org/project/pylibmc/ and https://pypi.org/project/python-memcached/ libraries. The latter is used only if https://pypi.org/project/pylibmc/ isn't installed.

Using a single Memcached server:

result_backend = 'cache+memcached://127.0.0.1:11211/'

Using multiple Memcached servers:

result_backend = """
    cache+memcached://172.19.26.240:11211;172.19.26.242:11211/
""".strip()

The "memory" backend stores the cache in memory only:

result_backend = 'cache'
cache_backend = 'memory'

cache_backend_options

Default: {} (empty mapping).

You can set https://pypi.org/project/pylibmc/ options using the cache_backend_options setting:

cache_backend_options = {
    'binary': True,
    'behaviors': {'tcp_nodelay': True},
}

cache_backend

This setting is no longer used in celery's builtin backends as it's now possible to specify the cache backend directly in the result_backend setting.

Note

The django-celery-results - Using the Django ORM/Cache as a result backend library uses cache_backend for choosing Django caches.

MongoDB backend settings

Note

The MongoDB backend requires the pymongo library: http://github.com/mongodb/mongo-python-driver/tree/master

mongodb_backend_settings

This is a dict supporting the following keys:

  • database

    The database name to connect to. Defaults to celery.

  • taskmeta_collection

    The collection name to store task meta data. Defaults to celery_taskmeta.

  • max_pool_size

    Passed as max_pool_size to PyMongo's Connection or MongoClient constructor. It is the maximum number of TCP connections to keep open to MongoDB at a given time. If there are more open connections than max_pool_size, sockets will be closed when they are released. Defaults to 10.

  • options

    Additional keyword arguments to pass to the mongodb connection constructor. See the pymongo docs to see a list of arguments supported.

Example configuration

result_backend = 'mongodb://localhost:27017/'
mongodb_backend_settings = {
    'database': 'mydb',
    'taskmeta_collection': 'my_taskmeta_collection',
}

Redis backend settings

Configuring the backend URL

Note

The Redis backend requires the https://pypi.org/project/redis/ library.

To install this package use pip:

$ pip install celery[redis]

See Bundles for information on combining multiple extension requirements.

This backend requires the result_backend setting to be set to a Redis or Redis over TLS URL:

result_backend = 'redis://username:password@host:port/db'

For example:

result_backend = 'redis://localhost/0'

is the same as:

result_backend = 'redis://'

Use the rediss:// protocol to connect to redis over TLS:

result_backend = 'rediss://username:password@host:port/db?ssl_cert_reqs=required'

Note that the ssl_cert_reqs string should be one of required, optional, or none (though, for backwards compatibility with older Celery versions, the string may also be one of CERT_REQUIRED, CERT_OPTIONAL, CERT_NONE, but those values only work for Celery, not for Redis directly).

If a Unix socket connection should be used, the URL needs to be in the format:

result_backend = 'socket:///path/to/redis.sock'

The fields of the URL are defined as follows:

  1. username

    Added in version 5.1.0.

    Username used to connect to the database.

    Note that this is only supported in Redis>=6.0 and with py-redis>=3.4.0 installed.

    If you use an older database version or an older client version you can omit the username:

    result_backend = 'redis://:password@host:port/db'
    
  2. password

    Password used to connect to the database.

  3. host

    Host name or IP address of the Redis server (e.g., localhost).

  4. port

    Port to the Redis server. Default is 6379.

  5. db

    Database number to use. Default is 0. The db can include an optional leading slash.

When using a TLS connection (protocol is rediss://), you may pass in all values in broker_use_ssl as query parameters. Paths to certificates must be URL encoded, and ssl_cert_reqs is required. Example:

result_backend = 'rediss://:password@host:port/db?\
    ssl_cert_reqs=required\
    &ssl_ca_certs=%2Fvar%2Fssl%2Fmyca.pem\                  # /var/ssl/myca.pem
    &ssl_certfile=%2Fvar%2Fssl%2Fredis-server-cert.pem\     # /var/ssl/redis-server-cert.pem
    &ssl_keyfile=%2Fvar%2Fssl%2Fprivate%2Fworker-key.pem'   # /var/ssl/private/worker-key.pem

Note that the ssl_cert_reqs string should be one of required, optional, or none (though, for backwards compatibility, the string may also be one of CERT_REQUIRED, CERT_OPTIONAL, CERT_NONE).

redis_backend_health_check_interval

Added in version 5.1.0.

Default: Not configured

The Redis backend supports health checks. This value must be set as an integer whose value is the number of seconds between health checks. If a ConnectionError or a TimeoutError is encountered during the health check, the connection will be re-established and the command retried exactly once.

redis_backend_use_ssl

Default: Disabled.

The Redis backend supports SSL. This value must be set in the form of a dictionary. The valid key-value pairs are the same as the ones mentioned in the redis sub-section under broker_use_ssl.
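
A sketch of such a dictionary, using the usual redis-py SSL options (the certificate paths are placeholders):

import ssl

redis_backend_use_ssl = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED,
    'ssl_ca_certs': '/var/ssl/myca.pem',
    'ssl_certfile': '/var/ssl/redis-client-cert.pem',
    'ssl_keyfile': '/var/ssl/private/redis-client-key.pem',
}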

redis_max_connections

Default: No limit.

Maximum number of connections available in the Redis connection pool used for sending and retrieving results.

Warning

Redis will raise a ConnectionError if the number of concurrent connections exceeds the maximum.

redis_socket_connect_timeout

Added in version 4.0.1.

Default: None

Socket timeout for connections to Redis from the result backend in seconds (int/float).

redis_socket_timeout

Default: 120.0 seconds.

Socket timeout for reading/writing operations to the Redis server in seconds (int/float), used by the redis result backend.

redis_retry_on_timeout

Added in version 4.4.1.

Default: False

Whether to retry reading/writing operations on TimeoutError to the Redis server, used by the redis result backend. This variable shouldn't be set if Redis is connected to via a Unix socket.

redis_socket_keepalive

Added in version 4.4.1.

Default: False

Socket TCP keepalive to keep connections healthy to the Redis server, used by the redis result backend.

Cassandra/AstraDB backend settings

Note

This Cassandra backend driver requires https://pypi.org/project/cassandra-driver/.

This backend can refer to either a regular Cassandra installation or a managed Astra DB instance. Depending on which one, exactly one between the cassandra_servers and cassandra_secure_bundle_path settings must be provided (but not both).

To install, use pip:

$ pip install celery[cassandra]

See Bundles for information on combining multiple extension requirements.

This backend requires the following configuration directives to be set.

cassandra_servers

Default: [] (empty list).

List of host Cassandra servers. This must be provided when connecting to a Cassandra cluster. Passing this setting is strictly exclusive to cassandra_secure_bundle_path. Example:

cassandra_servers = ['localhost']

cassandra_secure_bundle_path

Default: None.

Absolute path to the secure-connect-bundle zip file to connect to an Astra DB instance. Passing this setting is strictly exclusive to cassandra_servers. Example:

cassandra_secure_bundle_path = '/home/user/bundles/secure-connect.zip'

When connecting to Astra DB, it is necessary to specify the plain-text auth provider and the associated username and password, which take the value of the Client ID and the Client Secret, respectively, of a valid token generated for the Astra DB instance. See below for an Astra DB configuration example.

cassandra_port

Default: 9042.

Port to contact the Cassandra servers on.

cassandra_keyspace

Default: None.

The keyspace in which to store the results. For example:

cassandra_keyspace = 'tasks_keyspace'

cassandra_table

Default: None.

The table (column family) in which to store the results. For example:

cassandra_table = 'tasks'

cassandra_read_consistency

Default: None.

The read consistency used. Values can be ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE.

cassandra_write_consistency

Default: None.

The write consistency used. Values can be ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE.

cassandra_entry_ttl

Default: None.

Time-to-live for status entries. They will expire and be removed after that many seconds after adding. A value of None (default) means they will never expire.

cassandra_auth_provider

Default: None.

AuthProvider class within cassandra.auth module to use. Values can be PlainTextAuthProvider or SaslAuthProvider.

cassandra_auth_kwargs

Default: {} (empty mapping).

Named arguments to pass into the authentication provider. For example:

cassandra_auth_kwargs = {
    'username': 'cassandra',
    'password': 'cassandra'
}

cassandra_options

Default: {} (empty mapping).

Named arguments to pass into the cassandra.cluster class.

cassandra_options = {
    'cql_version': '3.2.1',
    'protocol_version': 3
}

Example configuration (Cassandra)

result_backend = 'cassandra://'
cassandra_servers = ['localhost']
cassandra_keyspace = 'celery'
cassandra_table = 'tasks'
cassandra_read_consistency = 'QUORUM'
cassandra_write_consistency = 'QUORUM'
cassandra_entry_ttl = 86400

Example configuration (Astra DB)

result_backend = 'cassandra://'
cassandra_keyspace = 'celery'
cassandra_table = 'tasks'
cassandra_read_consistency = 'QUORUM'
cassandra_write_consistency = 'QUORUM'
cassandra_auth_provider = 'PlainTextAuthProvider'
cassandra_auth_kwargs = {
  'username': '<<CLIENT_ID_FROM_ASTRA_DB_TOKEN>>',
  'password': '<<CLIENT_SECRET_FROM_ASTRA_DB_TOKEN>>'
}
cassandra_secure_bundle_path = '/path/to/secure-connect-bundle.zip'
cassandra_entry_ttl = 86400

Additional configuration

The Cassandra driver, when establishing the connection, undergoes a stage of negotiating the protocol version with the server(s). Similarly, a load-balancing policy is automatically supplied (by default DCAwareRoundRobinPolicy, which in turn has a local_dc setting, also determined by the driver upon connection). When possible, one should explicitly provide these in the configuration: moreover, future versions of the Cassandra driver will require at least the load-balancing policy to be specified (using execution profiles, as shown below).

A full configuration for the Cassandra backend would thus have the following additional lines:

from cassandra.policies import DCAwareRoundRobinPolicy
from cassandra.cluster import ExecutionProfile
from cassandra.cluster import EXEC_PROFILE_DEFAULT

myEProfile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(
        local_dc='datacenter1',  # replace with your DC name
    )
)

cassandra_options = {
    'protocol_version': 5,    # for Cassandra 4, change if needed
    'execution_profiles': {EXEC_PROFILE_DEFAULT: myEProfile},
}

And similarly for Astra DB:

from cassandra.policies import DCAwareRoundRobinPolicy
from cassandra.cluster import ExecutionProfile
from cassandra.cluster import EXEC_PROFILE_DEFAULT

myEProfile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(
        local_dc='europe-west1',  # for Astra DB, region name = dc name
    )
)

cassandra_options = {
    'protocol_version': 4,      # for Astra DB
    'execution_profiles': {EXEC_PROFILE_DEFAULT: myEProfile},
}

S3 backend settings

Note

This s3 backend driver requires https://pypi.org/project/s3/.

To install, use s3:

$ pip install celery[s3]

See Bundles for information on combining multiple extension requirements.

This backend requires the following configuration directives to be set.

s3_access_key_id

Default: None.

The s3 access key id. For example:

s3_access_key_id = 'access_key_id'

s3_secret_access_key

Default: None.

The s3 secret access key. For example:

s3_secret_access_key = 'access_secret_access_key'

s3_bucket

Default: None.

The s3 bucket name. For example:

s3_bucket = 'bucket_name'

s3_base_path

Default: None.

A base path in the s3 bucket to use to store result keys. For example:

s3_base_path = '/prefix'

s3_endpoint_url

Default: None.

A custom s3 endpoint url. Use it to connect to a custom self-hosted s3 compatible backend (Ceph, Scality...). For example:

s3_endpoint_url = 'https://.s3.custom.url'

s3_region

Default: None.

The s3 aws region. For example:

s3_region = 'us-east-1'

Example configuration

s3_access_key_id = 's3-access-key-id'
s3_secret_access_key = 's3-secret-access-key'
s3_bucket = 'mybucket'
s3_base_path = '/celery_result_backend'
s3_endpoint_url = 'https://endpoint_url'

Azure Block Blob backend settings

To use AzureBlockBlob as the result backend you simply need to configure the result_backend setting with the correct URL.

The required URL format is azureblockblob:// followed by the storage connection string. You can find the storage connection string in the Access Keys pane of your storage account resource in the Azure Portal.

Example configuration

result_backend = 'azureblockblob://DefaultEndpointsProtocol=https;AccountName=somename;AccountKey=Lou...bzg==;EndpointSuffix=core.windows.net'

azureblockblob_container_name

Default: celery.

The name for the storage container in which to store the results.

azureblockblob_base_path

Added in version 5.1.

Default: None.

A base path in the storage container to use to store result keys. For example:

azureblockblob_base_path = 'prefix/'

azureblockblob_retry_initial_backoff_sec

Default: 2.

The initial backoff interval, in seconds, for the first retry. Subsequent retries are attempted with an exponential strategy.

azureblockblob_retry_increment_base

Default: 2.

azureblockblob_retry_max_attempts

Default: 3.

The maximum number of retry attempts.

azureblockblob_connection_timeout

Default: 20.

Timeout in seconds for establishing the azure block blob connection.

azureblockblob_read_timeout

Default: 120.

Timeout in seconds for reading of an azure block blob.

GCS backend settings

Note

This gcs backend driver requires https://pypi.org/project/google-cloud-storage/ and https://pypi.org/project/google-cloud-firestore/.

To install, use gcs:

$ pip install celery[gcs]

See Bundles for information on combining multiple extension requirements.

GCS could be configured via the URL provided in result_backend, for example:

result_backend = 'gs://mybucket/some-prefix?gcs_project=myproject&ttl=600'
result_backend = 'gs://mybucket/some-prefix?gcs_project=myproject&firestore_project=myproject2&ttl=600'

This backend requires the following configuration directives to be set:

gcs_bucket

Default: None.

The gcs bucket name. For example:

gcs_bucket = 'bucket_name'

gcs_project

Default: None.

The gcs project name. For example:

gcs_project = 'test-project'

gcs_base_path

Default: None.

A base path in the gcs bucket to use to store all result keys. For example:

gcs_base_path = '/prefix'

gcs_ttl

Default: 0.

The time to live in seconds for the results blobs. Requires a GCS bucket with "Delete" Object Lifecycle Management action enabled. Use it to automatically delete results from Cloud Storage Buckets.

For example to auto remove results after 24 hours:

gcs_ttl = 86400

gcs_threadpool_maxsize

Default: 10.

Threadpool size for GCS operations. The same value defines the connection pool size. Allows controlling the number of concurrent operations. For example:

gcs_threadpool_maxsize = 20

firestore_project

Default: gcs_project.

The Firestore project for Chord reference counting. Allows native chord ref counts. If not specified defaults to gcs_project. For example:

firestore_project = 'test-project2'

Example configuration

gcs_bucket = 'mybucket'
gcs_project = 'myproject'
gcs_base_path = '/celery_result_backend'
gcs_ttl = 86400

Elasticsearch backend settings

To use Elasticsearch as the result backend you simply need to configure the result_backend setting with the correct URL.

Example configuration

result_backend = 'elasticsearch://example.com:9200/index_name/doc_type'

elasticsearch_retry_on_timeout

Default: False

Should a timeout trigger a retry on a different node?

elasticsearch_max_retries

Default: 3.

Maximum number of retries before an exception is propagated.

elasticsearch_timeout

Default: 10.0 seconds.

Global timeout, used by the Elasticsearch result backend.

elasticsearch_save_meta_as_text

Default: True

Whether meta should be saved as text or as native JSON. The result is always serialized as text.
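
Putting the Elasticsearch options together, a sketch of a configuration (the host and index names are placeholders):

result_backend = 'elasticsearch://example.com:9200/index_name/doc_type'
elasticsearch_retry_on_timeout = True
elasticsearch_max_retries = 3
elasticsearch_timeout = 10.0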

AWS DynamoDB backend settings

Note

The Dynamodb backend requires the https://pypi.org/project/boto3/ library.

To install this package use pip:

$ pip install celery[dynamodb]

See Bundles for information on combining multiple extension requirements.

Warning

The Dynamodb backend is not compatible with tables that have a sort key defined.

If you want to query the results table based on something other than the partition key, please define a global secondary index (GSI) instead.

This backend requires the result_backend setting to be set to a DynamoDB URL:

result_backend = 'dynamodb://aws_access_key_id:aws_secret_access_key@region:port/table?read=n&write=m'

For example, specifying the AWS region and the table name:

result_backend = 'dynamodb://@us-east-1/celery_results'

or retrieving AWS configuration parameters from the environment, using the default table name (celery) and specifying read and write provisioned throughput:

result_backend = 'dynamodb://@/?read=5&write=5'

or using the downloadable version of DynamoDB locally:

result_backend = 'dynamodb://@localhost:8000'

or using downloadable version or other service with conforming API deployed on any host:

result_backend = 'dynamodb://@us-east-1'
dynamodb_endpoint_url = 'http://192.168.0.40:8000'

The fields of the DynamoDB URL in result_backend are defined as follows:

  1. aws_access_key_id & aws_secret_access_key

    The credentials for accessing AWS API resources. These can also be resolved by the https://pypi.org/project/boto3/ library from various sources, as described here.

  2. region

    The AWS region, e.g. us-east-1 or localhost for the Downloadable Version. See the https://pypi.org/project/boto3/ library documentation for definition options.

  3. port

    The listening port of the local DynamoDB instance, if you are using the downloadable version. If you have not specified the region parameter as localhost, setting this parameter has no effect.

  4. table

    Table name to use. Default is celery. See the DynamoDB Naming Rules for information on the allowed characters and length.

  5. read & write

    The Read & Write Capacity Units for the created DynamoDB table. Default is 1 for both read and write. More details can be found in the Provisioned Throughput documentation.

  6. ttl_seconds

    Time-to-live (in seconds) for results before they expire. The default is to not expire results, while also leaving the DynamoDB table's Time to Live settings untouched. If ttl_seconds is set to a positive value, results will expire after the specified number of seconds. Setting ttl_seconds to a negative value means to not expire results, and also to actively disable the DynamoDB table's Time to Live setting. Note that trying to change a table's Time to Live setting multiple times in quick succession will cause a throttling error. More details can be found in the DynamoDB TTL documentation
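As an illustrative sketch only, the URL fields above can be combined like this (assuming ttl_seconds is passed as a query parameter alongside read and write; the region, table name, and values are placeholders):

result_backend = (
    'dynamodb://@us-east-1/celery_results'
    '?read=5&write=5&ttl_seconds=86400'  # expire results after one day
)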

IronCache 后端设置

IronCache backend settings

备注

IronCache 后端需要 https://pypi.org/project/iron_celery/ 库:

要安装此软件包,请使用 pip

$ pip install iron_celery

IronCache 可通过 result_backend 中提供的 URL 配置,例如:

result_backend = 'ironcache://project_id:token@'

或修改缓存名称:

ironcache://project_id:token@/awesomecache

更多信息参见: https://github.com/iron-io/iron_celery

备注

The IronCache backend requires the https://pypi.org/project/iron_celery/ library:

To install this package use pip:

$ pip install iron_celery

IronCache is configured via the URL provided in result_backend, for example:

result_backend = 'ironcache://project_id:token@'

Or to change the cache name:

ironcache://project_id:token@/awesomecache

For more information, see: https://github.com/iron-io/iron_celery

Couchbase 后端设置

Couchbase backend settings

备注

Couchbase 后端依赖 https://pypi.org/project/couchbase/ 库。

安装该库可使用 pip 命令:

$ pip install celery[couchbase]

有关如何组合多个扩展依赖项的说明,请参见 软件包

该后端可通过 result_backend 设置为 Couchbase URL 进行配置:

result_backend = 'couchbase://username:password@host:port/bucket'

备注

The Couchbase backend requires the https://pypi.org/project/couchbase/ library.

To install this package use pip:

$ pip install celery[couchbase]

See 软件包 for instructions how to combine multiple extension requirements.

This backend can be configured via the result_backend set to a Couchbase URL:

result_backend = 'couchbase://username:password@host:port/bucket'

couchbase_backend_settings

Default: {} (empty mapping).

这是一个字典,支持以下键:

  • host

    Couchbase 服务器的主机名,默认为 localhost

  • port

    Couchbase 服务器监听的端口,默认为 8091

  • bucket

    Couchbase 服务器写入的默认 bucket,默认为 default

  • username

    用于认证 Couchbase 服务器的用户名(可选)。

  • password

    用于认证 Couchbase 服务器的密码(可选)。

This is a dict supporting the following keys:

  • host

    Host name of the Couchbase server. Defaults to localhost.

  • port

    The port the Couchbase server is listening to. Defaults to 8091.

  • bucket

    The default bucket the Couchbase server is writing to. Defaults to default.

  • username

    User name to authenticate to the Couchbase server as (optional).

  • password

    Password to authenticate to the Couchbase server (optional).
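A minimal sketch of this mapping, using the defaults documented above (the username and password are placeholders):

couchbase_backend_settings = {
    'host': 'localhost',
    'port': 8091,
    'bucket': 'default',
    'username': 'myuser',      # optional
    'password': 'mypassword',  # optional
}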

ArangoDB 后端设置

ArangoDB backend settings

备注

ArangoDB 后端依赖 https://pypi.org/project/pyArango/ 库。

安装该库可使用 pip 命令:

$ pip install celery[arangodb]

有关如何组合多个扩展依赖项的说明,请参见 软件包

该后端可通过 result_backend 设置为 ArangoDB URL 进行配置:

result_backend = 'arangodb://username:password@host:port/database/collection'

备注

The ArangoDB backend requires the https://pypi.org/project/pyArango/ library.

To install this package use pip:

$ pip install celery[arangodb]

See 软件包 for instructions how to combine multiple extension requirements.

This backend can be configured via the result_backend set to an ArangoDB URL:

result_backend = 'arangodb://username:password@host:port/database/collection'

arangodb_backend_settings

Default: {} (empty mapping).

这是一个字典,支持以下键:

  • host

    ArangoDB 服务器的主机名,默认为 localhost

  • port

    ArangoDB 服务器监听的端口,默认为 8529

  • database

    ArangoDB 服务器写入的默认数据库,默认为 celery

  • collection

    ArangoDB 数据库中写入的默认集合,默认为 celery

  • username

    用于认证 ArangoDB 服务器的用户名(可选)。

  • password

    用于认证 ArangoDB 服务器的密码(可选)。

  • http_protocol

    ArangoDB 连接中使用的 HTTP 协议,默认为 http

  • verify

    建立 ArangoDB HTTPS 连接时是否执行证书校验,默认为 False

This is a dict supporting the following keys:

  • host

    Host name of the ArangoDB server. Defaults to localhost.

  • port

    The port the ArangoDB server is listening to. Defaults to 8529.

  • database

    The default database in the ArangoDB server to write to. Defaults to celery.

  • collection

    The default collection in the ArangoDB server's database to write to. Defaults to celery.

  • username

    User name to authenticate to the ArangoDB server as (optional).

  • password

    Password to authenticate to the ArangoDB server (optional).

  • http_protocol

    HTTP Protocol in ArangoDB server connection. Defaults to http.

  • verify

    HTTPS Verification check while creating the ArangoDB connection. Defaults to False.
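A minimal sketch of this mapping, based on the keys documented above (the credentials are placeholders; enabling HTTPS and verification is an assumption made for the example):

arangodb_backend_settings = {
    'host': 'localhost',
    'port': 8529,
    'database': 'celery',
    'collection': 'celery',
    'username': 'myuser',       # optional
    'password': 'mypassword',   # optional
    'http_protocol': 'https',   # assumption: the server exposes an HTTPS endpoint
    'verify': True,
}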

CosmosDB 后端设置(实验性)

CosmosDB backend settings (experimental)

要使用 CosmosDB 作为结果后端,仅需将 result_backend 设置为正确的 URL 即可。

To use CosmosDB as the result backend, you simply need to configure the result_backend setting with the correct URL.

示例配置

Example configuration

result_backend = 'cosmosdbsql://:{InsertAccountPrimaryKeyHere}@{InsertAccountNameHere}.documents.azure.com'

cosmosdbsql_database_name

Default: celerydb.

存储结果的数据库名称。

The name for the database in which to store the results.

cosmosdbsql_collection_name

Default: celerycol.

存储结果的集合名称。

The name of the collection in which to store the results.

cosmosdbsql_consistency_level

Default: Session.

表示 Azure Cosmos DB 客户端操作支持的一致性级别。

一致性级别按强度顺序为:Strong、BoundedStaleness、Session、ConsistentPrefix 和 Eventual。

Represents the consistency levels supported for Azure Cosmos DB client operations.

Consistency levels by order of strength are: Strong, BoundedStaleness, Session, ConsistentPrefix and Eventual.

cosmosdbsql_max_retry_attempts

Default: 9.

执行请求的最大重试次数。

Maximum number of retries to be performed for a request.

cosmosdbsql_max_retry_wait_time

Default: 30.

在重试期间等待请求完成的最大等待时间(单位为秒)。

Maximum wait time in seconds to wait for a request while the retries are happening.
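A sketch combining the CosmosDB settings above with their documented defaults (the account name and key in the URL are placeholders, as in the example configuration above):

result_backend = 'cosmosdbsql://:{InsertAccountPrimaryKeyHere}@{InsertAccountNameHere}.documents.azure.com'
cosmosdbsql_database_name = 'celerydb'
cosmosdbsql_collection_name = 'celerycol'
cosmosdbsql_consistency_level = 'Session'
cosmosdbsql_max_retry_attempts = 9
cosmosdbsql_max_retry_wait_time = 30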

CouchDB 后端设置

CouchDB backend settings

备注

CouchDB 后端依赖 https://pypi.org/project/pycouchdb/ 库:

安装该 CouchDB 库可使用 pip 命令:

$ pip install celery[couchdb]

有关如何组合多个扩展依赖项的说明,请参见 软件包

该后端可通过 result_backend 设置为 CouchDB URL 进行配置:

result_backend = 'couchdb://username:password@host:port/container'

URL 由以下部分组成:

  • username

    用于认证 CouchDB 服务器的用户名(可选)。

  • password

    用于认证 CouchDB 服务器的密码(可选)。

  • host

    CouchDB 服务器的主机名,默认为 localhost

  • port

    CouchDB 服务器监听的端口,默认为 8091

  • container

    CouchDB 服务器写入的默认容器,默认为 default

备注

The CouchDB backend requires the https://pypi.org/project/pycouchdb/ library:

To install this package use pip:

$ pip install celery[couchdb]

See 软件包 for information on combining multiple extension requirements.

This backend can be configured via the result_backend set to a CouchDB URL:

result_backend = 'couchdb://username:password@host:port/container'

The URL is formed out of the following parts:

  • username

    User name to authenticate to the CouchDB server as (optional).

  • password

    Password to authenticate to the CouchDB server (optional).

  • host

    Host name of the CouchDB server. Defaults to localhost.

  • port

    The port the CouchDB server is listening to. Defaults to 8091.

  • container

    The default container the CouchDB server is writing to. Defaults to default.

文件系统后端设置

File-system backend settings

该后端也可以使用文件 URL 配置,例如:

CELERY_RESULT_BACKEND = 'file:///var/celery/results'

所配置的目录必须对使用该后端的所有服务器可共享并具有可写权限。

如果你仅在单机上试用 Celery,可以直接使用该后端而无需额外配置。 对于大型集群,你可以使用 NFS、 GlusterFS、 CIFS、 HDFS (使用 FUSE)或其他任何文件系统。

This backend can be configured using a file URL, for example:

CELERY_RESULT_BACKEND = 'file:///var/celery/results'

The configured directory needs to be shared and writable by all servers using the backend.

If you're trying Celery on a single system you can simply use the backend without any further configuration. For larger clusters you could use NFS, GlusterFS, CIFS, HDFS (using FUSE), or any other file-system.

Consul 键值存储后端设置

Consul K/V store backend settings

备注

Consul 后端需要安装 https://pypi.org/project/python-consul2/ 库:

使用 pip 安装此软件包:

$ pip install python-consul2

Consul 后端可以通过 URL 进行配置,例如:

CELERY_RESULT_BACKEND = 'consul://localhost:8500/'

或:

result_backend = 'consul://localhost:8500/'

该后端将在 Consul 的 K/V 存储中以独立键的形式存储结果。 该后端支持使用 Consul 中的 TTL 自动过期结果。 URL 的完整语法如下:

consul://host:port[?one_client=1]

该 URL 由以下部分组成:

  • host

    Consul 服务器的主机名。

  • port

    Consul 服务器监听的端口。

  • one_client

    默认情况下,为了确保正确性,该后端在每次操作时都会使用独立的客户端连接。 在负载极高的情况下,频繁创建新连接可能导致 Consul 服务器返回 HTTP 429 “连接过多” 错误。 推荐的处理方式是参考此补丁为 python-consul2 启用重试功能: https://github.com/poppyred/python-consul2/pull/31

    或者,如果设置了 one_client 参数,则所有操作将复用单个客户端连接。 这样可以避免 HTTP 429 错误,但后端存储结果的可靠性可能会降低。

备注

The Consul backend requires the https://pypi.org/project/python-consul2/ library:

To install this package use pip:

$ pip install python-consul2

The Consul backend can be configured using a URL, for example:

CELERY_RESULT_BACKEND = 'consul://localhost:8500/'

or:

result_backend = 'consul://localhost:8500/'

The backend will store results in the K/V store of Consul as individual keys. The backend supports auto expire of results using TTLs in Consul. The full syntax of the URL is:

consul://host:port[?one_client=1]

The URL is formed out of the following parts:

  • host

    Host name of the Consul server.

  • port

    The port the Consul server is listening to.

  • one_client

    By default, for correctness, the backend uses a separate client connection per operation. In cases of extreme load, the rate of creation of new connections can cause HTTP 429 "too many connections" error responses from the Consul server when under load. The recommended way to handle this is to enable retries in python-consul2 using the patch at https://github.com/poppyred/python-consul2/pull/31.

    Alternatively, if one_client is set, a single client connection will be used for all operations instead. This should eliminate the HTTP 429 errors, but the storage of results in the backend can become unreliable.
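For example, following the URL syntax above, reusing a single client connection could look like this (host and port are placeholders):

# trade-off: fewer HTTP 429 errors, but result storage may become less reliable
result_backend = 'consul://localhost:8500/?one_client=1'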

消息路由

Message Routing

task_queues

默认值:None (队列将从默认队列设置中获取)。

大多数用户无需手动设置此选项,而应使用 自动路由机制

如果确实需要配置高级路由策略,该设置应为 一个由 kombu.Queue 对象组成的列表,Worker 将从中消费任务。

需要注意的是,Worker 可通过 -Q 选项覆盖该设置, 或使用 -X 选项排除该列表中的部分队列(按名称)。

详见 基础知识 了解更多信息。

默认使用名为 celery 的队列/交换机/绑定键, 交换机类型为 direct

另见 task_routes

Default: None (queue taken from default queue settings).

Most users will not want to specify this setting and should rather use the automatic routing facilities.

If you really want to configure advanced routing, this setting should be a list of kombu.Queue objects the worker will consume from.

Note that workers can override this setting via the -Q option, and individual queues from this list (by name) can be excluded using the -X option.

Also see 基础知识 for more information.

The default is a queue/exchange/binding key of celery, with exchange type direct.

See also task_routes
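For reference, a minimal sketch of such a list of kombu.Queue objects (the queue names, exchanges, and routing keys are illustrative only):

from kombu import Exchange, Queue

task_queues = (
    Queue('default',  Exchange('default'),  routing_key='default'),
    Queue('cpubound', Exchange('cpubound'), routing_key='cpubound'),
)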

task_routes

默认值:None

该设置应为一个路由器列表,或一个用于将任务路由到队列的路由器。 在确定任务最终投递位置时,系统会按顺序调用这些路由器。

路由器可按以下几种形式指定:

  • 函数,签名为 (name, args, kwargs, options, task=None, **kwargs)

  • 字符串,提供指向路由函数的路径。

  • 字典,包含路由器说明:将转换为 celery.routes.MapRoute 实例。

  • (pattern, route) 元组组成的列表:将转换为 celery.routes.MapRoute 实例。

示例:

import re

task_routes = {
    'celery.ping': 'default',
    'mytasks.add': 'cpu-bound',
    'feed.tasks.*': 'feeds',                           # <-- 通配符模式
    re.compile(r'(image|video)\.tasks\..*'): 'media',  # <-- 正则表达式
    'video.encode': {
        'queue': 'video',
        'exchange': 'media',
        'routing_key': 'media.video.encode',
    },
}

task_routes = ('myapp.tasks.route_task', {'celery.ping': 'default'})

其中 myapp.tasks.route_task 可以是:

def route_task(self, name, args, kwargs, options, task=None, **kw):
    if task == 'celery.ping':
        return {'queue': 'default'}

route_task 可返回字符串或字典。 字符串表示 task_queues 中的队列名; 字典表示自定义路由设置。

在发送任务时,系统将依序查询各路由器。 第一个返回非 None 的路由器即为选中路由, 任务消息的选项将与该路由配置合并,任务设置优先。

例如,调用 apply_async() 传入如下参数:


Task.apply_async(immediate=False, exchange='video',
                 routing_key='video.compress')

而某路由器返回:

{'immediate': True, 'exchange': 'urgent'}

则最终消息选项为:

immediate=False, exchange='video', routing_key='video.compress'

(还包括 Task 类中定义的默认消息选项)

在合并 task_routestask_queues 的设置时, 前者具有更高优先级。

例如,设置如下:

task_queues = {
    'cpubound': {
        'exchange': 'cpubound',
        'routing_key': 'cpubound',
    },
}

task_routes = {
    'tasks.add': {
        'queue': 'cpubound',
        'routing_key': 'tasks.add',
        'serializer': 'json',
    },
}

tasks.add 的最终路由选项为:

{'exchange': 'cpubound',
 'routing_key': 'tasks.add',
 'serializer': 'json'}

参见 路由器 获取更多示例。

Default: None.

A list of routers, or a single router used to route tasks to queues. When deciding the final destination of a task the routers are consulted in order.

A router can be specified as either:

  • A function with the signature (name, args, kwargs, options, task=None, **kwargs)

  • A string providing the path to a router function.

  • A dict containing router specification: Will be converted to a celery.routes.MapRoute instance.

  • A list of (pattern, route) tuples: Will be converted to a celery.routes.MapRoute instance.

Examples:

import re

task_routes = {
    'celery.ping': 'default',
    'mytasks.add': 'cpu-bound',
    'feed.tasks.*': 'feeds',                           # <-- glob pattern
    re.compile(r'(image|video)\.tasks\..*'): 'media',  # <-- regex
    'video.encode': {
        'queue': 'video',
        'exchange': 'media',
        'routing_key': 'media.video.encode',
    },
}

task_routes = ('myapp.tasks.route_task', {'celery.ping': 'default'})

Where myapp.tasks.route_task could be:

def route_task(self, name, args, kwargs, options, task=None, **kw):
    if task == 'celery.ping':
        return {'queue': 'default'}

route_task may return a string or a dict. A string means it's a queue name in task_queues; a dict means it's a custom route.

When sending tasks, the routers are consulted in order. The first router that doesn't return None is the route to use. The message options are then merged with the found route settings, where the task's settings have priority.

Example if apply_async() has these arguments:


Task.apply_async(immediate=False, exchange='video',
                 routing_key='video.compress')

and a router returns:

{'immediate': True, 'exchange': 'urgent'}

the final message options will be:

immediate=False, exchange='video', routing_key='video.compress'

(and any default message options defined in the Task class)

Values defined in task_routes have precedence over values defined in task_queues when merging the two.

With the following settings:

task_queues = {
    'cpubound': {
        'exchange': 'cpubound',
        'routing_key': 'cpubound',
    },
}

task_routes = {
    'tasks.add': {
        'queue': 'cpubound',
        'routing_key': 'tasks.add',
        'serializer': 'json',
    },
}

The final routing options for tasks.add will become:

{'exchange': 'cpubound',
 'routing_key': 'tasks.add',
 'serializer': 'json'}

See 路由器 for more examples.

task_queue_max_priority

brokers:

RabbitMQ

Default: None.

See RabbitMQ 消息优先级.

task_default_priority

brokers:

RabbitMQ, Redis

Default: None.

See RabbitMQ 消息优先级.

task_inherit_parent_priority

brokers:

RabbitMQ

默认值:False

如果启用,子任务将继承父任务的优先级。

# 链中最后一个任务也将具有优先级 5。
chain = celery.chain(add.s(2) | add.s(2).set(priority=5) | add.s(3))

在使用 delayapply_async 从父任务调用子任务时,优先级继承同样适用。

参见 RabbitMQ 消息优先级

Default: False.

If enabled, child tasks will inherit priority of the parent task.

# The last task in chain will also have priority set to 5.
chain = celery.chain(add.s(2) | add.s(2).set(priority=5) | add.s(3))

Priority inheritance also works when calling child tasks from a parent task with delay or apply_async.

See RabbitMQ 消息优先级.

worker_direct

Default: Disabled.

此选项允许为每个 worker 创建一个专用队列,以便可以将任务路由到特定的 worker。

每个 worker 的队列名称是基于 worker 的主机名并添加 .dq 后缀自动生成的,使用的是 C.dq2 交换机。

例如,节点名称为 w1@example.com 的 worker 对应的队列名称为:

w1@example.com.dq

然后你可以通过指定主机名作为路由键,并使用 C.dq2 交换机将任务路由到该 worker:

task_routes = {
    'tasks.add': {'exchange': 'C.dq2', 'routing_key': 'w1@example.com'}
}

This option enables a dedicated queue for every worker, so that tasks can be routed to specific workers.

The queue name for each worker is automatically generated based on the worker hostname and a .dq suffix, using the C.dq2 exchange.

For example the queue name for the worker with node name w1@example.com becomes:

w1@example.com.dq

Then you can route the task to the worker by specifying the hostname as the routing key and the C.dq2 exchange:

task_routes = {
    'tasks.add': {'exchange': 'C.dq2', 'routing_key': 'w1@example.com'}
}

task_create_missing_queues

Default: Enabled.

如果启用(默认启用),任何未在 task_queues 中定义的队列将被自动创建。参见 自动路由

If enabled (default), any queues specified that aren't defined in task_queues will be automatically created. See 自动路由.

task_default_queue

Default: "celery".

.apply_async 在消息没有指定路由或没有自定义队列的情况下所使用的默认队列名称。

该队列必须在 task_queues 中列出。 如果未指定 task_queues,则会自动创建一个包含该队列名称的队列条目。

The name of the default queue used by .apply_async if the message has no route or no custom queue has been specified.

This queue must be listed in task_queues. If task_queues isn't specified then it's automatically created containing one queue entry, where this name is used as the name of that queue.

task_default_queue_type

Added in version 5.5.

默认值: "classic"

该设置用于更改 task_default_queue 所使用的默认队列类型。另一个可用选项是 "quorum",仅在 RabbitMQ 中支持,会使用队列参数 x-queue-type 将队列类型设置为 quorum

如果启用了 worker_detect_quorum_queues 设置,worker 将自动检测队列类型并相应地禁用全局 QoS。

警告

quorum 队列需要启用 confirm publish。 使用 broker_transport_options 设置以启用 confirm publish:

broker_transport_options = {"confirm_publish": True}

更多信息请参见 RabbitMQ 官方文档

Default: "classic".

This setting is used to allow changing the default queue type for the task_default_queue queue. The other viable option is "quorum" which is only supported by RabbitMQ and sets the queue type to quorum using the x-queue-type queue argument.

If the worker_detect_quorum_queues setting is enabled, the worker will automatically detect the queue type and disable the global QoS accordingly.

警告

Quorum queues require confirm publish to be enabled. Use broker_transport_options to enable confirm publish by setting:

broker_transport_options = {"confirm_publish": True}

For more information, see RabbitMQ documentation.

task_default_exchange

默认值:使用 task_default_queue 的值。

在没有为 task_queues 中的条目指定自定义交换机时所使用的默认交换机名称。

Default: Uses the value set for task_default_queue.

Name of the default exchange to use when no custom exchange is specified for a key in the task_queues setting.

task_default_exchange_type

默认值: "direct"

在没有为 task_queues 中的条目指定自定义交换机类型时所使用的默认交换机类型。

Default: "direct".

Default exchange type used when no custom exchange type is specified for a key in the task_queues setting.

task_default_routing_key

默认值:使用 task_default_queue 的值。

在没有为 task_queues 中的条目指定自定义路由键时所使用的默认路由键。

Default: Uses the value set for task_default_queue.

The default routing key used when no custom routing key is specified for a key in the task_queues setting.

task_default_delivery_mode

Default: "persistent".

可以为 transient (消息不写入磁盘)或 persistent (写入磁盘)。

Can be transient (messages not written to disk) or persistent (written to disk).

Broker 设置

Broker Settings

broker_url

Default: "amqp://"

默认的 broker URL。它必须是以下格式的 URL:

transport://userid:password@hostname:port/virtual_host

仅 scheme 部分(transport://)是必须的,其余部分是可选的,若未提供,则使用所选传输方式的默认值。

transport 部分表示所使用的 broker 实现,默认是 amqp (如果安装了 librabbitmq 则使用之,否则回退至 pyamqp)。还支持其他选项,如:redis://sqs://qpid://

scheme 也可以是你自定义传输实现的完整路径:

broker_url = 'proj.transports.MyTransport://localhost'

还可以指定多个相同传输方式的 broker URL。 这些 broker URL 可以作为用分号分隔的单个字符串传入:

broker_url = 'transport://userid:password@hostname:port//;transport://userid:password@hostname:port//'

也可以作为列表指定:

broker_url = [
    'transport://userid:password@localhost:port//',
    'transport://userid:password@hostname:port//'
]

这些 broker 将按 broker_failover_strategy 设置使用。

更多信息参见 Kombu 文档中的 Celery with SQS

Default broker URL. This must be a URL in the form of:

transport://userid:password@hostname:port/virtual_host

Only the scheme part (transport://) is required, the rest is optional, and defaults to the specific transport's default values.

The transport part is the broker implementation to use, and the default is amqp (uses librabbitmq if installed, or falls back to pyamqp). There are also other choices available, including: redis://, sqs://, and qpid://.

The scheme can also be a fully qualified path to your own transport implementation:

broker_url = 'proj.transports.MyTransport://localhost'

More than one broker URL, of the same transport, can also be specified. The broker URLs can be passed in as a single string that's semicolon delimited:

broker_url = 'transport://userid:password@hostname:port//;transport://userid:password@hostname:port//'

Or as a list:

broker_url = [
    'transport://userid:password@localhost:port//',
    'transport://userid:password@hostname:port//'
]

The brokers will then be used in the broker_failover_strategy.

See Celery with SQS in the Kombu documentation for more information.

broker_read_url / broker_write_url

Default: Taken from broker_url.

以下设置可用于替代 broker_url,分别指定用于消费和生产的连接参数。

示例:

broker_read_url = 'amqp://user:pass@broker.example.com:56721'
broker_write_url = 'amqp://user:pass@broker.example.com:56722'

这两个选项也可以指定为列表以实现故障转移备用,更多信息参见 broker_url

These settings can be configured, instead of broker_url to specify different connection parameters for broker connections used for consuming and producing.

Example:

broker_read_url = 'amqp://user:pass@broker.example.com:56721'
broker_write_url = 'amqp://user:pass@broker.example.com:56722'

Both options can also be specified as a list for failover alternates, see broker_url for more information.

broker_failover_strategy

Default: "round-robin".

用于 broker Connection 对象的默认故障转移策略。如果提供的值是字符串,则应映射至 'kombu.connection.failover_strategies' 中的键;也可以是一个方法引用,该方法从提供的列表中生成单个项。

示例:

# 随机故障转移策略
import random
from itertools import repeat

def random_failover_strategy(servers):
    it = list(servers)  # 不修改调用方的列表
    shuffle = random.shuffle
    for _ in repeat(None):
        shuffle(it)
        yield it[0]

broker_failover_strategy = random_failover_strategy

Default failover strategy for the broker Connection object. If supplied, may map to a key in 'kombu.connection.failover_strategies', or be a reference to any method that yields a single item from a supplied list.

Example:

# Random failover strategy
import random
from itertools import repeat

def random_failover_strategy(servers):
    it = list(servers)  # don't modify the caller's list
    shuffle = random.shuffle
    for _ in repeat(None):
        shuffle(it)
        yield it[0]

broker_failover_strategy = random_failover_strategy

broker_heartbeat

transports supported:

pyamqp

默认值:120.0 (由服务器协商)。

备注

此值仅由 Worker 使用,客户端当前不会使用心跳机制。

仅通过 TCP/IP 并不总能及时检测连接丢失,因此 AMQP 定义了一种称为心跳(heartbeat)的机制, 由客户端和 broker 双方使用,以检测连接是否已关闭。

如果心跳值设置为 10 秒,那么心跳检测的间隔由 broker_heartbeat_checkrate 设置控制 (默认值是心跳值的两倍速率,也就是说,对于 10 秒的心跳值,心跳会每 5 秒检测一次)。

Default: 120.0 (negotiated by server).

Note: This value is only used by the worker; clients don't currently use a heartbeat.

It's not always possible to detect connection loss in a timely manner using TCP/IP alone, so AMQP defines something called heartbeats that's used by both the client and the broker to detect if a connection was closed.

If the heartbeat value is 10 seconds, then the heartbeat will be monitored at the interval specified by the broker_heartbeat_checkrate setting (by default this is set to double the rate of the heartbeat value, so for the 10 seconds, the heartbeat is checked every 5 seconds).

broker_heartbeat_checkrate

transports supported:

pyamqp

Default: 2.0.

Worker 会周期性地监控 broker 是否丢失过多心跳。该检查的速率通过将 broker_heartbeat 除以该设置值计算得出,所以如果心跳为 10.0 且检测速率为默认的 2.0,检查将每 5 秒执行一次 (即发送心跳的两倍速率)。

Default: 2.0.

At intervals the worker will monitor that the broker hasn't missed too many heartbeats. The rate at which this is checked is calculated by dividing the broker_heartbeat value with this value, so if the heartbeat is 10.0 and the rate is the default 2.0, the check will be performed every 5 seconds (twice the heartbeat sending rate).
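A small sketch tying the two heartbeat settings together (the values are illustrative):

broker_heartbeat = 10             # negotiate a 10 second heartbeat with the broker
broker_heartbeat_checkrate = 2.0  # check at twice the heartbeat rate, i.e. every 5 seconds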

broker_use_ssl

transports supported:

pyamqp, redis

Default: Disabled.

切换 broker 连接上的 SSL 使用与相关设置。

该选项的有效值取决于所用的传输类型。

Toggles SSL usage on broker connection and SSL settings.

The valid values for this option vary by transport.

pyamqp

如果为 True,连接将使用默认 SSL 设置启用 SSL。 如果为字典,将根据给定策略配置 SSL 连接,格式为 Python 的 ssl.wrap_socket() 选项格式。

请注意,SSL 套接字通常由 broker 提供服务于单独的端口。

以下是一个提供客户端证书并使用自定义 CA 验证服务端证书的示例:

import ssl

broker_use_ssl = {
    'keyfile': '/var/ssl/private/worker-key.pem',
    'certfile': '/var/ssl/amqp-server-cert.pem',
    'ca_certs': '/var/ssl/myca.pem',
    'cert_reqs': ssl.CERT_REQUIRED
}

Added in version 5.1: 从 Celery 5.1 开始,py-amqp 将始终验证从服务器收到的证书, 因此不再需要手动设置 cert_reqsssl.CERT_REQUIRED

之前的默认值 ssl.CERT_NONE 是不安全的,应避免使用。 如果你希望恢复之前不安全的默认行为,可将 cert_reqs 设置为 ssl.CERT_NONE

If True the connection will use SSL with default SSL settings. If set to a dict, will configure SSL connection according to the specified policy. The format used is Python's ssl.wrap_socket() options.

Note that SSL socket is generally served on a separate port by the broker.

Example providing a client cert and validating the server cert against a custom certificate authority:

import ssl

broker_use_ssl = {
    'keyfile': '/var/ssl/private/worker-key.pem',
    'certfile': '/var/ssl/amqp-server-cert.pem',
    'ca_certs': '/var/ssl/myca.pem',
    'cert_reqs': ssl.CERT_REQUIRED
}

Added in version 5.1: Starting from Celery 5.1, py-amqp will always validate certificates received from the server and it is no longer required to manually set cert_reqs to ssl.CERT_REQUIRED.

The previous default, ssl.CERT_NONE, is insecure and its usage should be discouraged. If you'd like to revert to the previous insecure default, set cert_reqs to ssl.CERT_NONE.

redis

该设置必须是包含以下键的字典:

  • ssl_cert_reqs (必需):为 SSLContext.verify_mode 中的一个值:

  • ssl.CERT_NONE

  • ssl.CERT_OPTIONAL

  • ssl.CERT_REQUIRED

  • ssl_ca_certs (可选):CA 证书路径

  • ssl_certfile (可选):客户端证书路径

  • ssl_keyfile (可选):客户端密钥路径

The setting must be a dict with the following keys:

  • ssl_cert_reqs (required): one of the SSLContext.verify_mode values:
    • ssl.CERT_NONE

    • ssl.CERT_OPTIONAL

    • ssl.CERT_REQUIRED

  • ssl_ca_certs (optional): path to the CA certificate

  • ssl_certfile (optional): path to the client certificate

  • ssl_keyfile (optional): path to the client key
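A minimal sketch of this mapping for the redis transport (the file paths are placeholders):

import ssl

broker_use_ssl = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED,
    'ssl_ca_certs': '/var/ssl/myca.pem',
    'ssl_certfile': '/var/ssl/redis-client-cert.pem',
    'ssl_keyfile': '/var/ssl/private/redis-client-key.pem',
}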

broker_pool_limit

Added in version 2.3.

Default: 10.

连接池中可同时打开的最大连接数。

自 2.5 版本起,连接池默认启用,默认限制为 10 个连接。 该数值可根据使用连接的线程 / green-thread 数量(如 eventlet / gevent)进行调整。 例如,当使用 eventlet 并拥有 1000 个使用 broker 连接的 greenlet 时,可能会产生争用, 此时应考虑提高连接池上限。

若设置为 None 或 0,则连接池将被禁用,且每次使用时都会建立并关闭连接。

The maximum number of connections that can be open in the connection pool.

The pool is enabled by default since version 2.5, with a default limit of ten connections. This number can be tweaked depending on the number of threads/green-threads (eventlet/gevent) using a connection. For example running eventlet with 1000 greenlets that use a connection to the broker, contention can arise and you should consider increasing the limit.

If set to None or 0 the connection pool will be disabled and connections will be established and closed for every use.

broker_connection_timeout

Default: 4.0.

建立与 AMQP 服务器连接时的默认超时时间(单位:秒)。使用 gevent 时此设置将被禁用。

备注

broker 连接超时仅适用于 Worker 尝试连接 broker 的场景。 它不适用于生产者(producer)发送任务的情况。有关该场景中如何设置超时, 请参见 broker_transport_options

The default timeout in seconds before we give up establishing a connection to the AMQP server. This setting is disabled when using gevent.

备注

The broker connection timeout only applies to a worker attempting to connect to the broker. It does not apply to a producer sending a task; see broker_transport_options for how to provide a timeout for that situation.

broker_connection_retry

Default: Enabled.

在初次建立连接后,如果连接丢失,将自动尝试重新连接到 AMQP broker。

重试之间的间隔会逐次递增,直到超过 broker_connection_max_retries 设置的最大重试次数。

警告

从 Celery 6.0 起,配置项 broker_connection_retry 将不再决定 启动期间是否进行连接重试。 如果你希望在启动时不进行连接重试,应将 broker_connection_retry_on_startup 设置为 False

Automatically try to re-establish the connection to the AMQP broker if lost after the initial connection is made.

The time between retries is increased for each retry, and is not exhausted before broker_connection_max_retries is exceeded.

警告

The broker_connection_retry configuration setting will no longer determine whether broker connection retries are made during startup in Celery 6.0 and above. If you wish to refrain from retrying connections on startup, you should set broker_connection_retry_on_startup to False instead.

broker_connection_retry_on_startup

Default: Enabled.

在 Celery 启动时,如果 broker 不可用,将自动尝试连接 AMQP broker。

重试之间的间隔会逐次递增,直到超过 broker_connection_max_retries 设置的最大重试次数。

Automatically try to establish the connection to the AMQP broker on Celery startup if it is unavailable.

The time between retries is increased for each retry, and is not exhausted before broker_connection_max_retries is exceeded.

broker_connection_max_retries

Default: 100.

在放弃重新连接 AMQP broker 之前的最大重试次数。

如果设置为 None,则将永久重试,直到连接成功。

Maximum number of retries before we give up re-establishing a connection to the AMQP broker.

If this is set to None, we'll retry forever.
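A sketch combining the connection-retry settings described above (the retry limit is an arbitrary example value):

broker_connection_retry_on_startup = True  # keep retrying while the worker starts up
broker_connection_retry = True             # reconnect if the connection drops later
broker_connection_max_retries = 10         # give up after 10 attempts; None retries forever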

broker_channel_error_retry

Added in version 5.3.

Default: Disabled.

在收到无效响应时,自动尝试重新连接 AMQP broker。

该选项的重试次数和间隔与 broker_connection_retry 相同。 此外,当 broker_connection_retryFalse 时,此选项也不会生效。

Automatically try to re-establish the connection to the AMQP broker if any invalid response has been returned.

The retry count and interval is the same as that of broker_connection_retry. Also, this option doesn't work when broker_connection_retry is False.

broker_login_method

Default: "AMQPLAIN".

设置自定义 AMQP 登录方式。

Set custom amqp login method.

broker_native_delayed_delivery_queue_type

Added in version 5.5.

transports supported:

pyamqp

Default: "quorum".

此设置用于更改原生延迟投递队列的默认队列类型。 另一个可用的选项为 "classic",仅 RabbitMQ 支持, 它会通过 x-queue-type 队列参数将队列类型设置为 classic

This setting is used to allow changing the default queue type for the native delayed delivery queues. The other viable option is "classic" which is only supported by RabbitMQ and sets the queue type to classic using the x-queue-type queue argument.

broker_transport_options

Added in version 2.2.

Default: {} (empty mapping).

传递给底层传输机制的额外选项字典。

请参考所用传输方式的用户手册,以了解支持的选项(如有)。

以下示例为设置可见性超时(Redis 和 SQS 传输支持):

broker_transport_options = {'visibility_timeout': 18000}  # 5 小时

以下示例为设置生产者连接的最大重试次数 (这样在首次任务执行时,如果 broker 不可用,生产者不会无限重试):

broker_transport_options = {'max_retries': 5}

A dict of additional options passed to the underlying transport.

See your transport user manual for supported options (if any).

Example setting the visibility timeout (supported by Redis and SQS transports):

broker_transport_options = {'visibility_timeout': 18000}  # 5 hours

Example setting the producer connection maximum number of retries (so producers won't retry forever if the broker isn't available at the first task execution):

broker_transport_options = {'max_retries': 5}

Worker

imports

Default: [] (empty list).

Worker 启动时要导入的一组模块序列。

该设置用于指定要导入的任务模块,同时也可用于导入信号处理器、扩展远程控制命令等。

模块将按照定义顺序依次导入。

A sequence of modules to import when the worker starts.

This is used to specify the task modules to import, but also to import signal handlers and additional remote control commands, etc.

The modules will be imported in the original order.

include

Default: [] (empty list).

该设置语义与 imports 完全相同,但可用于区分类别不同的导入项。

此设置中的模块会在 imports 中定义的模块之后导入。

Exact same semantics as imports, but can be used as a means to have different import categories.

The modules in this setting are imported after the modules in imports.

worker_deduplicate_successful_tasks

Added in version 5.1.

Default: False

在每次任务执行前,指示 Worker 检查该任务是否为重复消息。

去重仅适用于以下任务:具有相同的标识符、启用了延迟确认(late acknowledgment)、由消息代理重新投递,并且其状态在结果后端为 SUCCESS

为避免在结果后端产生过多查询,Worker 会在查询结果后端之前先检查本地缓存,以判断任务是否已在本地成功执行过。

可通过设置 worker_state_db 使此缓存持久化。

如果结果后端不是 持久化的 (如 RPC 后端),此设置将被忽略。

Before each task execution, instruct the worker to check if this task is a duplicate message.

Deduplication occurs only with tasks that have the same identifier, enabled late acknowledgment, were redelivered by the message broker and their state is SUCCESS in the result backend.

To avoid overflowing the result backend with queries, a local cache of successfully executed tasks is checked before querying the result backend in case the task was already successfully executed by the same worker that received the task.

This cache can be made persistent by setting the worker_state_db setting.

If the result backend is not persistent (the RPC backend, for example), this setting is ignored.
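A sketch enabling deduplication together with a persistent local cache (the state-db path is a placeholder):

worker_deduplicate_successful_tasks = True
worker_state_db = '/var/run/celery/worker-state'  # keep the local cache across restarts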

worker_concurrency

Default: Number of CPU cores.

同时执行任务的并发 Worker 进程/线程/协程数量。

如果任务主要是 I/O 密集型,可设置更多并发; 但如果是 CPU 密集型,建议将该值设为与主机 CPU 核心数接近。 如果未设置,则使用主机的 CPU 核心数。

The number of concurrent worker processes/threads/green threads executing tasks.

If you're doing mostly I/O you can have more processes, but if mostly CPU-bound, try to keep it close to the number of CPUs on your machine. If not set, the number of CPUs/cores on the host will be used.

worker_prefetch_multiplier

Default: 4.

每个进程可预取的消息数量,乘以并发进程数量。默认值为 4(即每个进程预取 4 条消息)。 此默认设置通常是合适的,但若任务运行时间很长,并且你在 Worker 启动后才开始处理, 注意第一个启动的 Worker 将一次性接收 4 倍数量的消息,可能导致任务分配不均。

若要禁用预取行为,将 worker_prefetch_multiplier 设置为 1。 如果设置为 0,则允许 Worker 不受限制地持续消费消息。

有关预取机制的详细信息,请参阅 预取限制

备注

带 ETA/countdown 的任务不受预取限制影响。

How many messages to prefetch at a time, multiplied by the number of concurrent processes. The default is 4 (four messages for each process). The default setting is usually a good choice; however, if you have very long running tasks waiting in the queue and you have to start the workers, note that the first worker to start will initially receive four times the number of messages, so tasks may not be fairly distributed to the workers.

To disable prefetching, set worker_prefetch_multiplier to 1. Changing that setting to 0 will allow the worker to keep consuming as many messages as it wants.

For more on prefetching, read 预取限制

备注

Tasks with ETA/countdown aren't affected by prefetch limits.
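For example, to effectively disable prefetching as described above:

worker_prefetch_multiplier = 1  # each process fetches only one message at a time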

worker_enable_prefetch_count_reduction

Added in version 5.4.

Default: Enabled.

设置项 worker_enable_prefetch_count_reduction 控制在与消息代理断开连接后, Worker 是否会将预取计数恢复至最大允许值。默认该设置是启用的。

当连接丢失时,Celery 会自动尝试重新连接 broker,前提是 broker_connection_retry_on_startupbroker_connection_retry 没有被设为 False。 在连接丢失期间,消息代理不会追踪已被获取的任务数量。为合理控制任务负载并防止过载, Celery 会根据当前正在运行的任务数来减少预取计数。

预取计数是指 Worker 从 broker 一次性获取的消息数量。 在重连期间降低预取计数有助于避免消息被过量获取。

worker_enable_prefetch_count_reduction 保持默认启用状态时,每当一个在连接丢失前已启动的任务完成时, 预取计数将逐步恢复到最大值。此机制有助于在 Worker 之间保持任务的合理分布,并有效管理负载。

若要禁用此机制,即禁止在重连后减少和恢复预取计数, 可将 worker_enable_prefetch_count_reduction 设置为 False。 在某些场景下,例如需要使用固定预取计数来控制任务处理速率或管理 Worker 负载, 特别是在网络连接波动的环境中,禁用此设置可能更合适。

worker_enable_prefetch_count_reduction 提供了一种方式, 用于控制在连接丢失后恢复预取计数的行为,从而帮助在 Worker 之间维持任务平衡和负载管理。

The worker_enable_prefetch_count_reduction setting governs the restoration behavior of the prefetch count to its maximum allowable value following a connection loss to the message broker. By default, this setting is enabled.

Upon a connection loss, Celery will attempt to reconnect to the broker automatically, provided the broker_connection_retry_on_startup or broker_connection_retry is not set to False. During the period of lost connection, the message broker does not keep track of the number of tasks already fetched. Therefore, to manage the task load effectively and prevent overloading, Celery reduces the prefetch count based on the number of tasks that are currently running.

The prefetch count is the number of messages that a worker will fetch from the broker at a time. The reduced prefetch count helps ensure that tasks are not fetched excessively during periods of reconnection.

With worker_enable_prefetch_count_reduction set to its default value (Enabled), the prefetch count will be gradually restored to its maximum allowed value each time a task that was running before the connection was lost is completed. This behavior helps maintain a balanced distribution of tasks among the workers while managing the load effectively.

To disable the reduction and restoration of the prefetch count to its maximum allowed value on reconnection, set worker_enable_prefetch_count_reduction to False. Disabling this setting might be useful in scenarios where a fixed prefetch count is desired to control the rate of task processing or manage the worker load, especially in environments with fluctuating connectivity.

The worker_enable_prefetch_count_reduction setting provides a way to control the restoration behavior of the prefetch count following a connection loss, aiding in maintaining a balanced task distribution and effective load management across the workers.

worker_lost_wait

Default: 10.0 seconds.

在某些情况下,Worker 可能被异常终止,未能正常清理资源, 并且可能在终止前已发布了任务结果。 此设置值指定在引发 WorkerLostError 异常前,等待缺失结果的最大时长。

In some cases a worker may be killed without proper cleanup, and the worker may have published a result before terminating. This value specifies how long we wait for any missing results before raising a WorkerLostError exception.

worker_max_tasks_per_child

每个进程池中的 Worker 子进程在被替换前最多可执行的任务数。默认没有限制。

Maximum number of tasks a pool worker process can execute before it's replaced with a new one. Default is no limit.

worker_max_memory_per_child

Default: No limit. Type: int (kilobytes)

Worker 进程允许使用的最大常驻内存量(以 KB 为单位,1 KB = 1024 字节), 超出该限制后 Worker 将被替换为新的进程。 若单个任务导致该限制被超出,该任务会先完成,然后 Worker 被替换。

示例:

worker_max_memory_per_child = 12288  # 12 * 1024 = 12 MB

Maximum amount of resident memory, in kilobytes (1024 bytes), that may be consumed by a worker before it will be replaced by a new worker. If a single task causes a worker to exceed this limit, the task will be completed, and the worker will be replaced afterwards.

Example:

worker_max_memory_per_child = 12288  # 12 * 1024 = 12 MB

worker_disable_rate_limits

Default: Disabled (rate limits enabled).

禁用所有速率限制,即使任务显式设置了速率限制也无效。

Disable all rate limits, even if tasks has explicit rate limits set.

worker_state_db

Default: None.

用于存储 Worker 持久状态(例如被撤销的任务)的文件名。 可以是相对路径或绝对路径,但请注意根据 Python 版本的不同, 文件名可能会自动附加 .db 后缀。

也可以通过 celery worker --statedb 命令行参数进行设置。

Name of the file used to store persistent worker state (like revoked tasks). Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).

Can also be set via the celery worker --statedb argument.

worker_timer_precision

Default: 1.0 seconds.

设置 ETA 调度器在重新检查调度计划之间最长可休眠的时间(以秒为单位)。

若设置为 1 秒,则调度器的精度为 1 秒; 若需近似毫秒级精度,可将其设置为 0.1。

Set the maximum time in seconds that the ETA scheduler can sleep between rechecking the schedule.

Setting this value to 1 second means the schedulers precision will be 1 second. If you need near millisecond precision you can set this to 0.1.

worker_enable_remote_control

Default: Enabled by default.

指定是否启用对 Worker 的远程控制功能。

Specify if remote control of the workers is enabled.

worker_proc_alive_timeout

Default: 4.0.

等待新 Worker 进程启动的超时时间(单位为秒,支持整数或浮点数)。

The timeout in seconds (int/float) when waiting for a new worker process to start up.

worker_cancel_long_running_tasks_on_connection_loss

Added in version 5.1.

Default: Disabled by default.

在连接丢失时,终止所有启用了延迟确认的长时间运行任务。

由于连接通道丢失,尚未确认的任务将无法完成确认, 并将重新进入队列。因此启用了延迟确认的任务必须是幂等的, 因为它们可能会被执行多次。 在这种情况下,每次连接丢失都会导致任务被重复执行(有时甚至由多个 Worker 并行执行)。

启用此选项后,尚未完成的任务将被取消,其执行会被终止。 在连接丢失前已完成的任务, 只要未启用 task_ignore_result,其结果仍会被写入结果后端。

警告

此特性是将来的破坏性变更。

若未启用,Celery 将发出警告信息。

在 Celery 6.0 中,worker_cancel_long_running_tasks_on_connection_loss 将默认设为 True,因为当前行为引发的问题多于解决的问题。

Kill all long-running tasks with late acknowledgment enabled on connection loss.

Tasks which have not been acknowledged before the connection loss cannot be acknowledged anymore, since their channel is gone and the task is redelivered back to the queue. This is why tasks with late acknowledgment enabled must be idempotent, as they may be executed more than once. In this case, the task is executed twice per connection loss (and sometimes in parallel in other workers).

When turning this option on, those tasks which have not been completed are cancelled and their execution is terminated. Tasks which have completed in any way before the connection loss are recorded as such in the result backend as long as task_ignore_result is not enabled.

警告

This feature was introduced as a future breaking change. If it is turned off, Celery will emit a warning message.

In Celery 6.0, the worker_cancel_long_running_tasks_on_connection_loss will be set to True by default as the current behavior leads to more problems than it solves.

worker_detect_quorum_queues

Added in version 5.5.

Default: Enabled.

自动检测 task_queues 中的队列(包括 task_default_queue)是否为 Quorum 队列, 如果存在任意 Quorum 队列,将禁用全局 QoS。

Automatically detect if any of the queues in task_queues are quorum queues (including the task_default_queue) and disable the global QoS if any quorum queue is detected.

worker_soft_shutdown_timeout

Added in version 5.5.

Default: 0.0.

标准的 温关 会等待所有任务执行完毕后再关闭, 除非触发冷关。软关 会在启动冷关前加入等待时间。 此设置用于指定 Worker 在冷关启动前的最大等待时长。

即使未先执行温关,Worker 触发 冷关 时也适用此设置。

若该值设为 0.0,则软关将几乎等同于禁用。 无论该值为何,若当前没有运行任务(除非启用 worker_enable_soft_shutdown_on_idle), 软关将被跳过。

建议尝试不同值来找到最适合你任务的优雅退出时间,推荐值包括 10、30、60 秒。 过大的值可能导致 Worker 关闭延迟过长,甚至会被主机系统发送 KILL 信号强制终止。

The standard warm shutdown will wait for all tasks to finish before shutting down unless the cold shutdown is triggered. The soft shutdown will add a waiting time before the cold shutdown is initiated. This setting specifies how long the worker will wait before the cold shutdown is initiated and the worker is terminated.

This also applies when the worker initiates a cold shutdown without doing a warm shutdown first.

If the value is set to 0.0, the soft shutdown will be practically disabled. Regardless of the value, the soft shutdown will be disabled if there are no tasks running (unless worker_enable_soft_shutdown_on_idle is enabled).

Experiment with this value to find the optimal time for your tasks to finish gracefully before the worker is terminated. Recommended values are 10, 30, or 60 seconds. Too high a value can lead to a long waiting time before the worker is terminated, and may cause the host system to send a KILL signal to forcefully terminate the worker.

worker_enable_soft_shutdown_on_idle

Added in version 5.5.

Default: False.

worker_soft_shutdown_timeout 大于 0.0,但当前没有任务运行, Worker 仍会跳过 软关。启用本设置后, 即使无任务运行也会触发软关行为。

小技巧

当 Worker 收到 ETA 任务,但尚未到达 ETA 时间,同时触发关机时, 如果没有其他任务正在运行,Worker 会 跳过 软关,立即进入冷关流程。 这可能导致 ETA 任务在 Worker 关闭过程中无法重新入队。 为避免该问题,启用此设置可确保 Worker 等待一定时间, 为 ETA 任务的重新入队和优雅关闭提供充足时间。

If worker_soft_shutdown_timeout is set to a value greater than 0.0 but there are no tasks running, the worker will still skip the soft shutdown. This setting enables the soft shutdown even if there are no tasks running.

小技巧

When the worker receives ETA tasks whose ETA has not been reached yet, and a shutdown is initiated, the worker will skip the soft shutdown and initiate the cold shutdown immediately if there are no tasks running. This may lead to failures when re-queueing the ETA tasks during worker teardown. To mitigate this, enable this configuration to ensure the worker waits regardless, giving enough time for a graceful shutdown and successful re-queueing of the ETA tasks.

Events

worker_send_task_events

Default: Disabled by default.

发送任务相关事件,以便使用如 flower 等工具进行监控。 该设置控制 Worker 的默认 -E 参数。

Send task-related events so that tasks can be monitored using tools like flower. Sets the default value for the workers -E argument.

task_send_sent_event

Added in version 2.2.

Default: Disabled by default.

若启用,则每个任务都会发送 task-sent 事件, 以便在任务被 Worker 消费前就能进行追踪。

If enabled, a task-sent event will be sent for every task so tasks can be tracked before they're consumed by a worker.
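A sketch enabling both kinds of events for monitoring tools such as flower:

worker_send_task_events = True  # same effect as the worker -E argument
task_send_sent_event = True     # also emit task-sent events from producers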

event_queue_ttl

transports supported:

amqp

Default: 5.0 seconds.

消息在发送到监控客户端的事件队列后,其过期时间(以秒为单位,支持 int 或 float)。 该设置将被用作消息的 x-message-ttl 属性。

例如,若该值设为 10,则投递到该队列的消息将在 10 秒后被删除。

Message expiry time in seconds (int/float) for when messages sent to a monitor client's event queue are deleted (x-message-ttl).

For example, if this value is set to 10 then a message delivered to this queue will be deleted after 10 seconds.

event_queue_expires

transports supported:

amqp

Default: 60.0 seconds.

监控客户端的事件队列在未使用后被自动删除的过期时间(以秒为单位,支持 int 或 float)。 该设置对应于 x-expires 属性。

Expiry time in seconds (int/float) for when an unused monitor client's event queue will be deleted (x-expires).

event_queue_prefix

Default: "celeryev".

事件接收器队列名称使用的前缀。

The prefix to use for event receiver queue names.

event_exchange

Default: "celeryev".

事件交换机(exchange)的名称。

警告

此选项仍处于实验阶段,请谨慎使用。

Name of the event exchange.

警告

This option is in experimental stage, please use it with caution.

event_serializer

Default: "json".

发送事件消息时使用的消息序列化格式。

参见

序列化器

Message serialization format used when sending event messages.

参见

序列化器.

events_logfile

Added in version 5.4.

Default: None

可选项,指定 celery events 的日志输出文件路径(默认为输出到 stdout)。

An optional file path for celery events to log into (defaults to stdout).

events_pidfile

Added in version 5.4.

Default: None

可选项,指定 celery events 的 PID 文件创建/存储路径(默认不创建 PID 文件)。

An optional file path for celery events to create/store its PID file (default to no PID file created).

events_uid

Added in version 5.4.

Default: None

可选项,celery events 在降权运行时使用的用户 ID(默认不更改 UID)。

An optional user ID to use when the celery events daemon drops its privileges (defaults to no UID change).

events_gid

Added in version 5.4.

Default: None

可选项,celery events 以守护进程运行时使用的用户组 ID(默认不更改 GID)。

An optional group ID to use when celery events daemon drops its privileges (defaults to no GID change).

events_umask

Added in version 5.4.

Default: None

可选项,celery events 在守护进程化创建文件(如日志、PID)时使用的 umask

An optional umask to use when celery events creates files (log, pid...) when daemonizing.

events_executable

Added in version 5.4.

Default: None

可选项,celery events 守护进程化时使用的 python 可执行文件路径 (默认为 sys.executable)。

An optional python executable path for celery events to use when daemonizing (defaults to sys.executable).

远程控制命令

Remote Control Commands

备注

如需禁用远程控制命令,请参阅 worker_enable_remote_control 设置项。

备注

To disable remote control commands see the worker_enable_remote_control setting.

control_queue_ttl

Default: 300.0

远程控制命令队列中的消息在发送后将过期的时间(以秒为单位)。

若使用默认值 300 秒,表示如果在 300 秒内没有任何 Worker 消费该远程控制命令, 该命令将被丢弃。

该设置同样适用于远程控制的响应队列。

Time in seconds, before a message in a remote control command queue will expire.

If using the default of 300 seconds, this means that if a remote control command is sent and no worker picks it up within 300 seconds, the command is discarded.

This setting also applies to remote control reply queues.

control_queue_expires

Default: 10.0

未使用的远程控制命令队列在多久后从消息代理中删除(单位:秒)。

该设置同样适用于远程控制的响应队列。

Time in seconds, before an unused remote control command queue is deleted from the broker.

This setting also applies to remote control reply queues.

control_exchange

Default: "celery".

控制命令交换机的名称。

警告

此选项仍处于实验阶段,请谨慎使用。

Name of the control command exchange.

警告

This option is in experimental stage, please use it with caution.

日志

Logging

worker_hijack_root_logger

Added in version 2.2.

Default: Enabled by default (hijack root logger).

默认情况下,Celery 会移除根日志记录器(root logger)上已存在的所有处理器。 若你希望自定义日志处理器,可以通过设置 worker_hijack_root_logger = False 来禁用此行为。

备注

也可以通过监听 celery.signals.setup_logging 信号来自定义日志系统。

By default any previously configured handlers on the root logger will be removed. If you want to customize your own logging handlers, then you can disable this behavior by setting worker_hijack_root_logger = False.

备注

Logging can also be customized by connecting to the celery.signals.setup_logging signal.

worker_log_color

默认值:若应用日志输出到终端,则启用。

是否在 Celery 应用的日志输出中启用颜色显示。

Default: Enabled if app is logging to a terminal.

Enables/disables colors in logging output by the Celery apps.

worker_log_format

Default:

"[%(asctime)s: %(levelname)s/%(processName)s] %(message)s"

日志消息使用的格式。

有关日志格式的详细信息,请参阅 Python 的 logging 模块。

The format to use for log messages.

See the Python logging module for more information about log formats.

worker_task_log_format

Default:

"[%(asctime)s: %(levelname)s/%(processName)s]
    %(task_name)s[%(task_id)s]: %(message)s"

任务中记录日志时使用的日志消息格式。

有关日志格式的详细信息,请参阅 Python 的 logging 模块。

The format to use for log messages logged in tasks.

See the Python logging module for more information about log formats.

worker_redirect_stdouts

Default: Enabled by default.

若启用此选项,则 stdoutstderr 的输出将被重定向至当前日志记录器。

适用于 celery workercelery beat

If enabled stdout and stderr will be redirected to the current logger.

Used by celery worker and celery beat.

worker_redirect_stdouts_level

Default: WARNING.

用于记录 stdoutstderr 输出的日志等级。 可选值包括:DEBUGINFOWARNINGERRORCRITICAL

The log level output to stdout and stderr is logged as. Can be one of DEBUG, INFO, WARNING, ERROR, or CRITICAL.
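For example, to redirect print output to the logger at INFO level instead of the default WARNING:

worker_redirect_stdouts = True
worker_redirect_stdouts_level = 'INFO'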

安全

Security

security_key

Default: None.

Added in version 2.5.

当启用 消息签名 时,此配置指定用于对消息签名的私钥文件路径(相对或绝对路径)。

The relative or absolute path to a file containing the private key used to sign messages when 消息签名 is used.

security_key_password

Default: None.

Added in version 5.3.0.

当启用 消息签名 时,此配置用于解密私钥的密码。

The password used to decrypt the private key when 消息签名 is used.

security_certificate

Default: None.

Added in version 2.5.

当启用 消息签名 时,此配置指定用于签名消息的 X.509 证书文件路径(相对或绝对路径)。

The relative or absolute path to an X.509 certificate file used to sign messages when 消息签名 is used.

security_cert_store

Default: None.

Added in version 2.5.

用于 消息签名 的 X.509 证书所在目录。可以使用通配符(glob)路径, 例如 /etc/certs/*.pem

The directory containing X.509 certificates used for 消息签名. Can be a glob with wild-cards, (for example /etc/certs/*.pem).

security_digest

Default: sha256.

Added in version 4.3.

启用 消息签名 时,用于对消息进行签名的加密摘要算法。

A cryptography digest used to sign messages when 消息签名 is used.
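A sketch combining the message-signing settings in this section (all file paths are placeholders for your own key and certificate files):

security_key = '/etc/ssl/private/worker.key'
security_certificate = '/etc/ssl/certs/worker.pem'
security_cert_store = '/etc/ssl/certs/*.pem'
security_digest = 'sha256'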

自定义组件类(高级)

Custom Component Classes (advanced)

worker_pool

Default: "prefork" (celery.concurrency.prefork:TaskPool).

Worker 使用的进程池类名称。

Eventlet/Gevent

请勿使用此选项选择 eventlet 或 gevent 池。 应使用 -P 参数传给 celery worker, 以确保 monkey patch 能够及时应用,避免因应用顺序错误导致程序异常。

Name of the pool class used by the worker.

Eventlet/Gevent

Never use this option to select the eventlet or gevent pool. You must use the -P option to celery worker instead, to ensure the monkey patches aren't applied too late, causing things to break in strange ways.

worker_pool_restarts

Default: Disabled by default.

启用后,可以通过 pool_restart 远程控制命令重启 worker 的进程池。

If enabled the worker pool can be restarted using the pool_restart remote control command.

worker_autoscaler

Added in version 2.2.

Default: "celery.worker.autoscale:Autoscaler".

用于自动扩缩容的类名称。

Name of the autoscaler class to use.

worker_consumer

Default: "celery.worker.consumer:Consumer".

Worker 使用的消费者(consumer)类名称。

Name of the consumer class used by the worker.

worker_timer

Default: "kombu.asynchronous.hub.timer:Timer".

Worker 使用的 ETA 调度器类名称。 默认由进程池实现指定。

Name of the ETA scheduler class used by the worker. The default is set by the pool implementation.

worker_logfile

Added in version 5.4.

Default: None

celery worker 的可选日志输出文件路径(默认为输出到 stdout)。

An optional file path for celery worker to log into (defaults to stdout).

worker_pidfile

Added in version 5.4.

Default: None

celery worker 的可选 PID 文件创建/存储路径(默认不创建 PID 文件)。

An optional file path for celery worker to create/store its PID file (defaults to no PID file created).

worker_uid

Added in version 5.4.

Default: None

celery worker 守护进程化运行时,降权使用的用户 ID(默认不更改 UID)。

An optional user ID to use when celery worker daemon drops its privileges (defaults to no UID change).

worker_gid

Added in version 5.4.

Default: None

celery worker 守护进程化运行时,降权使用的用户组 ID(默认不更改 GID)。

An optional group ID to use when celery worker daemon drops its privileges (defaults to no GID change).

worker_umask

Added in version 5.4.

Default: None

celery worker 守护进程化运行时创建文件(日志、PID 文件等)使用的 umask (可选)。

An optional umask to use when celery worker creates files (log, pid...) when daemonizing.

worker_executable

Added in version 5.4.

Default: None

celery worker 守护进程化运行时使用的 python 可执行文件路径(可选,默认为 sys.executable)。

An optional python executable path for celery worker to use when daemonizing (defaults to sys.executable).

调度设置 (celery beat)

Beat Settings (celery beat)

beat_schedule

Default: {} (empty mapping).

beat 使用的周期性任务调度配置。 参见 条目

The periodic task schedule used by beat. See 条目.
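A minimal beat_schedule sketch (the task names are illustrative; crontab comes from celery.schedules):

from celery.schedules import crontab

beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,          # run every 30 seconds
        'args': (16, 16),
    },
    'nightly-cleanup': {
        'task': 'tasks.cleanup',
        'schedule': crontab(hour=3, minute=0),
    },
}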

beat_scheduler

Default: "celery.beat:PersistentScheduler".

默认使用的调度器类。 例如,在配合 https://pypi.org/project/django-celery-beat/ 扩展使用时,可设置为 "django_celery_beat.schedulers:DatabaseScheduler"

也可以通过 celery beat -S 参数进行设置。

The default scheduler class. May be set to "django_celery_beat.schedulers:DatabaseScheduler" for instance, if used alongside https://pypi.org/project/django-celery-beat/ extension.

Can also be set via the celery beat -S argument.

beat_schedule_filename

Default: "celerybeat-schedule".

当使用 PersistentScheduler 时,存储周期性任务最后运行时间的文件名。 可为相对路径或绝对路径,但注意在某些 Python 版本中可能自动添加 .db 后缀。

也可通过 celery beat --schedule 参数进行设置。

Name of the file used by PersistentScheduler to store the last run times of periodic tasks. Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).

Can also be set via the celery beat --schedule argument.

beat_sync_every

Default: 0.

在执行下一次数据库同步前,最多允许调度的周期性任务数量。

默认值为 0,表示根据时间间隔进行同步——默认间隔为 3 分钟,由 scheduler.sync_every 决定。 若设置为 1,则每发送一个任务消息就执行一次同步。

The number of periodic tasks that can be called before another database sync is issued. A value of 0 (default) means sync based on timing - default of 3 minutes as determined by scheduler.sync_every. If set to 1, beat will call sync after every task message sent.

beat_max_loop_interval

Default: 0.

beat 在两次检查调度之间最多可休眠的秒数。

此值的默认取决于具体调度器。 对于默认的 Celery beat 调度器,默认值为 300 秒(5 分钟); 而对基于 https://pypi.org/project/django-celery-beat/ 的数据库调度器而言,默认值为 5 秒, 因为调度计划可能由外部更改,因此需要及时感知并应用变更。

此外,当在 Jython 环境中以线程方式嵌入运行 Celery beat(使用 -B)时, 最大间隔会被重写为 1 秒,以保证能够及时关闭线程。

The maximum number of seconds beat can sleep between checking the schedule.

The default for this value is scheduler specific. For the default Celery beat scheduler the value is 300 (5 minutes), but for the https://pypi.org/project/django-celery-beat/ database scheduler it's 5 seconds because the schedule may be changed externally, and so it must take changes to the schedule into account.

Also when running Celery beat embedded (-B) on Jython as a thread the max interval is overridden and set to 1 so that it's possible to shut down in a timely manner.

beat_cron_starting_deadline

Added in version 5.3.

Default: None.

在使用 cron 调度时,beat 在判断调度是否到期时可向前回溯的秒数。 当设置为 None 时,所有超时的 cron 任务都会被立即执行。

When using cron, the number of seconds beat can look back when deciding whether a cron schedule is due. When set to None, cronjobs that are past due will always run immediately.

beat_logfile

Added in version 5.4.

Default: None

celery beat 的可选日志输出文件路径(默认为输出到 stdout)。

An optional file path for celery beat to log into (defaults to stdout).

beat_pidfile

Added in version 5.4.

Default: None

celery beat 的可选 PID 文件创建/存储路径(默认不创建 PID 文件)。

An optional file path for celery beat to create/store its PID file (defaults to no PID file created).

beat_uid

Added in version 5.4.

Default: None

celery beat 守护进程化运行时,降权使用的用户 ID(默认不更改 UID)。

An optional user ID to use when the celery beat daemon drops its privileges (defaults to no UID change).

beat_gid

Added in version 5.4.

Default: None

celery beat 守护进程化运行时,降权使用的用户组 ID(默认不更改 GID)。

An optional group ID to use when celery beat daemon drops its privileges (defaults to no GID change).

beat_umask

Added in version 5.4.

Default: None

celery beat 守护进程化运行时创建文件(日志、PID 文件等)使用的 umask (可选)。

An optional umask to use when celery beat creates files (log, pid...) when daemonizing.

beat_executable

Added in version 5.4.

Default: None

celery beat 守护进程化运行时使用的 python 可执行文件路径(可选,默认为 sys.executable)。

An optional python executable path for celery beat to use when daemonizing (defaults to sys.executable).