A multiprocessing distributed task queue for Django


Features

  • Multiprocessing worker pool
  • Asynchronous tasks
  • Scheduled, cron and repeated tasks
  • Signed and compressed packages
  • Failure and success database or cache
  • Result hooks, groups and chains
  • Django Admin integration
  • PaaS compatible with multiple instances
  • Multi cluster monitor
  • Redis, Disque, IronMQ, SQS, MongoDB or ORM
  • Rollbar and Sentry support

Requirements

Tested with: Python 3.7, 3.8 and 3.9; Django 2.2.x and 3.2.x.

Warning

Since Python 3.7, async is a reserved keyword, so the async function was renamed to async_task.

Brokers

Supported brokers: Redis, Disque, IronMQ, Amazon SQS, MongoDB and the Django ORM.

Installation

  • Install the latest version with pip:

    $ pip install django-q
    
  • Add django_q to your INSTALLED_APPS in your project's settings.py:

    INSTALLED_APPS = (
        # other apps
        'django_q',
    )
    
  • Run Django migrations to create the database tables:

    $ python manage.py migrate
    
  • Choose a message broker, configure it and install the appropriate client library.

Read the full documentation at https://django-q.readthedocs.org

Configuration

All configuration settings are optional, e.g.:

# settings.py example
Q_CLUSTER = {
    'name': 'myproject',
    'workers': 8,
    'recycle': 500,
    'timeout': 60,
    'compress': True,
    'cpu_affinity': 1,
    'save_limit': 250,
    'queue_limit': 500,
    'label': 'Django Q',
    'redis': {
        'host': '127.0.0.1',
        'port': 6379,
        'db': 0,
    }
}

For full configuration options, see the configuration documentation.
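As a rough starting point for the workers setting, a sketch (an assumption, not an official recommendation) is to size the pool to the machine's CPU count:

```python
import multiprocessing

# size the worker pool to the available CPU cores;
# the other keys follow the example above
Q_CLUSTER = {
    'name': 'myproject',
    'workers': multiprocessing.cpu_count(),
}
```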

Management Commands

Start a cluster with:

$ python manage.py qcluster

Monitor your clusters with:

$ python manage.py qmonitor

Check overall statistics with:

$ python manage.py qinfo

Creating Tasks

Use async_task from your code to quickly offload tasks:

from django_q.tasks import async_task, result

# create the task
async_task('math.copysign', 2, -2)

# or with a reference
from math import copysign

task_id = async_task(copysign, 2, -2)

# get the result
task_result = result(task_id)

# result returns None if the task has not been executed yet
# you can wait up to 200 milliseconds for it
task_result = result(task_id, 200)

# but in most cases you will want to use a hook:

async_task('math.modf', 2.5, hook='hooks.print_result')

# hooks.py
def print_result(task):
    print(task.result)

For more info, see the Tasks documentation.
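The Features list also mentions result groups; a minimal sketch of grouping tasks and collecting their results (the group label 'floors' is an arbitrary example):

```python
from django_q.tasks import async_task, result_group

# queue four tasks under one shared group label
for n in range(4):
    async_task('math.floor', n * 1.5, group='floors')

# block until all four results are in, then fetch them as a list
results = result_group('floors', count=4)
```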

Schedule

Schedules are regular Django models. You can manage them through the Admin page or directly from your code:

# Use the schedule function
from django_q.tasks import schedule

schedule('math.copysign',
         2, -2,
         hook='hooks.print_result',
         schedule_type=Schedule.DAILY)

# Or create the object directly
from django_q.models import Schedule

Schedule.objects.create(func='math.copysign',
                        hook='hooks.print_result',
                        args='2,-2',
                        schedule_type=Schedule.DAILY
                        )

# Run a task every 5 minutes, starting at 18:00 UTC today,
# for 2 hours (24 repeats)
import arrow

schedule('math.hypot',
         3, 4,
         schedule_type=Schedule.MINUTES,
         minutes=5,
         repeats=24,
         next_run=arrow.utcnow().replace(hour=18, minute=0))

# Use a cron expression
schedule('math.hypot',
         3, 4,
         schedule_type=Schedule.CRON,
         cron='0 22 * * 1-5')

For more info check the Schedules documentation.

Testing

To run the tests you will need pytest and pytest-django in addition to the install requirements.

Or you can use the included Docker Compose file.

The following commands can be used to run the tests:

# Create virtual environment
python -m venv venv

# Install requirements
venv/bin/pip install -r requirements.txt

# Install test dependencies
venv/bin/pip install pytest pytest-django

# Install django-q
venv/bin/python setup.py develop

# Run required services (you need to have docker-compose installed)
docker-compose -f test-services-docker-compose.yaml up -d

# Run tests
venv/bin/pytest

# Stop the services required by tests (when you no longer plan to run tests)
docker-compose -f test-services-docker-compose.yaml down

Locale

Currently available in English, German and French. Translation pull requests are always welcome.

Todo

  • Better tests and coverage
  • Fewer dependencies?

Acknowledgements

Comments
  • TypeError: can't pickle _thread.lock objects

    TypeError: can't pickle _thread.lock objects

    Django 2.2.11 python 3.7.0 django-q 1.2.1 windows 10

    Hello, when I run manage.py qcluster I get an error; does somebody know what could be the source of it and how to resolve it?

    Traceback (most recent call last):
      File "manage.py", line 21, in <module>
        main()
      File "manage.py", line 17, in main
        execute_from_command_line(sys.argv)
      File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
        utility.execute()
      File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\__init__.py", line 375, in execute
        self.fetch_command(subcommand).run_from_argv(self.argv)
      File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\base.py", line 323, in run_from_argv
        self.execute(*args, **cmd_options)
      File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\base.py", line 364, in execute
        output = self.handle(*args, **options)
      File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
        q.start()
      File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django_q\cluster.py", line 65, in start
        self.sentinel.start()
      File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
        self._popen = self._Popen(self)
      File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
        reduction.dump(process_obj, to_child)
      File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    TypeError: can't pickle _thread.lock objects

    The spawned child process logs its own traceback, interleaved in the original output:

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
        exitcode = _main(fd)
      File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 115, in _main
        self = reduction.pickle.load(from_parent)
    EOFError: Ran out of input
    
  • [Error] select_for_update cannot be used outside of a transaction.

    [Error] select_for_update cannot be used outside of a transaction.

    This Django error (raised from the SQL compiler) pops up in my logs and prevents any scheduled task from running. The error is raised from this django_q line, which is very strange since the whole try block is within the transaction.atomic() context manager.

    Any idea on why this is happening and how to fix it? Thanks!

    Config:

    • db: Postgres 11 with psycopg2 interface
    • django-q 1.2.1
    • django 3.0
    • python 3.8

    Edit

    The error is reproduced in this basic demo app

  • Import Error running qcluster command Python 3.7 Django 2.1.5

    Import Error running qcluster command Python 3.7 Django 2.1.5

    Traceback (most recent call last):
      File "manage.py", line 22, in <module>
        execute_from_command_line(sys.argv)
      File "ENV/lib/python3.7/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
        utility.execute()
      File "ENV/lib/python3.7/site-packages/django/core/management/__init__.py", line 375, in execute
        self.fetch_command(subcommand).run_from_argv(self.argv)
      File "ENV/lib/python3.7/site-packages/django/core/management/__init__.py", line 224, in fetch_command
        klass = load_command_class(app_name, subcommand)
      File "ENV/lib/python3.7/site-packages/django/core/management/__init__.py", line 36, in load_command_class
        module = import_module('%s.management.commands.%s' % (app_name, name))
      File "ENV/versions/3.7.0/lib/python3.7/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "ENV/lib/python3.7/site-packages/django_q/management/commands/qcluster.py", line 4, in <module>
        from django_q.cluster import Cluster
      File "ENV/lib/python3.7/site-packages/django_q/cluster.py", line 24, in <module>
        from django_q import tasks
      File "ENV/lib/python3.7/site-packages/django_q/tasks.py", line 12, in <module>
        from django_q.cluster import worker, monitor
    ImportError: cannot import name 'worker' from 'django_q.cluster' (ENV/lib/python3.7/site-packages/django_q/cluster.py)
    Sentry is attempting to send 1 pending error messages
    Waiting up to 10 seconds
    Press Ctrl-C to quit
    
  • Task stays in queue when executing a requests.post

    Task stays in queue when executing a requests.post

    When I try to execute a requests.post, the task stays on the queue and the requests.post call never completes.

    Pseudo code:

    class Auth:
        def __init__(self, url, username, password):
            logging.debug('Making call for credentials.')
            r = requests.post(url, data={'username': username, 'password': password})

    def queue_get_auth(username, password):
        a = Auth('https://auth', 'username', 'password')

    def validate_login(username, password):
        job = async(queue_get_auth, username, password)
    

    Error I am seeing and repeats until I delete it from the database queued tasks:

    [DEBUG] | 2015-10-19 14:46:05,510 | auth:  Making call for credentials.
    14:46:05 [Q] ERROR reincarnated worker Process-1:3 after death
    14:46:05 [Q] INFO Process-1:9 ready for work at 13576
    14:46:19 [Q] INFO Process-1:4 processing [mango-two-juliet-edward]
    

    Is there a reason why the requests.post would be causing it to fail? How would I debug this? If I run with sync: True it works fine. This is running on Mac OS X 10.11 with an SQLite database.

    Q_CLUSTER = {
        'name': 'auth',
        'workers': 4,
        'recycle': 500,
        'timeout': 60,
        'compress': False,
        'save_limit': 250,
        'queue_limit': 500,
        'sync': False,
        'cpu_affinity': 1,
        'label': 'Django Q',
        'orm': 'default'
    }
    
  • scheduler creating duplicate tasks in multiple cluster environment

    scheduler creating duplicate tasks in multiple cluster environment

    We have a service that uses django-q for asynchronous tasks and it's deployed as 2 instances (2 AWS EC2 servers each running the same django project and each running a django-q cluster to process tasks). We've encountered an issue where the same scheduled task -- scheduled to run once -- gets picked up by each of the clusters in the scheduler (django-q.cluster) and ends up having 2 separate tasks being created.

    Example entries in our logs:

    On server 1:

    2017-04-02 20:25:56,747 - django-q - INFO - Process-1 created a task from schedule [14789]
    2017-04-02 20:25:56,842 - django-q - DEBUG - Pushed ('hamper-india-magnesium-pip', 'f1a1141c1835400ebc4f4b3894922b82')
    

    On server 2:

    2017-04-02 20:25:56,853 - django-q - INFO - Process-1 created a task from schedule [14789]
    2017-04-02 20:25:56,990 - django-q - DEBUG - Pushed ('alpha-william-kansas-apart', '5a4fcadb47674590933415dd5a71e1cc')
    

    Is this the expected behavior or is it a bug?

    What we are looking for is to have a scheduled task create only one async task to execute the action, even in a multi-cluster setup like ours.

    Can you comment on this behavior?

    We're using:

    Django (1.10.6)
    django-q (0.7.18)
    

    Thanks

    -Kevin

  • Worker recycle causes: "ERROR connection already closed"

    Worker recycle causes: "ERROR connection already closed"

    Hello, and thanks for this great Django app! I'm using Python 3.4, Django 1.9.2, the latest django-q and the ORM broker backed by PostgreSQL.

    It appears that when my worker recycles, it loses the ability to talk to the database. Everything works fine up until that point. I've verified it is directly related to the recycle configuration parameter, and changing this changes the placement of the error accordingly.

    The output below shows the issue I'm having while running 1 single worker.

    14:43:16 [Q] INFO Processed [zulu-montana-triple-michigan]
    14:43:16 [Q] INFO recycled worker Process-1:1
    14:43:16 [Q] INFO Process-1:4 ready for work at 22360
    14:43:16 [Q] INFO Process-1:4 processing [maryland-king-queen-table]
    14:43:17 [Q] INFO Process-1:4 processing [summer-mirror-mountain-september]
    14:43:17 [Q] INFO Processed [maryland-king-queen-table]
    14:43:17 [Q] INFO Process-1:4 processing [illinois-finch-orange-sodium]
    14:43:17 [Q] ERROR server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request.

    Can you provide any guidance?

    Thanks

  • SSL errors after upgrading to qcluster version 1.1.0

    SSL errors after upgrading to qcluster version 1.1.0

    Hi,

    Is anyone else having SSL errors when using the new version (1.1.0)? I tried upgrading it in production, but started to get "django.db.utils.OperationalError: SSL error: decryption failed or bad record mac" whenever Django's ORM performs a query from within a django-q task (traceback below).

    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 379, in worker
        res = f(*task['args'], **task['kwargs'])
      File "/home/docker/src/tasks.py", line 8, in wake_up_driver_app
        for company in Company.objects.all():
      File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 274, in __iter__
        self._fetch_all()
      File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 1242, in _fetch_all
        self._result_cache = list(self._iterable_class(self))
      File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 55, in __iter__
        results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
      File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1133, in execute_sql
        cursor.execute(sql, params)
      File "/usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py", line 446, in execute
        return real_execute(self, sql, params)
      File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 67, in execute
        return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
      File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
        return executor(sql, params, many, context)
      File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
        return self.cursor.execute(sql, params)
      File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__
        raise dj_exc_value.with_traceback(traceback) from exc_value
      File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
        return self.cursor.execute(sql, params)
    django.db.utils.OperationalError: SSL error: decryption failed or bad record mac
    

    Configuration:

    • broker: AWS SQS
    • database: AWS RDS Postgres 11.5
    • Django version: 2.2.10
    • Qcluster version: 1.1.0

    I found an old similar issue: https://github.com/Koed00/django-q/issues/79 but it seems to have been solved.

    Does anyone have a clue on how to investigate this issue? For now I'm keeping the previous version (1.0.2), which doesn't have those issues, but I need some of the fixes that are part of the 1.1.0 release.

    Thanks in advance!

  • Change schedule function to update or create new

    Change schedule function to update or create new

    By searching for the function we can update existing schedules.

    During development I noticed that every time I changed a parameter of my scheduled function, a new entry was inserted into the database, giving me multiple schedules for a single function.

    I haven't updated any tests; if you want, I can add a test for this 'new' feature.
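    Since schedules are regular Django models (as the Schedule section above notes), the update-or-create behaviour this PR describes can be sketched with Django's stock update_or_create; the schedule name 'nightly-copysign' here is a hypothetical example:

    ```python
    from django_q.models import Schedule

    # look the schedule up by name so editing its parameters
    # updates the existing row instead of inserting a new one
    Schedule.objects.update_or_create(
        name='nightly-copysign',
        defaults={
            'func': 'math.copysign',
            'args': '2,-2',
            'schedule_type': Schedule.DAILY,
        },
    )
    ```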

  • Add Sentry support

    Add Sentry support

    I see Rollbar is built in. Could you add support for Sentry too?

    Alternatively, you could pull out Rollbar into its own django-q-rollbar repo (and perhaps use extras so someone can add django-q[rollbar] to their requirements.txt), and expose a generic error handling interface so others can add their own.

    Really enjoying this project btw 😄

  • Django-q calls task twice or more

    Django-q calls task twice or more

    My background process is called twice (or more) but I'm really sure that should not be happening. My settings for Django Q:

    Q_CLUSTER = {
        'name': 'cc',
        'recyle': 10,
        'retry': -1,
        'workers': 2,
        'save_limit': 0,
        'orm': 'default'
    }
    

    My test task function:

    def task_test_function(email, user):
        print('test')
    

    calling it from the commandline:

    > python manage.py shell
    >>> from django_q.tasks import async
    >>> async('task_test_function', 'email', 'user')
    '9a0ba6b8bcd94dc1bc129e3d6857b5ee'
    

    Starting qcluster (after that I called the async)

    > python manage.py qcluster
    13:48:08 [Q] INFO Q Cluster-33552 starting.
    ...
    13:48:08 [Q] INFO Q Cluster-33552 running.
    13:48:34 [Q] INFO Process-1:2 processing [mobile-utah-august-indigo]
    test
    13:48:34 [Q] INFO Process-1:1 processing [mobile-utah-august-indigo]
    test
    13:48:34 [Q] INFO Processed [mobile-utah-august-indigo]
    13:48:34 [Q] INFO Processed [mobile-utah-august-indigo]
    ...
    

    And the function is called twice... For most functions I wouldn't really care if they ran twice (or more), but I have a task that calls send_mail, and people that are invited receive 2 or more mails...

    Is this a bug in Django Q or in my logic?
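    One possible cause (an assumption, not a confirmed diagnosis): with the ORM broker, a retry window that is shorter than a task's runtime lets a second worker pick the same message up again, and the config above sets 'retry': -1 (note also the 'recyle' typo, which means the recycle option is silently ignored). The documentation advises keeping retry larger than timeout; a sketch of adjusted settings:

    ```python
    Q_CLUSTER = {
        'name': 'cc',
        'workers': 2,
        'save_limit': 0,
        'orm': 'default',
        'timeout': 60,   # kill tasks running longer than this
        'retry': 120,    # re-queue window; keep it larger than timeout
    }
    ```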

  • [Error] select_for_update happening when using replica(read-only) and default(write-only) DB.

    [Error] select_for_update happening when using replica(read-only) and default(write-only) DB.

    I've been getting a select_for_update cannot be used outside of a transaction error when using a replica set in my applications.

    Here are my settings

    DATABASES = {
        "default": {
            "ENGINE": os.getenv("DB_ENGINE"),
            "NAME": os.getenv("DB_NAME"),
            "USER": os.environ.get("DB_USER"),
            "HOST": os.environ.get("DB_HOST"),
            "PORT": os.environ.get("DB_PORT"),
            "PASSWORD": os.environ.get("DB_PASSWORD"),
        },
        "replica": {
            "ENGINE": os.getenv("DB_ENGINE"),
            "NAME": os.getenv("DB_NAME_REPLICA"),
            "USER": os.environ.get("DB_USER_REPLICA"),
            "HOST": os.environ.get("DB_HOST_REPLICA"),
            "PORT": os.environ.get("DB_PORT_REPLICA"),
            "PASSWORD": os.environ.get("DB_PASSWORD_REPLICA"),
        }
    }
    
    Q_CLUSTER = {
        "name": "myscheduler",
        "orm": "default",  # Use Django's ORM + database for broker
        ....
    }
    

    My database router currently uses the replica only to read and the default just to write.

    class DatabaseRouter:
    
        def db_for_read(self, model, **hints):
            """Always read from REPLICA database"""
            return "replica"
    
        def db_for_write(self, model, **hints):
            """Always write to DEFAULT database"""
            return "default"
            
        def allow_relation(self, obj1, obj2, **hints):
            """Objects from REPLICA and DEFAULT are de same, then True always"""
            return True
    
        def allow_migrate(self, db, app_label, model_name=None, **hints):
            """Only DEFAULT database"""
            return db == "default"
    

    I've been digging through the code (awesome documentation btw!) and found that during task creation, the scheduler function forces the database used in the transaction block.

    # Here it seems to force the usage; in this case it will be the replica database.
    with db.transaction.atomic(using=Schedule.objects.db):
        for s in (
            Schedule.objects.select_for_update()
            .exclude(repeats=0)
            .filter(next_run__lt=timezone.now())
            .filter(db.models.Q(cluster__isnull=True) | db.models.Q(cluster=Conf.PREFIX))
        ):
    

    Is there a reason for this behaviour? I couldn't really understand why, since removing the using argument from the transaction block made it work like a charm, reading only from the replica and writing only to default.
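    One way to sidestep the error without patching django-q (an assumption based on the analysis above, not a confirmed fix) is to pin django_q's own models to the writable database in the router, so select_for_update() never runs against the read-only replica. A sketch:

    ```python
    class PinnedDatabaseRouter:
        """Route django_q's own models to the writable database;
        everything else keeps the read-replica / write-default split."""

        def db_for_read(self, model, **hints):
            # django_q must read from the database it writes to,
            # otherwise select_for_update() hits the replica
            if model._meta.app_label == "django_q":
                return "default"
            return "replica"

        def db_for_write(self, model, **hints):
            return "default"

        def allow_relation(self, obj1, obj2, **hints):
            return True

        def allow_migrate(self, db, app_label, model_name=None, **hints):
            return db == "default"
    ```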

    Dependencies

    • python = 3.9.5
    • Django = 3.1.7
    • psycopg2-binary = 2.8.6
    • django-q = 1.3.6

  • Unable to run migrations for django-q when using CockroachDB

    Unable to run migrations for django-q when using CockroachDB

    Currently, django-q does not run with CockroachDB as a database backend.

    Steps to repeat:

    1. Setup a Cockroach Labs database, and new Django app: https://www.cockroachlabs.com/docs/stable/build-a-python-app-with-cockroachdb-django.html
    2. Install django-q as usual
    3. Run manage.py migrate

    You'll receive the error:

    django.db.utils.ProgrammingError: column "id" is referenced by the primary key
    

    This seems to be because this migration attempts to drop the primary key, but CockroachDB won't allow that to happen: https://github.com/Koed00/django-q/blob/master/django_q/migrations/0003_auto_20150708_1326.py#L23-L25

    My Django migration skills are not yet at ninja status 🥷 , but I'm looking for any potential ways to run the migration scripts without dropping the PK. Any ideas would be appreciated.

  • Bump django from 3.2.4 to 3.2.13

    Bump django from 3.2.4 to 3.2.13

    Bumps django from 3.2.4 to 3.2.13.

    Commits
    • 08e6073 [3.2.x] Bumped version for 3.2.13 release.
    • 9e19acc [3.2.x] Fixed CVE-2022-28347 -- Protected QuerySet.explain(**options) against...
    • 2044dac [3.2.x] Fixed CVE-2022-28346 -- Protected QuerySet.annotate(), aggregate(), a...
    • bdb92db [3.2.x] Fixed #33628 -- Ignored directories with empty names in autoreloader ...
    • 70035fb [3.2.x] Added stub release notes for 3.2.13 and 2.2.28.
    • 7e7ea71 [3.2.x] Reverted "Fixed forms_tests.tests.test_renderers with Jinja 3.1.0+."
    • 610ecc9 [3.2.x] Fixed forms_tests.tests.test_renderers with Jinja 3.1.0+.
    • 754af45 [3.2.x] Fixed typo in release notes.
    • 6f30916 [3.2.x] Added CVE-2022-22818 and CVE-2022-23833 to security archive.
    • 1e6b555 [3.2.x] Post-release version bump.
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


  • How are you?

    How are you?

    This is not an issue; I love this package. I discovered it 2 years ago and have been using it ever since. I just realized that your last online activity (GitHub and Twitter) was about a year ago, and I was wondering if everything was OK. I hope you are doing well. Thanks.

  • fromisoformat: argument must be str

    fromisoformat: argument must be str

    Getting this error "fromisoformat: argument must be str" when attempting to add a schedule.

        schedule(make_task,
                 3,
                 schedule_type=Schedule.MINUTES,
                 minutes=1,
                 repeats=3,
                 next_run=arrow.utcnow())
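    A likely explanation (an assumption based on the error text, not a confirmed diagnosis): next_run is handed an arrow.Arrow object where the model's DateTimeField expects a datetime. Unwrapping the Arrow object, or building a plain timezone-aware datetime, avoids the failing conversion:

    ```python
    from datetime import datetime, timezone

    # pass a real datetime to next_run, not an arrow.Arrow wrapper
    # (arrow.utcnow().datetime would unwrap one)
    next_run = datetime.now(timezone.utc)

    # schedule(make_task, 3,
    #          schedule_type=Schedule.MINUTES,
    #          minutes=1, repeats=3,
    #          next_run=next_run)
    ```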
    
  • Tasks pile up in queue randomly

    Tasks pile up in queue randomly

    Something odd has been happening recently: tasks pile up in Django-Q (the queued tasks table) while some of them still get processed successfully. The hardware is Amazon Linux 2 on arm64. Any hints on what could cause the issue?
