@vijaykramesh
Created Mar 1, 2017
Airflow DAG that locks up, with two of the tasks stuck enqueued
from datetime import datetime, timedelta
import time

from airflow import DAG
from airflow import logging
from airflow.operators.python_operator import PythonOperator

default_args = {'owner': 'vijay',
                'start_date': datetime(2017, 2, 26),
                'email': ['data.engineers@change.org'],
                'email_on_failure': True,
                'email_on_retry': False,
                'retries': 1,
                'retry_delay': timedelta(minutes=5),
                'depends_on_past': False}

# concurrency=2 caps this DAG at two simultaneously running task instances;
# max_active_runs=1 allows only one active DagRun at a time.
dag = DAG(
    'test_lock',
    schedule_interval='@daily',
    max_active_runs=1,
    concurrency=2,
    default_args=default_args)


def some_callable(**kwargs):
    # Sleep long enough that the remaining tasks hit the concurrency limit.
    logging.warn("doing a task on %s" % kwargs['task_instance'])
    time.sleep(120)


# Four identical, independent tasks: only two may run at once.
t1 = PythonOperator(task_id='t1',
                    python_callable=some_callable,
                    provide_context=True,
                    dag=dag)

t2 = PythonOperator(task_id='t2',
                    python_callable=some_callable,
                    provide_context=True,
                    dag=dag)

t3 = PythonOperator(task_id='t3',
                    python_callable=some_callable,
                    provide_context=True,
                    dag=dag)

t4 = PythonOperator(task_id='t4',
                    python_callable=some_callable,
                    provide_context=True,
                    dag=dag)
@vijaykramesh commented Mar 1, 2017

When this DAG is started, two of the tasks begin immediately (due to concurrency=2) and the other two are immediately marked queued but never move out of that state. The logs for the two that get stuck in queued show:

[2017-03-01 05:14:22,131] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 05:14:22,130] {models.py:168} INFO - Filling up the DagBag from /damper/dags/test_lock.py
[2017-03-01 05:14:22,207] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 05:14:22,207] {models.py:1128} INFO - Dependencies all met for <TaskInstance: test_lock.t2 2017-02-26 00:00:00 [queued]>
[2017-03-01 05:14:22,220] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 05:14:22,220] {models.py:1122} INFO - Dependencies not met for <TaskInstance: test_lock.t2 2017-02-26 00:00:00 [queued]>, dependency 'Task Instance Slots Available' FAILED: The maximum number of running tasks (test_lock) for this task's DAG '2' has been reached.
[2017-03-01 05:14:23,879] {jobs.py:2062} INFO - Task exited with return code 0
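
For reference, the stuck instances are visible directly in the metadata database. A minimal sketch for inspecting them (this assumes the 1.8-era ORM; the query is illustrative, not part of the DAG above):

# Minimal sketch (1.8-era Airflow ORM assumed): list this DAG's task
# instances that are sitting in the QUEUED state.
from airflow import settings
from airflow.models import TaskInstance
from airflow.utils.state import State

session = settings.Session()
stuck = (session.query(TaskInstance)
         .filter(TaskInstance.dag_id == 'test_lock',
                 TaskInstance.state == State.QUEUED)
         .all())
for ti in stuck:
    print(ti)  # e.g. <TaskInstance: test_lock.t2 ... [queued]>
session.close()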

My guess is that there are multiple scheduler processes all attempting to enqueue the tasks. There may be some check when the task actually goes to start (I think here, perhaps being called from here) that prevents it from actually running, but because it was already enqueued it never gets picked up to run again? A toy walk-through of this hypothesis is sketched below.
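
A toy, deterministic walk-through of that hypothesis (plain Python, not Airflow code; every name here is made up for illustration):

# Toy illustration (not Airflow code): the scheduler queues the task, the
# worker-side dependency check rejects it, and nothing resets the state.
QUEUED = 'queued'

state = None           # schedulable
running_count = 2      # t1 and t2 already hold both concurrency slots

# Scheduler pass: the schedule-time dep check passed earlier, so the
# task instance is marked QUEUED and handed to the executor.
state = QUEUED

# Worker pass: the pre-run check counts running tasks again.
if running_count >= 2:  # 'Task Instance Slots Available' FAILED
    pass                # subtask exits with return code 0;
                        # state is left as QUEUED

# Subsequent scheduler passes only consider schedulable states, so the
# task instance stays QUEUED until the scheduler is restarted.
print(state)  # 'queued'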

@vijaykramesh commented Mar 1, 2017

With more verbose logging, it seems like the second check is failing, but the status doesn't get moved back correctly?

[2017-03-01 06:53:07,831] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,831] {models.py:373} DEBUG - Loaded DAG <DAG: test_lock>
[2017-03-01 06:53:07,871] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,871] {models.py:1148} DEBUG - <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]> dependency 'Task Instance State' PASSED: True, Task state queued was valid.
[2017-03-01 06:53:07,872] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,871] {models.py:1148} DEBUG - <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2017-03-01 06:53:07,872] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,871] {models.py:1148} DEBUG - <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2017-03-01 06:53:07,873] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,872] {models.py:1148} DEBUG - <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2017-03-01 06:53:07,873] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,872] {models.py:1128} INFO - Dependencies all met for <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]>
[2017-03-01 06:53:07,874] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,872] {models.py:1148} DEBUG - <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2017-03-01 06:53:07,874] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,873] {models.py:1148} DEBUG - <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2017-03-01 06:53:07,893] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,893] {models.py:1148} DEBUG - <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]> dependency 'Task Instance Slots Available' PASSED: False, The maximum number of running tasks (test_lock) for this task's DAG '2' has been reached.
[2017-03-01 06:53:07,894] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,893] {models.py:1122} INFO - Dependencies not met for <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]>, dependency 'Task Instance Slots Available' FAILED: The maximum number of running tasks (test_lock) for this task's DAG '2' has been reached.
[2017-03-01 06:53:07,894] {base_task_runner.py:95} INFO - Subtask: [2017-03-01 06:53:07,893] {models.py:1148} DEBUG - <TaskInstance: test_lock.t2 2017-03-01 06:52:47 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2017-03-01 06:53:09,687] {jobs.py:186} DEBUG - [heart] Boom.

Note that if I stop the scheduler and restart it, it picks up the enqueued tasks correctly. (A sketch of the state reset that appears to be missing follows.)
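
A hedged sketch of that missing reset (a hypothetical helper, not the actual models.py code, assuming the 1.8-era State enum):

from airflow.utils.state import State

def reset_if_starved(task_instance, session):
    # Hypothetical: if a QUEUED task instance fails the
    # 'Task Instance Slots Available' check, move it back to a
    # schedulable state so a later scheduler pass retries it.
    if task_instance.state == State.QUEUED:
        task_instance.state = State.NONE  # i.e. None, schedulable again
        session.merge(task_instance)
        session.commit()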

@vijaykramesh commented Mar 1, 2017

It seems like in the first check, when the task instance gets marked State.QUEUED, the scheduler also runs self.executor.queue_command(…):
https://github.com/apache/incubator-airflow/blob/master/airflow/jobs.py#L1102
whereas in the second check, if the task instance gets marked State.QUEUED, it doesn't:
https://github.com/apache/incubator-airflow/blob/master/airflow/models.py#L1300

So my initial thought was to make that second check actually queue the command in the executor. But even simpler: if the DagRun.get_running_tasks() check includes State.QUEUED task instances, then we never even need to get to the second check, because this will trip first (see the sketch below):
https://github.com/apache/incubator-airflow/blob/master/airflow/jobs.py#L1057

That seems a bit wrong though, as the second check at https://github.com/apache/incubator-airflow/blob/master/airflow/models.py#L1300 could still put things in an odd state.
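
For concreteness, a minimal sketch of that get_running_tasks() change (a method on DagRun; the query follows the 1.8-era models, but the exact body is my assumption, not a committed fix):

from airflow.utils.state import State

def get_running_tasks(self, session):
    # Sketch: count QUEUED as well as RUNNING task instances toward
    # dag.concurrency, so the scheduler's first check trips before it
    # queues more tasks than can ever start.
    from airflow.models import TaskInstance as TI
    return (session.query(TI)
            .filter(TI.dag_id == self.dag_id,
                    TI.state.in_([State.RUNNING, State.QUEUED]))
            .all())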
