Requested items to be solved with this feature
- Searching on hostvars ansible/awx#371
- Ability to filter natively
- Filtering on "resolved" hostvars
- Smart inventories don't contain groups ansible/awx#1999
diff --git a/awx/api/views/__init__.py b/awx/api/views/__init__.py
index 609e88e155..141820ce73 100644
--- a/awx/api/views/__init__.py
+++ b/awx/api/views/__init__.py
@@ -2745,6 +2745,11 @@ class WorkflowJobNodeList(ListAPIView):
     serializer_class = serializers.WorkflowJobNodeListSerializer
     search_fields = ('unified_job_template__name', 'unified_job_template__description')
+    def get_queryset(self):
+        parent = self.get_parent_object()
SELECT "main_unifiedjob"."id", | |
"main_unifiedjob"."polymorphic_ctype_id", | |
"main_unifiedjob"."modified", | |
"main_unifiedjob"."description", | |
"main_unifiedjob"."created_by_id", | |
"main_unifiedjob"."modified_by_id", | |
"main_unifiedjob"."name", | |
"main_unifiedjob"."execution_environment_id", | |
"main_unifiedjob"."old_pk", | |
       "main_unifiedjob"."emitted_events",
print("\x00")
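For context on the NUL byte above: Postgres TEXT columns cannot store `\x00`, so any event stdout containing it has to be stripped before insert. A minimal sketch, assuming a sanitize-before-insert approach (the function name is mine, not AWX's):

```python
def sanitize_stdout(text: str) -> str:
    """Strip NUL bytes, which Postgres TEXT columns reject."""
    return text.replace("\x00", "")

print(repr(sanitize_stdout("before\x00after")))
```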
import os
import io
import time
import stat
from ansible_runner.utils.streaming import stream_dir

# source_directory = '/home/alancoding/repos/awx'
source_directory = '.'

# Assumed usage of ansible-runner's API: stream_dir zips the
# directory into a writable binary stream
buf = io.BytesIO()
stream_dir(source_directory, buf)
print(len(buf.getvalue()))
Setup
/api/v2/jobs/586/job_events/?not__stdout=&order_by=counter&page=1&page_size=50
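The query string above can be decomposed with the standard library to see the filter parameters; not__stdout= is AWX's negated field lookup, here excluding events whose stdout is the empty string (my reading of the filter, not a statement from the source):

```python
from urllib.parse import urlsplit, parse_qs

url = "/api/v2/jobs/586/job_events/?not__stdout=&order_by=counter&page=1&page_size=50"
# keep_blank_values=True preserves the empty not__stdout= parameter
params = parse_qs(urlsplit(url).query, keep_blank_values=True)
print(params)
```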
With that endpoint, we run several scenarios, and each scenario issues 2 queries: the first query gets the count and the second query gets the objects.
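The count-plus-page pattern described above can be sketched with sqlite3 (the table and data are illustrative, not AWX's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job_event (counter INTEGER, stdout TEXT)")
conn.executemany("INSERT INTO job_event VALUES (?, ?)",
                 [(i, f"line {i}") for i in range(1, 121)])

page, page_size = 1, 50
# Query 1: the count, used for pagination metadata
(count,) = conn.execute("SELECT COUNT(*) FROM job_event").fetchone()
# Query 2: the actual page of objects
rows = conn.execute(
    "SELECT counter, stdout FROM job_event ORDER BY counter LIMIT ? OFFSET ?",
    (page_size, (page - 1) * page_size),
).fetchall()
print(count, len(rows))  # 120 50
```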
The report from ansible/awx#12176 and elsewhere is a traceback ending with:
django.db.utils.OperationalError: out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
We are trying to reproduce that.
We know that the Postgres default for max_locks_per_transaction is 64.
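Per the Postgres documentation, the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks, and the "out of shared memory" error fires when it fills up. A quick calculation with stock defaults:

```python
def lock_table_capacity(max_locks_per_transaction=64,
                        max_connections=100,
                        max_prepared_transactions=0):
    # Postgres sizes the shared lock table as
    # max_locks_per_transaction * (max_connections + max_prepared_transactions)
    return max_locks_per_transaction * (max_connections + max_prepared_transactions)

print(lock_table_capacity())  # 6400 with stock defaults
```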
Issue: ansible/awx#5765
The current work changes AWX so that our pre-existing TTL cache is used in all services, not just the callback receiver.
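For illustration, a minimal TTL cache sketch; this is not AWX's actual implementation, whose API may differ:

```python
import time

class TTLCache:
    """Minimal TTL cache sketch: entries expire after a fixed lifetime."""

    def __init__(self, ttl=5.0):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict on read
            return default
        return value

cache = TTLCache(ttl=0.05)
cache.set("x", 1)
print(cache.get("x"))  # 1 while fresh
```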
Our configure-tower-in-tower settings system modifies the normal Django settings. If you inspect settings, you find it is the normal Django LazySettings type.
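A toy sketch of the lazy-proxy pattern that LazySettings uses, where configuration is loaded on first attribute access rather than at import time (Django's real class has much more machinery; all names here are illustrative):

```python
from types import SimpleNamespace

class LazySettings:
    """Toy lazy-settings proxy: resolve the wrapped object on first access."""

    def __init__(self, loader):
        self._loader = loader
        self._wrapped = None

    def __getattr__(self, name):
        # Only called for names not found normally, i.e. real settings
        if self._wrapped is None:
            self._wrapped = self._loader()
        return getattr(self._wrapped, name)

def _load():
    # Stand-in for reading a settings module
    return SimpleNamespace(DEBUG=False, CACHE_TTL=30)

settings = LazySettings(_load)
print(settings.CACHE_TTL)  # 30
```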
We have 3 types of notifications: started, successful, and failed. We should always send the started notification, and either the successful or the failed notification, for all jobs (and all job types). The started notification is simple, but we have problems consistently sending the successful/failed (the final) notification.
The final notifications are sent when a particular event from a job is processed by the callback receiver. That event is:
I will refer to this event as the wrapup event for clarity.
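The dispatch can be sketched as below; the handler is hypothetical (function names and the event-matching logic are my assumptions, and the wrapup event name is left as a parameter since the source does not restate it here):

```python
def handle_event(event, wrapup_event_name, send_notification):
    """Hypothetical sketch: when the callback receiver processes the job's
    wrapup event, fire the final (successful/failed) notification."""
    if event.get("event") == wrapup_event_name:
        status = "failed" if event.get("failed") else "successful"
        send_notification(status)
        return True
    return False

sent = []
handle_event({"event": "wrapup", "failed": False}, "wrapup", sent.append)
print(sent)  # ['successful']
```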
[
  {
    "uuid": "170b9c04-0787-4bf1-bf53-7bec570d020c",
    "counter": 1,
    "stdout": "",
    "start_line": 0,
    "end_line": 0,
    "runner_ident": "e5cac63c-1605-43fb-a9c0-d825822bd356",
    "event": "playbook_on_start",
    "pid": 2189988,
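An event payload like the one above can be inspected with the standard json module; note the empty stdout field, which is exactly what a not__stdout= filter would exclude (a subset of the fields above, for illustration):

```python
import json

raw = """{
  "uuid": "170b9c04-0787-4bf1-bf53-7bec570d020c",
  "counter": 1,
  "stdout": "",
  "start_line": 0,
  "end_line": 0,
  "event": "playbook_on_start"
}"""

event = json.loads(raw)
print(event["event"], event["counter"], event["stdout"] == "")
```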