From 96470399a5a6fba864b90afd15eda43cdc8c8ac4 Mon Sep 17 00:00:00 2001
From: Sultan Alsawaf <sultan@openresty.com>
Date: Wed, 30 Dec 2020 15:42:11 -0800
Subject: [PATCH] task_finder2: fix list corruption in
 __stp_tf_cancel_all_task_work()

The previous commit (b26b4e2c2 "task_finder2: fix panics due to broken
task work cancellation") made it possible for the next node in the task
work list to be freed, which makes list_for_each_entry_safe() not so
safe anymore. Using list_for_each_entry_safe() is still the fastest
approach here, so when the next node in the list happens to have been
freed, we simply restart iteration of the list.
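As an illustration (not part of the patch), here is a minimal userspace
sketch of the hazard and of the restart pattern used below. The list
helpers are simplified stand-ins for the kernel's <linux/list.h>, and
cancel_matching() is a hypothetical stand-in for the
stp_task_work_cancel() lookup, which may return an entry other than the
one the loop is currently visiting:

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's <linux/list.h> helpers. */
struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)
#define list_for_each_entry_safe(pos, n, head, member)			\
	for (pos = list_entry((head)->next, typeof(*pos), member),	\
	     n = list_entry(pos->member.next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = n, n = list_entry(n->member.next, typeof(*n), member))

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* Stand-in for struct __stp_tf_task_work. */
struct tf_work { struct list_head list; int id; };

static struct list_head work_list = LIST_HEAD_INIT(work_list);

/*
 * Hypothetical stand-in for the stp_task_work_cancel() lookup: it may
 * hand back an entry other than 'node' -- here, the entry right after
 * it -- which is exactly the case that breaks the cached next pointer.
 */
static struct tf_work *cancel_matching(struct tf_work *node)
{
	if (node->list.next != &work_list)
		return container_of(node->list.next, struct tf_work, list);
	return node;
}

static void cancel_all(void)
{
	struct tf_work *node, *tmp;

restart:
	list_for_each_entry_safe(node, tmp, &work_list, list) {
		struct tf_work *victim = cancel_matching(node);

		printf("freeing %d\n", victim->id);
		list_del(&victim->list);
		free(victim);

		/*
		 * 'tmp' caches the next node; if we just freed it, the
		 * next loop step would walk freed memory, so restart.
		 */
		if (victim == tmp)
			goto restart;
	}
}

int main(void)
{
	for (int i = 0; i < 4; i++) {
		struct tf_work *w = malloc(sizeof(*w));
		w->id = i;
		list_add_tail(&w->list, &work_list);
	}
	cancel_all();
	return 0;
}

In this sketch the entry freed on each pass is the iterator's cached
next node, so without the "victim == tmp" check and the restart the
walk would step through freed memory.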
---
runtime/linux/task_finder2.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/runtime/linux/task_finder2.c b/runtime/linux/task_finder2.c
index ecf1f77fd..83fc17b5e 100644
--- a/runtime/linux/task_finder2.c
+++ b/runtime/linux/task_finder2.c
@@ -225,6 +225,7 @@ static void __stp_tf_cancel_all_task_work(void)
 
 	// Cancel all remaining requests.
 	stp_spin_lock_irqsave(&__stp_tf_task_work_list_lock, flags);
+restart:
 	list_for_each_entry_safe(node, tmp, &__stp_tf_task_work_list, list) {
 		struct __stp_tf_task_work *tf_work;
 		struct task_work *work;
@@ -242,6 +243,21 @@ static void __stp_tf_cancel_all_task_work(void)
 		tf_work = container_of(work, typeof(*tf_work), work);
 		list_del(&tf_work->list);
 		_stp_kfree(tf_work);
+
+		/*
+		 * If the tf_work we just freed was the next node in the list,
+		 * then we need to restart the list iteration because
+		 * list_for_each_entry_safe() can't cope with the next node
+		 * being freed. We still need to use list_for_each_entry_safe()
+		 * because we need to get through one successful pass through
+		 * the entire list, since it's not guaranteed that this list
+		 * will be empty when this function exits, as there can still be
+		 * active task workers running, which is fine since the
+		 * stp_task_work API will wait for all task workers to finish
+		 * before allowing the module to unload.
+		 */
+		if (tf_work == tmp)
+			goto restart;
 	}
 	stp_spin_unlock_irqrestore(&__stp_tf_task_work_list_lock, flags);
 }
--
2.29.2