give up the tempest test for healthmonitor; use a scenario test instead
i found one thing which i am not sure about
[12:10] <mwang2> so when i create a health monitor, no matter whether its admin_state_up flag is true or false, its provisioning status is always ACTIVE, is this on purpose?
[12:11] <blogan> well, provisioning status shouldn't reflect any changes in admin_state_up
[12:11] <blogan> just operating status, if there is one
[12:12] <mwang2> for healthmonitor, it does not have one
[12:17] <mwang2> this affects the admin_state_up test for the health monitor; since the health monitor does not have an operating status, how can we test this flag?
[12:17] <mwang2> https://review.openstack.org/#/c/191217/7/neutron_lbaas/tests/tempest/v2/api/base.py
[12:18] <blogan> a functional test would have to be used then, one that ensures that if that gets set to False, the members of the pool come back into rotation
[12:18] <blogan> well, pool members that were pulled out of rotation
[12:19] <mwang2> sorry, i don't get you, can you explain more?
[12:20] <blogan> so if a health monitor is running on a pool, it will pull pool members out of rotation (out of being load balanced) if a pool member goes down
[12:21] <blogan> if you then set the admin_state_up field to False on the health monitor, those pool members should start being load balanced again
[12:21] <blogan> and basically monitoring of pool members should stop
[12:24] <mwang2> what do you mean by being load balanced again?
[12:25] <blogan> i mean the load balancer will forward traffic to them, because if the health monitor detects that they're down, it will tell the load balancer not to send traffic to them
[12:25] <blogan> with the health monitor disabled, this shouldn't happen
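The behavior blogan describes can be sketched as a toy model. This is not neutron-lbaas code and the function name is hypothetical; it just illustrates the rule under discussion: with the monitor enabled, unhealthy members are pulled out of rotation, and with admin_state_up=False on the monitor, checks stop and every member receives traffic again.

```python
def members_in_rotation(members, monitor_admin_state_up):
    """Return the names of members the load balancer should forward traffic to.

    members: list of (name, healthy) tuples.
    monitor_admin_state_up: whether the health monitor is enabled.
    """
    if not monitor_admin_state_up:
        # Monitor disabled: no health checks run, so nothing is pulled out
        # of rotation, even members previously marked as down.
        return [name for name, _healthy in members]
    # Monitor enabled: only members passing health checks stay in rotation.
    return [name for name, healthy in members if healthy]


pool = [("member1", True), ("member2", False)]
print(members_in_rotation(pool, monitor_admin_state_up=True))   # ['member1']
print(members_in_rotation(pool, monitor_admin_state_up=False))  # ['member1', 'member2']
```

A scenario test along these lines would assert the second case: after setting the monitor's admin_state_up to False, a previously-down member starts receiving traffic again.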
[12:27] <mwang2> you know what, this is actually an error i am facing: when the health monitor's admin_state_up is false while the lb, listener, and pool's admin_state_up are all true, it gave me a server error
[12:28] <blogan> when does it give you a server error?
[12:28] <mwang2> hang on
[12:28] <mwang2> let me find it for you
[12:30] <mwang2> https://gist.github.com/mallow111/11dda3a722565651ba74
[12:31] <blogan> is this failure happening in master?
[12:31] <mwang2> no
[12:31] <mwang2> in my tempest test patch for the health monitor
[12:32] <blogan> ah okay
[12:32] <mwang2> so this morning, after restacking, deleting the neutron-lbaas folder, recloning the new version, and restarting q-svc, the master branch works fine
[12:32] <blogan> well, you'd have to look at the api logs and see what the traceback was there
[12:32] <mwang2> get_load_balancer_status_tree(self.load_balancer_id))
[12:34] <mwang2> let me ask you this question: when only the healthmonitor's admin_state_up is false, while the rest are all true, what is supposed to happen?
[12:35] <blogan> well, healthmonitor is a leaf node, so its admin_state_up shouldn't affect anyone else's
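The "leaf node" point can be illustrated with a toy model of the loadbalancer -> listener -> pool -> healthmonitor chain (again, not the actual neutron-lbaas status-tree code; the names and the propagation rule are assumptions for illustration). A node is treated as effectively enabled only if it and all of its ancestors have admin_state_up=True, so disabling the healthmonitor leaf affects no other object.

```python
# child -> parent; the loadbalancer is the root and has no entry.
TREE = {
    "listener": "loadbalancer",
    "pool": "listener",
    "healthmonitor": "pool",
}


def effectively_enabled(node, admin_state_up):
    """True iff node and every ancestor have admin_state_up set to True."""
    while node is not None:
        if not admin_state_up[node]:
            return False
        node = TREE.get(node)  # None once we pass the root
    return True


# The situation from the chat: only the healthmonitor is disabled.
states = {"loadbalancer": True, "listener": True,
          "pool": True, "healthmonitor": False}
for n in ("loadbalancer", "listener", "pool", "healthmonitor"):
    print(n, effectively_enabled(n, states))
# Only the healthmonitor comes out False; being a leaf, its flag
# cascades to nothing else.
```

By contrast, flipping a non-leaf like the listener to False would take the pool and healthmonitor out of service too, which is why only non-leaf objects make interesting admin_state_up API tests.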
[12:36] <mwang2> in this case, do we still need an admin_state_up tempest test for healthmonitor or not?
[12:38] <blogan> not for what y'all are doing
[12:38] <blogan> a scenario test (not testscenarios)
[12:38] <blogan> maybe
[12:38] <blogan> low priority though
[12:39] <mwang2> https://review.openstack.org/#/c/178827/
[12:40] <mwang2> can you put your comment in this patch?
[12:42] <blogan> done
[12:42] <mwang2> thanks a lot, i really appreciate your explanation
[12:42] <mwang2> makes things clearer
[12:44] <blogan> no problem
[12:44] <blogan> i should have noticed this in the first place, but sometimes you don't see the obvious things
[12:44] <mwang2> it's ok, it's a learning curve :)
[12:45] <mwang2> glad we found it now
[12:46] <blogan> yeah, me too