@p-mongo
Created May 1, 2020 23:43
Tenex Paste
(3) carbon% ~/apps/tnex/script/make-patch-build -p drivers-atlas-testing
/home/w/apps/tnex/script/make-patch-build:21: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/home/w/apps/tnex/lib/fe/patch_build_maker.rb:11: warning: The called method `run' is defined here
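The two warnings above are Ruby 2.7's keyword-argument separation deprecation. A minimal sketch of the cause and of the `**` fix the warning suggests (the method and option names here are illustrative, not the actual `patch_build_maker.rb` code):

```ruby
# On Ruby 2.7, passing a bare hash to a method that declares keyword
# parameters emits "Using the last argument as keyword parameters is
# deprecated"; Ruby 3 raises ArgumentError instead. An explicit double-splat
# at the call site, as the warning suggests, is correct on both.
def run(name, **options)
  [name, options]
end

opts = { project: 'drivers-atlas-testing' }

# run('build', opts)   # Ruby 2.7: deprecation warning; Ruby 3: ArgumentError
run('build', **opts)   # explicit ** -- no warning on any version
```
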
No rebase in progress?
HEAD is now at 0fefcc7 Add files produced by python runner to gitignore (#55)
Already on 'master'
Your branch is up to date with 'origin/master'.
HEAD is now at 0fefcc7 Add files produced by python runner to gitignore (#55)
Sending payload: {"project":"drivers-atlas-testing","desc":null,"patch":"diff --git a/.evergreen/config.yml b/.evergreen/config.yml\nindex 4fea2d4..ea7c208 100644\n--- a/.evergreen/config.yml\n+++ b/.evergreen/config.yml\n@@ -51,7 +51,7 @@ functions:\n continue_on_err: true # Because script may not exist OR platform may not be *nix.\n add_expansions_to_env: true\n command: |\n- .evergreen/${DRIVER_DIRNAME}/install-driver.sh\n+ integrations/${DRIVER_DIRNAME}/install-driver.sh\n # Install driver on Windows.\n - command: subprocess.exec\n params:\n@@ -59,7 +59,7 @@ functions:\n continue_on_err: true # Because script may not exist OR platform may not be Windows.\n add_expansions_to_env: true\n command: |\n- C:/cygwin/bin/sh .evergreen/${DRIVER_DIRNAME}/install-driver.sh\n+ C:/cygwin/bin/sh integrations/${DRIVER_DIRNAME}/install-driver.sh\n \n \"run test\":\n # Run the Atlas Planned Maintenance Test.\n@@ -74,7 +74,7 @@ functions:\n ATLAS_API_PASSWORD: ${atlas_secret}\n add_expansions_to_env: true\n command: |\n- astrolabevenv/${PYTHON_BIN_DIR}/astrolabe spec-tests run-one tests/${TEST_NAME}.yaml -e .evergreen/${DRIVER_DIRNAME}/workload-executor\n+ astrolabevenv/${PYTHON_BIN_DIR}/astrolabe spec-tests run-one tests/${TEST_NAME}.yaml -e integrations/${DRIVER_DIRNAME}/workload-executor\n \n \"delete test cluster\":\n # Delete the cluster that was used to run the test.\n@@ -146,14 +146,12 @@ axes:\n DRIVER_REPOSITORY: \"https://github.com/mongodb/mongo-python-driver\"\n DRIVER_REVISION: \"master\"\n PYMONGO_VIRTUALENV_NAME: \"pymongotestvenv\"\n- - id: pymongo-3.10.x\n- display_name: \"PyMongo (3.10.x)\"\n+ - id: ruby-master\n+ display_name: \"Ruby (master)\"\n variables:\n- DRIVER_DIRNAME: \"python/pymongo\"\n- DRIVER_REPOSITORY: \"https://github.com/mongodb/mongo-python-driver\"\n- DRIVER_REVISION: \"3.10.1\"\n- PYMONGO_VIRTUALENV_NAME: \"pymongotestvenv\"\n-\n+ DRIVER_DIRNAME: \"ruby\"\n+ DRIVER_REPOSITORY: \"https://github.com/mongodb/mongo-ruby-driver\"\n+ 
DRIVER_REVISION: \"master\"\n \n # The 'platform' axis specifies the evergreen host distro to use.\n # Platforms MUST specify the PYTHON3_BINARY variable as this is required to install astrolabe.\n@@ -167,6 +165,13 @@ axes:\n variables:\n PYTHON3_BINARY: \"/opt/python/3.7/bin/python3\"\n PYTHON_BIN_DIR: \"bin\"\n+ - id: ubuntu-18.04\n+ display_name: \"Ubuntu 18.04\"\n+ run_on: ubuntu1804-test\n+ batchtime: 10080 # 7 days\n+ variables:\n+ PYTHON3_BINARY: \"/opt/python/3.7/bin/python3\"\n+ PYTHON_BIN_DIR: \"bin\"\n - id: windows-64\n display_name: \"Windows 64\"\n run_on: windows-64-vsMulti-small\n@@ -184,14 +189,6 @@ axes:\n display_name: CPython-2.7\n variables:\n PYTHON_BINARY: \"/opt/python/2.7/bin/python\"\n- - id: python36\n- display_name: CPython-3.6\n- variables:\n- PYTHON_BINARY: \"/opt/python/3.6/bin/python3\"\n- - id: python37\n- display_name: CPython-3.7\n- variables:\n- PYTHON_BINARY: \"/opt/python/3.7/bin/python3\"\n - id: python38\n display_name: CPython-3.8\n variables:\n@@ -200,21 +197,17 @@ axes:\n display_name: CPython-3.7-Windows\n variables:\n PYTHON_BINARY: \"C:/python/Python37/python.exe\"\n+ - id: ruby-2.7\n+ display_name: Ruby 2.7\n+ variables:\n+ RVM_RUBY: ruby-2.7\n \n buildvariants:\n-- matrix_name: \"tests-python\"\n+- matrix_name: \"tests-ruby\"\n matrix_spec:\n- driver: [\"pymongo-master\", \"pymongo-3.10.x\"]\n+ driver: [\"ruby-master\"]\n platform: [\"ubuntu-16.04\"]\n- runtime: [\"python27\", \"python36\", \"python37\", \"python38\"]\n- display_name: \"${driver} ${platform} ${runtime}\"\n- tasks:\n- - \".all\"\n-- matrix_name: \"tests-python-windows\"\n- matrix_spec:\n- driver: [\"pymongo-master\", \"pymongo-3.10.x\"]\n- platform: [\"windows-64\"]\n- runtime: [\"python37-windows\",]\n+ runtime: [\"ruby-2.7\"]\n display_name: \"${driver} ${platform} ${runtime}\"\n tasks:\n - \".all\"\ndiff --git a/.gitignore b/.gitignore\nindex 4fd492e..e12d676 100644\n--- a/.gitignore\n+++ b/.gitignore\n@@ -1,2 +1,5 @@\n **/__pycache__\n 
.idea/\n+astrolabe.egg-info/\n+results.json\n+xunit-output/\ndiff --git a/astrolabe/cli.py b/astrolabe/cli.py\nindex 0f8790b..2d52d77 100644\n--- a/astrolabe/cli.py\n+++ b/astrolabe/cli.py\n@@ -515,5 +515,22 @@ def run_headless(ctx, spec_tests_directory, workload_executor, db_username,\n exit(0)\n \n \n+@spec_tests.command('validate-workload-executor')\n+@WORKLOADEXECUTOR_OPTION\n+@click.option('--connection-string', required=True, type=click.STRING,\n+ help='Connection string for the test MongoDB instance.',\n+ prompt=True)\n+def validate_workload_executor(workload_executor, connection_string,):\n+ \"\"\"\n+ Runs a series of tests to validate a workload executor.\n+ Relies upon a user-provisioned instance of MongoDB to run operations against.\n+ \"\"\"\n+ from astrolabe.workload_executor_validator import validator_factory\n+ import unittest\n+ TestCase = validator_factory(workload_executor, connection_string)\n+ suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCase)\n+ unittest.TextTestRunner().run(suite)\n+\n+\n if __name__ == '__main__':\n cli()\ndiff --git a/astrolabe/exceptions.py b/astrolabe/exceptions.py\nindex d105778..acbf3df 100644\n--- a/astrolabe/exceptions.py\n+++ b/astrolabe/exceptions.py\n@@ -21,4 +21,8 @@ class AstrolabeTestCaseError(AstrolabeBaseError):\n \n \n class PollingTimeoutError(AstrolabeBaseError):\n- pass\n\\ No newline at end of file\n+ pass\n+\n+\n+class WorkloadExecutorError(AstrolabeBaseError):\n+ pass\ndiff --git a/astrolabe/spec_runner.py b/astrolabe/spec_runner.py\nindex 4325e0e..30f2a36 100644\n--- a/astrolabe/spec_runner.py\n+++ b/astrolabe/spec_runner.py\n@@ -12,7 +12,6 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-import json\n import logging\n import os\n from time import sleep\n@@ -156,13 +155,10 @@ class AtlasTestCase:\n self.cluster_name))\n \n # Step-2: run driver workload.\n- LOGGER.info(\"Starting workload executor\")\n 
self.workload_runner.spawn(\n workload_executor=self.config.workload_executor,\n connection_string=self.get_connection_string(),\n- driver_workload=json.dumps(self.spec.driverWorkload))\n- LOGGER.info(\"Started workload executor [PID: {}]\".format(\n- self.workload_runner.pid))\n+ driver_workload=self.spec.driverWorkload)\n \n # Step-3: begin maintenance routine.\n final_config = self.spec.maintenancePlan.final\ndiff --git a/astrolabe/utils.py b/astrolabe/utils.py\nindex 2dcc632..d3504cc 100644\n--- a/astrolabe/utils.py\n+++ b/astrolabe/utils.py\n@@ -26,6 +26,11 @@ import junitparser\n \n from pymongo import MongoClient\n \n+from astrolabe.exceptions import WorkloadExecutorError\n+\n+\n+LOGGER = logging.getLogger(__name__)\n+\n \n class ClickLogHandler(logging.Handler):\n \"\"\"Handler for print log statements via Click's echo functionality.\"\"\"\n@@ -126,15 +131,14 @@ def load_test_data(connection_string, driver_workload):\n # TODO: remove this if...else block after BUILD-10841 is done.\n if sys.platform in (\"win32\", \"cygwin\"):\n import certifi\n- client = MongoClient(connection_string, tlsCAFile=certifi.where())\n- else:\n- client = MongoClient(connection_string, **kwargs)\n+ kwargs['tlsCAFile'] = certifi.where()\n+ client = MongoClient(connection_string, **kwargs)\n \n coll = client.get_database(\n driver_workload.database).get_collection(\n driver_workload.collection)\n coll.drop()\n- coll.insert(driver_workload.testData)\n+ coll.insert_many(driver_workload.testData)\n \n \n class DriverWorkloadSubprocessRunner:\n@@ -144,6 +148,8 @@ class DriverWorkloadSubprocessRunner:\n if sys.platform in (\"win32\", \"cygwin\"):\n self.is_windows = True\n self.workload_subprocess = None\n+ self.sentinel = os.path.join(\n+ os.path.abspath(os.curdir), 'results.json')\n \n @property\n def pid(self):\n@@ -154,20 +160,46 @@ class DriverWorkloadSubprocessRunner:\n return self.workload_subprocess.returncode\n \n def spawn(self, *, workload_executor, connection_string, 
driver_workload):\n- args = [workload_executor, connection_string, driver_workload]\n+ LOGGER.info(\"Starting workload executor subprocess\")\n+\n+ try:\n+ os.remove(self.sentinel)\n+ LOGGER.debug(\"Cleaned up sentinel file at {}\".format(\n+ self.sentinel))\n+ except FileNotFoundError:\n+ pass\n+\n+ _args = [workload_executor, connection_string, json.dumps(driver_workload)]\n if not self.is_windows:\n+ args = _args\n self.workload_subprocess = subprocess.Popen(\n- args, preexec_fn=os.setsid, stdout=subprocess.PIPE,\n- stderr=subprocess.PIPE)\n+ args, preexec_fn=os.setsid,\n+ #stdout=subprocess.PIPE,\n+ #stderr=subprocess.PIPE\n+ )\n else:\n- wargs = ['C:/cygwin/bin/bash']\n- wargs.extend(args)\n+ args = ['C:/cygwin/bin/bash']\n+ args.extend(_args)\n self.workload_subprocess = subprocess.Popen(\n- wargs, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,\n+ args, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n+\n+ LOGGER.debug(\"Subprocess argument list: {}\".format(args))\n+ LOGGER.info(\"Started workload executor [PID: {}]\".format(self.pid))\n+\n+ # Check if something caused the workload executor to exit immediately.\n+ try:\n+ self.workload_subprocess.wait(timeout=1)\n+ raise WorkloadExecutorError(\n+ \"Workload executor quit without receiving termination signal\")\n+ except subprocess.TimeoutExpired:\n+ pass\n+\n return self.workload_subprocess\n \n def terminate(self):\n+ LOGGER.info(\"Stopping workload executor [PID: {}]\".format(self.pid))\n+\n if not self.is_windows:\n os.killpg(self.workload_subprocess.pid, signal.SIGINT)\n else:\n@@ -175,9 +207,10 @@ class DriverWorkloadSubprocessRunner:\n outs, errs = self.workload_subprocess.communicate(timeout=10)\n \n try:\n- with open('results.json', 'r') as fp:\n+ with open(self.sentinel, 'r') as fp:\n stats = json.load(fp)\n except (FileNotFoundError, json.JSONDecodeError):\n- stats = {'numErrors': -1, 'numFailures': -1}\n+ stats = {'numErrors': -1, 
'numFailures': -1,\n+ 'numSuccessfulOperations': -1}\n \n return outs, errs, stats\ndiff --git a/astrolabe/workload_executor_validator.py b/astrolabe/workload_executor_validator.py\nnew file mode 100644\nindex 0000000..b63dd5a\n--- /dev/null\n+++ b/astrolabe/workload_executor_validator.py\n@@ -0,0 +1,92 @@\n+from copy import deepcopy\n+from subprocess import TimeoutExpired\n+from time import sleep\n+from unittest import TestCase, main\n+\n+from pymongo import MongoClient\n+\n+from atlasclient import JSONObject\n+from astrolabe.exceptions import WorkloadExecutorError\n+from astrolabe.utils import DriverWorkloadSubprocessRunner, load_test_data\n+\n+DRIVER_WORKLOAD = JSONObject.from_dict({\n+ 'database': 'validation_db',\n+ 'collection': 'validation_coll',\n+ 'testData': [{'_id': 'validation_sentinel', 'count': 0}],\n+ 'operations': []\n+})\n+\n+\n+class ValidateWorkloadExecutor(TestCase):\n+ WORKLOAD_EXECUTOR = None\n+ CONNECTION_STRING = None\n+\n+ def setUp(self):\n+ self.client = MongoClient(self.CONNECTION_STRING, w='majority')\n+ self.coll = self.client.get_database(\n+ DRIVER_WORKLOAD['database']).get_collection(\n+ DRIVER_WORKLOAD['collection'])\n+ load_test_data(self.CONNECTION_STRING, DRIVER_WORKLOAD)\n+\n+ def test_write(self):\n+ # import pdb; pdb.set_trace()\n+ operations = [\n+ {'object': 'collection',\n+ 'name': 'updateOne',\n+ 'arguments': {\n+ 'filter': {'_id': 'validation_sentinel'},\n+ 'update': {'$inc': {'count': 1}}}}]\n+ driver_workload = deepcopy(DRIVER_WORKLOAD)\n+ driver_workload['operations'] = operations\n+ driver_workload = JSONObject.from_dict(driver_workload)\n+\n+ subprocess = DriverWorkloadSubprocessRunner()\n+\n+ try:\n+ subprocess.spawn(workload_executor=self.WORKLOAD_EXECUTOR,\n+ connection_string=self.CONNECTION_STRING,\n+ driver_workload=driver_workload)\n+ except WorkloadExecutorError:\n+ outs, errs = subprocess.workload_subprocess.communicate(timeout=2)\n+ self.fail(\"The workload executor terminated prematurely before \"\n+ 
\"receiving the termination signal.\\n\"\n+ \"STDOUT: {!r}\\nSTDERR: {!r}\".format(outs, errs))\n+\n+ sleep(2 * 3) # Sleep for 2 x amount of time as we do in the main runner.\n+\n+ try:\n+ outs, errs, stats = subprocess.terminate()\n+ except TimeoutExpired:\n+ self.fail(\"The workload executor did not terminate soon after \"\n+ \"receiving the termination signal.\")\n+\n+ # Check that results.json is actually written.\n+ if all(val == -1 for val in stats.values()):\n+ self.fail(\"The workload executor did not write a results.json \"\n+ \"file in the expected location, or the file that was \"\n+ \"written contained malformed JSON.\")\n+\n+ if any(val < 0 for val in stats.values()):\n+ self.fail(\"The workload executor reported incorrect execution \"\n+ \"statistics. Reported statistics MUST NOT be negative.\")\n+\n+ # make this a single operation and make it an upsert.\n+ num_updates = stats['numSuccessfulOperations']\n+ update_count = self.coll.find_one(\n+ {'_id': 'validation_sentinel'})['count']\n+ if update_count != num_updates:\n+ self.fail(\n+ \"The workload executor reported inconsistent execution \"\n+ \"statistics. Expected approximately {} successful \"\n+ \"operations to be reported, got {} instead.\".format(\n+ update_count * 2, stats['numSuccessfulOperations']))\n+\n+\n+def validator_factory(workload_executor, connection_string):\n+ ValidateWorkloadExecutor.WORKLOAD_EXECUTOR = workload_executor\n+ ValidateWorkloadExecutor.CONNECTION_STRING = connection_string\n+ return ValidateWorkloadExecutor\n+\n+\n+if __name__ == '__main__':\n+ main()\ndiff --git a/docs/source/integration-guide.rst b/docs/source/integration-guide.rst\nindex 724a45c..162c19c 100644\n--- a/docs/source/integration-guide.rst\n+++ b/docs/source/integration-guide.rst\n@@ -28,12 +28,12 @@ Setting up\n $ git clone git@github.com:<your-github-username>/drivers-atlas-testing.git\n $ cd drivers-atlas-testing && git checkout -b add-driver-integration\n \n-#. 
Create a new directory for your driver under the ``drivers-atlas-testing/.evergreen`` directory.\n+#. Create a new directory for your driver under the ``drivers-atlas-testing/integrations`` directory.\n It is recommended that drivers use the name of their language to name their folder.\n Languages that have multiple drivers should use subdirectories named after the driver (e.g. for Python,\n- we should create ``.evergreen/python/pymongo`` and ``.evergreen/python/motor``)::\n+ we could create ``integrations/python/pymongo`` and ``integrations/python/motor``)::\n \n- $ mkdir -p .evergreen/driver-language/project-name\n+ $ mkdir -p integrations/<driver-language>/<project-name>\n \n .. note::\n \n@@ -77,7 +77,7 @@ API desired by the :ref:`workload-executor-specification` specification.\n .. note::\n \n For example, PyMongo's ``astrolabe`` integration uses this pattern to implement its\n- `workload executor <https://github.com/mongodb-labs/drivers-atlas-testing/blob/master/.evergreen/python/pymongo/workload-executor>`_.\n+ `workload executor <https://github.com/mongodb-labs/drivers-atlas-testing/blob/master/integrations/python/pymongo/workload-executor>`_.\n \n .. _integration-step-driver-installer:\n \n@@ -193,7 +193,7 @@ you intend to test. 
Each entry has the following fields:\n \n * ``id`` (required): unique identifier for this ``driver`` axis entry.\n * ``display_name`` (optional): plaintext name for this driver version that will be used to display test runs.\n-* ``variables.DRIVER_DIRNAME`` (required): path, relative to the ``astrolable/.evergreen`` directory where the\n+* ``variables.DRIVER_DIRNAME`` (required): path, relative to the ``astrolable/integrations`` directory where the\n driver-specific scripts live.\n * ``variables.DRIVER_REPOSITORY`` (required): HTTPS URL that can be used to clone the source repository of the\n driver to be tested.\ndiff --git a/docs/source/spec-workload-executor.rst b/docs/source/spec-workload-executor.rst\nindex 146d98d..9a332db 100644\n--- a/docs/source/spec-workload-executor.rst\n+++ b/docs/source/spec-workload-executor.rst\n@@ -59,6 +59,7 @@ After accepting the inputs, the workload executor:\n * MUST keep count of the number of operation errors (``numErrors``) that are encountered while running\n operations. An operation error is when running an operation unexpectedly raises an error. Workload executors\n implementations should try to be as resilient as possible to these kinds of operation errors.\n+ * MUST keep count of the number of operations that are run successfully (``numSuccessfulOperations``).\n \n #. MUST set a signal handler for handling the termination signal that is sent by ``astrolabe``. The termination signal\n is used by ``astrolabe`` to communicate to the workload executor that it should stop running operations. Upon\n@@ -71,6 +72,7 @@ After accepting the inputs, the workload executor:\n \n * ``numErrors``: the number of operation errors that were encountered during the test.\n * ``numFailures``: the number of operation failures that were encountered during the test.\n+ * ``numSuccessfulOperations``: the number of operations executed successfully during the test.\n \n .. 
note:: The values of ``numErrors`` and ``numFailures`` are used by ``astrolabe`` to determine the overall\n success or failure of a driver workload execution. A non-zero value for either of these fields is construed\n@@ -78,8 +80,8 @@ After accepting the inputs, the workload executor:\n The workload executor's exit code is **not** used for determining success/failure and is ignored.\n \n .. note:: If ``astrolabe`` encounters an error in parsing the workload statistics dumped to ``results.json``\n- (caused, for example, by malformed JSON), both ``numErrors`` and ``numFailures`` will be set to ``-1`` and the\n- test run will be assumed to have failed.\n+ (caused, for example, by malformed JSON), ``numErrors``, ``numFailures``, and ``numSuccessfulOperations``\n+ will be set to ``-1`` and the test run will be assumed to have failed.\n \n .. note:: The choice of termination signal used by ``astrolabe`` varies by platform. ``SIGINT`` [#f1]_ is used as\n the termination signal on Linux and OSX, while ``CTRL_BREAK_EVENT`` [#f2]_ is used on Windows.\n@@ -109,6 +111,7 @@ Pseudocode Implementation\n # Initialize counters.\n var num_errors = 0;\n var num_failures = 0;\n+ var num_success = 0;\n \n # Run the workload - operations are run sequentially, repeatedly until the termination signal is received.\n try {\n@@ -118,7 +121,9 @@ Pseudocode Implementation\n # The runOperation method runs operations as per the test format.\n # The method return False if the actual return value of the operation does match the expected.\n var was_succesful = runOperation(db, collection, operation);\n- if (!was_successful) {\n+ if (was_successful) {\n+ num_success += 1;\n+ } else {\n num_errors += 1;\n }\n } catch (operationError) {\n@@ -131,14 +136,14 @@ Pseudocode Implementation\n # The workloadExecutor MUST handle the termination signal gracefully.\n # The termination signal will be used by astrolabe to terminate drivers operations that otherwise run ad infinitum.\n # The workload statistics 
must be written to a file named results.json in the current working directory.\n- fs.writeFile('results.json', JSON.stringify({'numErrors': num_errors, 'numFailures': num_failures}));\n+ fs.writeFile('results.json', JSON.stringify({'numErrors': num_errors, 'numFailures': num_failures, 'numSuccessfulOperations': num_success}));\n }\n }\n \n Reference Implementation\n ------------------------\n \n-`PyMongo's workload executor <https://github.com/mongodb-labs/drivers-atlas-testing/blob/master/.evergreen/python/pymongo/workload-executor>`_\n+`PyMongo's workload executor <https://github.com/mongodb-labs/drivers-atlas-testing/blob/master/integrations/python/pymongo/workload-executor>`_\n serves as the reference implementation of the script described by this specification.\n \n \ndiff --git a/.evergreen/python/pymongo/install-driver.sh b/integrations/python/pymongo/install-driver.sh\nsimilarity index 100%\nrename from .evergreen/python/pymongo/install-driver.sh\nrename to integrations/python/pymongo/install-driver.sh\ndiff --git a/.evergreen/python/pymongo/workload-executor b/integrations/python/pymongo/workload-executor\nsimilarity index 82%\nrename from .evergreen/python/pymongo/workload-executor\nrename to integrations/python/pymongo/workload-executor\nindex 0d917e2..a29ef6e 100755\n--- a/.evergreen/python/pymongo/workload-executor\n+++ b/integrations/python/pymongo/workload-executor\n@@ -7,7 +7,7 @@\n trap 'wait $PID; exit $?' 
INT\n \n # Invoke the workload executor as a background process and store its process ID as $PID\n-\"$PYMONGO_VIRTUALENV_NAME/$PYTHON_BIN_DIR/python\" \".evergreen/$DRIVER_DIRNAME/workload-executor.py\" \"$1\" \"$2\" &\n+\"$PYMONGO_VIRTUALENV_NAME/$PYTHON_BIN_DIR/python\" \"integrations/$DRIVER_DIRNAME/workload-executor.py\" \"$1\" \"$2\" &\n PID=$!\n \n # Wait for a state change in $PID (without this the foreground process would return)\ndiff --git a/.evergreen/python/pymongo/workload-executor.py b/integrations/python/pymongo/workload-executor.py\nsimilarity index 100%\nrename from .evergreen/python/pymongo/workload-executor.py\nrename to integrations/python/pymongo/workload-executor.py\ndiff --git a/integrations/ruby/Dockerfile b/integrations/ruby/Dockerfile\nnew file mode 100644\nindex 0000000..ee45d7f\n--- /dev/null\n+++ b/integrations/ruby/Dockerfile\n@@ -0,0 +1,7 @@\n+FROM ruby:2.7\n+\n+WORKDIR /app\n+COPY . .\n+\n+ENTRYPOINT [\"./entrypoint.sh\"]\n+CMD [\"ruby\", \"/app/executor.rb\"]\ndiff --git a/integrations/ruby/entrypoint.sh b/integrations/ruby/entrypoint.sh\nnew file mode 100755\nindex 0000000..79caf20\n--- /dev/null\n+++ b/integrations/ruby/entrypoint.sh\n@@ -0,0 +1,15 @@\n+#!/bin/bash\n+\n+set -e\n+\n+if false; then\n+ (git clone https://github.com/mongodb/mongo-ruby-driver\n+ cd mongo-ruby-driver\n+ bundle install\n+ gem build *.gemspec\n+ gem install *.gem)\n+else\n+ gem install mongo --no-document\n+fi\n+\n+exec \"$@\"\ndiff --git a/integrations/ruby/executor.rb b/integrations/ruby/executor.rb\nnew file mode 100755\nindex 0000000..a25c09b\n--- /dev/null\n+++ b/integrations/ruby/executor.rb\n@@ -0,0 +1,130 @@\n+#!/usr/bin/env ruby\n+\n+require 'json'\n+require 'mongo'\n+\n+Mongo::Logger.logger.level = Logger::WARN\n+\n+class Executor\n+ def initialize(uri, spec)\n+ @uri, @spec = uri, spec\n+ @operation_count = @failure_count = @error_count = 0\n+ end\n+\n+ attr_reader :uri, :spec\n+ attr_reader :operation_count, :failure_count, :error_count\n+\n+ 
def run\n+ set_signal_handler\n+ load_data\n+ while true\n+ break if @stop\n+ perform_operations\n+ end\n+ p results\n+ write_results\n+ end\n+\n+ private\n+\n+ def set_signal_handler\n+ Signal.trap('INT') do\n+ @stop = true\n+ end\n+ end\n+\n+ def load_data\n+ collection.delete_many\n+ if data = spec['testData']\n+ collection.insert_many(data)\n+ end\n+ end\n+\n+ def perform_operations\n+ spec['operations'].each do |op_spec|\n+ begin\n+ case op_spec['name']\n+ when 'find'\n+ unless op_spec['object'] == 'collection'\n+ raise \"Can only find on a collection\"\n+ end\n+\n+ args = op_spec['arguments'].dup\n+ op = collection.find(args.delete('filter') || {})\n+ if sort = args.delete('sort')\n+ op = op.sort(sort)\n+ end\n+ unless args.empty?\n+ raise \"Unhandled keys in args: #{args}\"\n+ end\n+\n+ docs = op.to_a\n+\n+ if expected_docs = op_spec['result']\n+ if expected_docs != docs\n+ puts \"Failure\"\n+ @failure_count += 1\n+ end\n+ end\n+ #when 'insertOne'\n+ when 'updateOne'\n+ unless op_spec['object'] == 'collection'\n+ raise \"Can only find on a collection\"\n+ end\n+\n+ args = op_spec['arguments'].dup\n+ scope = collection\n+ if filter = args.delete('filter')\n+ scope = collection.find(filter)\n+ end\n+ if update = args.delete('update')\n+ scope.update_one(update)\n+ end\n+ unless args.empty?\n+ raise \"Unhandled keys in args: #{args}\"\n+ end\n+ else\n+ raise \"Unhandled operation #{op_spec['name']}\"\n+ end\n+ rescue Mongo::Error => e\n+ puts \"Error: #{e.class}: #{e}\"\n+ @num_errors += 1\n+ end\n+ @operation_count += 1\n+ end\n+ end\n+\n+ def results\n+ {\n+ numOperations: @operation_count,\n+ numSuccessfulOperations: @operation_count-@error_count-@failure_count,\n+ numErrors: @error_count,\n+ numFailures: @failure_count,\n+ }\n+ end\n+\n+ def write_results\n+ File.open('results.json', 'w') do |f|\n+ f << JSON.dump(results)\n+ end\n+ end\n+\n+ def collection\n+ @collection ||= client.use(spec['database'])[spec['collection']]\n+ end\n+\n+ def client\n+ @client 
||= Mongo::Client.new(uri)\n+ end\n+end\n+\n+uri, spec = ARGV\n+\n+if spec.nil?\n+ raise \"Usage: executor.rb URI SPEC\"\n+end\n+\n+spec = JSON.load(spec)\n+\n+executor = Executor.new(uri, spec)\n+executor.run\ndiff --git a/integrations/ruby/functions.sh b/integrations/ruby/functions.sh\nnew file mode 100644\nindex 0000000..e2bc913\n--- /dev/null\n+++ b/integrations/ruby/functions.sh\n@@ -0,0 +1,249 @@\n+# This file contains basic functions common between all Ruby driver team\n+# projects: toolchain, bson-ruby, driver and Mongoid.\n+\n+get_var() {\n+ var=$1\n+ value=${!var}\n+ if test -z \"$value\"; then\n+ echo \"Missing value for $var\" 1>&2\n+ exit 1\n+ fi\n+ echo \"$value\"\n+}\n+\n+detected_arch=\n+\n+host_arch() {\n+ if test -z \"$detected_arch\"; then\n+ detected_arch=`_detect_arch`\n+ fi\n+ echo \"$detected_arch\"\n+}\n+\n+_detect_arch() {\n+ local arch\n+ arch=\n+ if test -f /etc/debian_version; then\n+ # Debian or Ubuntu\n+ if test \"`uname -m`\" = aarch64; then\n+ arch=ubuntu1604-arm\n+ elif lsb_release -i |grep -q Debian; then\n+ release=`lsb_release -r |awk '{print $2}' |tr -d .`\n+ # In docker, release is something like 9.11.\n+ # In evergreen, release is 9.2.\n+ release=`echo $release |sed -e 's/^9.*/92/'`\n+ arch=\"debian$release\"\n+ elif lsb_release -i |grep -q Ubuntu; then\n+ if test \"`uname -m`\" = ppc64le; then\n+ release=`lsb_release -r |awk '{print $2}' |tr -d .`\n+ arch=\"ubuntu$release-ppc\"\n+ else\n+ release=`lsb_release -r |awk '{print $2}' |tr -d .`\n+ arch=\"ubuntu$release\"\n+ fi\n+ else\n+ echo 'Unknown Debian flavor' 1>&2\n+ exit 1\n+ fi\n+ elif test -f /etc/redhat-release; then\n+ # RHEL or CentOS\n+ if test \"`uname -m`\" = s390x; then\n+ arch=rhel72-s390x\n+ elif test \"`uname -m`\" = ppc64le; then\n+ arch=rhel71-ppc\n+ elif lsb_release >/dev/null 2>&1; then\n+ if lsb_release -i |grep -q RedHat; then\n+ release=`lsb_release -r |awk '{print $2}' |tr -d .`\n+ arch=\"rhel$release\"\n+ elif lsb_release -i |grep -q CentOS; then\n+ 
release=`lsb_release -r |awk '{print $2}' |cut -c 1 |sed -e s/7/70/ -e s/6/62/ -e s/8/80/`\n+ arch=\"rhel$release\"\n+ else\n+ echo 'Unknown RHEL flavor' 1>&2\n+ exit 1\n+ fi\n+ else\n+ echo lsb_release missing, using /etc/redhat-release 1>&2\n+ release=`grep -o 'release [0-9]' /etc/redhat-release |awk '{print $2}'`\n+ release=`echo $release |sed -e s/7/70/ -e s/6/62/ -e s/8/80/`\n+ arch=rhel$release\n+ fi\n+ else\n+ echo 'Unknown distro' 1>&2\n+ exit 1\n+ fi\n+ echo \"Detected arch: $arch\" 1>&2\n+ echo $arch\n+}\n+\n+set_home() {\n+ if test -z \"$HOME\"; then\n+ export HOME=$(pwd)\n+ fi\n+}\n+\n+uri_escape() {\n+ echo \"$1\" |ruby -rcgi -e 'puts CGI.escape(STDIN.read.strip).gsub(\"+\", \"%20\")'\n+}\n+\n+set_env_vars() {\n+ DRIVERS_TOOLS=${DRIVERS_TOOLS:-}\n+\n+ if test -n \"$AUTH\"; then\n+ export ROOT_USER_NAME=\"bob\"\n+ export ROOT_USER_PWD=\"pwd123\"\n+ fi\n+\n+ if test -n \"$MONGODB_URI\"; then\n+ export MONGODB_URI\n+ else\n+ unset MONGODB_URI\n+ fi\n+\n+ export CI=evergreen\n+\n+ # JRUBY_OPTS were initially set for Mongoid\n+ export JRUBY_OPTS=\"-J-Xms512m -J-Xmx1536M\"\n+\n+ if test \"$BSON\" = min; then\n+ export BUNDLE_GEMFILE=gemfiles/bson_min.gemfile\n+ elif test \"$BSON\" = master; then\n+ export BUNDLE_GEMFILE=gemfiles/bson_master.gemfile\n+ fi\n+ \n+ # rhel62 ships with Python 2.6\n+ if test -d /opt/python/2.7/bin; then\n+ export PATH=/opt/python/2.7/bin:$PATH\n+ fi\n+}\n+\n+setup_ruby() {\n+ if test -z \"$RVM_RUBY\"; then\n+ echo \"Empty RVM_RUBY, aborting\"\n+ exit 2\n+ fi\n+\n+ #ls -l /opt\n+\n+ # Necessary for jruby\n+ # Use toolchain java if it exists\n+ if [ -f /opt/java/jdk8/bin/java ]; then\n+ export JAVACMD=/opt/java/jdk8/bin/java\n+ export PATH=$PATH:/opt/java/jdk8/bin\n+ fi\n+\n+ # ppc64le has it in a different place\n+ if test -z \"$JAVACMD\" && [ -f /usr/lib/jvm/java-1.8.0/bin/java ]; then\n+ export JAVACMD=/usr/lib/jvm/java-1.8.0/bin/java\n+ export PATH=$PATH:/usr/lib/jvm/java-1.8.0/bin\n+ fi\n+\n+ if [ \"$RVM_RUBY\" == \"ruby-head\" 
]; then\n+ # When we use ruby-head, we do not install the Ruby toolchain.\n+ # But we still need Python 3.6+ to run mlaunch.\n+ # Since the ruby-head tests are run on ubuntu1604, we can use the\n+ # globally installed Python toolchain.\n+ #export PATH=/opt/python/3.7/bin:$PATH\n+\n+ # 12.04, 14.04 and 16.04 are good\n+ curl -fLo ruby-head.tar.bz2 http://rubies.travis-ci.org/ubuntu/`lsb_release -rs`/x86_64/ruby-head.tar.bz2\n+ tar xf ruby-head.tar.bz2\n+ export PATH=`pwd`/ruby-head/bin:`pwd`/ruby-head/lib/ruby/gems/2.6.0/bin:$PATH\n+ ruby --version\n+ ruby --version |grep dev\n+ else\n+ if test \"$USE_OPT_TOOLCHAIN\" = 1; then\n+ # nothing, also PATH is already set\n+ :\n+ elif true; then\n+\n+ # For testing toolchains:\n+ #toolchain_url=https://s3.amazonaws.com//mciuploads/mongo-ruby-toolchain/`host_arch`/f11598d091441ffc8d746aacfdc6c26741a3e629/mongo_ruby_driver_toolchain_`host_arch |tr - _`_patch_f11598d091441ffc8d746aacfdc6c26741a3e629_5e46f2793e8e866f36eda2c5_20_02_14_19_18_18.tar.gz\n+ toolchain_url=http://boxes.10gen.com/build/toolchain-drivers/mongo-ruby-driver/ruby-toolchain-`host_arch`-291ba4a4e8297f142796e70eee71b99f333e35e1.tar.xz\n+ curl --retry 3 -fL $toolchain_url |tar Jxf -\n+ export PATH=`pwd`/rubies/$RVM_RUBY/bin:$PATH\n+ #export PATH=`pwd`/rubies/python/3/bin:$PATH\n+\n+ # Attempt to get bundler to report all errors - so far unsuccessful\n+ #curl -o bundler-openssl.diff https://github.com/bundler/bundler/compare/v2.0.1...p-mongo:report-errors.diff\n+ #find . -path \\*/lib/bundler/fetcher.rb -exec patch {} bundler-openssl.diff \\;\n+\n+ else\n+\n+ # Normal operation\n+ if ! 
test -d $HOME/.rubies/$RVM_RUBY/bin; then\n+ echo \"Ruby directory does not exist: $HOME/.rubies/$RVM_RUBY/bin\" 1>&2\n+ echo \"Contents of /opt:\" 1>&2\n+ ls -l /opt 1>&2 || true\n+ echo \".rubies symlink:\" 1>&2\n+ ls -ld $HOME/.rubies 1>&2 || true\n+ echo \"Our rubies:\" 1>&2\n+ ls -l $HOME/.rubies 1>&2 || true\n+ exit 2\n+ fi\n+ export PATH=$HOME/.rubies/$RVM_RUBY/bin:$PATH\n+\n+ fi\n+\n+ ruby --version\n+\n+ # Ensure we're using the right ruby\n+ ruby_name=`echo $RVM_RUBY |awk -F- '{print $1}'`\n+ ruby_version=`echo $RVM_RUBY |awk -F- '{print $2}' |cut -c 1-3`\n+ \n+ ruby -v |fgrep $ruby_name\n+ ruby -v |fgrep $ruby_version\n+\n+ # We shouldn't need to update rubygems, and there is value in\n+ # testing on whatever rubygems came with each supported ruby version\n+ #echo 'updating rubygems'\n+ #gem update --system\n+\n+ # Only install bundler when not using ruby-head.\n+ # ruby-head comes with bundler and gem complains\n+ # because installing bundler would overwrite the bundler binary.\n+ # We now install bundler in the toolchain, hence nothing needs to be done\n+ # in the tests.\n+ if false && echo \"$RVM_RUBY\" |grep -q jruby; then\n+ gem install bundler -v '<2'\n+ fi\n+ fi\n+}\n+\n+bundle_install() {\n+ args=--quiet\n+ \n+ # On JRuby we can test against bson master but not in a conventional way.\n+ # See https://jira.mongodb.org/browse/RUBY-2156\n+ if echo $RVM_RUBY |grep -q jruby && test \"$BSON\" = master; then\n+ unset BUNDLE_GEMFILE\n+ git clone https://github.com/mongodb/bson-ruby\n+ (cd bson-ruby &&\n+ bundle install &&\n+ rake compile &&\n+ gem build *.gemspec &&\n+ gem install *.gem)\n+ \n+ # TODO redirect output of bundle install to file.\n+ # Then we don't have to see it in evergreen output.\n+ args=\n+ fi\n+\n+ #which bundle\n+ #bundle --version\n+ if test -n \"$BUNDLE_GEMFILE\"; then\n+ args=\"$args --gemfile=$BUNDLE_GEMFILE\"\n+ fi\n+ echo \"Running bundle install $args\"\n+ # Sometimes bundler fails for no apparent reason, run it again then.\n+ 
# The failures happen on both MRI and JRuby and have different manifestatinons.\n+ bundle install $args || bundle install $args\n+}\n+\n+kill_jruby() {\n+ jruby_running=`ps -ef | grep 'jruby' | grep -v grep | awk '{print $2}'`\n+ if [ -n \"$jruby_running\" ];then\n+ echo \"terminating remaining jruby processes\"\n+ for pid in $(ps -ef | grep \"jruby\" | grep -v grep | awk '{print $2}'); do kill -9 $pid; done\n+ fi\n+}\ndiff --git a/integrations/ruby/install-driver.sh b/integrations/ruby/install-driver.sh\nnew file mode 100755\nindex 0000000..0e17787\n--- /dev/null\n+++ b/integrations/ruby/install-driver.sh\n@@ -0,0 +1,6 @@\n+#!/bin/bash\n+\n+set -ex\n+\n+(cd `dirname $0` &&\n+ docker build -t dat-ruby .)\ndiff --git a/integrations/ruby/workload-executor b/integrations/ruby/workload-executor\nnew file mode 100755\nindex 0000000..2000b73\n--- /dev/null\n+++ b/integrations/ruby/workload-executor\n@@ -0,0 +1,9 @@\n+#!/usr/bin/env ruby\n+\n+require 'shellwords'\n+\n+puts ([$0] + ARGV.map { |arg| Shellwords.shellescape(arg) }).join(' ')\n+\n+$: << File.dirname(__FILE__)\n+\n+require 'executor'\n","githash":"0fefcc7468c119aa0d75db3bbe309677017f7e57","buildvariants_new":["all"],"tasks":["all"],"finalize":true} for patches/
Traceback (most recent call last):
3: from /home/w/apps/tnex/script/make-patch-build:21:in `<main>'
2: from /home/w/apps/tnex/lib/fe/patch_build_maker.rb:32:in `run'
1: from /home/w/apps/tnex/lib/evergreen/client.rb:212:in `create_patch'
/home/w/apps/tnex/lib/evergreen/client.rb:104:in `request_json': Evergreen PUT patches/ failed: 500: 400 (Bad Request): can't get patched config: Could not patch remote configuration file: could not run patch command: error waiting on process '99195382-beea-4a4d-bec9-20565fdf69e9': exit status 1 (Evergreen::Client::ApiError)
(3) carbon%