@tarilabs
Created December 13, 2023 10:41
[{"body":"**Is your feature request related to a problem? Please describe.**\r\nIn the [first implementation of model registry and serving implementation](https://github.com/opendatahub-io/model-registry/issues/173) we focused on creating unit tests by mocking the model registry service, because the TestContainer approach was not feasible as it would not work in the `openshift-ci` environment [1]. On the other hand, having some ITs where we can actually test the connection to an existing model registry/ml-metadata service could be very helpful, and furthermore we could set up real use cases in these tests.\r\n\r\n[1] https://cloud.redhat.com/blog/running-testcontainers-in-openshift-pipelines-with-docker-in-docker\r\n\r\n**Describe the solution you'd like**\r\nSet up an `e2e` test suite in `odh-model-controller` where we could set up the model-registry service by directly applying a `deployment` and then run all expected tests.\r\n\r\n**Describe alternatives you've considered**\r\nKeep just unit tests; unfortunately with these we cannot cover all possible tests (especially writes to the model registry are not really testable as we are mocking it)\r\n\r\n**Additional context**\r\nSome examples we could take inspiration from:\r\n- https://github.com/opendatahub-io/opendatahub-operator/tree/42b2bdd6eccbab5669b467d44ee9a1a25a3a449e/tests/e2e\r\n- https://github.com/openshift/release/blob/37ce76256337378b57c80f696a36b863cb2ec330/ci-operator/config/opendatahub-io/opendatahub-operator/opendatahub-io-opendatahub-operator-incubation.yaml#L57-L77\r\n- https://github.com/operator-framework/operator-sdk/blob/e001847a67f16d7173ae4fee7cff57cd4405edbb/testdata/go/v3/memcached-operator/test/e2e/e2e_test.go\r\n","number":235,"title":"[model-controller] Setup e2e tests for model registry and serving reconciliation"},{"body":"**Is your feature request related to a problem? 
Please describe.**\r\nRight now `InferenceService` entities have a state which can have two possible values: `DEPLOYED` and `UNDEPLOYED`. \r\n\r\nSince these entities are intended to be used to describe the `intention to deploy/undeploy a model` and not the deployment state itself, I think that the values could be misleading.\r\n\r\n**Describe the solution you'd like**\r\nI think we should slightly change the possible values to properly keep track of correct IS states:\r\n- `READY_TO_DEPLOY`: represents the intention to deploy an InferenceService (this will be considered by the model controller)\r\n- `DEPLOYED`: this is going to be updated by the model controller, just to avoid that an actually deployed IS remains in the READY_TO_DEPLOY state.\r\n- `READY_TO_UNDEPLOY`: intention to undeploy a deployed model (considered by the model controller)\r\n- `UNDEPLOYED`: _[optional]_ represents a model that is not deployed; we could use a _nil_ state instead.\r\n\r\n**Describe alternatives you've considered**\r\nn/a\r\n\r\n**Additional context**\r\nModel controller operations are still tracked/audited using `ServeModel` entities, one per deployment.\r\n","number":229,"title":"InferenceService state terminology"},{"body":"**Is your feature request related to a problem? 
Please describe.**\r\nThe MLMD python wheel dependency (used by this MR python client) is platform/architecture specific\r\n\r\n**Describe the solution you'd like**\r\nMake the MR python client rely on a pure library, supporting the pragmatic use-case actually leveraged (gRPC remote connection) \r\n\r\n**Describe alternatives you've considered**\r\nThis is based on the observation that we are using MLMD with a gRPC remote connection, so the embedded binary executable in the MLMD python wheel is not strictly required (with the caveat of tests, but this is mentioned in detail in the document produced by this exploration).\r\n\r\nThe document under discussion now presents 2 alternatives, which can help Apple-silicon contributors of Model Registry, but can also potentially simplify our lives when we face deployment of the Model Registry python client on non-x86 platforms (I'm thinking of Edge scenarios, etc).\r\n\r\nThe alternatives are:\r\n\r\n- Repackaging the distributed wheel, making changes to make it a pure lib\r\n- Changing the pip+bazel build of the upstream MLMD project, making it produce a pure lib\r\n\r\n**Additional context**\r\nPre-requisites:\r\n- #225 \r\n","number":228,"title":"Explore repackaging of MLMD python wheel as a pure lib"},{"body":"deliverables:\r\n\r\n- video (with chapters): https://www.youtube.com/watch?v=grXnjGtDFXg\r\n- written walkthrough (can be used later for social media post): https://github.com/tarilabs/demo20231121#demo\r\n- slides with screenshots (backup in case of network/cloud outages): https://docs.google.com/presentation/d/1toYW0oFoS7iOpcz8_XvSlTq1bv8hwLl9ptL_HIO553g/edit?usp=sharing\r\n\r\nI've made sure to highlight the message that the MR is not a central orchestrator, as requested.\r\n\r\nB-sides:\r\n- https://www.youtube.com/watch?v=l0SrSRmibwM\r\n- https://youtu.be/-ko3LkuNwjY\r\n- https://www.youtube.com/watch?v=OWlaTqA93Xg\r\n","number":227,"title":"Technical Demo, frozen on 2023-11-21"},{"body":"**Is your feature request related to a problem? 
Please describe.**\r\nCurrently the ODH dashboard does not provide file-upload-into-s3 capability, as discussed in UX/UI meetings\r\n\r\n**Describe the solution you'd like**\r\nExplore the opportunity for a sidecar capability and expose the function as part of the UI/UX flow.\r\n\r\n**Describe alternatives you've considered**\r\nIf not accepted, we can opt to disable the flow in the UI and provide only MR python client upload via notebook\r\n\r\n**Additional context**\r\nTo be maintained separately from Model Registry core specifically, as we don't want to jeopardize keeping storage out of scope for MR\r\n","number":226,"title":"Investigate sidecar capability for file upload into S3-compat bucket"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nCurrently python tests rely on the embedded compiled binary mlmd server inside the platform/architecture specific python wheel.\r\n\r\n**Describe the solution you'd like**\r\nUse Testcontainers for Python, testing analogously to what we did as the testing strategy on the go Core layer \r\n\r\n**Describe alternatives you've considered**\r\nUnfortunately this would make it unavailable in docker-in-docker setups such as DevContainers. While this problem can be circumvented by manually raising an MLMD server in the host docker, the problem appears to be slowness in FS reactivity when deleting the sqlite file. Considering the ideal solution is likely more about avoiding emulation and relying on gRPC, I would conclude this is still the better solution.\r\n\r\n**Additional context**\r\nn/a\r\n","number":224,"title":"MR python client test to use Testcontainers (for Python)"},{"body":"**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Start up the model registry:\r\n```bash\r\ndocker compose -f docker-compose.yaml up\r\n```\r\n\r\n2. 
Create a registered model:\r\n```bash\r\ncurl -X 'POST' \\\r\n 'http://localhost:8080/api/model_registry/v1alpha1/registered_models' \\\r\n -H 'accept: application/json' \\\r\n -H 'Content-Type: application/json' \\\r\n -d '{\r\n \"name\": \"MyModel\",\r\n \"state\": \"LIVE\"\r\n}'\r\n```\r\n\r\n3. Create a model version:\r\n```bash\r\ncurl -X 'POST' \\\r\n 'http://localhost:8080/api/model_registry/v1alpha1/model_versions' \\\r\n -H 'accept: application/json' \\\r\n -H 'Content-Type: application/json' \\\r\n -d '{\r\n \"name\": \"v1\",\r\n \"state\": \"LIVE\",\r\n \"registeredModelID\": \"1\"\r\n}'\r\n```\r\n\r\n4. Create an artifact without providing `artifactType`:\r\n```bash\r\ncurl -X 'POST' \\\r\n 'http://localhost:8080/api/model_registry/v1alpha1/model_versions/2/artifacts' \\\r\n -H 'accept: application/json' \\\r\n -H 'Content-Type: application/json' \\\r\n -d '{\r\n \"state\": \"UNKNOWN\",\r\n \"name\": \"v1-artifact\"\r\n}'\r\n```\r\n\r\nYou will get the error:\r\n\r\n```bash\r\n\"unsupported artifactType\"\r\n```\r\n\r\n**Expected behavior**\r\nMy expectation was that the `artifactType` would have been inferred and pre-filled automatically with `model-artifact`, as right now this is the only valid artifact type.\r\n\r\nI think the root cause here is that `api/model_registry/v1alpha1/model_versions/<id>/artifacts` accepts a generic `Artifact` and not the specific `ModelArtifact`, see https://github.com/opendatahub-io/model-registry/blob/bf56ee21940d0c302e59f68974d84fb2e3f2ffa1/api/openapi/model-registry.yaml#L419\r\n\r\n**So here I think we have two options:**\r\n1. Just accept `ModelArtifact`, exactly as already done for the plain artifacts endpoints\r\n - pros: the behavior will be exactly the same as the plain model artifact endpoints\r\n - cons: if at some point we extend the number of artifact types we might have to change the API\r\n2. 
Fill that field with `model-artifact` at the proxy level (as long as we know there is just one artifact type)\r\n - pros: easy to do and we do not have to change the API if new artifact types come in, we just need to handle the different types\r\n - cons: we have to hardcode that value \r\n\r\nPS: I remember the same issue was raised in some older PR but I could not find it :(\r\n\r\n**Additional context**\r\nn/a\r\n","number":218,"title":"Getting `unsupported artifactType` in v1alpha1/model_versions/<id>/artifacts"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nSince both the Python client and the Go service library share the schema models for mlmd types, there should be only one source of truth for creating/using types and for testing. \r\n\r\n**Describe the solution you'd like**\r\nBring back the `yaml` metadata library definitions and mapper from the older GraphQL code base to restore the metadata library feature. \r\nThe library can be used for:\r\n* Generating Go and Python types to keep them aligned\r\n* Initializing mlmd on REST service startup\r\n* Optionally, Go/Python/Robot Framework test code could also be generated to validate that all fields are used and supported\r\n\r\n**Describe alternatives you've considered**\r\nKeeping the ml metadata schema synced in separate Go and Python code and tests is not maintainable in the long run as the number of types increases. \r\n\r\n**Additional context**\r\nNone","number":211,"title":"Bring Metadata Library back!"},{"body":"Using a GitAction/Runner to pull the existing quay.io image and run the GoLang tests. 
Task 7 from https://github.com/opendatahub-io/model-registry/issues/187\r\n\r\n**_Definition of Done_**\r\n\r\nWhen the pushed image can be pulled, all existing GoLang tests are run, all tests pass, and all relevant feedback is available for the engineer.","number":196,"title":"Include GoLang tests in GitAction/Runner"},{"body":"_**Tracker Task for the Implementation of GitActions/Runners for Model Registry Testing**_\r\n\r\nWhen a pull request is initiated, we want to run any existing golang and robot framework tests contained within the repository.\r\nThe GoLang and Robot Framework tests should be run in parallel via separate GitActions/Runners.\r\n\r\nGit actions will be used to initiate the testing. Git runners will be used to run the testing.\r\n\r\nThe pull request cannot be merged until all tests pass.\r\n\r\n- [x] Task 1. [Run tests locally on laptop using the docker image](https://github.com/opendatahub-io/model-registry/issues/190)\r\n- [x] Task 2. [Take Github Action tutorial. (Radim & Tony)](https://github.com/opendatahub-io/model-registry/issues/191)\r\n- [x] Task 3. 
[Create quay.io image (public)](https://github.com/opendatahub-io/model-registry/issues/192)\r\n- [x] Task 4. [Create gitAction to pull a quay.io image](https://github.com/opendatahub-io/model-registry/issues/193)\r\n- [x] Task 5. [Create gitRunner to be called by the GitAction](https://github.com/opendatahub-io/model-registry/issues/194)\r\n- [x] Task 6. [Implement simple test via GitAction / GitRunner](https://github.com/opendatahub-io/model-registry/issues/195)\r\n- [ ] Task 7. [Include GoLang tests in Git Runner](https://github.com/opendatahub-io/model-registry/issues/196)\r\n- [x] Task 8. [Include Robot Framework tests in Git Runner](https://github.com/opendatahub-io/model-registry/issues/198)\r\n\r\n**_Definition of Done_**\r\n\r\nWhen existing tests can successfully be run on a pull request, providing correct pass/fail results and accessible feedback for the engineer.","number":187,"title":"Implement Git Actions and Runners to initiate the model registry test-suite on pull requests."},{"body":"**Is your feature request related to a problem? Please describe.**\r\nWhen a user is on the data science pipelines screen and asks, from the UI, to register the model that came out of the run, the details about the model are required to make the API call.\r\n\r\n**Describe the solution you'd like**\r\nWe need details like the model name, s3 location, model type, etc. from the DSP itself so that these can be supplied to the RegisteredModel API call.\r\n\r\n**Additional context**\r\nTalk to the DSP team; I am hoping this data can be found in the MLMD server if we check the details of the run. So there may be an API call that the dashboard can make to get that info; we need to figure that out","number":178,"title":"Investigate how DSP generated model details can be captured for supporting the Registered Model Call"},{"body":"**Is your feature request related to a problem? 
Please describe.**\r\nThe second step would be to monitor the MR state and create `InferenceService` CRs based on the Model Registry content.\r\n\r\n**Describe the solution you'd like**\r\nCreate a background timer controller/reconciler that periodically fetches all inference services stored in the model registry and, for each of them, creates an `InferenceService` CR (if not already existing).\r\n\r\n**Describe alternatives you've considered**\r\nn/a\r\n\r\n**Additional context**\r\nThis polling/timer should trigger the reconciliation for each namespace where Model Registry is installed and at least one ServingRuntime is applied.\r\n","number":175,"title":"[model-controller] Periodically check MR and create `InferenceService` CRs"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nThe first step in the [sequence diagram](https://github.com/opendatahub-io/model-registry/issues/104#issuecomment-1782358019) is to monitor `ServingRuntime` CRs and create the corresponding `ServingEnvironment` instance in the Model Registry; the `ServingEnvironment` name will be the `namespace` where the watched ServingRuntime CRs are located.\r\n\r\n**Describe the solution you'd like**\r\nCreate a new reconciler in the odh-model-controller monitoring the `ServingRuntime` CRs and updating the model registry accordingly by creating new `ServingEnvironment` records if not already existing.\r\n\r\n**Describe alternatives you've considered**\r\nn/a\r\n\r\n**Additional context**\r\nn/a\r\n","number":174,"title":"[model-controller] Create `ServingEnvironment` in MR based on `namespace` containing ServingRuntime CRs"},{"body":"**Is your feature request related to a problem? 
Please describe.**\r\nFollow-up of https://github.com/opendatahub-io/model-registry/issues/104: implement the described flow diagram.\r\nInitially this implementation could be placed in the [odh-model-controller](https://github.com/opendatahub-io/odh-model-controller) repository.\r\n\r\n**Describe the solution you'd like**\r\nImplement the logic described in [this diagram](https://github.com/opendatahub-io/model-registry/issues/104#issuecomment-1782358019), focusing on the `odh-model-controller` actor specified in the sequence diagram.\r\n\r\n**Describe alternatives you've considered**\r\nn/a\r\n\r\n**Additional context**\r\nSplitting this issue into multiple tasks:\r\n- [ ] https://github.com/opendatahub-io/model-registry/issues/174\r\n- [ ] https://github.com/opendatahub-io/model-registry/issues/175\r\n\r\n","number":173,"title":"[model-controller] Implement controller for MR and serving interactions"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nCapture the fully qualified name (FQN) of the registered model in a cluster, as MLMD tends to be restrictive with simple names\r\n\r\n**Describe the solution you'd like**\r\nEither prefix the `namespace`, or capture this information in a different way such that a registered model name or model version identifier is an FQN. 
Otherwise, move the identity columns to auto-generated uuids and have separate columns for capturing the name\r\n\r\n**Additional context**\r\nPM asked me to capture the cluster identity too, along with the namespace above","number":172,"title":"Capture FQN for Registered Model"},{"body":"**Describe the bug**\r\nWhen no Models are yet registered on the Model Registry, an error is reported instead of an empty list\r\n\r\n**To Reproduce**\r\nTo reproduce, just attempt a listing when no RegisteredModels are present on MR:\r\n\r\n![Screenshot 2023-11-18 at 21 52 48](https://github.com/opendatahub-io/model-registry/assets/1699252/d645aed4-07a0-4638-b6e7-1da6cb91087a)\r\n\r\nBut the error is wrong, since the Type is actually present on MR:\r\n\r\n![image](https://github.com/opendatahub-io/model-registry/assets/1699252/a0730f83-35b6-45a1-a423-f24c15f94712)\r\n\r\n**Expected behavior**\r\nShould probably not return an error, but an empty list.\r\n\r\n**Additional context**\r\nDemonstrating with one upserted RegisteredModel:\r\n![Screenshot 2023-11-18 at 21 55 43](https://github.com/opendatahub-io/model-registry/assets/1699252/bb23c87f-0ec6-4f25-89b8-e98abedbc9ad)\r\n","number":169,"title":"(low) When no models on registry, an error is returned instead of empty list"},{"body":"As raised [here](https://github.com/opendatahub-io/model-registry/pull/154#discussion_r1395277910):\r\n\r\n- check if we can get away without `ZeroIfNil` but dereference the pointer (I did not find a way at the time, since this snippet prints the address, for instance:\r\n```\r\n\tx := 123\r\n\ty := &x\r\n\tfmt.Printf(\"%v\", y)\r\n```\r\nbut we can keep `ZeroIfNil`; this is just a minor check worth doing)\r\n- Check if we can refactor some of the methods, similarly to how it was done leveraging `func mapTo[S getTypeIder, T any](s S, id int64, convFn func(S) (*T, error)) (*T, error) `","number":156,"title":"Core library improvement"},{"body":"**Is your feature request related to a problem? 
Please describe.**\r\nAs discussed in UI/UX meetings, other than key-value pairs (i.e., `properties`) we also have `labels`.\r\nRight now the openapi models do not have an explicit `labels` field; we should consider adding it.\r\n\r\n**Describe the solution you'd like**\r\nIn the openapi spec: add a new field named `labels` that will contain all labels associated with an entity.\r\n- type: array of strings?\r\n- affected entities: \r\n - `RegisteredModel`\r\n - `ModelVersion`\r\n - others?\r\n\r\nHow can we map these labels in the underlying ml-metadata model?\r\nCreate a property named `labels` of type `struct value` that can contain the array of strings\r\n\r\n**Describe alternatives you've considered**\r\nn/a\r\n\r\n**Additional context**\r\nn/a\r\n","number":151,"title":"Model `labels` in the openapi models"},{"body":"Make sure the Model Registry components working with self-signed certs work without any errors in the disconnected mode","number":137,"title":"[test] Test the self-signed certs working in the disconnected mode"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nWould like to download a model from Hugging Face\r\n\r\n**Describe the solution you'd like**\r\nAs a user using the notebook, provide a Python API to download a model from Hugging Face and register it in the Model Registry along with all the metadata of the Model.\r\n\r\n**Additional context**\r\nDo not include this API as part of the model registry API, but find out if there is an ODH notebook SDK that we can contribute to and place it in there, or have a separate package. Also consider designing the API as a framework, such that the API can be used easily with other model registries. 
For example only:\r\n\r\n```\r\nhf = ServiceProvider.HuggingFace(...)\r\nmodel = import_model(hf, <id>)\r\nmodelregistry.register_model(model)\r\n```","number":133,"title":"Add ability to download a model from Hugging Face using Python API"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nWe need a way to import Python libraries into a Notebook or Pipeline component\r\n\r\n**Describe the solution you'd like**\r\nMake Model Registry Python available as a `pip` based package\r\n\r\n**Describe alternatives you've considered**\r\n[Python: Creating a pip installable package](https://betterscientificsoftware.github.io/python-for-hpc/tutorials/python-pypi-packaging/#:~:text=Inside%20that%20package%20directory%2C%20alongside,be%20installed%20and%20become%20importable.)\r\n\r\n","number":130,"title":"Creating a pip installable package for Python APIs"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nPython APIs are currently defined on par with the Go APIs, which are more focused on the lower-level logical layer of the Model Registry. \r\n\r\n**Describe the solution you'd like**\r\nFor usability, the APIs to be used in Notebooks and Pipeline components should encapsulate these lower-level APIs into functional calls like `register_model` or `list_registered_models`, etc. First, figure out the API and define it here in the issue. Once there is acceptance of the API, follow up with implementation and testing of the API\r\n\r\n**Additional context**\r\nConsider looking at other popular open source model registries to get inspiration about the type of calls to expose","number":129,"title":"Create higher level Python APIs for user"},{"body":"Documentation should be built by CI and served statically. 
This helps us reference it in other parts of the code, as well as providing accessibility to users.\r\n","number":120,"title":"Serve documentation statically"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nDefine how security (AuthZ) is handled when a user is using the API clients. \r\n\r\nWhen a user is using the REST API layer, the JWT token will be passed in a Header, and user information can be gleaned from that along with roles that can be used for the gating of certain operations by the user. When a user is using an API client such as the Python API or Go API from their Notebooks and Pipelines, a similar gating of operations needs to be done.\r\n\r\n**Describe the solution you'd like**\r\nWhen a Notebook is running, it typically runs under a Service Account or User Account. A Service Account is pre-configured with certain roles. The Python/Go clients need to glean this information from the executing environment they are in and use that user's credentials.\r\n\r\n","number":96,"title":"Define security strategy for API clients "},{"body":"### Description\r\n\r\nDesign and develop a \"Model Registry\" solution in RHODS.\r\n\r\nA model registry provides a repository for model developers to store and manage models, versions, and model metadata. It fills a gap between model experimentation and model serving activities. 
It provides a central interface for stakeholders in the ML life cycle to collaborate on models securely, optimizing the overall efficiency of the MLOps workflow.\r\n\r\n### Target Branch\r\n\r\ntbd\r\n\r\n### Requirements\r\n\r\n[Requirements doc #1](https://docs.google.com/document/d/1mRUQ9C5Br7T3bA49H503HTq0J7F4WKySdxaWpTX-i8k/edit) \r\n[Requirements doc #2](https://docs.google.com/document/d/1XT0PxVp78VEyeMXKkF5qrP-fdgN4V_G9UxH2a1OpPY8/edit#heading=h.vk5ku8fmql4y)\r\n[Proposal doc](https://docs.google.com/document/d/1G-pjdGaS2kLELsB5kYk_D4AmH-fTfnCnJOhJ8xENjx0/edit#heading=h.ds8q4xtkmu64)\r\n\r\n### Itemized UX Issues\r\n\r\nhttps://github.com/opendatahub-io/odh-dashboard/issues/1804\r\n\r\n### Itemized Dev Issues\r\n\r\n* #44\r\n* #58\r\n* #70 \r\n * https://github.com/opendatahub-io/model-registry/issues/107\r\n* #61 \r\n* #90 \r\n* #60 \r\n* #59 \r\n* #47 \r\n* #46 \r\n* #45 \r\n* https://github.com/opendatahub-io/model-registry/issues/65\r\n* #96\r\n* #137 \r\n* #129 \r\n* #130 \r\n\r\n### Related artifacts\r\n\r\n_No response_","number":95,"title":"[Tracker]: Model Registry - MVP"},{"body":"https://app.snyk.io/ \r\n\r\nWe need Snyk scanning set up on the repository to discover any vulnerabilities we may pull in. Check how other projects in the ODH space were able to configure this. Also do this for the Operator repository\r\n\r\n@vaibhavjainwiz can help with the setup","number":62,"title":"Add SNYK scan for the repository"},{"body":"**Is your feature request related to a problem? 
Please describe.**\r\nWhen a Model Registry is deployed, to be used by Model Serving it needs to be discovered.\r\n\r\n**Describe the solution you'd like**\r\nWrite a Go utility to look up the Model Registry CR and its status to find the details of the service, and return the coordinates such that the calling module can use it and invoke the REST API using the service endpoint.\r\n","number":61,"title":"Utility function for Model Registry service discovery"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nThe REST API server should serve the API based on RBAC rules (what rules TBD)\r\n\r\n**Describe the solution you'd like**\r\nThe REST API Server should honor the RBAC rules defined by the configuration when serving the API. ","number":60,"title":"Support RBAC on REST API Server"},{"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nModel Registry needs an operator that can be used to deploy Model Registry related resources on an OpenShift platform\r\n\r\n**Describe the solution you'd like**\r\n\r\nDesign a Go-based operator using the Operator SDK for Model Registry. The deployment model should closely mimic what we see in the Kustomize scripts https://github.com/kubeflow/kfp-tekton/tree/master/manifests/kustomize/base/metadata\r\n\r\nThe resulting YAML of Kustomize needs to be converted into Go constructs. The Kustomize script directly calls the deployment on a MySQL database; here maybe we can add an additional dereference and call the `Custom Resource` of the MySQL operator, or otherwise switch this to PostgreSQL and use the Crunchy Operator to achieve the same.","number":47,"title":"Create an Operator for Model Registry"},{"body":"**Is your feature request related to a problem? Please describe.**\r\nNeed a test runner for a Python test suite to run one or more Python test client scripts against the server. 
\r\n\r\n**Describe the solution you'd like**\r\nA simple way to run integration tests using something like `make test/integration` is needed. Python probably has a JUnit-style test runner facility that can run one or more Python tests. It will be useful if the runner can also start a test go server and shut it down at the end of the integration tests. \r\n\r\n**Describe alternatives you've considered**\r\nThe current alternative of running a single Python test client is not very useful. \r\n\r\n**Additional context**\r\nIntegration tests are more useful at this point than unit tests to make sure all the major happy paths and scenarios for calling the server APIs work as intended. ","number":14,"title":"[feature] Create a test suite runner for Python test clients"}]