@eriknelson
Forked from pmorie/proposal.md
Created March 8, 2018 17:05

Proposal: Controlling access to Services and Plans

Abstract

Proposes changes to service-catalog to facilitate controlling access to certain services and plans.

Motivation

Not all services and plans should be available to all users. The existing cluster-scoped resources for brokers, services, and plans are not sufficient to implement access control that ensures users can see and use only the services and plans they should.

There are two ingredients to successfully controlling access to services and plans:

  • Namespaced versions of these resources, so that access to services can be controlled along namespace boundaries
  • API surfaces that allow whitelisting/blacklisting which services and plans from a broker's catalog have k8s resources created for them

Use Cases

  • As a cluster operator, I want to control access to certain services and plans so that only certain namespaces are allowed to use them.

Use Case: Controlling access to services and plans

Certain services or plans are not suitable for consumption by every user of a cluster. For example, a service may have a monetary cost associated with it or grant the user a high degree of privilege. In order to prevent users from gaining access to services and/or plans that they should not be able to use, we must be able to:

  1. Keep the services and plans in the cluster-scoped catalog limited to those that anyone can use
  2. Allow services and plans that are only for certain users to be used only by the users that should have access to them

For example: a broker may offer a highly-privileged service that ordinary users should not even be aware of, let alone allowed to use. In this case, the cluster administrator should be able to keep that service from appearing in the cluster-scoped catalog of services and plans but also make it available to users with the appropriate level of privilege.

Goals and non-goals

There are many related problems in service-catalog that users want solved, but we need to keep the scope of this proposal controlled. In that light, let's establish the goals of this proposal:

  • Make it possible to keep the cluster-scoped catalog limited to services and plans that everyone should be able to use
  • Make it possible to add privileged services and plans into specific namespaces

The following are valid goals but outside the scope of this proposal:

  • Make it possible to use a service and plan from namespace X to provision a ServiceInstance in namespace Y
  • Allow creating a ServiceBinding in namespace X to a ServiceInstance in namespace Y

We should take care to achieve the goals of this proposal without preventing further progress on other issues that are out of scope.

Analysis

Why namespaces?

Unfortunately, it is not possible in Kubernetes to create an ACL (access control list) filtering scheme that shows users only the resources in a collection that they are allowed to see. The fundamental gap here is that an external authorizer may have its state changed at any time, out of band from Kubernetes, making it impossible to implement a correct LIST or WATCH operation from a given resource version.

Since ACL-filtering the cluster-scoped list of services and plans is not a realistic option, we must find another method of controlling read/write access to resources. In Kubernetes, namespaces are the de facto way of performing this type of access control.

Adding namespaced counterparts to the existing cluster-scoped resources for service brokers (ClusterServiceBroker), services (ClusterServiceClass), and plans (ClusterServicePlan) allows us to take advantage of the existing namespace concept to perform access control.

Filtering services and plans from a broker

Adding namespaced resources for brokers, services and plans is necessary but not sufficient to control access to services and plans. A single broker may offer a mix of services that all users should be able to access and services that should only be usable by some users.

In order to prevent a broker that offers a mix of unprivileged and privileged services from exposing the privileged ones in the cluster-scoped catalog, there must be a way to filter the services and plans exposed by a broker. This can be accomplished through whitelists and blacklists that control which services and plans in a broker's catalog have service-catalog resources created for them. For example:

  • A cluster administrator should be able to prevent privileged services from appearing in the cluster-scoped catalog
  • A cluster administrator should be able to add certain privileged services to a namespace
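The filtering idea above can be sketched as a simple white/blacklist over service names. This is purely illustrative (the type and field names here are assumptions, not the proposed API; the actual API surface is out of scope for this proposal):

```go
package main

import "fmt"

// catalogFilter is an illustrative white/blacklist over service names;
// the real API surface for filtering is discussed elsewhere.
type catalogFilter struct {
	whitelist map[string]bool // if non-empty, only these services pass
	blacklist map[string]bool // these services are always excluded
}

// allows reports whether a service from the broker's catalog should have
// service-catalog resources created for it.
func (f catalogFilter) allows(serviceName string) bool {
	if f.blacklist[serviceName] {
		return false
	}
	if len(f.whitelist) > 0 {
		return f.whitelist[serviceName]
	}
	return true
}

func main() {
	f := catalogFilter{blacklist: map[string]bool{"privileged-db": true}}
	// Only services that pass the filter would appear in the catalog.
	for _, svc := range []string{"public-queue", "privileged-db"} {
		fmt.Println(svc, f.allows(svc))
	}
}
```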

Design

In this proposal we'll focus on adding the namespaced ServiceBroker, ServiceClass, and ServicePlan resources. For details on filtering which services and plans in a broker's catalog have k8s resources created for them, see kubernetes-retired/service-catalog#1773.

Namespaced resources

The namespaced resources for brokers, services, and plans should have the same behaviors as their cluster-scoped cousins. To a great degree, we can reuse the same API fields in the namespaced resources, but there are some exceptions:

ServiceBroker resource

The API for the ServiceBroker resource should differ from ClusterServiceBroker in exactly one area:

  • A user should only be able to specify a secret within the same namespace to hold the auth information

ServiceClass resource

Differences between ClusterServiceClass and ServiceClass:

  • ServiceClass.Spec should have ServiceBrokerName instead of ClusterServiceBrokerName

ServicePlan resource

Differences between ClusterServicePlan and ServicePlan:

  • ServicePlan.Spec should have ServiceBrokerName instead of ClusterServiceBrokerName
  • ServicePlan.Spec should have ServiceClassRef instead of ClusterServiceClassRef

Changes to ServiceInstance

The ServiceInstance resource should be changed to allow users to unambiguously specify a ServiceClass and ServicePlan instead of the cluster-level resources.

  • Add fields to PlanReference to represent the external and k8s names of ServiceClass and ServicePlan (as opposed to the cluster-scoped versions)
  • Add reference fields to ServiceInstanceSpec that hold the references to the resolved namespaced resources

Implementation plan

NOTE: To facilitate reuse of fields shared between the cluster-scoped resources and their new namespaced counterparts, shared fields should be extracted into their own structs and embedded where used. The naming convention for these will be SharedX, e.g. SharedServiceBrokerSpec.

These changes should be made for both the versioned and unversioned types.go, found in apis/servicecatalog/v1beta1/types.go and apis/servicecatalog/types.go.

TODO: Naming okay?

TODO: Fuzzer?

Adding ServiceBroker

  1. Extract shared fields into SharedServiceBrokerSpec type:
type SharedServiceBrokerSpec struct {
	URL string `json:"url"`
	InsecureSkipTLSVerify bool `json:"insecureSkipTLSVerify,omitempty"`
	CABundle []byte `json:"caBundle,omitempty"`
	RelistBehavior ServiceBrokerRelistBehavior `json:"relistBehavior"`
	RelistDuration *metav1.Duration `json:"relistDuration,omitempty"`
	RelistRequests int64 `json:"relistRequests"`
}
  2. Remove the shared fields from ClusterServiceBrokerSpec and embed SharedServiceBrokerSpec. AuthInfo should remain as-is.

  3. Add a ServiceBrokerSpec type with embedded SharedServiceBrokerSpec. The auth info here should be an optional (pointer) string.

type ServiceBrokerSpec struct {
	SharedServiceBrokerSpec
	AuthInfo *string
}

TODO: Should this simply be a string, or does ServiceBrokerAuthInfo need altering?

  4. Add the ServiceBroker type:
type ServiceBroker struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec ServiceBrokerSpec `json:"spec,omitempty"`
	Status ClusterServiceBrokerStatus `json:"status,omitempty"`
}

TODO: Is anything unique required for this type because it is namespaced? Is ClusterServiceBrokerStatus acceptable if nothing about the status needs to change?

Validations and Defaults

TODO: Fill in
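While the rules are still TODO, a hypothetical sketch of what validating ServiceBrokerSpec might look like follows. Everything here is an assumption: the function name, the assumed RelistBehavior values ("Duration", "Manual"), and the specific checks are placeholders, not the final rule set:

```go
package main

import (
	"fmt"
	"net/url"
)

// ServiceBrokerSpec is a trimmed local copy of the fields relevant to
// validation; the real struct lives in apis/servicecatalog/types.go.
type ServiceBrokerSpec struct {
	URL            string
	RelistBehavior string // "Duration" or "Manual" (assumed values)
	AuthInfo       *string
}

// validateServiceBrokerSpec is a hypothetical sketch: it checks that the
// broker URL is present and parseable, that RelistBehavior is one of the
// known values, and that AuthInfo is non-empty when set.
func validateServiceBrokerSpec(spec ServiceBrokerSpec) []error {
	var errs []error
	if spec.URL == "" {
		errs = append(errs, fmt.Errorf("spec.url: required"))
	} else if _, err := url.ParseRequestURI(spec.URL); err != nil {
		errs = append(errs, fmt.Errorf("spec.url: %v", err))
	}
	switch spec.RelistBehavior {
	case "Duration", "Manual":
	default:
		errs = append(errs, fmt.Errorf("spec.relistBehavior: must be Duration or Manual"))
	}
	if spec.AuthInfo != nil && *spec.AuthInfo == "" {
		errs = append(errs, fmt.Errorf("spec.authInfo: must not be empty when set"))
	}
	return errs
}

func main() {
	errs := validateServiceBrokerSpec(ServiceBrokerSpec{
		URL:            "https://broker.example.com",
		RelistBehavior: "Duration",
	})
	fmt.Println(len(errs)) // a valid spec produces no errors
}
```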

Adding the ServiceClass resource

Extract shared fields in much the same way as was done with SharedServiceBrokerSpec, add the ServiceClassSpec type with its unique field, and add the ServiceClass type. Unique to ServiceClassSpec is the namespace-scoped ServiceBrokerName:

type SharedServiceClassSpec struct {
	ExternalName string `json:"externalName"`
	ExternalID string `json:"externalID"`
	Description string `json:"description"`
	Bindable bool `json:"bindable"`
	BindingRetrievable bool `json:"bindingRetrievable"`
	PlanUpdatable bool `json:"planUpdatable"`
	ExternalMetadata *runtime.RawExtension `json:"externalMetadata,omitempty"`
	Tags []string `json:"tags,omitempty"`
	Requires []string `json:"requires,omitempty"`
}

type ClusterServiceClassSpec struct {
	SharedServiceClassSpec
	ClusterServiceBrokerName string `json:"clusterServiceBrokerName"`
}

type ServiceClassSpec struct {
	SharedServiceClassSpec
	ServiceBrokerName string `json:"serviceBrokerName"`
}

type ServiceClass struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec ServiceClassSpec `json:"spec,omitempty"`
	Status ClusterServiceClassStatus `json:"status,omitempty"`
}

Adding the ServicePlan resource

As described in the design above, the differences between ClusterServicePlan and ServicePlan are:

  • ServicePlan.Spec should have ServiceBrokerName instead of ClusterServiceBrokerName
  • ServicePlan.Spec should have ServiceClassRef instead of ClusterServiceClassRef
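Following the same extraction pattern, a sketch of what the shared and namespaced plan specs might look like. The field set is an illustrative subset, not the full API, and ServiceClassRef is shown as a plain local-object reference, which is an assumption:

```go
package main

import "fmt"

// LocalObjectReference stands in for the k8s core type of the same name;
// for a namespaced ServicePlan, the referenced ServiceClass lives in the
// same namespace, so a name is sufficient.
type LocalObjectReference struct {
	Name string `json:"name"`
}

// SharedServicePlanSpec holds fields common to ClusterServicePlan and
// ServicePlan (illustrative subset).
type SharedServicePlanSpec struct {
	ExternalName string `json:"externalName"`
	ExternalID   string `json:"externalID"`
	Description  string `json:"description"`
	Free         bool   `json:"free"`
}

// ClusterServicePlanSpec keeps the cluster-scoped reference fields.
type ClusterServicePlanSpec struct {
	SharedServicePlanSpec
	ClusterServiceBrokerName string               `json:"clusterServiceBrokerName"`
	ClusterServiceClassRef   LocalObjectReference `json:"clusterServiceClassRef"`
}

// ServicePlanSpec swaps in the namespaced reference fields.
type ServicePlanSpec struct {
	SharedServicePlanSpec
	ServiceBrokerName string               `json:"serviceBrokerName"`
	ServiceClassRef   LocalObjectReference `json:"serviceClassRef"`
}

func main() {
	plan := ServicePlanSpec{
		SharedServicePlanSpec: SharedServicePlanSpec{ExternalName: "small", Free: true},
		ServiceBrokerName:     "example-broker",
		ServiceClassRef:       LocalObjectReference{Name: "example-class"},
	}
	fmt.Println(plan.ExternalName, plan.ServiceBrokerName)
}
```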

Changes to ServiceInstance

The ServiceInstance resource should be changed to allow users to unambiguously specify a ServiceClass and ServicePlan instead of the cluster-level resources.

  • Add fields to PlanReference to represent the external and k8s names of ServiceClass and ServicePlan (as opposed to the cluster-scoped versions)
  • Add reference fields to ServiceInstanceSpec that hold the references to the resolved namespaced resources
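The PlanReference change can be sketched as follows. The proposed field names (ServiceClassExternalName and so on) mirror the existing cluster-scoped fields but are assumptions until the API is finalized:

```go
package main

import "fmt"

// PlanReference lets a ServiceInstance identify the class and plan it
// wants. The existing cluster-scoped fields are shown alongside the
// proposed namespaced ones; a given instance should set only one style.
type PlanReference struct {
	// Existing cluster-scoped fields.
	ClusterServiceClassExternalName string `json:"clusterServiceClassExternalName,omitempty"`
	ClusterServicePlanExternalName  string `json:"clusterServicePlanExternalName,omitempty"`
	ClusterServiceClassName         string `json:"clusterServiceClassName,omitempty"`
	ClusterServicePlanName          string `json:"clusterServicePlanName,omitempty"`

	// Proposed namespaced fields (assumed names).
	ServiceClassExternalName string `json:"serviceClassExternalName,omitempty"`
	ServicePlanExternalName  string `json:"servicePlanExternalName,omitempty"`
	ServiceClassName         string `json:"serviceClassName,omitempty"`
	ServicePlanName          string `json:"servicePlanName,omitempty"`
}

// usesNamespacedClass reports whether the reference targets a namespaced
// ServiceClass rather than a ClusterServiceClass.
func (r PlanReference) usesNamespacedClass() bool {
	return r.ServiceClassExternalName != "" || r.ServiceClassName != ""
}

func main() {
	ref := PlanReference{
		ServiceClassExternalName: "example-class",
		ServicePlanExternalName:  "small",
	}
	fmt.Println(ref.usesNamespacedClass())
}
```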

Controller Manager Changes

TODO
