Introduce a first-class concept of 'scope' into Bicep for extensible resources, so that Stacks understands ownership, and is able to request deletion of extensible resources that are deployed 'inside' Azure resources.
Stacks currently stores a list of `id` properties for Azure resources to go and clean up if they are removed from the Stack. Extensible resources generally require authentication with another control plane; these details are difficult to persist or provide to the Stacks RP in a reusable manner.
Stacks and What-If would probably also benefit from the concept of 'ownership' - e.g. understanding that particular extensible resources are scoped 'within' Azure resources.
It becomes especially tricky when handling 'runtime' resources in deeply nested templates, as Stacks relies upon having a deterministic list of resource ids affected by a particular deployment. For example, it would be difficult to represent how to obtain authentication for a deeply nested AKS resource.
Note: this still requires a lot of parameter passing, but it is better than passing `kubeConfig` around. It will also benefit from the providers proposal to address the verbosity.
Introduce the first-class concept of 'scope':

```bicep
param aks resource 'Microsoft.ContainerService/managedClusters@2022-04-01'

import kubernetes as k8s {
  scope: toDeploymentScope(aks)
  namespace: 'default'
}

// alternative, equivalent
import kubernetes as k8s {
  scope: {
    type: 'AzureResourceManager'
    id: aks.id
  }
  namespace: 'default'
}
```
Scope will have a type definition like the following:

```bicep
{
  type: 'AzureResourceManager'
  id: '/subscriptions/...'
}
```
The Deployment engine will have to know how to obtain AKS credentials - e.g. by generating a `listAdminCredentials()` function call based on the `id` provided in the scope. This could be swapped out in the future if we find a better mechanism (e.g. OBO).
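As a sketch, assuming the engine reuses the existing ARM `list*` function family (the exact action name and API version here are illustrative), the generated lookup might resemble:

```bicep
// hypothetical engine-generated credential lookup, derived from the scope's id
var creds = listClusterAdminCredential(scope.id, '2022-04-01')
// the provider would then authenticate with creds.kubeconfigs[0].value
```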
The compiled template would then carry the scope in the import configuration:

```json
{
  "imports": {
    "k8s": {
      "type": "Kubernetes",
      "config": {
        "scope": {
          "type": "AzureResourceManager",
          "id": "/subscriptions/..."
        }
      }
    }
  },
  "resources": {
    "res1": {
      "import": "k8s",
      "id": "/planes/Kubernetes/import.namespace/{namespace}/type/apps%2Fdeployment/properties.metadata.name/{resourceName}",
      "properties": { ... }
    }
  }
}
```
When returning the resourceIds affected by a deployment, the deployment engine will need to include the 'scope' information for extensible resources:
```json
[
  {
    "scope": {
      "type": "AzureResourceManager",
      "id": "/subscriptions/..."
    },
    "id": "/planes/Kubernetes/import.namespace/{namespace}/type/apps%2Fdeployment/properties.metadata.name/{resourceName}"
  }
]
```
This would allow Stacks to reconstruct a template with the following structure to instruct the Deployments engine to clean up a particular extensible resource (note the addition of the "deletedResources" section):
```json
{
  "imports": {
    "k8s": {
      "type": "Kubernetes",
      "config": {
        "scope": {
          "type": "AzureResourceManager",
          "id": "/subscriptions/..."
        }
      }
    }
  },
  "resources": {},
  "deletedResources": [
    {
      "import": "k8s",
      "id": "/planes/Kubernetes/import.namespace/{namespace}/type/apps%2Fdeployment/properties.metadata.name/{resourceName}"
    }
  ]
}
```
For 'local-mode', scope could be extended to support generic Kubernetes clusters with e.g.:

```json
"scope": {
  "type": "UserOwned",
  "kubeConfig": "...."
}
```
If we're focused on the Azure scenario, this would not need to be implemented right away. Note that it's unclear how Stacks would handle this scenario, as it would imply the 'scope' needs to be persisted somehow.