@soeirosantos
Last active November 6, 2020 01:32
Shared VPC with GKE and Cloud Memorystore

Note: You can see an improved version of this tutorial on the Google Cloud community website: https://cloud.google.com/community/tutorials/shared-vpc-gke-cloud-memorystore

In this lab we are going to configure a Shared VPC with two service projects. One project will contain a GKE cluster, and the other will contain managed services accessed by applications deployed to the GKE cluster.

Let's start by checking the projects. The shared-vpc project will be the Shared VPC host; the other two, shared-cluster and tenant-a, will contain the GKE cluster and the managed services, respectively.

Before you move forward, if you are not familiar with Shared VPC concepts and nomenclature, please check the references at the end of this lab.

gcloud projects list

PROJECT_ID                                    NAME            PROJECT_NUMBER
shared-cluster-279801                         shared-cluster  48977974920
shared-vpc-279801                             shared-vpc      480781943946
tenant-a-279801                               tenant-a        507534582923

Configure Shared VPC

Enable the container.googleapis.com Service API.

gcloud services enable container.googleapis.com --project shared-vpc-279801

gcloud services enable container.googleapis.com --project shared-cluster-279801

Create a VPC network in the shared-vpc project and subnets to be used by the service projects. For the shared-cluster project we will also add secondary ranges for the Pods and Services.

gcloud compute networks create shared-net \
    --subnet-mode custom \
    --project shared-vpc-279801

gcloud compute networks subnets create k8s-subnet \
    --project shared-vpc-279801 \
    --network shared-net \
    --range 10.0.4.0/22 \
    --region us-central1 \
    --secondary-range k8s-services=10.0.32.0/20,k8s-pods=10.4.0.0/14
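The primary range is used for the cluster nodes, while the two secondary ranges back GKE's alias IPs for Services and Pods. These three ranges must not overlap; as an illustrative sanity check (not part of the lab), you can verify that with Python's ipaddress module:

```python
import ipaddress

# Ranges passed to `gcloud compute networks subnets create k8s-subnet`.
nodes = ipaddress.ip_network("10.0.4.0/22")      # primary range (nodes)
services = ipaddress.ip_network("10.0.32.0/20")  # k8s-services secondary range
pods = ipaddress.ip_network("10.4.0.0/14")       # k8s-pods secondary range

# GKE requires the primary and secondary ranges to be disjoint.
for a, b in [(nodes, services), (nodes, pods), (services, pods)]:
    assert not a.overlaps(b), f"{a} overlaps {b}"
print("no overlaps")
```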

gcloud compute networks subnets create tenant-a-subnet \
    --project shared-vpc-279801 \
    --network shared-net \
    --range 172.16.4.0/22 \
    --region us-central1

To perform the next steps, the following roles are required: Compute Shared VPC Admin (compute.xpnAdmin) and Project IAM Admin (resourcemanager.projectIamAdmin).

Enable the Shared VPC and associate the service projects:

gcloud compute shared-vpc enable shared-vpc-279801

gcloud compute shared-vpc associated-projects add shared-cluster-279801 \
    --host-project shared-vpc-279801

gcloud compute shared-vpc associated-projects add tenant-a-279801 \
    --host-project shared-vpc-279801

Now let's inspect the IAM policies to find the service accounts that we'll need in the next steps.

gcloud projects get-iam-policy shared-cluster-279801

bindings:
- members:
  - serviceAccount:service-48977974920@compute-system.iam.gserviceaccount.com
  role: roles/compute.serviceAgent
- members:
  - serviceAccount:service-48977974920@container-engine-robot.iam.gserviceaccount.com
  role: roles/container.serviceAgent
- members:
  - serviceAccount:48977974920-compute@developer.gserviceaccount.com
  - serviceAccount:48977974920@cloudservices.gserviceaccount.com
  - serviceAccount:service-48977974920@containerregistry.iam.gserviceaccount.com
  role: roles/editor

gcloud projects get-iam-policy tenant-a-279801

bindings:
- members:
  - serviceAccount:service-507534582923@compute-system.iam.gserviceaccount.com
  role: roles/compute.serviceAgent
- members:
  - serviceAccount:service-507534582923@container-engine-robot.iam.gserviceaccount.com
  role: roles/container.serviceAgent
- members:
  - serviceAccount:507534582923-compute@developer.gserviceaccount.com
  - serviceAccount:507534582923@cloudservices.gserviceaccount.com
  - serviceAccount:service-507534582923@containerregistry.iam.gserviceaccount.com
  role: roles/editor

gcloud compute networks subnets get-iam-policy k8s-subnet \
    --project shared-vpc-279801 \
    --region us-central1
etag: ACAB

gcloud compute networks subnets get-iam-policy tenant-a-subnet \
    --project shared-vpc-279801 \
    --region us-central1
etag: ACAB

In the next steps we are going to grant the service projects' service accounts access to the subnets defined in the host project.

Notice the etag value in the results above; we will use it in the policies we create in the next step.

Create the following file with the service accounts from the shared-cluster project:

# k8s-subnet-policy.yaml
bindings:
- members:
  - serviceAccount:48977974920@cloudservices.gserviceaccount.com
  - serviceAccount:service-48977974920@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: ACAB

And apply the policy to bind it to the k8s-subnet subnet:

gcloud compute networks subnets set-iam-policy k8s-subnet \
    k8s-subnet-policy.yaml \
    --project shared-vpc-279801 \
    --region us-central1

Do the same for the tenant-a-subnet using the tenant-a project's service account.

# tenant-a-subnet-policy.yaml
bindings:
- members:
  - serviceAccount:507534582923@cloudservices.gserviceaccount.com
  role: roles/compute.networkUser
etag: ACAB

gcloud compute networks subnets set-iam-policy tenant-a-subnet \
    tenant-a-subnet-policy.yaml \
    --project shared-vpc-279801 \
    --region us-central1

Finally, add the GKE service account from the shared-cluster project to the host project with the roles/container.hostServiceAgentUser role.

gcloud projects add-iam-policy-binding shared-vpc-279801 \
   --member serviceAccount:service-48977974920@container-engine-robot.iam.gserviceaccount.com \
   --role roles/container.hostServiceAgentUser

Verify that the k8s-subnet is usable from the shared-cluster project:

gcloud container subnets list-usable \
    --project shared-cluster-279801 \
    --network-project shared-vpc-279801

Checking Connectivity

Next, we create a simple cluster to check connectivity:

gcloud container clusters create shared-cluster \
    --project shared-cluster-279801 \
    --zone=us-central1-a \
    --enable-ip-alias \
    --network projects/shared-vpc-279801/global/networks/shared-net \
    --subnetwork projects/shared-vpc-279801/regions/us-central1/subnetworks/k8s-subnet \
    --cluster-secondary-range-name k8s-pods \
    --services-secondary-range-name k8s-services

gcloud compute instances list --project shared-cluster-279801

NAME                                           ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-shared-cluster-default-pool-a6da1fc4-579h  us-central1-a  n1-standard-1               10.0.4.4     104.154.189.60  RUNNING
gke-shared-cluster-default-pool-a6da1fc4-m05f  us-central1-a  n1-standard-1               10.0.4.3     34.67.86.185    RUNNING
gke-shared-cluster-default-pool-a6da1fc4-njlg  us-central1-a  n1-standard-1               10.0.4.2     34.70.153.158   RUNNING

We want to create a Google Cloud Memorystore instance that should be reachable from the GKE cluster.

First, we need to configure private services access.

gcloud services enable servicenetworking.googleapis.com --project shared-vpc-279801

gcloud beta compute addresses create \
memorystore-pvt-svc --global --prefix-length=24 \
--description "memorystore private service range" --network shared-net \
--purpose vpc_peering --project shared-vpc-279801

gcloud services vpc-peerings connect \
--service servicenetworking.googleapis.com --ranges memorystore-pvt-svc \
--network shared-net --project shared-vpc-279801

Now create the Cloud Memorystore for Redis instance.

gcloud services enable redis.googleapis.com --project tenant-a-279801
gcloud services enable servicenetworking.googleapis.com --project tenant-a-279801

gcloud redis instances create my-redis --size 5 --region us-central1 \
--network=projects/shared-vpc-279801/global/networks/shared-net \
--connect-mode=private-service-access --project tenant-a-279801

gcloud redis instances list --region us-central1 --project tenant-a-279801

INSTANCE_NAME  VERSION    REGION       TIER   SIZE_GB  HOST          PORT  NETWORK     RESERVED_IP      STATUS  CREATE_TIME
my-redis       REDIS_4_0  us-central1  BASIC  5        10.177.176.3  6379  shared-net  10.177.176.0/29  READY   2020-06-09T13:55:36
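Note the RESERVED_IP column: Memorystore carves a small block (here a /29) for the instance out of the private services range we reserved earlier, and the instance HOST falls inside it. An illustrative check of that relationship (values taken from the listing above):

```python
import ipaddress

# HOST and RESERVED_IP values from the `gcloud redis instances list` output.
redis_host = ipaddress.ip_address("10.177.176.3")
reserved = ipaddress.ip_network("10.177.176.0/29")

# The instance host address is allocated from its reserved block.
assert redis_host in reserved
print(f"{redis_host} is inside {reserved}")
```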

Let's check whether we can access this Redis instance from the cluster:

gcloud container clusters get-credentials shared-cluster --zone us-central1-a --project shared-cluster-279801

kubectl run -it --rm --image gcr.io/google_containers/redis:v1 r2 --restart=Never -- sh

From the shell inside the pod, run:

redis-cli -h 10.177.176.3 info

redis-benchmark -c 100 -n 100000 -d 1024 -r 100000 -t PING,SET,GET,INCR,LPUSH,RPUSH,LPOP,RPOP,SADD,SPOP,MSET -h 10.177.176.3 -q
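redis-cli and redis-benchmark talk to Memorystore over the plain open-source Redis protocol (RESP), so any Redis client on the VPC can connect the same way. As an illustrative aside (not something the lab requires), here is a minimal sketch of how a command is framed on the wire:

```python
def encode_resp(*args: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings."""
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(parts)

# A PING is a one-element array; the server replies +PONG.
print(encode_resp("PING"))  # b'*1\r\n$4\r\nPING\r\n'
```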

Now let's see if we can access a service running inside a VM instance.

gcloud compute instances create my-instance --zone us-central1-a \
--subnet projects/shared-vpc-279801/regions/us-central1/subnetworks/tenant-a-subnet \
--project tenant-a-279801

Created [https://www.googleapis.com/compute/v1/projects/tenant-a-279801/zones/us-central1-a/instances/my-instance].
NAME         ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
my-instance  us-central1-a  n1-standard-1               172.16.4.3   34.69.82.109  RUNNING

We can try to ssh:

gcloud compute ssh my-instance \
    --project tenant-a-279801 \
    --zone us-central1-a

You'll notice that we can't. We need to add a firewall rule to enable SSH access.

gcloud compute firewall-rules create shared-net-ssh \
    --project shared-vpc-279801 \
    --network shared-net \
    --direction INGRESS \
    --allow tcp:22,icmp

Try to connect again with the same gcloud ssh command above.

From inside the my-instance VM, run the following command:

sudo apt update && sudo apt install -y nginx

Let's access this instance from the GKE cluster.

kubectl run -it --rm --image busybox bb8 --restart=Never -- sh

From the shell inside the pod, run:

wget -qO- 172.16.4.3

Note that the wget command hangs and doesn't return.

To enable this access, add the following firewall rule for the Kubernetes node, Pod, and Service ranges.

gcloud compute firewall-rules create k8s-access \
    --project shared-vpc-279801 \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20
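The three source ranges are exactly the node, Pod, and Service CIDRs we chose for k8s-subnet, so traffic from any cluster component is allowed. As an illustrative sanity check (not part of the lab), you can confirm that a given node or Pod IP is covered:

```python
import ipaddress

# Source ranges from the firewall rule above; they match the node,
# pod, and service CIDRs defined for k8s-subnet earlier.
allowed = [ipaddress.ip_network(c)
           for c in ("10.0.4.0/22", "10.4.0.0/14", "10.0.32.0/20")]

# Sample addresses: a node IP from the instances listing above, plus a
# hypothetical pod IP taken from the k8s-pods secondary range.
for ip in ("10.0.4.4", "10.4.0.15"):
    assert any(ipaddress.ip_address(ip) in net for net in allowed), ip
print("covered")
```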

Run the pod and try the wget again.

Wrap up

We have created a Shared VPC and attached two service projects to it. From the GKE cluster, we can access the Memorystore instance through the private service connection, and we can also reach services running on Compute Engine VM instances, provided the proper firewall rules are in place.

References
