@thesaravanakumar
Last active February 8, 2023 20:17
PPT for FBS/EU IT audience

Network Endpoint Groups

Abstraction layer that enables container-native load balancing

  • Load balancers cannot identify individual Pods on a node (VM); they only see instance groups or nodes. This is where NEGs come in: NEGs are integrated with the Kubernetes Ingress controller running on GCP.
  • A network endpoint group (NEG) is a configuration object that specifies a group of backend endpoints or services. A common use case for this configuration is deploying services in containers. NEGs also let you distribute traffic to applications running on your backend instances at a granular level.

Types of NEGs

  • Zonal NEG - One or more internal IP address endpoints that resolve to VMs or Pods.
  • Internet NEG - A single internet-routable endpoint that is hosted outside of Google Cloud.
  • Serverless NEG - A single endpoint within Google's network that resolves to an App Engine, Cloud Functions, API Gateway, or Cloud Run service.
  • Hybrid connectivity NEG - One or more endpoints that resolve to on-premises services or server applications in another cloud.
  • Private Service Connect NEG - A single endpoint that resolves to one of the following:
    • A Google-managed regional API endpoint
    • A managed service published using Private Service Connect
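To make the endpoint idea concrete, here is a minimal sketch of a zonal NEG as a named group of IP:port endpoints that a load balancer can target directly. This is a hypothetical Python model for illustration, not the GCP API; the class and method names are invented.

```python
# Hypothetical model of a zonal NEG: a named group of (ip, port) endpoints.
# Illustration of the concept only, not the GCP API.
from dataclasses import dataclass, field

@dataclass
class ZonalNEG:
    name: str
    zone: str
    endpoints: set = field(default_factory=set)  # {(ip, port), ...}

    def attach(self, ip: str, port: int) -> None:
        """Register a backend endpoint (e.g. a Pod's IP and port)."""
        self.endpoints.add((ip, port))

    def detach(self, ip: str, port: int) -> None:
        """Remove an endpoint, e.g. when a Pod is deleted."""
        self.endpoints.discard((ip, port))

neg = ZonalNEG(name="web-neg", zone="us-west1-a")
neg.attach("10.8.0.5", 8080)   # Pod 1
neg.attach("10.8.1.7", 8080)   # Pod 2
neg.detach("10.8.0.5", 8080)   # Pod 1 rescheduled away
print(sorted(neg.endpoints))   # [('10.8.1.7', 8080)]
```

The point of the abstraction: the load balancer's view is just this endpoint set, updated as Pods come and go.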

Cloud Load balancing

  • It is a fully distributed, software-defined solution that balances user traffic across multiple backends to avoid congestion and ensure low latency. There are different types of load balancing depending on the type of traffic you are dealing with and whether it is global or regional.


  • Let's say you have a backend instance deployed in us-west with a load-balancing virtual IP configured. When your user base grows into another region, all you need to do is create instances in the additional regions; there is no change to the virtual IP or the DNS settings. If any instance becomes overloaded, incoming traffic can be rerouted to another region and routed back once the instance is ready to serve again.

  • Cloud Load Balancing uses anycast virtual IPs, giving you a single global frontend virtual IP address. It also provides cross-regional failover, fast autoscaling, and scales to millions of queries per second.

  • That is external load balancing at layer 7. But in any three-tier app, after the frontend you have the middleware and the data sources to interact with in order to fulfill a user's request. That's where you need layer 4 internal load balancing between the frontend and the other internal tiers. Layer 4 internal load balancing handles TCP/UDP traffic behind an RFC 1918 VIP, where the client IP is preserved. You get automatic health checks, and there is no middle proxy: it leverages the software-defined networking control and data planes for load balancing.


  • For global HTTPS load balancing, you have the global anycast virtual IPs, IPv4 or IPv6, associated with the forwarding rule, which directs traffic to a target proxy. The target proxy terminates the client's session, and for HTTPS this is where you deploy your certificates. This is not a single device, but distributed logic throughout the infrastructure. The configured URL map provides layer 7 routing and directs the client request to the appropriate backend service. The backend services can be managed instance groups or network endpoint groups for your containerized workloads. This is also where service capacity and health are determined, and where Cloud CDN is enabled to cache content for improved performance. You can set up firewall rules to control traffic to and from your backends here.
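The URL-map step above can be sketched as a simple host/path match. This is a hypothetical Python illustration of the routing logic only; the function and the rule layout are invented, not the Cloud Load Balancing API.

```python
# Hypothetical sketch of URL-map routing: host and path rules select a
# backend service. Illustrative only, not the Cloud Load Balancing API.
def route(url_map: dict, host: str, path: str) -> str:
    """Return the backend service name for a request."""
    path_rules = url_map.get(host, url_map["*"])          # host rule
    for prefix, backend in path_rules["paths"].items():   # path rules
        if prefix != "/" and path.startswith(prefix):
            return backend
    return path_rules["paths"]["/"]                       # default backend

url_map = {
    "video.example.com": {"paths": {"/": "video-backend"}},
    "*": {"paths": {"/static": "cdn-backend", "/": "web-backend"}},
}

print(route(url_map, "video.example.com", "/clip/42"))      # video-backend
print(route(url_map, "www.example.com", "/static/app.css")) # cdn-backend
print(route(url_map, "www.example.com", "/checkout"))       # web-backend
```

The same idea underlies content-based distribution: video traffic and static assets can be steered to different backend services from one frontend VIP.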


  • Internal load balancing setup works the same way. We still have a forwarding rule, but here, it points directly to a backend service. The forwarding rule has the virtual IP address, protocol, and up to five ports.

Certs

  • Cloud Load Balancing also supports multiple SSL certificates, in case you want to serve multiple domains using the same load-balancing IP address and port.
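Serving multiple domains from one IP and port works because the client's TLS handshake names the host (SNI), and the matching certificate is selected. A toy sketch of that selection logic follows; it is a hypothetical illustration, not GCP's implementation, and the certificate names are invented.

```python
# Toy SNI-style certificate selection: pick the cert whose name matches
# the requested hostname, falling back to a wildcard. Illustrative only.
def pick_cert(certs: dict, hostname: str) -> str:
    if hostname in certs:                        # exact match first
        return certs[hostname]
    wildcard = "*." + hostname.split(".", 1)[1]  # e.g. *.example.com
    if wildcard in certs:
        return certs[wildcard]
    raise LookupError(f"no certificate for {hostname}")

certs = {
    "shop.example.com": "cert-shop",
    "*.example.com": "cert-wildcard",
}
print(pick_cert(certs, "shop.example.com"))  # cert-shop
print(pick_cert(certs, "blog.example.com"))  # cert-wildcard
```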

---

Why Google Cloud Load Balancing is important.

  • Distribute to single or multiple regions

  • Meet HA requirements

  • Autoscaling

  • CDN

  • To decide which load balancer best suits your implementation, you need to think about whether you need global versus regional load balancing. Global load balancing means backend endpoints live in multiple regions, whereas regional load balancing means backend endpoints live in a single region.

  • External load balancers - distribute traffic coming from the internet to your GCP network.
  • Internal load balancers - distribute traffic within your GCP network.


  • External load balancing includes four options: HTTP(S) load balancing for HTTP or HTTPS traffic; the TCP proxy for TCP traffic on ports other than 80 and 8080, without SSL offload; the SSL proxy for SSL offload on ports other than 80 or 8080; and network load balancing for TCP or UDP traffic.
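The four external options above can be expressed as a small decision helper. This is a hypothetical sketch mirroring the list in the text, not an official rule set; the function name and the exact tie-breaking between TCP proxy and network load balancing are simplifying assumptions.

```python
# Hypothetical decision helper mirroring the four external LB options.
# Simplified: real port and offload rules are more nuanced.
def choose_external_lb(traffic: str, ssl_offload: bool = False) -> str:
    """traffic: 'http', 'https', 'tcp', 'udp', or 'ssl'."""
    if traffic in ("http", "https"):
        return "HTTP(S) load balancing"
    if traffic == "ssl" and ssl_offload:
        return "SSL proxy"           # SSL offload on ports other than 80/8080
    if traffic == "tcp" and not ssl_offload:
        return "TCP proxy"           # TCP on ports other than 80/8080, no offload
    return "Network load balancing"  # plain TCP or UDP pass-through

print(choose_external_lb("https"))                  # HTTP(S) load balancing
print(choose_external_lb("ssl", ssl_offload=True))  # SSL proxy
print(choose_external_lb("udp"))                    # Network load balancing
```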

  • The global HTTP(S) load balancer is for layer 7 traffic and is built using the Google frontend engines at the edge of Google's network.

  • The regional network load balancer is for layer 4 traffic and is built using Maglevs. Google built Maglev in 2008 to load-balance all traffic coming into its data centers and to distribute it to the frontend engines at the network edge. Traffic is distributed to a set of regional backend instances.

  • Use it when you want to preserve the client IP address all the way to the backend instance and perform TLS termination on those instances.


- Your decision between the TCP and SSL proxies depends on whether you require SSL offload.


- Regional layer 7 internal load balancing is based on Google's Andromeda network virtualization stack. Like the HTTP(S) load balancer and the network load balancer, internal layer 7 load balancing is neither a hardware appliance nor an instance-based solution, and it can support as many connections per second as you need, since there is no load balancer in the path between your client and the backend instances.


Global versus regional load balancing

  • Use global load balancing when your backends are distributed across multiple regions, your users need access to the same applications and content, and you want to provide access by using a single anycast IP address. Global load balancing can also provide IPv6 termination.
  • Use regional load balancing when your backends are in one region, and you only require IPv4 termination.


  • GFE - Google front ends
  • Andromeda
  • Maglev
  • Envoy

GCP load balancer features

  • HTTP(S) load balancing
  • Cloud Logging
  • TCP/SSL load balancing
  • Seamless autoscaling
  • SSL offload
  • High-fidelity health checks
  • Cloud CDN integration


  • On the global side, we have the HTTP(S) load balancers and the proxies for SSL and TCP. On the regional side, we have the external-facing network load balancer and internal layer 4 load balancing. Regional means the backend instances are constrained to a specific region; global means backend instances can span multiple regions.


  • Think of traditional load balancers in public clouds: they are regional, with a regional VIP address and a regional set of backend instances. Say you have a service distributed across three regions; you then have three different VIPs. To load-balance globally across those three regions, you need a DNS load balancer to map the client request to one of those VIPs, but there are several challenges with this approach. Imagine an instance in one region going away: the load-balancing and DNS infrastructure has to learn of that change. If a client caches the IP address, that can result in a suboptimal selection. Finally, your capacity is siloed; resources in one region cannot be used in another.


  • In load balancing, you have a frontend and a backend. On the frontend you have the global anycast VIP, which is associated with a forwarding rule. The forwarding rule points to a target proxy, which can be HTTP, HTTPS, TCP, or SSL. The target proxy is also where URL maps live; a URL map matches a client URL, based on host and path rules, to a specific backend service. This is where the backend portion begins. A backend service is made up of backends, which can be managed instance groups or network endpoint groups. The backend service is also where health-check configuration and serving capacity are associated.

Named port -> the port used for communication between the proxy layer and the backends. It doesn't have to be the same as the frontend port.

DEMO?

Internal LB


  • You want to scale and grow instances behind a virtual IP address that's only accessible from your internal instances; for these use cases, we have the internal load balancer. With the layer 4 internal load balancer, you're effectively load balancing behind an RFC 1918 private virtual IP address. You are load balancing TCP and UDP, similar to a network load balancer, using 2-, 3-, or 5-tuple hashing. The client IP is preserved, and you can have TCP, HTTP, or HTTPS health checks. The key takeaway: there is no middle proxy; effectively, there is no load balancer device at all.
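The 2-, 3-, or 5-tuple hashing mentioned above can be sketched as follows: the selected fields of a flow are hashed, and the hash deterministically picks a backend, so packets of one flow always land on the same VM. This is a simplified illustration, not Andromeda's actual algorithm; the function and field choices are assumptions.

```python
# Simplified n-tuple hashing: hash selected flow fields to pick a backend.
# Illustrative only; not Google's actual implementation.
import hashlib

def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, proto, tuples=5):
    fields = {
        2: (src_ip, dst_ip),
        3: (src_ip, dst_ip, proto),
        5: (src_ip, src_port, dst_ip, dst_port, proto),
    }[tuples]
    digest = hashlib.sha256("|".join(map(str, fields)).encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
# The same flow always maps to the same backend:
a = pick_backend(backends, "10.0.1.9", 40001, "10.0.0.100", 80, "tcp")
b = pick_backend(backends, "10.0.1.9", 40001, "10.0.0.100", 80, "tcp")
print(a == b)  # True
```

Choosing fewer tuple fields (2 or 3) makes the hash coarser, which gives a form of session affinity: all connections from one client IP stick to one backend.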


  • The way it works: the underlying software-defined networking layer, Andromeda, takes care of connection tracking and consistent hashing, and sends traffic directly from the client VM to the backend. That's how the client IP is preserved. What's even more important to note is that this is a very scale-out architecture: because there is no choke point, you can have a very high-performance load balancer. The data model should look familiar.


Container based load balancing


  • With the network endpoint group concept, the load balancer now has visibility down to the Pod level, using IP and port pairs. You can expect better latency and better network utilization, because traffic no longer lands on one node and then gets translated to a different one, and better health checking, because instead of health-checking the node and translating, the load balancer health-checks the Pods directly.
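The difference between instance-group and NEG backends can be sketched with hypothetical data: with instance groups the load balancer only sees nodes (and a node may forward to a Pod elsewhere, a second hop), while with NEGs it sees every Pod endpoint directly. The node names and addresses below are invented for illustration.

```python
# Hypothetical comparison of what the load balancer "sees" with instance
# groups (nodes) versus NEGs (Pod IP:port pairs). Illustrative only.
nodes = {
    "node-a": ["10.8.0.5:8080"],           # Pods running on node-a
    "node-b": ["10.8.1.7:8080"],           # Pods running on node-b
}

# Instance-group view: the LB targets nodes; kube-proxy may then forward
# the request to a Pod on a different node (extra hop).
instance_group_targets = list(nodes)

# NEG view: the LB targets each Pod endpoint directly; no extra hop,
# and health checks probe the Pods themselves.
neg_targets = [ep for eps in nodes.values() for ep in eps]

print(instance_group_targets)  # ['node-a', 'node-b']
print(neg_targets)             # ['10.8.0.5:8080', '10.8.1.7:8080']
```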


- Have HTTP and TLS running everywhere for better privacy and data integrity.


- Use it in concert with Cloud Armor and Identity-Aware Proxy, and layer it with firewall rules. With the Google global network and the global load balancer, you can absorb, mitigate, and dissipate a lot of the volumetric layer 3 and layer 4 attacks; that comes with the global load balancer. For application-layer attacks, take a look at Cloud Armor. With Cloud Armor, you can specify security policies such as IP allow/deny lists and geo-based access control, in addition to protection against cross-site scripting or SQL injection attacks. Then layer that with Identity-Aware Proxy.



Load Balancing

  • Distributes user traffic across multiple instances

  • Single point of entry with multiple backends

  • Fully distributed and software-defined

  • Global and Regional

  • Serve content as close as possible to users

  • Autoscaling with health checks

  • In this session, we're going to learn about Google Cloud Load Balancing and how it's used to distribute traffic within the Google Cloud Platform.

  • A load balancer distributes user traffic across multiple instances of your application; by spreading the load, you reduce the risk of your applications experiencing performance issues. A load balancer is a single point of entry with one or more backends, and within GCP these backends can be either instance groups or NEGs. Load balancers on GCP are fully distributed and software-defined: there is no hardware load balancer involved, so there is no need to worry about hardware or pre-warming time. Depending on which load balancer you choose, Google Cloud gives you the option of a global or a regional load balancer. Load balancers are meant to serve content as close as possible to users, so they don't experience increased latency, and to reduce latency between your services as well. Google Cloud also offers autoscaling with health checks in its load balancers, to make sure your traffic is always routed to healthy instances and to automatically scale the number of instances needed to handle the load.


  • With many different load balancers to choose from, it helps to know what you're looking for and how you want your traffic distributed, so Google breaks the choice down into three categories. The first is global versus regional. Global load balancing is great when your backends are distributed across multiple regions and your users need access to the same applications and content using a single anycast IP address; it also provides IPv6 termination. Regional load balancing is for serving backends in a single region and handling only IPv4 traffic. Once you've determined global versus regional, the second category is external versus internal: external load balancers distribute traffic coming into your network from the internet, and internal load balancers distribute traffic within your network. The final category is traffic type, which covers HTTP, HTTPS, TCP, and UDP.

Backend services


  • How a load balancer knows exactly what to do is defined by a backend service; this is how Cloud Load Balancing knows how to distribute traffic. The backend service configuration contains a set of values such as the protocol used to connect to backends, various distribution and session settings, health checks, and timeouts. These settings provide fine-grained control over how your load balancer behaves. An external HTTP(S) load balancer must have at least one backend service and can have multiple backend services. The backends of a backend service can be either instance groups or network endpoint groups (NEGs), but not a combination of both.

  • Moving on to the values themselves, start with health checks: Google Cloud uses the overall health state of each backend to determine its eligibility for receiving new requests or connections. Backends that respond successfully the configured number of times are considered healthy; backends that fail to respond successfully a separate configured number of times are considered unhealthy, and traffic is not routed to an unhealthy backend. Next is session affinity, which sends all requests from the same client to the same backend, provided the backend is healthy and has capacity. Service timeout is the amount of time the load balancer waits for a backend to return a full response to a request. Traffic distribution comprises three values: balancing mode, which defines how the load balancer measures backend readiness for new requests or connections; target capacity, which defines a target maximum number of connections, a target maximum rate, or a target maximum CPU utilization; and capacity scaler, which adjusts overall available capacity without modifying the target capacity. Finally, backends: a backend is a group of endpoints that receives traffic from a Google Cloud load balancer, and there are several types of backends.
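The healthy/unhealthy thresholds described above can be sketched as a small counter. The class and parameter names are hypothetical; in GCP the actual values are configured on the health check resource.

```python
# Sketch of threshold-based health state: a backend becomes healthy after
# `healthy_threshold` consecutive successes and unhealthy after
# `unhealthy_threshold` consecutive failures. Hypothetical illustration.
class BackendHealth:
    def __init__(self, healthy_threshold=2, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.successes = 0
        self.failures = 0
        self.healthy = False   # new backends start out ineligible

    def record(self, probe_ok: bool) -> None:
        if probe_ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.healthy_threshold:
                self.healthy = True
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.unhealthy_threshold:
                self.healthy = False

b = BackendHealth()
b.record(True); b.record(True)                     # two consecutive successes
print(b.healthy)                                   # True
b.record(False); b.record(False); b.record(False)  # three consecutive failures
print(b.healthy)                                   # False
```

A backend marked unhealthy this way simply stops receiving new traffic until it passes the configured number of probes again.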


- Let's talk about the load balancers themselves.

HTTP(S) Load balancer


  • The HTTP(S) load balancer is a global, proxy-based, layer 7 load balancer operating at the application layer. Note that of all the load balancers available in GCP, the HTTP(S) load balancer is the only layer 7 load balancer; all the others are layer 4 and work at the network layer. This load balancer enables you to serve your applications worldwide behind a single external unicast IP address. External HTTP(S) load balancing distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and GKE, and is implemented on Google Front Ends (GFEs). GFEs are distributed globally and operate together using Google's global network and control plane. In the Premium Tier, GFEs offer cross-regional load balancing, directing traffic to the closest healthy backend that has capacity and terminating HTTP(S) traffic as close as possible to your users; with the Standard Tier, the load balancing is handled regionally. This load balancer can be used both externally and internally, which makes it global, external, and internal. It supports HTTPS and SSL (TLS) for encryption in transit. It accepts both IPv4 and IPv6 traffic; IPv6 traffic terminates at the load balancer, which then forwards it to the backend as IPv4, so whichever type of traffic you send, the load balancer reaches the backend over IPv4. Traffic is distributed by location or by content: forwarding rules distribute defined targets to each target pool for the instance groups, and targets can be content-based, so video content can go to one target while static content goes to another. URL maps direct requests based on rules, so you can create rules for the types of traffic you want to direct and group them into maps. SSL certificates are needed for HTTPS and can be either Google-managed or self-managed. As a quick note, the ports used for HTTP are 80 and 8080, and HTTPS uses port 443.

SSL Proxy Load Balancer


  • The next load balancer is the SSL proxy. SSL proxy load balancing is a reverse-proxy load balancer that distributes SSL traffic coming from the internet to your VM instances. User SSL connections are terminated at the load-balancing layer and then proxied to the closest available backend instances using either SSL or TCP. With the Premium Tier, SSL proxy load balancing can be configured as a global load-balancing service; with the Standard Tier, it handles load balancing regionally. It distributes traffic by location only. SSL proxy load balancing lets you use a single IP address for all users worldwide and is a layer 4 load balancer working at the network layer. It supports TCP with SSL offload, which is something specific to remember for the exam; it is not like the HTTP(S) load balancer, where you can use specific rules or configurations to direct traffic. It supports both IPv4 and IPv6, but again IPv6 terminates at the load balancer and traffic is forwarded to the backend as IPv4. Forwarding rules distribute each defined target to its proper target pool, and encryption is supported by configuring backend services to accept all traffic over SSL. As a note, it can also be used for other protocols that use SSL, such as WebSockets and IMAP over SSL, and carries a number of open ports to support them.

TCP Proxy Load Balancer


  • Next is the TCP proxy. The TCP proxy load balancer is a reverse-proxy load balancer that distributes TCP traffic coming from the internet to your VM instances. Traffic coming over a TCP connection is terminated at the load-balancing layer and then forwarded to the closest available backend using TCP or SSL; the load balancer determines which instances are at capacity and sends traffic to the ones that are not. Like SSL proxy load balancing, TCP proxy load balancing lets you use a single IP address for all users worldwide, automatically routing traffic to the backends closest to the user. It is a layer 4 load balancer that can serve traffic both globally and externally. TCP proxy distributes traffic by location only and is intended specifically for non-HTTP traffic, although you can choose to use SSL between the proxy and your backend by selecting a certificate on the backend. It supports both IPv4 and IPv6 traffic; IPv6 traffic terminates at the load balancer and is forwarded to the backend as IPv4. TCP proxy load balancing supports many well-known ports, such as port 25 for Simple Mail Transfer Protocol (SMTP).

Network Load balancer


  • Next up is the network load balancer. The TCP/UDP network load balancer is a regional, pass-through load balancer that distributes TCP or UDP traffic among instances in the same region. Network load balancers are not proxies, so responses from the backend VMs go directly to the clients rather than back through the load balancer; this is known as direct server return. It is a layer 4, regional, external load balancer. It supports either TCP or UDP, but not both, although it can load-balance UDP, TCP, and SSL traffic on ports not supported by the TCP proxy and SSL proxy; SSL traffic can still be decrypted by your backend instead of by the load balancer itself. Traffic is distributed by incoming protocol data: protocol, scheme, and scope. There is no TLS offloading or proxying. Forwarding rules distribute defined targets to their target pools, for TCP and UDP only; other protocols use target instances rather than instance groups. Lastly, a network load balancer supports only self-managed SSL certificates, as opposed to Google-managed certificates.

Internal Load Balancer


  • Layer 4 Load Balancer

  • Regional and internal

  • Supports either TCP or UDP; not both

  • Balances internal traffic between instances

  • Cannot be used to balance internet traffic

  • Traffic sent to backend directly; does not terminate client connections

  • When using forwarding rules

    • You must specify at least one and up to 5 ports by number
    • You must specify ALL to forward traffic to all ports
  • The last load balancer to introduce is the internal load balancer. An internal TCP/UDP load balancer is a layer 4, regional load balancer that lets you distribute traffic behind an internal load-balancing IP address accessible only to your internal VM instances. It distributes traffic among VM instances in the same region and supports TCP or UDP traffic, but not both. This load balancer balances traffic within GCP across instances and cannot be used for balancing internet traffic, as it is internal only. Traffic is sent to the backend directly; it does not terminate client connections. For forwarding rules, you must specify at least one and up to five ports by number, or specify ALL to forward traffic to all ports. Again, like the network load balancer, you can use either TCP or UDP.
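The forwarding-rule port constraint above (one to five ports by number, or ALL) can be sketched as a small matching helper. The function is hypothetical, not the GCP API; it only illustrates the rule stated in the text.

```python
# Sketch of internal-LB forwarding-rule port matching: the rule lists
# one to five ports by number, or "ALL" to forward every port.
# Hypothetical illustration of the constraint, not the GCP API.
def rule_matches(ports, dst_port: int) -> bool:
    if ports == "ALL":
        return True
    if not 1 <= len(ports) <= 5:
        raise ValueError("specify between one and five ports, or ALL")
    return dst_port in ports

print(rule_matches([80, 443], 443))  # True
print(rule_matches([80, 443], 22))   # False
print(rule_matches("ALL", 22))       # True
```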



Network LB

  • Let's get started with network load balancers. These are load balancers operating at the network layer, meaning layers 3 and 4. They are external load balancers, facing the internet, and they are regional: they operate within Google Cloud regions.
  • They are regional and highly available, they run across multiple zones, and they are designed to load-balance TCP and UDP traffic.
  • With the network load balancer, the client IP is preserved. With the layer 7 load balancers that isn't the case, and you get into the concept of X-Forwarded-For headers and ways to manipulate that. With the network load balancer, the client IP is passed directly through the load balancer: it isn't terminating sessions or changing anything. Because of that, you can use the VPC firewall constructs covered in the previous video to enforce security policy. If there is a regional IP block you don't want to allow, or something you want to allowlist, you can use firewall rules to control that, because the client IPs are preserved as traffic passes through.


  • The challenge with DNS-based load balancing is illustrated here. If you're running multiple backends in various regions, you have various DNS records pointing to those regions, and a lot of that is outside of your control. For example, you could have a failure in, say, San Francisco: you potentially have to update the DNS, and people may have stale DNS entries, so even if they keep hitting the failed endpoint, they don't have the most current records. And when things fail, you have to go back and update everything; if San Francisco fails, you have to make a change to redirect all its traffic to another region.


  • The way this works, and the way it is very different, is that you have one VIP globally. Running MyApp.com in the regional construct means a different IP for every region; here, you instead use, say, 200.1.1.1 everywhere. Whether you're in Asia, Europe, or America, you're always hitting that same IP. Google advertises a global set of blocks, so when customers come in from the internet to reach MyApp.com, they hit one of the load balancers. The load balancer looks at all the backends you've configured and figures out the closest one. If you're running only on the US east and west coasts and somebody comes in from Europe, it pushes them to the east coast; if you're running in all regions and they come in in Europe, it lands them in Europe. Net-net, your customers get better performance because they are routed to the backends closest to them.


  • We talked about the single global anycast VIP, which can be IPv4 or IPv6 and runs globally with worldwide capacity. These are processes that run out at Google's edge; they're highly scalable, and Google uses them for things like its search engine, so they're tried, tested, and run really well. Cross-region failover and fallback: you might have instances running in all regions, but if something fails in Europe, the load balancer knows Europe is out of commission and fails over to, say, the US east coast, the next closest region; if the US west coast fails, it knows the central region is next closest. All that intelligence lives with the load balancer, which figures out how to reach the closest backend to give your customers the best experience. Autoscaling is incredibly quick, which also helps with DDoS and similar events. It's also a single point to apply global policies: URL maps and various security policies can be overlaid, and Cloud Armor works in conjunction with these layer 7 load balancers if you want to allowlist or denylist IP blocks. And it's super robust: millions of queries per second.

Screenshot 2023-02-06 at 11 43 27 PM

  • So one of the key things to understand here is there really is no load balancer. Right? When you go and configure a load balancer, you're actually configuring the control plane. Right? Our SDN control plane. It's not like you're building a box that's running on top of a VM. Right? Because that will create a bottleneck. STEPHANIE WONG: Right. RYAN PRZYBYL: So by basically allowing you to program the SDN directly when packets come in and they need to be load balanced, the SDN just sort of looks at them and goes, oh, I know that there are four back ends for this. I'm going to pick one. I'm going to send it to the back end. Right? It doesn't actually forward it to a load balancer. And a load balancer doesn't make a load-balancing decision and then forward it to one of the VMs. Right? So why we chose to do that is, again, we don't want a single point of failure in there. We don't want something that can fail. When we start talking about the SDN and the control plane, if it's failing we have larger issues that we're dealing with. STEPHANIE WONG: Right.
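
To picture the idea that the SDN itself makes the forwarding decision — no middle box in the path — you can imagine the data plane deterministically hashing a connection's 5-tuple to pick a back end, so every packet of a flow lands on the same VM. Google's real data plane (Maglev) uses consistent hashing and is far more sophisticated; this toy function is only a sketch of the concept.

```python
import hashlib

def pick_backend(five_tuple, backends):
    """Toy per-flow backend selection: hash the connection 5-tuple
    (src IP, src port, dst IP, dst port, protocol) so the choice is
    deterministic for a given flow. Not Google's actual algorithm."""
    key = "|".join(map(str, five_tuple)).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]
```

Because the choice is a pure function of the packet headers, no stateful load-balancer appliance has to sit in the path — which is the "there really is no load balancer" point being made above.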

Screenshot 2023-02-06 at 11 43 40 PM

Screenshot 2023-02-06 at 11 44 10 PM

  • So what really happens is the SDN will say, oh, I know that VIP is configured as a load balancer, and I know these five VMs are on the backside, and it's going to pick one of them and send it. The VM receives it and sees, oh, this isn't my IP address per se, but it's a VIP that I know I'm a back end for from a load balancer. So I can receive that packet, I can process that packet, and I can respond to that packet. We talked a little bit about health checks: TCP, HTTP, HTTPS. That's really what it uses to check and make sure those back ends are healthy. In this case, the client IP is preserved, because you're dealing with VM to VM here in most cases. So we're maintaining VM A's IP address when we send to those back ends. STEPHANIE WONG: Right. RYAN PRZYBYL: So that's really what we're saying when we say the client IP is preserved. It is really the VM IP. And we talked about there being no middle proxy, no actual choke point in this, because you are configuring the SDN directly. That's how we deliver super highly scalable capacity here, with no choke points or things that can break. STEPHANIE WONG: Yeah. It's not a server. It's not a VM. It's not a physical device. RYAN PRZYBYL: Yeah. If you've configured F5 load balancers or something in your data center, it's a physical device that you put in there. STEPHANIE WONG: Yeah. Exactly. RYAN PRZYBYL: In this case, there's nothing physical about it. It's just a control plane configuration. STEPHANIE WONG: Right. RYAN PRZYBYL: Right. So it's very elegant in its simplicity.
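
The health-check bookkeeping mentioned here — deciding when a back end flips between healthy and unhealthy — can be sketched as a small state machine. The thresholds mirror the healthy/unhealthy-threshold idea GCP health checks use, but the class itself is hypothetical, written only to illustrate the logic.

```python
class BackendHealth:
    """Sketch: mark a back end unhealthy after N consecutive probe
    failures, and healthy again after M consecutive successes.
    Illustrative only — not a real GCP API."""

    def __init__(self, unhealthy_threshold=2, healthy_threshold=2):
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy_threshold = healthy_threshold
        self.healthy = True
        self._fails = 0
        self._oks = 0

    def record(self, probe_ok):
        """Record one TCP/HTTP/HTTPS probe result; return health state."""
        if probe_ok:
            self._oks += 1
            self._fails = 0
            if self._oks >= self.healthy_threshold:
                self.healthy = True
        else:
            self._fails += 1
            self._oks = 0
            if self._fails >= self.unhealthy_threshold:
                self.healthy = False
        return self.healthy
```

Requiring consecutive failures (rather than reacting to a single missed probe) is what keeps one dropped packet from yanking a healthy VM out of rotation.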

Screenshot 2023-02-07 at 11 25 40 AM

Screenshot 2023-02-07 at 11 27 24 AM

Screenshot 2023-02-08 at 10 21 03 PM

  • Low latency, highly available connection between your on-premises and Google Cloud VPC networks
  • Directly accessible internal IP addresses - Private Google Access
  • Does not traverse the public internet
  • Dedicated connection
  • Not encrypted
  • Expensive

Screenshot 2023-02-08 at 11 55 22 PM

  • A subnetwork of a VPC
  • Each VPC network consists of one or more subnets, and each subnet is associated with a region
  • The name or region of a subnet cannot be changed after you have created it
  • Primary and secondary ranges for subnets cannot overlap with any allocated range
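
The no-overlap rule above is easy to check mechanically. A minimal sketch using Python's standard `ipaddress` module:

```python
import ipaddress

def ranges_overlap(cidr_a, cidr_b):
    """True if two CIDR ranges overlap. GCP rejects a new primary or
    secondary subnet range that overlaps an already-allocated one."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return a.overlaps(b)
```

For example, `10.0.0.128/25` sits inside `10.0.0.0/24`, so the two overlap and could not coexist as subnet ranges in the same VPC network.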

Increasing subnet IP space

  • Must not overlap with other subnets in the same VPC network
  • Inside the RFC 1918 address-space
  • The new network range must be larger than the original
  • Once a subnet has been expanded, you cannot undo the expansion
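
Two of the rules above — the new range must be larger, and the existing range must stay inside it — amount to checking that the new CIDR is a proper supernet of the old one. A sketch with the standard `ipaddress` module (the RFC 1918 and no-overlap checks from the list are left out for brevity):

```python
import ipaddress

def valid_expansion(old_cidr, new_cidr):
    """True if new_cidr is a strictly larger range that still
    contains the original subnet range."""
    old = ipaddress.ip_network(old_cidr)
    new = ipaddress.ip_network(new_cidr)
    return new.prefixlen < old.prefixlen and old.subnet_of(new)
```

So expanding `10.0.0.0/24` to `10.0.0.0/23` is valid, but "expanding" it to a same-size or disjoint range is not.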

Reserved IP Addresses

  • Network - first address

  • Default gateway - second address

  • Google Cloud future use - second-to-last address

  • Broadcast - last address
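
These four reserved addresses can be computed directly from any subnet range. A small sketch using `ipaddress` arithmetic:

```python
import ipaddress

def reserved_addresses(cidr):
    """Return the four addresses GCP reserves in a subnet's
    primary range, per the list above."""
    net = ipaddress.ip_network(cidr)
    return {
        "network": str(net.network_address),             # first
        "default_gateway": str(net.network_address + 1),  # second
        "google_future_use": str(net.broadcast_address - 1),  # second-to-last
        "broadcast": str(net.broadcast_address),          # last
    }
```

So a `/29` like `10.0.0.0/29` loses 4 of its 8 addresses to these reservations, leaving only 4 usable for VMs — worth remembering when sizing small subnets.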

  • A virtualized network within Google Cloud

  • A VPC is a Global resource

  • Encapsulated within a Project

  • VPCs do not have any IP address ranges associated with them (ranges are defined per subnet)

  • Firewall rules control traffic flowing in and out of the VPC

  • Resources within a VPC can communicate with one another by using internal (private) IPv4 addresses

  • Support only for IPv4 addresses

  • Each project contains a default VPC network

  • Two network modes: auto mode or custom mode

  • Routes define the path network traffic takes from a VM to a destination

  • In a VPC, a route consists of a single destination (CIDR) and a single next hop

  • All routes are stored in the routing table for the VPC

  • Each packet leaving a VM is delivered to the next hop of an applicable route, based on the routing order

  • Types:

    • System-generated
      • Default
      • Subnet Route
    • Custom Routes
      • Static Route
      • Dynamic Route
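
The "applicable route plus routing order" behavior above can be sketched as longest-prefix matching: among all routes whose destination CIDR contains the packet's destination IP, the most specific one wins. A minimal illustration — the route table below is hypothetical, and the real routing order also weighs route type and priority, which this sketch omits:

```python
import ipaddress

def select_route(dest_ip, routes):
    """Pick the next hop for dest_ip from a {CIDR: next_hop} table
    using longest-prefix match. Returns None if nothing matches."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [
        (ipaddress.ip_network(cidr), hop)
        for cidr, hop in routes.items()
        if ip in ipaddress.ip_network(cidr)
    ]
    if not matches:
        return None
    _, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop
```

With a default route (`0.0.0.0/0`), a subnet route (`10.0.0.0/24`), and a custom static route (`10.0.0.0/16`) in the table, traffic to `10.0.0.5` takes the `/24` subnet route, `10.0.5.5` takes the `/16`, and anything else falls through to the default.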