[ Music ] >> Please welcome Vice President, Serverless Compute, AWS, Holly Mesrobian. [ Music ] >> Welcome. Welcome to Accelerating Innovation with Serverless on AWS. As I was introduced, I'm Holly Mesrobian, and I'm the VP of Serverless Compute. Today, we're going to talk about how serverless got to where it is, our vision for the future, and how we think about innovation going forward. And I have a couple of great customers who are going to join me and show just what is possible with a serverless approach.

Both Lambda and ECS launched in 2014. And while they're very different services, with quite different approaches, they started with the same vision in mind: to help customers so that they can build more and focus on just building their applications. This gave developers and organizations simple but opinionated ways to build by taking advantage of everything that AWS has to offer. ECS took a major step towards this vision with the launch of Fargate in 2017. For those of you who don't know, today is Fargate's fifth birthday. We're celebrating. So this is a great milestone for taking a step back and looking at how things have progressed with serverless and where they're going. And to do that, I'm going to take you all the way back to the 1830s.

The story of electricity was one that was shared with me some years ago, and it's really stuck with me as an example of how infrastructure unlocks innovation. I think about that journey when I think about what's possible for application development. I don't know how familiar you are with that story, but it tracks to where we are today, and also where I think we're headed. In the early 1800s, the first power generators were invented, which paved the way for widespread electrification in the modern world. During the next 50 years, AC and DC power, the light bulb, and the plug were all invented, and during that time, factories and businesses had to run their own power generation. You see up here on the screen "Burden's Wheel." That's an example of early power generation.

Now fast-forward to today. Virtually no one runs their own power generation. Electricity is cheap, safe, resilient, and dependable, and it's available in most locations. But more importantly, it has enabled companies to invest in building better and new products. With cheap and reliable electricity, companies have been able to expand their business without having to first build power generation themselves. And that has led to new businesses and products being built rapidly. There were first the basics, like the light bulb, and that was obvious. But there were a lot of things that weren't so obvious, like televisions, computers, and home appliances, and it was all because of easy access to electricity. This has been critical to the development of our modern world.

You can see a similar pattern emerging in the computing environment. Not too long ago, and in some places today, people managed their own infrastructure and data centers. This is like running your own power station. It might make sense if you're selling power, but it can be a real burden if that's not your core business. These businesses are also not able to leverage all the deep investment that's happening in the cloud all the time. And meanwhile, they're investing their valuable engineering resources, the thing we value most, in building, operating, and maintaining infrastructure. And that's before you get to upfront capital costs, and all before you get to do innovation for your business.
Because of this, we see many customers move over time to the cloud and leverage virtual machines. They get a lot of key benefits, such as the agility of spinning up and down quickly, breadth of functionality and all those services I was talking about, pay-for-use pricing, elasticity, and of course a global footprint. This is a good start to increasing the agility and innovation of a business. Then along came containers to provide a high degree of workload mobility. You're running something already, you want to keep that investment, and so you're going to take it and move it into the cloud. Containers allow you to package up your application and move it to different computing environments. They start a lot more rapidly than VMs, and of course there are the great toolchains. But you're still managing servers and updating and patching operating systems.

And this is where serverless comes in. Customers tell us running on instances is great, but managing the infrastructure can slow them down. Too much of the time is spent corralling that infrastructure and not enough time on building, which is really where you want to focus. This is core to your business and core to your pace of innovation. And this is exactly what's offered by serverless and serverless containers. You focus on code and you let someone else worry about the infrastructure, and a lot more of the maintenance burden up the application stack. With serverless, you're able to automatically scale up and down with demand. It reduces cost with pay-as-you-go pricing. You're able to leverage the built-in reliability, and of course it's easy to use. Inside serverless, we take advantage of economies of scale and we keep innovating to run workloads more efficiently, passing on the savings and reliability to you. It's what we obsess about every single day, with many teams of engineers doing just this as their core job.

This is why more and more customers are choosing serverless. Using serverless increases the speed at which applications can be developed, which translates to increased agility and productivity for your organization. Serverless automates high-performance scaling so you can meet the demands of your customers as your business grows. Serverless helps customers achieve the lowest TCO by offering pay-per-use pricing, and it also reduces the operational overhead so your engineers can focus on high-impact work. And AWS customers using serverless benefit from the built-in inclusion of security best practices. We believe serverless should be your default compute option when building a new application today, to help you do the most for your business.

And customers agree. MatHem is Sweden's largest online-only grocer. They started using Lambda in 2020. By decoupling services with EventBridge, they're able to ship features in as little as three days. Founded in 2017, Utopus aims to accelerate the integration of renewable energy into the power grid by optimizing energy production using analytics and insights. They serve over 235 customers globally. At launch, they started with an on-prem solution in order to process petabytes of data. This didn't scale, so they decided to move to AWS and chose fully managed serverless solutions wherever possible. They ended up choosing Fargate and Lambda, and today with Lambda, they're processing over 200 billion signals every day, in under two hours. As a comparison, that took over two weeks with their former solution.
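To make the decoupling pattern concrete for readers: publishing a domain event to EventBridge, as in the MatHem approach described above, looks roughly like the sketch below (TypeScript, AWS SDK for JavaScript v3). The bus name, event source, and payload shape are illustrative assumptions, not details from the talk.

```typescript
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const events = new EventBridgeClient({});

// The producer publishes a domain event and returns immediately.
// Consumers subscribe with EventBridge rules, so producer and consumer
// never call each other directly.
export async function publishOrderPlaced(orderId: string): Promise<void> {
  await events.send(new PutEventsCommand({
    Entries: [{
      EventBusName: "orders",          // hypothetical bus name
      Source: "com.example.checkout",  // hypothetical source
      DetailType: "OrderPlaced",
      Detail: JSON.stringify({ orderId }),
    }],
  }));
}
```

A new consumer, say a notifications service, can be added with a new rule on the bus without touching the producer, which is what makes shipping features in days rather than weeks plausible.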
And according to Deloitte, building and running a solution on serverless is up to 57% cheaper than server-based solutions. These are not exceptions. We hear similar stories from serverless customers all the time.

So for many customers, the first decision when it comes to building an application is compute. You can start with EC2 and have all those knobs, or you can go to the completely other end of the spectrum and go with AWS Lambda, where you're focusing just on your application. The range of abstraction layers available to you with AWS is very empowering, because your teams have the choice, and we provide the tools, services, and APIs to help you build your applications. When you're looking to optimize for your time to market and your pace of innovation, we always suggest you start with Lambda, because you'll manage as little as possible.

Ultimately, when you make technical decisions, you have to consider your existing application portfolio, and you have a few options. You can start fresh and build new applications. This is where many customers start, and it makes sense because you're building with serverless building blocks from the ground up, without addressing technical debt. But when it comes to your existing portfolio, you also have a few options. You can reduce the amount of stuff that you're running by retiring systems or adopting fully managed SaaS solutions. Next, you can do a lift-and-shift migration to AWS, and most customers benefit from doing this quickly to get out of data center management and onto a common environment. But then when it comes to modernizing, you can replatform or refactor. When I say replatform, what I mean is taking a service that you manage and having AWS manage it for you, such as moving to a managed database or going from a self-managed RabbitMQ to Amazon MQ. Refactoring is where our customers see the biggest transformation, and we see successful customers start by refactoring their most business-critical or customer-facing applications, because that's where they'll get the most time-to-market and TCO benefits.

We'll focus most on modernization and building applications today. As a first step, many customers replatform their applications, especially if they're using a more traditional application architecture. Containerizing applications and running them on ECS Fargate is a great place to start. But modernization doesn't stop here. It usually involves refactoring. A good mental model is: first modernize your mission-critical applications, and then refactor to higher levels of abstraction so you can manage less and focus on your code and the value that you add. In this case, you'd look to refactor services up the stack to AWS Lambda wherever possible, especially if the services are event-driven or face unpredictable demand. And finally, when building greenfield applications, we recommend starting serverless first.

How do you make this decision? Where do you start? In this approach, you can think of serverless as maximizing the use of fully managed services in your application architecture. If your application can use Lambda's programming model, that's a good place to start. Some of our most successful customers ask this question: "Can this service or application run on Lambda?" And if it can, that's exactly what they do. This offloads most of the responsibility for operating and managing their applications so they can focus on higher-value work.
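For readers asking that same question, here is a minimal sketch of Lambda's programming model: a stateless handler invoked once per event. The TypeScript below assumes an API Gateway HTTP API trigger; the business logic is a placeholder.

```typescript
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

// A Lambda function is just a handler: it receives an event, does its work,
// and returns. There is no server, port, or process lifecycle to manage.
export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  const order = event.body ? JSON.parse(event.body) : {};
  // Placeholder business logic; scaling, patching, and the runtime
  // environment are handled by the service.
  return {
    statusCode: 200,
    body: JSON.stringify({ received: order }),
  };
};
```

If a service can be expressed as handlers like this, stateless, event in and response out, it is a candidate for Lambda; long-running processes listening on a port point instead to containers, as discussed next.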
Customers building with a more traditional service-based programming model, such as long-running processes listening on a port, can look to ECS Fargate as the starting point. Most customers end up with a mix of compute for what they're doing, and this approach helps them standardize architectural decision-making across their organization. Now, to see how this works in practice for a mission-critical application, please welcome Sheen Brisals from LEGO. Sheen.

[applause] >> Thank you, Holly. Good afternoon. The mission of the LEGO Group is to inspire and develop the builders of tomorrow. As a 90-year-old company, we've been innovating products to enable learning through play. Technology plays a key part in taking these innovations to millions of children and adults across the world. Speaking of technology, across the LEGO Group we use a number of different technologies, but today I am talking about the use of serverless technology within the Marketing and Channels Technology division. MCT, as we call it, is part of the bigger digital technology organization. MCT is formed of five domains, but they can be grouped into three categories. The first one is LEGO.com and the entire e-commerce platform, the whole ecosystem: products, content, images, payments, customers, orders, everything comes into that. Then we have hundreds of LEGO brand and retail stores and our partner stores across the globe, where customers can buy their products, join the rewards program, and redeem rewards. And our customer support center uses modern technology to help customers have a unique LEGO experience.

We've been operating on serverless for three years, since 2019, when we moved the e-commerce platform onto serverless. This was not a lift-and-shift migration. It was a complete rewrite of the front-end as well as the back-end application. The architecture at a high level looks like this. It has three main layers, and each layer can scale independently of the others. At the front, we have the front-end application, the browser technology. That's the entry point for customers to the shop. So it has the web applications: React, Next.js, all these things. And then we have the so-called back-end-for-front-end layer, or the BFF layer. This is where content caching using ElastiCache, and GraphQL queries and mutations interacting with the back-end services, happen. These layers constantly get traffic from customers, so we chose Fargate to run these applications, because Fargate enables us to spend the least amount of time working with the infrastructure so that engineers can focus on developing products. The back end is 100% Lambda-based microservices. It uses a number of managed services to bring together the microservices applications consumed by the front end. We also work with a number of chosen third-party applications.

Now, in an e-commerce world, the traffic spikes are, you know, like this. Especially when we have popular product launches, Black Friday, and special events, it can vary. So we needed a platform and technology that can scale as per the demand and deliver. That's one of the key reasons why we went to serverless. Now, like many organizations, embracing the cloud helps us to focus more on innovation rather than doing all the heavy lifting ourselves. That allows us to scale as we need, and frees engineers to bring more value to customers and to the business. Another important aspect of modern applications is observability.
It is very important that we can monitor and observe every part of the application, going deep down to every microservice and the resources inside a microservice. And finally, the composable architecture. This is where the event-driven distributed application comes in, where we can easily extend to add more capabilities to the platform and to the business.

Now, as I mentioned, we've been operating for a while. The whole journey started back in 2017, when we had our legacy on-prem e-commerce platform. During a number of high-profile product launches and Black Friday events and things like that, the platform struggled to deliver. We faced downtime. So, moving on from there and looking at cloud and serverless, in 2018 we started to experiment with serverless. We started building simple services. Based on the success of that, we moved everything over to serverless in 2019. At that time, we had typically two teams, the front end and the back end, around 20 engineers. The success of this migration and of operating with serverless enabled us to expand from there, in terms of serverless growth as well as team growth. So from that point onwards, we started to set up smaller product-focused squads, or smaller teams; in other words, stream-aligned teams. One of the teams that we set up was a squad developing the payment service. I will talk about that soon. And then the journey continued. The payment service started rolling out, and in 2021 we saw many high-volume traffic events, especially during product launches, equaling Black Friday traffic patterns. The platform stood and delivered. Now, in 2022, we have nearly 30 product squads, and many of these squads operate with serverless.

So I mentioned the payment service. It's a unified payments API platform. We call it Brickbank. It enables different business units to come to that particular platform in order to take payments, go to different payment providers, and handle the different types of payments. Before this, every business unit used to have direct integrations with a number of providers, so this enabled us to unify that and provide a scalable platform. It's operated by an independent, autonomous squad, a small squad of around 8 to 10 people, and they operate in their own cloud account, with one repository. This is a pattern we now follow across different product squads. In terms of the use of services, several managed services from AWS are being used, and a number of microservices. You might be asking: if it's a smaller squad, why do we have this many microservices? The point is, we break things down to a granular level. We build smaller, independently deployable microservices so that engineers can work in parallel. That's what gives these teams the velocity to move faster. In terms of the traffic they handle: more than 1,000 orders per minute in a typical high-volume season, hundreds of thousands of API requests and event notifications, and concurrent Lambda executions. These numbers may not be huge, because, remember, they operate in their separate account, away from the high-volume LEGO.com account, for example. And it's an independent team, so they need to take care of operating their services. So they set up a number of monitoring dashboards, observability measures, and various alerts and alarms to get notified if there is something to be looked into. What did we learn so far?
There have been so many learnings during our serverless journey. Here are a few. First and foremost, with serverless adoption, we should start with something very simple. As I always say: experiment, evaluate, evolve. That always works. Serverless is a great technology for incremental and iterative development. And it's not just in terms of engineering; this approach gives visibility to product stakeholders and product teams, so there is good collaboration between product teams and engineering teams, and that is good for the business. The other thing is automation. When we say automation, we always think of pipelines, CI/CD, test automation, etc. Observability is another important aspect: having the alerts and alarms streamed to the right channel, and starting with the data. So things like: how do we do structured logging? What are the observability principles we need to follow? How do we do solution design when it comes to building a microservice? How do we do threat modeling, for example? All these aspects need to be brought out as best practices and guidelines for engineers to follow. Then we took the best-practice bits from the AWS Well-Architected Framework and also the Serverless Lens. And finally, the important bit: serverless teams. I always mention that serverless brings engineering diversity, because engineers are not just programmers. They take part in the end-to-end delivery of everything. They become DevOps engineers, they understand architecture, they take care of their services and everything. That's very important when we operate with serverless. And our motto is, "Only the best is good enough." That means whatever we do at the LEGO Group, we should always strive for the best capability possible. Serverless is a technology that enables us to go along this journey. The name LEGO is an abbreviation of two Danish words, "leg godt," which mean "play well." When play is at the center of everything, the possibilities are endless. Thank you so much.

[Applause] [Music] >> Thank you, Sheen. You've seen how serverless enables customers to innovate faster and not have to worry about managing infrastructure. It's deceptively simple, because we take on the complexity on your behalf. From the lowest layer to the highest, we continuously innovate and expose those innovations via simple, consistent abstractions. At the serverless end of the spectrum, you take advantage of the most layers of innovation. Here are the main pillars of innovation that we're constantly working on and investing in. We enable you to run the most demanding workloads on serverless by optimizing for performance. We take on more of the security responsibility with embedded security best practices. We enable applications built on scalable, cloud-native architectural patterns, and we support more and more types of workloads.

You knew there was a power analogy coming, right? This is similar to the innovations made to the AC motor. While simple, the AC motor is core to power production and transmission. A small percentage of improvement to its cost, performance, and reliability has an outsized impact everywhere. For Lambda, one such example of a performance improvement is around cold starts.
While a tiny percentage of Lambda requests experience cold starts, improvements here enable new types of applications, such as synchronous APIs and interactive microservices. So many of you probably saw this: SnapStart launched last night in Peter DeSantis' keynote. It is a performance feature that significantly improves cold starts for Java-based functions. With SnapStart, your function will typically start up to 10 times faster, and you can enable it by checking a box on your function; you don't typically need to make any code changes. And enabling SnapStart doesn't cost anything. SnapStart is available for functions using Amazon Corretto 11, and we expect to add more runtimes over time. This is a great example of making the complex simple. You check a box, and your function can start up to 10 times faster.

As Peter covered a lot of the details last night, I'm not going to go into everything, but I do want to talk a little bit about how it works. We take all those cycles that we spent downloading code, bootstrapping runtimes, and initializing your code, and instead we run them just once. We then take a snapshot of the memory and disk state of the paused microVM, we store that snapshot, and we distribute it across servers. When a request comes in that otherwise would have required a cold start, we resume the snapshot, and of course this is way faster than doing all those things I talked about earlier. To make this possible, we've had to create a lot of innovative new technology, from Firecracker microVM-level snapshotting, to lazy loading the snapshots, to optimizing resume times. This builds upon our innovation in Firecracker, which I announced in Werner's keynote in 2018, and our ability to quickly load code chunks, which we discussed with the support of containers on Lambda in 2020 in the serverless keynote. So really, years of continuous innovation have gone into this launch. For those of you who really want to get into the details of this work, a number of members of the Lambda team are going to cover it later today and tomorrow, and I recommend you join those sessions to hear about the deep innovation that went into it.

Bill is a leading provider of cloud-based software that simplifies, digitizes, and automates back-office financial processes for small and mid-sized businesses. They've been trying out SnapStart, and they're seeing 20 times faster cold starts for their Java workloads. And Capital One has been a disrupter in the financial services industry since 1994, using technology to transform banking and payments. They've also been using SnapStart and are seeing great results.

Customers tell us that they love ECS Fargate for web applications, compute- and memory-intensive workloads, and longer-running workloads. Earlier this year, we launched larger task sizes for ECS Fargate and faster scaling. This is a big improvement, with four times more vCPUs and up to 16 times faster task launches. For batch jobs, the ECS RunTask API allows customers to launch 20 tasks per second. This is 20 times faster than a year ago, and it allows customers to respond to their customers much more quickly. This is another example of improving the service on your behalf without you needing to do anything. We're already seeing these features open up new workloads for customers, including resource-intensive applications such as media processing and genomics.
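As a rough sketch of the RunTask path mentioned above, launching a batch of Fargate tasks with the AWS SDK for JavaScript v3 looks something like the following; the cluster, task definition, and subnet names are hypothetical, not from the talk.

```typescript
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({});

// Launch up to 10 Fargate tasks in a single RunTask call; callers needing
// more issue additional calls, which the faster API now sustains at a much
// higher rate.
export async function launchTranscodeJobs(): Promise<void> {
  await ecs.send(new RunTaskCommand({
    cluster: "media-processing",        // hypothetical cluster
    taskDefinition: "transcode-job:3",  // hypothetical task definition
    launchType: "FARGATE",
    count: 10,
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ["subnet-0abc123"],    // hypothetical subnet
        assignPublicIp: "DISABLED",
      },
    },
  }));
}
```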
Workloads like these used to require setting up and managing infrastructure to handle those resource requirements, but they can now be run with much lower infrastructure management on ECS Fargate. Ray Cole, a distinguished engineer at Smartsheet, reports that Fargate's larger task sizes have been critical in unlocking and migrating Smartsheet's most resource-intensive workloads to serverless cloud infrastructure. He goes on to say: "Our adoption of Fargate has set new engineering-wide standards for what we expect in terms of service availability, reliability, team agility, feature velocity, and overall operational excellence."

One of the primary benefits of AWS Lambda and serverless containers is that AWS takes on a greater share of the responsibility for security operations. This does not mean that security is free; as shown, customers are still responsible for securing data and managing their application and its dependencies. But because AWS takes on infrastructure security, it's easier to maintain a strong security posture. AWS handles operating system updates, patching, and monitoring, all built into the service.

So, most everyone remembers December of 2021. I know I do. The Log4j CVE drove a huge volume of updates to Java code bases across companies around the world, and it really ruined many engineers' December. How bad was it? How did the world react? And what did it take to fix this? A poll of cybersecurity practitioners paints a pretty grim picture. 27% of organizations were less secure during the remediation. 23% fell behind on their 2022 security priorities as a result. 52% of teams spent weeks or more than a month remediating Log4j, and 48% of teams gave up holiday time and weekends to assist with the remediation. So we're talking weeks of stressful work during the holidays, away from family.

But what about Lambda customers? For Lambda, we also needed to absorb the Log4j CVE, but we did this on your behalf. We too started to address it immediately in war rooms. We had to act fast. The question on everyone's mind was: how quickly can you fix this? I was in those war rooms, and after a couple of days, we were able to apply patches for all Java runtimes. By December 12, just three days later, we were done, and all Lambda customers inside and outside Amazon were done too. With ECS Fargate, customers take on a bit more of the shared responsibility model compared to Lambda, but the response was still very fast. The serverless containers team provided customers with a simple way to identify the issue in their images via ECR enhanced scanning and Amazon Inspector. To mitigate the issue, the team chose to make the fix opt-in, to avoid potential impact to customers running workloads and to give customers a chance to test it in a pre-prod environment before prod. Once customers opted in, the patch was automatically applied for new Fargate task launches.

AWS takes on more of the shared responsibility model, and that frees teams to focus on code. Your code, while less code, is still your responsibility, and customers have been asking us to help here too. To check for vulnerabilities, customers scan Lambda-based applications by integrating with their existing security tooling and processes. These scans can happen weekly or monthly and leave customers unaware of vulnerabilities in the interim. To address this, we're excited that Amazon Inspector now supports Lambda functions.
Inspector enables automated security assessments for applications running on Lambda, and once enabled, Inspector discovers eligible function code in all accounts in an AWS organization. Inspector assesses functions on discovery, when there's an update, and when a new vulnerability is published. Findings are centralized in the Inspector console, pushed to Security Hub, and also made available through events in EventBridge. This way, security teams can more easily determine their risks.

So what can you build on top of these innovations? Serverless applications share a common pattern. They are small pieces, loosely joined: APIs at the front door, decoupling with messages and events, integration with SaaS-based solutions, all orchestrated by workflows. While architectural patterns don't sound like a breeding ground for innovation, building the electric grid required a very different way of thinking compared to local, small power stations. Under the hood, innovations such as AC, AC motors, and transformers massively increased the coverage area of power stations and enabled economies of scale. The same thing is true with serverless. Some of the innovations under the hood enable new architectural patterns that power highly scalable, performant, and resilient applications.

So here is a modern architecture, similar to what was shared by Sheen earlier, and yes, it does look a lot like a three-tier architecture. There are some of the same elements: data, logic, and presentation. But there are also some differences. What we're looking at is a single microservice, and many of our customers, like Sheen was describing, run hundreds or even thousands of these kinds of services to improve scalability and resilience. We're using a combination of events, messages, and queues to enable communication within the microservices, and APIs to communicate between the services. And rather than a single database, we're illustrating three, but really the whole point is that you can choose the best data store for your business. What you can't see is how you store state, and that's an important consideration when you're running functions or containers.

We've already talked about how some of these components work. What many customers want to know next is: what does it look like operationally? One of the application components that we've worked on abstracting is networking. With Lambda, that's something you don't really have to think about when integrating services or building APIs. But with ECS, this is still a consideration. To simplify how customers build applications with ECS, we've removed some of that concern with Service Connect for ECS and ECS Fargate. Service Connect allows developers to specify friendly names for their ECS services and use them in their application code to connect them, without deploying additional infrastructure. ECS Service Connect provides convenient traffic telemetry and supports reliable ECS deployments. Before, if you wanted to connect your ECS services, you could choose between several options: ECS Service Discovery, which is simple but without telemetry; ELBs, which have a rich feature set and telemetry but require additional infrastructure; and App Mesh, which is very flexible but can be complex. You can think of Service Connect as combining the simplicity of ECS Service Discovery with the telemetry of ELBs and the traffic resilience offered by App Mesh. Let's see how this works. With ECS Service Connect, you can refer to and connect your services by logical names, using namespaces provided by AWS Cloud Map.
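As a sketch of what this looks like in practice, a service can opt into Service Connect when it's created; here via the AWS SDK for JavaScript v3. The cluster, namespace, port, and alias names are illustrative assumptions, not from the talk.

```typescript
import { ECSClient, CreateServiceCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({});

// Create a Fargate service that registers into a Cloud Map namespace via
// Service Connect; other services in the namespace reach it as "orders".
export async function createOrdersService(): Promise<void> {
  await ecs.send(new CreateServiceCommand({
    cluster: "shop",                 // hypothetical cluster
    serviceName: "orders",
    taskDefinition: "orders-task:1", // hypothetical task definition
    desiredCount: 2,
    launchType: "FARGATE",
    networkConfiguration: {
      awsvpcConfiguration: { subnets: ["subnet-0abc123"] }, // hypothetical subnet
    },
    serviceConnectConfiguration: {
      enabled: true,
      namespace: "shop.internal",    // Cloud Map namespace
      services: [{
        portName: "http",            // must match a named port in the task definition
        clientAliases: [{ port: 80, dnsName: "orders" }],
      }],
    },
  }));
}
```

Client services in the same namespace can then simply call http://orders, with no load balancer or mesh sidecar configuration of their own.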
A namespace can span multiple ECS clusters residing in different VPCs, and all ECS services that belong to a namespace can communicate with other services in that namespace. Once a service is deployed, you can connect to it using its logical name, and to connect across ECS clusters, you specify the same namespace for the ECS services that need to talk to each other. The Amazon ECS console provides dashboards with real-time network metrics.

One of the trends across our industry is embracing the asynchronous nature of almost all applications. Kernel development has had to do this for a long time, and it's increasingly important when you think about the cost of stalling a CPU. Languages such as Go, JavaScript, and Swift are also embracing native asynchronous primitives. Events are a powerful software construct for loosely coupled systems. They support evolution, both when integrating with existing systems and when refactoring, as you change and add new capabilities. You can see this in the diagram, with the separate building blocks connecting logically separate concerns. Asynchronous patterns are ideal for defaulting to serverless compute, especially Lambda, but that doesn't mean serverless isn't great for other types of applications.

The final pillar of innovation is around workload support: helping you run more things serverlessly. Over the past few years, we've significantly improved the performance and developer experience of Lambda when running web-based applications and microservices. Provisioned concurrency and SnapStart, which I was just talking about, provide good performance and latency. In terms of simplicity, earlier this year we launched Lambda Function URLs, making it super easy to create an HTTPS endpoint for your Lambda functions. This doesn't require setting up API Gateway to trigger your function, and it's at no additional cost. Function URLs are ideal for building webhooks or simple microservices and APIs that don't require the power of an API gateway.

In 2022, the more appropriate question is probably, "What aren't customers building with serverless?" Event-driven and interactive applications, data processing, and machine learning are all very common. Air Canada redesigned their mobile app on Lambda, AppSync, and DynamoDB, leading to faster releases and new features; the app rating went from two stars to 4.7 in the process, and it even won a 2022 Webby Award. AstraZeneca runs over 51 billion statistical tests in under 24 hours with Lambda, S3, and Batch. They provided input into over 40 drug discovery projects during the first year after launch. Using Lambda and ECS Fargate as part of their machine learning workflow, Healthvana can automatically validate and deliver health records to healthcare providers, over 50 million so far.

One such company is Vercel, who's using AWS serverless not just to build interactive applications, but to change how interactive applications are built. To share their story, please welcome Guillermo Rauch, CEO of Vercel.

>> Thanks, Holly. I'm honored to join you here today to talk about how we integrated the serverless model into Vercel's frameworks and infrastructure. My name is Guillermo Rauch, and I'm the CEO and co-founder of Vercel, the platform for frontend developers, providing the speed and reliability that innovators need to create at the moment of inspiration. Vercel's ecosystem starts with our investment in open source frameworks and tooling. Our frontend framework, Next.js, was our initial step towards an open source future.
Built on React, Next.js gives you the best developer experience to power the fastest React-based applications, with all the features you need, like server rendering, static rendering, TypeScript support, smart bundling, route prefetching, everything you need, no configuration required. Crucially, Next.js ensures your frontend code ships an architecture that is compatible with serverless and edge compute right out of the box. We also invest in futuristic frameworks like Svelte, which is both a framework and a compiler that is seeing a lot of mainstream adoption, and is built by Vercel engineer Rich Harris. We've contributed to webpack, a bundler that's used by 17% of the top 1,000 websites on the Internet. And now we're investing in a new generation of Rust-based compiler infrastructure: Turbopack, which is hundreds of times faster than webpack; Turborepo, which powers some of the largest TypeScript and JavaScript monorepos in the world; and SWC, a modern transpiler built in Rust that replaces Babel. As you can see, we're all about making the frontend fast, and we'll continue to invest in open source tooling that makes developers' lives easier.

As frameworks like Next.js gain adoption, we're seeing more and more of the most dynamic web applications in the world revolutionize their frontend through Vercel's technology, from the leading commerce storefronts, to revolutionary software and technology companies, to the top media outlets in the world. Next.js is downloaded over 3.5 million times each week. But modern frontend developers and designers are not just downloading software and running it on their local machines. They're constantly deploying to the cloud, so that they can share their work and validate their products in a real-world setting. We've now supported over 100 million deployments over the past two years as a result. And with over 35 billion requests served each month, Vercel is delivering more and more of the top web properties of the world. Our mission is two-fold. In development, we want to make the developer experience real-time, so that you can collaborate and build the best web projects. In production, we want to make the web faster and better for each and every visitor.

To best illustrate how this works, I want to share the story of how the Washington Post revolutionized their frontend with Vercel. As you would expect from a developer-first platform, it all starts with pushing your code to the cloud. Vercel integrates seamlessly with GitHub, GitLab, Bitbucket, or any on-prem CI/CD infrastructure by leveraging the Vercel CLI. For each and every commit and pull request, Vercel builds your frontend and gives you back a preview URL that you can instantly share with anybody. For the Washington Post, this meant turning the development of their special features and their election website into a real-time, collaborative process. Vercel does for frontend developers what Google Docs did for documents. It invites the entire organization to contribute to building great software products: by commenting, by ensuring, for example, that accessibility is great, by testing on multiple devices, by validating that the user experience itself is great. This moves you from just reviewing code to being able to review your user experience, and it builds empathy by ensuring all voices within an organization are heard in the software development process. And all of this happens while ensuring that the developer workflow maximizes productivity.
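For orientation, a server-rendered Next.js page of the kind described here is just a React component plus a data-loading function. The route and data below are illustrative, not from the Washington Post.

```tsx
// pages/articles/[id].tsx: a minimal server-rendered Next.js page.
import type { GetServerSideProps } from "next";

type Props = { id: string; title: string };

// Runs on the server (on Vercel, as a serverless function) for each request.
export const getServerSideProps: GetServerSideProps<Props> = async (ctx) => {
  const id = ctx.params?.id as string;
  // A real app would fetch from a CMS or database here.
  return { props: { id, title: `Article ${id}` } };
};

export default function Article({ id, title }: Props) {
  return <h1>{title}</h1>;
}
```

Every commit of a page like this gets built and deployed to a preview URL, which is what makes the review workflow just described possible.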
In that Washington Post example, all those comments got resolved, which makes the check pass and unblocks the CI pipeline. Now the developer can confidently move on to the next feature or ship the next bug fix. Under the hood, to enable this innovative collaborative model, Vercel converts your frontend code into optimal serverless infrastructure: declarative routing configuration, static assets, Lambda functions, and now even V8-isolate-based edge compute. Because developers are constantly pushing new code, to the tune of those 100 million deployments on Vercel in the last two years, this kind of innovation would have been impossible with always-on server-based VMs or containers. We now deploy millions of functions each week that get invoked over 5 billion times each week.

The serverless model has worked so well for our customers in development and CI/CD environments that brands like the Washington Post have bet on Vercel to be the engine of innovation as they iterate and invest in new products. And here's the kicker. Because serverless proved to be so effective during the development process, the Washington Post also bet on Vercel to serve their frontend production workloads. To the Post, performance, resilience, and security are paramount to handle high-traffic events like the recent election day coverage, and traffic spikes that are as unpredictable as the news itself. To deliver world-class sites in production, we've turned Lambda into an edge-first compute layer and added globally distributed caching, with built-in purging and invalidation, that can get data from any data source, whether it's a database like DynamoDB, a headless CMS like Sanity or Contentful, or your headless e-commerce platform of choice. With this model of global edge caching and maximally efficient serverless compute, our customers not only get optimal performance, but they can scale up and then down to match traffic fluctuations, all at a fraction of the cost and overhead of manually provisioning servers. When WaPo had a high-traffic moment earlier this month, they handled it flawlessly. And according to them, this was the smoothest election night anyone could remember on the team.

Vercel, coupled with AWS's best-in-class infrastructure, is helping leading digital organizations transform themselves through the frontend and accelerate the world's adoption of serverless technology. And it's not just WaPo. Netflix engineering is now saving 21 days of build time collectively, thanks to Vercel's built-in frontend build optimizations for monorepos. To illustrate how transformative frontend performance can be for business, Desenio, a leading European commerce brand, lifted conversion by 34% by delivering their frontend on Vercel. And Harry Rosen not only survived Black Friday, but thrived. They were able to 3x their holiday sales compared to the previous year. We're excited to continue our work with AWS to enable more and more brands to reinvent their frontends, to innovate, and to deliver a faster web. If you want to learn more about our mission and our technology, head to Vercel.com. Back to Holly.

>> Thank you, Guillermo. It's great to hear from customers who are building cutting-edge technology and products on top of serverless. But we understand customers are at different stages of the serverless journey, and to help with that, with the upleveling of skills, we've put together a range of learning resources. We believe AWS serverless fulfills the promise of accelerating innovation by making very powerful compute simple.
It offers reliable compute that's very easy to use. We're innovating across the stack, and many best practices and improvements are included by default, without customers needing to do anything. With serverless compute, AWS takes on most of the shared security responsibility, and according to Deloitte, running workloads on AWS serverless is up to 57% cheaper over time. And you can run virtually all of your applications on serverless compute. We believe serverless should be your default compute option when building a new application today. And we're not the only ones who think so. Thank you. [Applause]