Cloudflare and the Edge-First Web
Tracing the shift from centralized clouds to edge networks, and the startups making it easier
Since its 2019 IPO, Cloudflare has become both a mainstay developer tool and an object of fascination for analysts and investors. This has manifested in stock performance that implies long-term potential beyond what’s readily visible in the numbers—the stock is regularly singled out as the biggest statistical anomaly when investors plot multiples versus revenue growth.
Much of its outperformance is attributed to Cloudflare’s potential to compete with the public cloud giants (something Ben Thompson has written about), but what I believe is largely overlooked is the move to edge infrastructure that the company is already enabling. To better understand that move, let’s first look back.
Cloudflare’s rise
Like many of today’s new infrastructure giants, Cloudflare didn’t invent the CDN industry, but rather reawakened it by embracing the programmability of modern cloud infrastructure (if you’re not familiar with CDNs, check out Justin Gage’s overview). By the time Matthew Prince, Michelle Zatlyn, and Lee Holloway started Cloudflare in 2009, CDNs were already a commodity business. However, the team didn’t approach building the company with ambitions to create a faster Akamai. They started with a vision to improve internet security, and ultimately realized that in order to do so, they would need to build PoPs (points of presence) around the world and find a way to mediate internet traffic.
Instead of competing purely on speed of delivering static assets to end users, Cloudflare proved capable of stopping DDoS attacks—where hackers try to take down a site by overloading it with requests—by creating and monitoring the cache in between users and origin servers. Combined with a generous free pricing tier and much simpler integration—developers just had to make Cloudflare their DNS server, not install any hardware—the company was able to quickly capture market share from the bottom up, despite much larger competitors and real, physical barriers to entry.
What enabled better DDoS protection also positioned Cloudflare for continued innovation, moving the company from a pure CDN competitor to a broad edge infrastructure provider. By starting from zero, Cloudflare could build its CDN network using software-defined hardware, creating greater programmability compared to the legacy servers Akamai went to great lengths to place around the world. As a result, Cloudflare can both expose programmability to the developer and inject more logic at the cache layer that would otherwise need to live at the origin server far away from the end user (Muji has an excellent deep dive on the network architecture). In effect, this creates a programmable network between edges, which has become the platform on which the company has built other services. DDoS protection went from the reason to use Cloudflare to one of many products available on its network.
Cloudflare today
Over the last few years, Cloudflare’s pace of product innovation has been dizzying. They began with Workers in 2017, which lets customers run compute on edge servers, originally for very limited workloads, before expanding to much heavier workloads with Workers Unbound in 2020. The company then introduced Durable Objects, KV, R2, and most recently D1 to support various stateful workloads.
In effect, Cloudflare is attempting to create parity between what developers can do in their edge servers and what they can do in a centralized cloud data center. This is obvious enough in how the company is naming its services—R2 is a nod to AWS’s S3—and in the API compatibility it offers with AWS services. The response from customers has been strong, with 450K developers building on Workers since launch.
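To make the Workers model concrete, here is a minimal sketch (not production code) of what running compute at the edge looks like. A Worker exposes a fetch handler that the platform invokes at the PoP nearest the user, so simple responses never travel to an origin server. The route and payload below are invented for illustration.

```javascript
// Sketch of a Cloudflare Worker in the ES-module style: the platform calls
// fetch() for every request hitting the zone, at the PoP closest to the user.
// In a deployed Worker this object would be the module's default export.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      // Answered entirely at the edge -- no round trip to an origin server.
      return new Response(JSON.stringify({ greeting: "hello from the edge" }), {
        headers: { "content-type": "application/json" },
      });
    }
    // Anything else could be proxied on to the origin; here we just 404.
    return new Response("Not found", { status: 404 });
  },
};
```

The same handler shape is what services like R2 and KV plug into: bindings configured outside the code (in `wrangler.toml`) appear as objects the handler can read and write, keeping the hot path entirely inside the edge network.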
And while “edge” has historically been associated with industries like IoT, VR, and AVs, I believe we’re at the beginning of a wholesale shift of internet infrastructure from centralized clouds to edge networks. Step One was widespread adoption of CDNs, which are a default requirement for new websites at this point. Step Two is the move of additional compute, and ultimately data, to the edge.
There are now compelling forces driving that transition. Speed differences were initially dismissed as not significant enough to merit moving workloads closer to users, even by Cloudflare’s own CEO. But it’s becoming clear that even small incremental slowdowns can lead to user drop-offs and lower Lighthouse scores, affecting SEO. Other factors come into play, including:
New data privacy and sovereignty laws, which are forcing developers to keep user data local—something that’s impossible in a single-region deployment.
Widespread adoption of WebAssembly (WASM), which is creating greater portability between environments at native speed and making it easier to spread the same application across centralized cloud and edge.
As Vercel* CEO Guillermo Rauch pointed out, for most developers today “cloud” means AWS us-east. The foundations Cloudflare has laid over the past 10 years are beginning to change that.
Matthew Prince’s Hierarchy of Developer Needs:
Opportunities in the edge ecosystem
Despite the advances Cloudflare and competitors have made over the past few years, building on the edge (and keeping that infrastructure in sync with centralized clouds) is still complex for the average developer, small team, and even larger company. What’s particularly exciting about the startups building in this market today is that they’re abstracting away the complexity of deploying apps to the edge, and in some cases reversing the trade-offs entirely—enabling a better end user experience by way of better developer experience.
Web development and deployment platforms
Perhaps surprisingly, the first of what I expect to be many “edge native” success stories haven’t come from traditionally latency-sensitive workloads, but have instead gone after familiar websites and webapps.
Vercel and Netlify are serverless web development and deployment platforms that originally targeted a new class of JAMstack apps—static sites written in JavaScript and reliant on API calls for back-end functions—that could be cached at the edge to enable a faster end-user experience. By sending these static assets to CDNs automatically behind the scenes, these platforms could create an incredibly simple one-click deployment workflow that’s easier than deploying to a central cloud and better for the end user.
Since then, Vercel in particular has expanded its scope with Edge Functions. Building on top of Cloudflare Workers, Edge Functions allow developers to move their compute logic closer to users, making dynamic sites as fast as static sites. In effect, this extends the promise of JAMstack (easy-to-write apps that are extremely performant for end users) from static-only (blogs, landing pages, etc.) to broader webapps by enabling features like A/B testing, Auth, and Geolocation to run in servers at the edge and feel instant to the user. As Google’s Kelsey Hightower put it, “the networks have gotten so advanced, you can almost treat the internet like a computer itself… Vercel just feels more like the operating system for this distributed computer.”
I find these platforms to be such interesting harbingers of what’s to come because edge infrastructure is often not the reason developers choose to use them—developer experience is. However, the performance benefits of their underlying edge infrastructure (which customers barely had to think about when building) become too obvious to ignore over time, making their ability to manage edge deployments a critical differentiator. They aren’t edge providers, but they are “edge native” platforms, giving customers the best of the edge without forcing them to become experts in edge deployment the way developers and DevOps teams have had to become experts in cloud deployment.
Globally distributed data
As they look to build more complex apps on the edge, developers now require databases built to connect to edge infrastructure and deliver faster reads globally. This has brought about an explosion of new serverless databases designed from Day One with edge workloads in mind. Some of these new solutions are offering flavors of well-established databases optimized both for a better, serverless developer experience and reads from Cloudflare Workers—namely Upstash’s Redis and Kafka services. Cloudflare itself has now entered that race with D1, its edge SQL database.
What’s perhaps even more interesting is the way in which back-end architectural advances are coinciding with better edge compatibility. Three of my favorite examples are:
ChiselStrike, which only requires developers to define their back-end in TypeScript and specify geolocation policies. From there, it takes care of both generating and deploying a back-end globally.
Neon, which has separated storage and compute in their Postgres database, making data replication cheaper and easier across regions. The team is already working on APIs that connect to Cloudflare Workers and route traffic to the nearest region.
ReadySet, which has built a database-caching layer that dramatically speeds up reads without requiring a database migration. While not yet focused on edge deployments, the company could eventually extend that caching layer to the edge.
Like Vercel and Netlify, these new databases aim to be both better for the developer and better for the end user. They can solve the latency issue as well as global data privacy compliance, as regulation requires more data localization, without impeding engineering velocity. And when combined with edge-native, serverless web platforms, they form a stack that’s simple and hyper-performant, even for complex apps.
Managing apps across edge and cloud
While vertical serverless platforms may be the first to deliver on the promise of edge infrastructure due to their ability to abstract away the complexity of edge deployment and management, I expect many larger enterprises to still favor an approach that offers them more granular control of their infrastructure. In many cases, this means building on Cloudflare’s services directly. But in cases where apps live across a centralized cloud and an edge network, enterprises will need a new set of tools for managing infrastructure.
These tools could look like the in-house software many of the more vertically focused providers run under the hood repackaged for use on any infrastructure. Seaplane, for instance, offers a cloud control plane that distributes and maintains apps across cloud and edge. Customers only need to set the rules for their deployment (e.g., ensuring high availability, keeping EU data local), and the service will automatically distribute the app and route traffic accordingly. This makes it easier to place apps closer to users without requiring any change to the app itself.
With WASM rising in popularity, we’re also seeing new architectures emerge that are more natively optimized for complex, distributed deployments by virtue of being built for interoperability. Take Cosmonic, which is commercializing the open source project wasmCloud. The startup packages app components such that they are universally portable, and then offers a service to host those apps across cloud and edge.
The web development community is increasingly rallying around the idea of interoperability. Cloudflare, Vercel, Shopify, and open source contributors from Node.js and Deno recently announced a new community group dedicated to JavaScript interoperability between serverless environments and runtimes. These sorts of initiatives offer increasing visibility into a world where what’s centralized and what’s deployed at the edge is an implementation detail.
Meeting changing expectations
In his annual letter a few years back, Jeff Bezos wrote about customers’ “divine discontent” as a driver for Amazon’s progress. Customer expectations continually go up, creating new room for Amazon to improve.
Visualizing Bezos’s theory (credit Ben Thompson):
As more workloads move to the edge, I believe customer expectations, set largely by their experience on the most popular and sophisticated sites, will similarly rise. This turns complex deployments across cloud and edge from a nice-to-have to a requirement—if you’re not doing it, you’re ceding competitive advantage. The good news is that with sophisticated underlying infrastructure from Cloudflare and its competitors and an exciting wave of new startups building on top of it, meeting these elevated expectations is becoming easier and easier.
If you’re building something new in the edge ecosystem, or thinking about adopting edge infrastructure, feel free to reach out to me at dcahana@ggvc.com.
(*) indicates a GGV portfolio company