On January 22, 2026, Cloudflare experienced a Border Gateway Protocol (BGP) route leak that impacted both its own customers and multiple external networks. An error in automated routing policy configuration caused BGP prefixes to be unintentionally leaked from a router in Miami, Florida. While the incident lasted only 25 minutes, it provides valuable lessons about BGP routing security and the challenges of managing complex network infrastructure at scale.
What Happened
The route leak was the result of an accidental misconfiguration on a router in Cloudflare's network, affecting only IPv6 traffic. The incident caused congestion on backbone infrastructure in Miami, elevated packet loss for some Cloudflare customer traffic, and higher latency for traffic across affected links. Additionally, some traffic was discarded by firewall filters designed to accept only traffic for Cloudflare services and customers.
Cloudflare sincerely apologized to users, customers, and networks impacted by this BGP route leak—a rare instance where the company found itself causing rather than observing such an incident.
Understanding BGP Route Leaks
Cloudflare has written extensively about BGP route leaks and even records route leak events on Cloudflare Radar for public viewing and learning. To understand route leaks, it's helpful to refer to the formal definition within RFC7908.
Essentially, a route leak occurs when a network tells the broader Internet to send it traffic that it's not supposed to forward. Technically, a route leak occurs when a network, or Autonomous System (AS), appears unexpectedly in an AS path. An AS path is what BGP uses to determine the path across the Internet to a final destination.
An example of an anomalous AS path indicative of a route leak is one showing a network sending routes it received from a peer onward to a provider. During this type of route leak, the rules of valley-free routing are violated: BGP updates are sent from one AS to its peer, and then unexpectedly up to a provider.
Often the leaker is not prepared to handle the amount of traffic it will receive and may not even have firewall filters configured to accept all of the incoming traffic. In simple terms, once a route update is received from a peer or a provider, it should only be propagated onward to customers, not to another peer or provider AS.
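To make that rule concrete, here is a minimal Python sketch of the valley-free export check described above; it is illustrative only, not tied to any vendor implementation or to Cloudflare's tooling.

```python
# Minimal sketch of the valley-free export rule described above (illustrative
# only): routes learned from a peer or a provider may only be re-advertised to
# customers; routes learned from a customer may be sent to anyone.

def may_export(learned_from: str, advertise_to: str) -> bool:
    if learned_from == "customer":
        return True  # customer routes may go to customers, peers, and providers
    # peer- and provider-learned routes must only flow "downhill" to customers
    return advertise_to == "customer"

# A peer-learned route re-advertised to a provider violates the rule: a route leak.
assert may_export("peer", "customer") is True
assert may_export("peer", "provider") is False
```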
During the incident on January 22, Cloudflare caused a similar kind of route leak, taking routes from some peers and redistributing them in Miami to other peers and providers. According to route leak definitions in RFC7908, this caused a mixture of Type 3 and Type 4 route leaks on the Internet.
Timeline of Events
- At 19:52 UTC on January 22, a change that ultimately triggered the routing policy bug was merged in Cloudflare's network automation code repository.
- At 20:25 UTC, automation ran on a single edge router in Miami, resulting in unexpected advertisements to BGP transit providers and peers. This marked the start of impact.
- At 20:40 UTC, the network team began investigating unintended route advertisements from Miami.
- At 20:44 UTC, an incident was raised to coordinate the response.
- At 20:50 UTC, the bad configuration change was manually reverted by a network operator, and automation was paused for the router so it couldn't run again. This marked the end of impact, just 25 minutes after it began.
- At 21:47 UTC, the change that triggered the leak was reverted from the code repository.
- At 22:07 UTC, operators confirmed that automation was healthy to run again on the Miami router, without the routing policy bug.
- At 22:40 UTC, automation was unpaused on the single router in Miami.
The Configuration Error
On January 22, 2026, at 20:25 UTC, Cloudflare pushed a change via its policy automation platform to remove BGP announcements from Miami for one of its data centers in Bogotá, Colombia. This was intentional, as recent infrastructure upgrades removed the need to forward some IPv6 traffic through Miami toward the Bogotá data center.
The change generated a configuration diff that looked innocuous at a glance: it only removed prefix lists containing BOG04 unicast prefixes. However, the result was a policy that was far too permissive.
The resulting policy now marked every prefix of route type "internal" as acceptable and added informative communities to all matching prefixes. More importantly, the policy also accepted those routes through the policy filter, so prefixes that were intended to remain internal were advertised externally.
This is an issue because the "route-type internal" match in JunOS or JunOS EVO (operating systems used by HPE Juniper Networks devices) will match any non-external route type, such as Internal BGP (IBGP) routes.
As a result, all IPv6 prefixes that Cloudflare redistributes internally across the backbone were accepted by this policy and advertised to all BGP neighbors in Miami. This is unfortunately very similar to the outage Cloudflare experienced in 2020.
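A rough Python model of the failure mode, assuming a simplified representation of a policy term (the field names and community are hypothetical; this is not Cloudflare's automation or JunOS itself), shows why removing the prefix-list condition left a term that accepts every internal route:

```python
# Illustrative sketch of how removing the prefix-list condition made the export
# term over-permissive. Field names and the community are hypothetical.

def evaluate_export_term(route: dict, bog04_prefix_lists: set) -> str:
    conditions = []
    if bog04_prefix_lists:  # empty after the change, so this condition disappears
        conditions.append(route["prefix"] in bog04_prefix_lists)
    # "route-type internal" in JunOS matches any non-external route,
    # including IBGP routes redistributed across the backbone
    conditions.append(route["type"] != "external")
    if all(conditions):
        route["communities"].add("INFORMATIVE")  # hypothetical community name
        return "accept"  # route passes the filter and is advertised externally
    return "next term"

# Before the change only BOG04 prefixes matched; after it, every internal route does.
backbone_route = {"prefix": "2001:db8::/32", "type": "internal", "communities": set()}
print(evaluate_export_term(backbone_route, set()))  # "accept" -> leaked
```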
The Impact
When the policy misconfiguration was applied at 20:25 UTC, a series of unintended BGP updates were sent from Cloudflare (AS13335) to peers and providers in Miami. These BGP updates are viewable historically by looking at MRT files with tools like monocle or using RIPE BGPlay.
Analyzing the BGP updates reveals that Cloudflare took prefixes received from Meta (AS32934), its peer, and then advertised them toward Lumen (AS3356), one of its upstream transit providers. This is a route leak because routes received from peers are only meant to be readvertised to downstream (customer) networks, not laterally to other peers or up to providers.
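As an illustration, the following Python sketch flags that signature in AS paths assumed to have already been extracted from MRT data (for example with a tool such as monocle); the example path and the collector-side ASN are hypothetical.

```python
# Sketch: flag the leak signature described above in pre-extracted AS paths,
# i.e. AS13335 appearing with a provider (AS3356) before it and a peer
# (AS32934) after it. Parsing routes out of MRT files is assumed to happen elsewhere.

LEAKER, PROVIDER, PEER = 13335, 3356, 32934

def has_leak_signature(as_path: list[int]) -> bool:
    for i in range(1, len(as_path) - 1):
        if (as_path[i] == LEAKER
                and as_path[i - 1] == PROVIDER
                and as_path[i + 1] == PEER):
            return True
    return False

# Hypothetical AS path as seen by a collector peer behind Lumen:
# the route toward a Meta prefix now traverses Lumen and Cloudflare.
print(has_leak_signature([64500, 3356, 13335, 32934]))  # True
```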
As a result of the leak, providers and peers forwarded unintended traffic into the Miami router, and Cloudflare experienced congestion on its backbone between Miami and Atlanta. This would have resulted in elevated loss for some Cloudflare customer traffic and higher latency than usual for traffic traversing these links.
In addition to this congestion, networks whose prefixes Cloudflare leaked would have had their traffic discarded by firewall filters on Cloudflare's routers designed to only accept traffic for Cloudflare services and customers. At peak, Cloudflare discarded around 12Gbps of traffic ingressing its Miami router for these non-downstream prefixes.
Follow-Ups and Prevention
Cloudflare is a strong supporter of, and an active contributor to, efforts within the IETF and the broader infrastructure community that strengthen routing security. As this incident shows, the company knows firsthand how easy it is to accidentally cause a route leak.
Preventing route leaks requires a multi-faceted approach. Cloudflare has identified multiple areas for improvement, both short- and long-term.
Immediate Fixes
In terms of routing policy configurations and automation, Cloudflare is:
- Immediately patching the routing policy automation failure that caused the route leak, and mitigating this potential failure and others like it
- Implementing additional BGP community-based safeguards in routing policies that explicitly reject routes received from providers and peers on external export policies
- Adding automatic routing policy evaluation into CI/CD pipelines that looks specifically for empty or erroneous policy terms (a minimal sketch of such a check follows this list)
- Improving early detection of issues with network configurations and negative effects of automated changes
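Such a CI/CD check could look roughly like the following Python sketch; the policy representation and term names are hypothetical, not Cloudflare's actual tooling.

```python
# Illustrative CI/CD check: flag export-policy terms whose match conditions have
# become empty but which still accept routes, since such terms match everything.
# The data model and term names here are hypothetical.

def find_overly_permissive_terms(policy: dict) -> list[str]:
    return [
        name
        for name, term in policy["terms"].items()
        if term["action"] == "accept" and not term["match_conditions"]
    ]

rendered_policy = {
    "terms": {
        "EXPORT-BOG04": {"match_conditions": [], "action": "accept"},  # prefix lists removed
        "REJECT-REST": {"match_conditions": [], "action": "reject"},
    }
}
print(find_overly_permissive_terms(rendered_policy))  # ['EXPORT-BOG04']
```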
Long-Term Solutions
To help prevent route leaks in general, Cloudflare is:
- Validating routing equipment vendors' implementations of RFC9234 (BGP roles and the Only-to-Customer attribute) in preparation for rolling out the feature, which is the only way to prevent route leaks originating at the local Autonomous System (AS) independently of routing policy
- Encouraging long-term adoption of RPKI Autonomous System Provider Authorization (ASPA), which would let networks automatically reject routes that contain anomalous AS paths
The Importance of RFC9234
RFC9234 introduces BGP roles and the Only-to-Customer (OTC) attribute, providing a mechanism to prevent route leaks at the protocol level rather than relying solely on routing policy configuration. This is crucial because, as this incident demonstrates, complex routing policies can have subtle bugs that result in unintended route advertisements.
By implementing BGP roles, networks can declare their relationship with each BGP neighbor (provider, customer, peer, etc.), and the protocol itself can enforce valley-free routing rules. This provides a safety net independent of potentially complex or buggy routing policy configurations.
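A simplified Python sketch of the RFC9234 ingress and egress procedures shows how the OTC attribute provides that safety net; it condenses the RFC's rules for illustration and omits route servers.

```python
# Simplified sketch of RFC9234 Only-to-Customer (OTC) handling. This condenses
# the RFC's ingress/egress procedures for illustration and omits route servers.

def egress_allowed(otc: int | None, neighbor_role: str) -> bool:
    # A route that already carries OTC must not be sent to providers or peers.
    if otc is not None and neighbor_role in ("provider", "peer"):
        return False
    return True

def ingress_leak(otc: int | None, neighbor_role: str, neighbor_as: int) -> bool:
    # OTC on a route from a customer, or an OTC value different from the
    # neighbor's AS on a route from a peer, marks the route as a leak.
    if otc is None:
        return False
    if neighbor_role == "customer":
        return True
    if neighbor_role == "peer" and otc != neighbor_as:
        return True
    return False

# In this incident's scenario: a peer-learned route carrying OTC (set by the
# peer, e.g. AS32934) would have been blocked from export to a transit provider.
print(egress_allowed(otc=32934, neighbor_role="provider"))  # False
```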
RPKI ASPA: The Future of Route Leak Prevention
While RFC9234 helps prevent route leaks originating from your own network, RPKI Autonomous System Provider Authorization (ASPA) enables networks to protect themselves from route leaks caused by others. With ASPA, network operators can publish cryptographically signed records declaring which ASes are authorized to be their providers.
Other networks can then validate AS paths against these ASPA records, automatically rejecting routes containing anomalous AS paths that violate declared provider relationships. This creates a distributed, cryptographically verifiable system for detecting and preventing route leaks across the Internet.
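The following Python sketch shows a deliberately simplified version of the idea: an ASPA record lets a validating network classify a single alleged customer-to-provider hop in an AS path. The full ASPA verification algorithm evaluates entire paths and is more involved; the ASNs here are documentation examples.

```python
# Deliberately simplified ASPA-style hop check (the real verification algorithm
# evaluates whole AS paths; this only classifies a single customer->provider hop).

ASPA_RECORDS = {
    # customer AS -> set of ASes it has authorized as providers (example data)
    64500: {64496, 64501},
}

def classify_hop(customer_as: int, alleged_provider_as: int) -> str:
    providers = ASPA_RECORDS.get(customer_as)
    if providers is None:
        return "unknown"  # no ASPA published for this customer AS
    return "valid" if alleged_provider_as in providers else "invalid"

# If AS64500 never authorized AS64511 as a provider, a path implying that
# relationship points to a route leak and can be rejected.
print(classify_hop(64500, 64511))  # "invalid"
```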
Lessons Learned
This incident highlights several important lessons for network operators:
First, automation is essential for managing complex networks at scale, but automated systems require careful safeguards. Configuration changes should be evaluated not just for syntax correctness but for semantic correctness—does the resulting policy actually do what's intended?
Second, routing policies can have subtle bugs that aren't immediately obvious from code review. The removal of prefix lists in this case created an unexpectedly permissive policy because of how the "route-type internal" match behaves in JunOS.
Third, quick detection and response are crucial. The 25-minute duration of this incident, while still impactful, was relatively brief because Cloudflare's network team quickly identified the issue, coordinated response, and manually reverted the problematic configuration.
Fourth, transparency matters. By publicly documenting this incident, Cloudflare contributes to the broader community's understanding of route leak causes and prevention strategies.
Conclusion
Cloudflare would again like to apologize for the impact caused to users and customers of Cloudflare, as well as any impact felt by external networks. Route leaks remain one of the most challenging operational issues facing the Internet, capable of being caused by simple configuration errors with far-reaching consequences.
The industry's ongoing work on standards like RFC9234 and RPKI ASPA represents important progress toward making the Internet's routing infrastructure more robust and resilient. However, as this incident demonstrates, there's still work to be done, and defense-in-depth approaches combining policy safeguards, automated validation, and protocol-level protections remain essential.
By sharing lessons learned from incidents like this one, the Internet community can collectively improve routing security and work toward a more stable and reliable global network.
Source: Route leak incident on January 22, 2026 - Cloudflare Blog