- A single manipulated packet can cause a website to mix up user sessions, leak confidential info, or serve poisoned content that infects everyone who visits.
- After two decades of attempted patches, the protocol remains fundamentally unsafe whenever proxies or shared connections are involved.
- Until full HTTP/2 support arrives everywhere, organisations need to aggressively sanitise incoming requests and routinely scan for lurking vulnerabilities.
BENGALURU: Just because a website uses shiny new security badges on the surface doesn’t mean it’s locked up tight behind the scenes.
Recent research has unveiled a worrying reality under the hood: millions of websites, even those fronted by cutting-edge proxies and cloud platforms, are silently falling back to the outdated HTTP/1.1 protocol somewhere along the request chain. This isn’t just a touch of technical debt; it’s a cybercriminal’s dream come true.
How traffic flows
When you visit a website in 2025, your request doesn’t go straight to its destination. It bounces from your browser through content delivery networks, load balancers, and proxies before finally hitting the website’s back-end servers.
Somewhere along this relay, if one component only speaks the old HTTP/1.1, the whole secure foundation can be undermined.
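For the curious, a minimal Python sketch (standard library only) can ask a server’s public TLS edge which HTTP version it will negotiate via ALPN. The host name below is a placeholder, and a clean “h2” answer only vouches for that first hop; the downgrades researchers are worried about happen on the hops behind it.

```python
# Minimal sketch: ask a server's TLS edge which HTTP version it will
# speak, via ALPN. The host below is a placeholder, not a real target.
import socket
import ssl

host = "example.com"  # hypothetical host

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2 first

with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # "h2" means the EDGE speaks HTTP/2; upstream hops may still
        # downgrade to HTTP/1.1, which this check cannot see.
        print(tls.selected_alpn_protocol())
```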
PortSwigger, the well-known application security firm, threw a spotlight on this issue. They found that over 24 million websites—yes, even big corporate ones—still downgrade requests to HTTP/1.1, despite advertising modern security up front. This isn’t just nostalgia for the early 2000s; it’s a recipe for disaster.
The fatal flaw: Request smuggling
So, what makes HTTP/1.1 so risky? In a word: ambiguity. The protocol concatenates requests back-to-back on a single TCP connection, with multiple, sometimes conflicting ways to signal where one request ends and the next begins. That means hackers can mount so-called “request smuggling” attacks, slipping malicious requests between legitimate ones.
Suddenly, servers have no idea which data belongs to which user—a perfect opening for session hijacking, data theft, or worse.
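For the technically minded, the ambiguity is easy to see in the raw bytes. Here is a minimal Python sketch, with a hypothetical host and payload, of the two legal ways HTTP/1.1 lets a sender mark where a request body ends:

```python
# One payload, two legal ways to frame it in HTTP/1.1. Host and path
# are hypothetical.
payload = b"q=hello"

# Framing 1: Content-Length announces the body size up front.
with_content_length = (
    b"POST /search HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: %d\r\n"
    b"\r\n" % len(payload)
) + payload

# Framing 2: chunked encoding streams the body and marks the end with
# an empty "0" chunk.
with_chunked = (
    b"POST /search HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    + b"%x\r\n" % len(payload)  # chunk size, in hex
    + payload + b"\r\n"
    + b"0\r\n\r\n"              # empty chunk: body over
)

# A parser must pick ONE rule to find the end of the body; smuggling
# begins when two parsers in a chain pick differently.
```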
Cybersecurity researcher James Kettle, from PortSwigger, revealed all this at Black Hat USA and DEF CON—earning hefty bug bounty rewards in the process. The flaw is so severe that a single manipulated packet can cause a website to mix up user sessions, leak confidential info, or serve poisoned content that infects everyone who visits.
Just imagine: logging in to your favourite online store and landing in another customer’s account instead, or having every page you load laced with credit card-stealing code.
Why are we still using HTTP/1.1?
Alarmingly, the inertia isn’t just on small-time web hosts. Major cloud service providers like Google and Cloudflare still default to HTTP/1.1 internally unless admins painstakingly reconfigure every layer.
The industry’s mainstay front-ends—Nginx, Akamai, Fastly, CloudFront—often lack full upstream HTTP/2 support, making upgrades a real challenge.
Website operators can’t just flip a magic switch and hope for safety. Every component in the chain, from CDN to app server, must support the newer protocols and be configured to reject risky, ambiguous requests. That’s rarely the default, and it requires technical finesse that many organisations lack.
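What does “reject risky, ambiguous requests” mean in practice? A minimal sketch of the kind of check a front-end can run, assuming headers have already been parsed into a dict of lower-cased names (real proxies need far more rigour, but the principle is to refuse rather than guess):

```python
# Sketch of a front-end sanity check: refuse any request whose body
# framing is ambiguous instead of guessing. Assumes `headers` maps
# lower-cased header names to their raw values.
def reject_ambiguous(headers: dict[str, str]) -> str | None:
    """Return a rejection reason, or None if framing looks unambiguous."""
    has_cl = "content-length" in headers
    has_te = "transfer-encoding" in headers

    # Both headers on one message is the classic smuggling red flag.
    if has_cl and has_te:
        return "both Content-Length and Transfer-Encoding present"

    # Only the exact, lone "chunked" coding is safe to honour.
    if has_te and headers["transfer-encoding"].strip().lower() != "chunked":
        return "unrecognised Transfer-Encoding"

    # Content-Length must be a single plain integer.
    if has_cl and not headers["content-length"].strip().isdigit():
        return "malformed Content-Length"

    return None

# The textbook smuggling setup gets refused outright:
print(reject_ambiguous({"content-length": "13",
                        "transfer-encoding": "chunked"}))
```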
This isn’t theory: attackers have already shown how devastating these flaws can be. Security researchers demonstrated successful request smuggling attacks against giants like PayPal, exposing plaintext passwords and collecting bug bounties for their trouble. The ease with which such bugs can be exploited is alarming: all it takes is a tiny inconsistency in how two servers interpret the same HTTP request.
For example, if a request carries both a “Content-Length” header and a “Transfer-Encoding: chunked” header, different parts of the server chain may frame the body differently. An attacker can send a payload that one server believes has ended while the next server keeps reading. The result? The attacker’s leftover bytes are silently stitched onto a victim’s request.
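Here is that disagreement made concrete, in a hedged Python sketch using the textbook 13-byte example rather than any real traffic. A server honouring Content-Length reads all 13 bytes; a server honouring chunked encoding stops at the empty “0” chunk and treats the remainder as the start of the next request:

```python
# The classic "CL.TE" desync, as raw bytes. Illustrative only: this is
# the textbook payload, not traffic captured from any real site.
body = b"0\r\n\r\nSMUGGLED"  # exactly 13 bytes

# Server A honours "Content-Length: 13": all 13 bytes belong to this
# request's body, and nothing is left over on the connection.
seen_by_a = body[:13]

# Server B honours "Transfer-Encoding: chunked": the empty "0" chunk
# ends the body, so everything after it sits on the connection as the
# beginning of the NEXT request.
terminator = b"0\r\n\r\n"
end = body.index(terminator) + len(terminator)
seen_by_b = body[:end]
leftover = body[end:]

print(seen_by_a)  # b'0\r\n\r\nSMUGGLED'
print(leftover)   # b'SMUGGLED'  <- prepended to the next user's request
```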
Can we fix HTTP/1.1?
Not really, says Kettle. After two decades of attempted patches, the protocol remains fundamentally unsafe whenever proxies or shared connections are involved. “If we want a secure web, HTTP/1.1 must die,” he warns.
While it’s still safe enough for simple, direct client-server connections, the web is rarely that simple nowadays.
Until full HTTP/2 support arrives everywhere, organisations need to aggressively sanitise incoming requests and routinely scan for lurking vulnerabilities.
PortSwigger’s latest HTTP Request Smuggler tool even automates the search for hidden flaws, but that’s just playing catch-up.
No matter how pretty a website looks or how many security logos it displays, it could still be vulnerable deep in its infrastructure. Any delay just gives hackers more opportunities to slip through the cracks.