
Online traffic in the age of agentic AI — why the old rules no longer apply


The future of online traffic isn't just bigger. It's faster, smarter, and more intentional. Here's what the rise of AI agents means for peak events, online traffic orchestration, and online trust.

There's a difference between a river rising and a dam breaking. The total volume of water might be the same — but the damage depends entirely on how fast it arrives.

That distinction is about to define the next era of online traffic. AI agents don't browse the way humans do. They don't open tabs, get distracted, or mistype URLs. When a product drops at 10:00, they arrive at 10:00:01 — all of them, at the same endpoint, at the same millisecond. The result isn't a traffic spike. It's a wall of demand that hits faster than any autoscaling system can respond.

In this episode of the Smooth Scaling Podcast, we sat down with Hans Skovgaard, Chief Product & Technology Officer at Queue-it, to dig into what happens when AI agents become the dominant force in online traffic. Hans has over 30 years of experience scaling systems and software organizations, and his perspective cuts through the hype: the real challenge of agentic AI isn't volume. It's the speed and coordination of that volume — and the fairness questions it forces on every business running a high-stakes online event.

AI is becoming the front door to the internet

The change is already showing up in the traffic data. Cloudflare CEO Matthew Prince reported that in January 2026 alone, weekly requests generated by AI agents more than doubled across its network. Akamai's 2025 State of the Internet report puts it even more starkly: AI-bot activity surged 300% in the past year.

Gartner predicts that by 2030, 80% of all product searches will be done through agentic AI, and 20% of online purchases will be made by AI agents. Hans thinks even that might be conservative — though he acknowledges broad consumer adoption will take time.

What this means practically is that your website is no longer just built for humans with browsers. It's being accessed by software that decides, compares, and acts on behalf of users. That changes how you have to think about traffic, capacity, and online traffic orchestration.

From 'bots vs. humans' to 'good intent vs. bad intent'

For years, bot defense was built on a straightforward assumption: bots are bad, humans are good. That distinction made sense when most automated traffic was scrapers, credential stuffers, and scalping bots.

But the landscape has shifted. Imperva's 2025 Bad Bot Report found that for the first time in a decade, automated traffic surpassed human traffic — accounting for 51% of all web traffic in 2024. Agentic AI breaks the old binary. Legitimate AI agents — shopping assistants, accessibility tools, travel planners, procurement systems — are becoming normal users of digital services. At the same time, the tools that make these good agents possible also make it easier for non-experts to build bad bots.

"In the past, we separated bots from humans and said humans are good and bots are bad. That distinction has to change — it needs to shift from bots versus humans to good intentions versus bad intentions. You can have agents with good intentions and humans with bad intentions."

Hans Skovgaard, CPTO at Queue-it

Think about it in physical-world terms. A personal shopper walking into a store on your behalf is a legitimate delegate — they're acting with good intent, representing a real customer. A group buying up all the stock to resell at a markup is acting with bad intent, regardless of whether they're human.

The same applies online. What matters isn't whether the visitor is human or automated — it's whether the intent behind the visit is legitimate. And that shift makes the burst-traffic problem even harder: when a hundred thousand agents arrive simultaneously, you can't just block all of them. You need to tell the good ones from the bad ones — in real time, under extreme load.

The burst problem: why the peaks of the future break today's infrastructure

"If agents figure out that something goes on sale at 10:00, they're all going to be there at 10:00:01 — not 10:00 while opening a browser and clicking around. So bursts are going to be even more violent than we see today."

Hans Skovgaard, CPTO at Queue-it

That's the core of the problem. When humans shop online, they arrive gradually. They open tabs, get distracted, retype URLs, click around. There's a natural ramp-up. Agents don't work that way.

This is a volatility problem, not just a capacity problem. Autoscaling can help with gradual increases in demand, but it can't respond fast enough when hundreds of thousands of agents hit the same endpoint at the same millisecond.
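The difference between a ramp and a burst is easy to see with a toy model. The sketch below (illustrative only — all numbers are hypothetical, not Queue-it measurements) sends the same 100,000 visitors against a fixed-capacity backend, first spread over a minute, then all at once:

```python
# Illustrative sketch: compare a gradual human ramp-up against a
# simultaneous agent burst hitting a server with fixed capacity.
# All numbers are hypothetical.

def dropped_requests(arrivals_per_second, capacity_per_second):
    """Count requests that exceed capacity in each one-second window."""
    dropped = 0
    for arrivals in arrivals_per_second:
        dropped += max(0, arrivals - capacity_per_second)
    return dropped

CAPACITY = 10_000  # requests/second the backend can absorb

# 100,000 human visitors spread over 60 seconds: a natural ramp.
humans = [100_000 // 60] * 60

# 100,000 agents all arriving in the first second: a wall of demand.
agents = [100_000] + [0] * 59

print(dropped_requests(humans, CAPACITY))  # 0 — the ramp fits under capacity
print(dropped_requests(agents, CAPACITY))  # 90000 — lost in a single second
```

Same total volume, radically different outcome — which is exactly why capacity planning alone doesn't solve a volatility problem.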

Nokia's Global Network Traffic Report underscores why this matters at a macro level: global WAN traffic is projected to grow 3–7x by 2034, with AI traffic reaching roughly 30% of the total. More critically, a single AI session can create a 3.5x traffic multiplier across inter-data-center links. These aren't evenly distributed loads — they're the kind of sharp, coordinated bursts that overwhelm traditional infrastructure.

As Hans put it: "Autoscaling isn't going to be the answer. This problem just multiplies exponentially, because intelligent programs can act at exactly the same point in time."

For any business running ticket drops, product launches, flash sales, or registrations, the peaks of the future won't just be taller — they'll be steeper. And steep peaks don't just threaten uptime. They threaten fairness: if your systems buckle under agent-driven load, it's the human customers — the ones still loading the page — who lose out first.

Your website is about to get a second front door

E-commerce is already beginning to adapt. Forward-thinking retailers are considering API-first interfaces where agents can ask structured questions — is this in stock, what's the price, what delivery options exist — without needing to navigate a website designed for humans.

This is a positive development. Agent-friendly access improves convenience and opens up new channels for discovery and sales. But it also raises a critical question: will the agent channel become an unfair fast lane?

If an agent can check inventory, add to cart, and complete a purchase through an API in milliseconds while a human is still loading the product page, that's not a fair contest — especially during moments of scarcity.

"What we need to work on is how we funnel agent traffic through the same queue as human traffic, so that being an agent doesn't give you an advantage."

Hans Skovgaard, CPTO at Queue-it

Queue-it is already working on this. Hans mentioned two active projects — one in Japan and one with a large e-commerce retailer — focused on ensuring agents and humans compete on equal terms during high-demand events, using virtual waiting room technology.

The principle is straightforward: separate access paths are fine, but the fairness rules should be consistent across them — and that becomes critical when the second front door can process requests a thousand times faster than the first.
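The shared-queue principle can be sketched in a few lines. This is a simplified illustration of the idea, not Queue-it's implementation — the class and method names are invented:

```python
# Hypothetical sketch of the fairness principle: requests from the human
# web channel and the agent API channel enter ONE shared queue, so
# neither front door gets a head start. Names are illustrative.
from collections import deque

class SharedWaitingRoom:
    def __init__(self):
        self.queue = deque()

    def enqueue(self, visitor_id, channel):
        """Both channels ("web" and "api") land in the same FIFO queue."""
        self.queue.append((visitor_id, channel))

    def admit(self, n):
        """Admit the next n visitors in strict arrival order."""
        return [self.queue.popleft() for _ in range(min(n, len(self.queue)))]

room = SharedWaitingRoom()
room.enqueue("human-1", "web")
room.enqueue("agent-1", "api")  # arriving via API grants no fast lane
room.enqueue("human-2", "web")
print(room.admit(2))  # [('human-1', 'web'), ('agent-1', 'api')]
```

The point isn't the queue itself — it's that admission order depends on arrival, not on which door you came through.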

Identity: the missing piece of the puzzle

For an AI agent to complete a purchase on someone's behalf, it needs to prove a few things. Not just that it can technically click "buy" — but that it's authorized to spend real money, represents a real person, and isn't copying itself thousands of times over.

The emerging model, as Hans described it in our conversation, is scoped, expiring authorization — a certificate that defines what an agent can buy, within what timeframe, and roughly how much it can spend.

"The AI agent will probably have a certificate issued by some certificate provider that defines what it can buy, in what timeframe, and for approximately how much. So it won't just be given a credit card and told to go crazy."

Hans Skovgaard, CPTO at Queue-it

This is still early days. Visa, Mastercard, Google, and several hyperscalers are all developing protocols for agent-based commerce — and the landscape is fragmented. Beyond payment credentials, there's a broader set of identity signals taking shape: platform identities (Apple, Google accounts) that limit mass account creation, regional digital IDs like the EU Digital Identity Wallet (required by law across EU member states by 2027), and eventually reputation-based signals.

"If an agent can say: 'I bought socks on this platform four years ago, I've made purchases in Target, I'm in Taylor Swift's fan club' — that's probably a pretty good indicator that you're a real person who wants two tickets, not a thousand to resell."

Hans Skovgaard, CPTO at Queue-it

For now, the practical takeaway is to design for a multi-signal world. No single identity protocol is going to win in the near term. Organizations need to support multiple signals and apply stricter requirements where the stakes are higher — like at checkout. This is also where identity intersects with the burst-traffic problem: during a peak event, the system needs to verify thousands of agent identities per second without adding latency that degrades the experience for everyone.
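One way to picture a multi-signal design is as a trust score with stage-dependent thresholds. The signal names, weights, and thresholds below are invented for illustration:

```python
# Illustrative sketch of "design for a multi-signal world": combine
# several identity signals into a trust score, and demand a higher
# score at checkout than for browsing. Weights are invented.

SIGNAL_WEIGHTS = {
    "payment_credential": 3,   # e.g. a scoped payment mandate
    "platform_identity": 2,    # e.g. a verified Apple/Google account
    "regional_digital_id": 3,  # e.g. an EU Digital Identity Wallet
    "purchase_history": 1,     # reputation-style signal
}

THRESHOLDS = {"browse": 0, "add_to_cart": 2, "checkout": 5}

def trust_score(signals):
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def is_allowed(signals, stage):
    return trust_score(signals) >= THRESHOLDS[stage]

agent = ["platform_identity", "purchase_history"]  # score 3
print(is_allowed(agent, "browse"))    # True
print(is_allowed(agent, "checkout"))  # False: needs a stronger credential
```

The structure matters more than the numbers: no single signal is required, but higher-stakes stages demand more accumulated evidence.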

Making abuse expensive — and keeping the rules fair at every stage

CAPTCHAs used to be the go-to defense against bots. But if you've struggled recently to identify traffic lights or crosswalks in a grid of blurry images, you already know the problem: these challenges have become harder for humans than for bots.

Imperva reports blocking an average of 2 million AI-enabled attacks every single day. The challenge-solving economy is thriving on the dark web, and the line between human solvers and AI solvers is blurring fast.

"On the dark web, if you want to buy a thousand solved challenges, you can. And you don't actually know if it's 200 people solving them or some AI — the price may not be that different."

Hans Skovgaard, CPTO at Queue-it

The next generation of defense isn't about creating the perfect challenge. It's about economics: making abusive automation expensive. An attacker might be willing to spend some compute to solve one challenge. But if they need to solve 100,000 — each one dynamic, frequently changing, and computationally costly — the math stops working in their favor.
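The "make it computationally costly" idea has a classic form: a hashcash-style proof of work, where solving burns CPU but verifying costs one hash. This is a generic sketch of that economics, not a description of any vendor's challenge:

```python
# Minimal hashcash-style sketch of "making abuse expensive": the server
# hands out a fresh challenge; the client must burn CPU finding a nonce
# whose hash has a required zero prefix, while verification stays cheap.
# Difficulty and challenge format are illustrative assumptions.
import hashlib, os

def solve(challenge: bytes, difficulty: int) -> int:
    """Brute-force a nonce — expected cost grows ~16x per extra hex digit."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """One hash: verifying is vastly cheaper than solving."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

challenge = os.urandom(16)  # rotate per request so solutions can't be reused
nonce = solve(challenge, difficulty=4)
print(verify(challenge, nonce, difficulty=4))  # True
```

One challenge is cheap for anyone. A hundred thousand fresh challenges, each demanding real compute, is exactly where the attacker's math stops working.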

Hans emphasized that variety and rotation are critical. The more agents have seen the same challenge, the less compute they need — they just train on that specific problem. Challenges that change frequently remain harder to crack at scale. But challenges are only one layer. The more interesting question — and one of the most practical ideas from our conversation — is whether every stage of the customer journey even needs the same level of security. Hans doesn't think so.

"You might want an agent to browse five sites and compare prices. But at checkout, the rules might be different and much stricter. That flexibility is what our system needs to give customers — how do they manage different risk levels at different stages?"

Hans Skovgaard, CPTO at Queue-it

This makes intuitive sense. In a physical store, anyone can walk in and look around. But when it's time to pay — especially for something scarce or high-value — you expect to show ID, prove you can pay, and follow the rules. That's how it should work online too: lower friction when people are discovering and browsing, tighter controls when inventory is being allocated and transactions are being completed.

This layered approach fits into a broader framework for online traffic orchestration: dynamic challenges that adapt and rotate, traps and honeypots that raise the cost of automation, bot detection for human-mimicking paths, tiered verification that scales with risk, and fairness controls like rate shaping and virtual waiting rooms that prevent any single actor — human or agent — from dominating.
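Of the controls listed above, rate shaping is the simplest to sketch. Below is a standard per-actor token bucket — a generic technique, with illustrative parameters, not a Queue-it component:

```python
# Sketch of one fairness control from the list above: per-visitor rate
# shaping via a token bucket, so no single actor (human or agent) can
# dominate. Rate and burst parameters are illustrative.
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate       # tokens refilled per second
        self.capacity = burst  # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, burst=5)         # one bucket per visitor ID
results = [bucket.allow() for _ in range(8)]  # a burst of 8 rapid requests
print(results)  # the first 5 pass; the rest are shaped until tokens refill
```

Keyed per visitor identity, the same mechanism caps a hyperactive agent and a scripted human alike — which is the point of intent-neutral fairness controls.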

The road ahead: faster peaks, higher stakes, and fairness as the foundation

If there's one thing Hans wanted to stress in our conversation, it's that these changes are already happening — and they're happening fast.

Cloudflare's 2025 Year in Review found global internet traffic grew 19% year-over-year — up from 17% in 2024 — with AI user-action crawling up 15x over the course of the year. The acceleration is real, and it shows no sign of slowing down.

"I was looking for good resources on internet traffic, and reports that are six months old are already outdated. You almost have to go look at what Fastly, Cloudflare, and Akamai are publishing right now. It's moving so fast that something six months old might be completely wrong."

Hans Skovgaard, CPTO at Queue-it

The peaks of internet traffic aren't getting smaller. They're getting smarter. And as agentic AI matures, the forces driving volatility — sharper bursts, coordinated agent behavior, and an increasingly blurry line between good and bad intent — will only intensify.

That's what makes fairness the defining challenge of this era. It's no longer just about making sure humans aren't outcompeted by bots. It's about making sure humans and legitimate agents get fair access — while keeping bad actors out, whether they're human or automated. As Hans put it when we asked what scalability means to him:

"Scalability is both the ability to handle the load technically, but also to manage a good experience around it. You could in principle put someone on a phone queue that works perfectly well, and 12 hours later they've had the worst experience of their life. So, handling scalability while also creating a user experience that is meaningful and acceptable is super important."

Hans Skovgaard, CPTO at Queue-it

The difference between staying online and earning trust is in that last part — the experience around the load, not just surviving it.


RELATED: How Queue-it helps build online trust, one high-traffic event at a time

The organizations that prepare now — by rethinking their assumptions about who's visiting their site and investing in online traffic orchestration that works for both humans and agents — will be the ones that turn agentic AI into an opportunity rather than a crisis.

FROM THE SMOOTH SCALING PODCAST

This article is based on Episode 21 of the Smooth Scaling Podcast, featuring Hans Skovgaard, Chief Product & Technology Officer at Queue-it.

Listen to the full episode: Episode 21—Agentic AI and the future of online traffic