
Scaling Ticketing Systems for Traffic Bursts & Bots with Line-Up CEO Barnaby Clark

In this episode, Barnaby Clark, CEO of Line-Up, reveals the engineering practices behind resilient ticketing systems that handle real-world demand. Barnaby explains how Line-Up rebuilt their platform from the ground up to meet the complex needs of live events, from unique inventory structures and API scaling to predictive load handling and third-party integrations. Barnaby dives into the evolving threat of bots, the nuances of asynchronous payments, and how to design for bursts in traffic without breaking the customer experience. It’s a practical look at infrastructure, performance, and the unpredictable nature of ticketing at scale.

Barnaby Clark is CEO and Co-Founder at Line-Up. He has 12 years of experience designing innovative software products across diverse stacks, scaling and guiding cross-functional teams, building high-growth e-commerce platforms, and overcoming complex software challenges. Line-Up was shortlisted for Best Technology Provider at the British Media Awards, won Seedcamp London and has secured multiple funding rounds from angel investors, institutional backers, and corporate entities. Prior to Line-Up, Barnaby spent 5 years working on Mergers & Acquisitions and private capital fundraising efforts within the technology sector.
 

Episode transcript:

Jose

Hello and welcome to the Smooth Scaling podcast, where we speak with industry experts to uncover how to design, build, and run scalable and resilient systems.

I'm your host, Jose Quaresma, and today I'm joined by Barnaby Clark, the CEO of Line-Up, a ticketing commerce platform. We talked about the challenges of ticketing and how Line-Up addresses them. Specifically, we talked about designing and running scalable ticketing systems to meet peak demand, about being able to predict traffic and the impact that has on your ability to scale, and about how bots impact the industry. Enjoy.

Welcome Barnaby to the Smooth Scaling Podcast.

 

Barnaby

Great to be here.

 

Jose

It's great to have you. And I would like to start with you telling us a little bit about yourself and what led you to founding Line-Up.

 

Barnaby

Sure. So my background, I mean, I started in M&A and corporate finance, working with a lot of tech companies, but I really had a deep passion for live entertainment and live experiences. And with a couple of people I've known for a long time, we came up with the idea of building a company in the live entertainment space, really with the mission of trying to get people to more live events.

That evolved, as many startups do over time, into the ticketing space. And then we started working with a number of venues across the UK. And then COVID happened. During COVID, we took the decision to completely rebuild Line-Up from scratch and take the opportunity of the live entertainment industry being effectively shut down to build a really modern system using everything we'd learned from our years of experience working in the space.

Coming off the back of COVID, we started working with some really exceptional venues and focused on live commercial entertainment, ticketing West End venues, venues on Broadway and off-Broadway, and venues across the UK.

 

Jose

And talking about ticketing, right? Ticket onsales are very interesting in different dimensions, and especially in the scope of this podcast from a technical standpoint.

You often have limited inventory and usually a big traffic peak, and you have bots and all that. Can you tell us a little bit about the considerations when operating in the ticketing space, and potentially what a typical user journey looks like?

 

Barnaby

Yeah, I think ticketing has some really unique challenges when you're thinking about scaling, and, I mean, it's a subsector of ecommerce obviously. But unlike most of ecommerce, you've got some very different challenges in the ticketing space, and they're not consistent. They depend on the type of onsale you're doing.

So I think one of the challenges you have is you can have an onsale where you've got very limited inventory and very high demand. That's the one that people often think about. So a big-name act or a big-name concert where you've got very large numbers of people coming into a site all of a sudden. And often there's a sense of a gated entrance. So people know when they're coming to the site. They're already hitting that site, trying to get in, and suddenly you open the floodgates. So that's one challenge.

But then that can also happen in an unpredictable way. So you can have a situation where a famous actor or a famous star sends out a tweet or a message to their fan base and drives unexpectedly high loads of traffic into a flow.

And then you have other situations where you might have plenty of inventory across a long run. But you have a lot of people coming through that. So you're looking at how you can maintain a very high transaction throughput and a good user experience.

So I think that there's several different challenges.

And then I think the last thing in the ticketing world, again, is that you can have seated events or GA events. If you think about a seated event, it's not so much like the ecommerce model where you have a stock of a thousand and, yes, you need to track that stock and make sure you don't oversell. In ticketing, every seat is a unique piece of inventory. So you have to think about making sure that people can get seats next to each other, that you're not just letting hundreds of thousands of people try and book, you know, a hundred seats where no one's going to get seats in a group or next to each other.

So there's some quite unique challenges there in terms of how you manage the systems and the scale for that.

 

Jose

And so at Line-Up, on your platform, can you share a little bit about what a typical user journey looks like? I think that will help us when talking about addressing those challenges.

 

Barnaby

Yes. Most of the journeys, and it kind of depends on the phase of the event, but most of the journeys, we have an average time to purchase of about four minutes.

So people are coming in, they're usually going to see a calendar. We're usually talking about a single show journey, so they know what show they're coming into book. They're going to see a calendar of dates. They may have an idea of what date they're booking for, or they may not. They may be looking more on the price and availability.

Once they've picked a date, they're going to either see a GA (general admission) flow where they can just pick different price points, or they're going to pick a specific seat or set of seats.

Once they add those seats to their basket, they're going to go through to see upsells, products, merchandise, other things they might buy, and they're going to go to the payment.

So, you know, each of those steps will take a different amount of time. But the key ones for us really are when they're thinking about picking their seats: at what point do we hold those seats and make sure we reserve them for the customer? And then through to the transaction page, making sure that we can have a good throughput of transactions. Those are the pressure points for us in terms of infrastructure: where we're making writes to the database and where we're creating new transaction items.
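
To make that seat-hold pressure point concrete, here is a minimal sketch of holding a group of seats atomically with an expiry so abandoned baskets release their inventory. It is an illustration only, not Line-Up's implementation: the in-memory store, the SeatInventory class, and the eight-minute TTL are all assumptions; a real system would do this against the database with row-level locking.

```python
import threading
import time

HOLD_TTL_SECONDS = 8 * 60  # illustrative: how long a basket keeps its seats


class SeatInventory:
    """Toy in-memory seat store; a real system would use database rows and row locks."""

    def __init__(self, seat_ids):
        self._lock = threading.Lock()
        # seat_id -> (basket_id, hold_expires_at) or None if the seat is free
        self._holds = {seat_id: None for seat_id in seat_ids}

    def hold_seats(self, basket_id, seat_ids):
        """Atomically hold all requested seats, or none of them."""
        now = time.time()
        with self._lock:
            for seat_id in seat_ids:
                if seat_id not in self._holds:
                    return False  # unknown seat
                hold = self._holds[seat_id]
                if hold is not None and hold[1] > now:
                    return False  # someone else already holds it
            expires_at = now + HOLD_TTL_SECONDS
            for seat_id in seat_ids:
                self._holds[seat_id] = (basket_id, expires_at)
            return True

    def release_expired(self):
        """Periodic cleanup so abandoned baskets free their seats."""
        now = time.time()
        with self._lock:
            for seat_id, hold in self._holds.items():
                if hold is not None and hold[1] <= now:
                    self._holds[seat_id] = None


inventory = SeatInventory(["A1", "A2", "A3"])
print(inventory.hold_seats("basket-1", ["A1", "A2"]))  # True: both seats were free
print(inventory.hold_seats("basket-2", ["A2", "A3"]))  # False: A2 is already held
```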

 

Jose

So the user journey really splits at that selection of seats, right? Why is the earlier part not as sensitive for you as the seat choice?

 

Barnaby

Well, I think for us, we kind of split the journey into what we consider to be read and cacheable data versus when we're starting to make writes.

So if you think about the early parts of the journey, a lot of that data, like when the show is on, that's not changing much. The prices, yes, the available prices are changing, but they're not changing as dramatically as the seat availability.

So a lot of that early part of the journey, you can rely on a certain level of caching or running off the read replica of the database, so you're not worried so much about lock contention and things like that.

It's the point at which you want to start holding those seats. And even that is a decision that different systems make differently, around when they hold those seats. Do you hold those seats the minute someone clicks on them? Which is great, you know, that means they've got the seat, no one else is going to take it. But the flip side is that you're going to have a lot more writes onto the database.

Or do you wait until they've got the seats they want, put them through to the next page, and then make the call? At that point, I'm going to try and hold the seats and risk telling them the seats they clicked on are no longer available, because, you know, there were 50 other people on the page at the same time who picked up the same seats.

So for us that journey is split between what you can do in read-only mode, where you're primarily pulling off a cache, and that's fine at high volume, versus the writes into the database, where you might have contention, because if you've got an inventory hold you've got to be careful about how you're managing that.

 

Jose

Looking at it from that split, those two different perspectives, the impact each has, and the requirements they place on the infrastructure. That's super interesting, and helpful as well to guide you when you design your infrastructure, right?

 

Barnaby

Exactly. And as much as you can, you rely on the read stuff and then optimize for that. So for us, in terms of how we think about the APIs that we're designing and building, we do think about them very differently. We can direct read APIs to different database classes, we can cache them, we can do other things with them. So splitting out each API call that we might make through the journey into: is this going to mutate anything, or is this purely just giving people information? Down to how we think about how we rate limit them and how we control traffic from them. It's important for us to think about it that way.
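
As a rough illustration of that read/write split, here is a minimal sketch of routing read-only calls through a short-lived cache and a read replica while mutations always go to the primary. The ApiRouter and FakeDb classes and the 30-second TTL are invented for the example and are not Line-Up's actual setup.

```python
import time

READ_CACHE_TTL = 30  # seconds; calendar and pricing data tolerate slight staleness


class ApiRouter:
    def __init__(self, primary, replica):
        self.primary = primary   # handles mutations (seat holds, transactions)
        self.replica = replica   # handles read-only discovery traffic
        self._cache = {}         # key -> (value, expires_at)

    def read(self, key, query):
        """Read-only calls: serve from cache, fall back to the read replica."""
        cached = self._cache.get(key)
        if cached and cached[1] > time.time():
            return cached[0]
        value = self.replica.execute(query)
        self._cache[key] = (value, time.time() + READ_CACHE_TTL)
        return value

    def write(self, statement):
        """Mutations always hit the primary and are never cached."""
        return self.primary.execute(statement)


class FakeDb:
    """Stand-in for a database connection, just for the example."""

    def __init__(self, name):
        self.name = name

    def execute(self, sql):
        return f"{self.name} ran: {sql}"


router = ApiRouter(FakeDb("primary"), FakeDb("replica"))
print(router.read("show-42-dates", "SELECT * FROM performances WHERE show = 42"))
print(router.write("INSERT INTO seat_holds ..."))
```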

 

Jose

I guess one of the challenges is on the second part of the journey, where we have the mutating actions, if you will. I guess that's where you start addressing the scalability challenges, right? You have been thinking a lot and doing a lot on being able to scale quickly. Can you tell us a little bit more about that?

 

Barnaby

One of the unique things, not completely unique, but one of the unusual things in the ticketing industry is this kind of sudden burst in traffic. So if you have a show going on sale at a specific point in time on a specific day, you can have a sort of 100x increase in traffic to an API that is entirely legitimate.

I mean, it might look like it's a denial-of-service attack, but it's not. It's legitimate. Lots of people are trying to book tickets. So you have to think about how you can put queues and other things in place to try and protect the service. But ultimately, you have to think about how you can scale into that traffic, because you don't want to keep people waiting around.

So a lot of what we've been thinking about is how you can optimize that scaling. And obviously with modern infrastructure, we're running on Kubernetes, and that's fine. We can turn on new pods. But things like: how long does the container take to start up? What does the cache look like to the container? Am I having to download that container image fresh for every new pod I want to turn on, or have I got a shared cache? What's the startup time? Can I offload things that might delay the startup time into other microservices?

So thinking about how quickly you can scale up. And then thinking about how you need to scale up all the parts of the infrastructure. So it's not just scaling up the API pods. It might be, you know, you need to scale up the database, you need to scale up replicas, you need to scale up the cache. And how long does each of those take?
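
As a back-of-the-envelope illustration of scaling every tier, not just the API pods, here is a sketch that works out how many extra units each tier needs for a forecast request rate, and whether each tier's startup time fits inside the warning you have. All the capacities and startup times are made-up numbers, not Line-Up's.

```python
from dataclasses import dataclass
import math


@dataclass
class Tier:
    name: str
    capacity_per_unit_rps: float  # requests/sec one pod or replica can handle (assumed)
    startup_seconds: int          # how long a new unit takes to become ready (assumed)
    current_units: int


def plan_scale_up(tiers, expected_rps, lead_time_seconds):
    """Return (tier name, extra units needed, startup fits in lead time) per tier."""
    plan = []
    for tier in tiers:
        needed = math.ceil(expected_rps / tier.capacity_per_unit_rps)
        extra = max(0, needed - tier.current_units)
        fits = tier.startup_seconds <= lead_time_seconds
        plan.append((tier.name, extra, fits))
    return plan


tiers = [
    Tier("api-pods", capacity_per_unit_rps=200, startup_seconds=45, current_units=4),
    Tier("read-replicas", capacity_per_unit_rps=1500, startup_seconds=600, current_units=1),
    Tier("task-workers", capacity_per_unit_rps=100, startup_seconds=30, current_units=2),
]

# e.g. an onsale forecast of 5,000 req/s with four minutes of warning
for name, extra, fits in plan_scale_up(tiers, expected_rps=5000, lead_time_seconds=240):
    print(f"{name}: add {extra} units; startup fits within lead time: {fits}")
```

Run as written, it shows the slow-starting tier (here the read replica) is the one you have to scale well in advance rather than reactively.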

And then that leads you into: how can you do that predictively or how can you think about doing that in advance?

So, yeah, a lot of the energy and effort over the last couple of years has gone into thinking about how we can optimize that scaling, how quickly we can do it, and how we can get it to a point where it's as automatic as possible.

 

Jose

And I mean, in that predictive space, what are some of the things that you have done or that you're looking into? Is there something that you can and would like to share?

 

Barnaby

So I think there's a couple of things we've done, and different things. There are some really good cluster autoscalers out there now that are doing things in smart ways, and we've used those. Those have been quite beneficial.

But I think the really interesting one for us is thinking about how you can kind of take signals from earlier in the booking journey to help you to scale for later on.

So if you think about what we were talking about earlier, you've got a lot of people coming into a queue or coming into a waiting room or sitting around waiting for an event to go on sale. Now they might be pulling read-only data. There's some indication that those people are there. You can use that information to start to scale into it, even if you don't know that that event's going to happen.

And then I think, again, if you think back to that journey we talked about, the early part of that user journey is read-only: it's discovery, it's finding a date, it's finding pricing. Now, at some point, you're going to have to scale the write side of the infrastructure. But you can use the information about the early read journey to think, “Well, I need to start scaling out the write part of the database. I need to start scaling up my asynchronous task queues and things like that, because I've got an influx of traffic on the read side that means I'm going to start seeing a much bigger throughput on the transactions.”

So I think it's about picking up signals from user actions earlier on that you can then leverage to start to scale in advance and be ready for when that traffic moves through the journey.

I mean, as I said, we've got a four-minute average journey from start to finish. That gives you more than enough time to be ready for that traffic once people have decided what date they want to go and have seen the seating plan. By the time they start adding seats to their basket, our philosophy is that we've got enough early signals that we should be able to scale into that.
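
A minimal sketch of that idea, assuming a roughly four-minute lag between discovery traffic and checkout, a guessed conversion rate, and a guessed per-worker capacity (none of which are Line-Up's real numbers): look at read-side traffic one journey-length ago and size the write side for the fraction you expect to convert.

```python
from collections import deque
import math

JOURNEY_LAG_SECONDS = 4 * 60   # average time from discovery to checkout
CONVERSION_RATE = 0.3          # assumed fraction of browsers who reach payment
CHECKOUTS_PER_WORKER_RPS = 25  # assumed write-side capacity per worker


class WriteSideForecaster:
    def __init__(self):
        self._read_samples = deque()  # (timestamp, read requests per second)

    def record_read_traffic(self, timestamp, read_rps):
        self._read_samples.append((timestamp, read_rps))

    def workers_needed(self, now):
        """Forecast checkout load from read traffic observed roughly one journey ago."""
        target = now - JOURNEY_LAG_SECONDS
        relevant = [rps for ts, rps in self._read_samples if abs(ts - target) <= 30]
        if not relevant:
            return 1
        expected_checkout_rps = max(relevant) * CONVERSION_RATE
        return max(1, math.ceil(expected_checkout_rps / CHECKOUTS_PER_WORKER_RPS))


forecaster = WriteSideForecaster()
forecaster.record_read_traffic(timestamp=0, read_rps=4000)  # onsale opens, heavy browsing
print(forecaster.workers_needed(now=240))  # scale workers before checkouts arrive
```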

 

Jose

And with those early signals, are you usually thinking about the overall load, or are you trying to make it as specific as possible? Or is that maybe not as important from a load perspective? I guess you know which kind of shows people are waiting for, but which areas of the stadium, let's say, they're more interested in, is that something that…

 

Barnaby

That's an interesting one. I'd say that's a slightly different challenge within scaling, which is how you scale the data sets that you're dealing with. And again, that's an interesting challenge that this industry gives you that some other industries might not. A good example of that would be an arena.

Again, when you start building a system, often it can be tempting to build it and say, “Right, I'm going to design, you know, a way of showing seating plans,” and you build it for a thousand seats. And then someone comes along and builds a venue with 100,000 seats.

So, you know, it's about thinking about the data sets that you're handling and how big they can become and how you can think about what the maximum challenge would be or how you would break that up.

So an arena would be an example of that, where if you look at an arena on our system, we're not going to load every seat on the page view. We're going to chunk that up into blocks, aggregate that data, load an aggregate view, and let you dive into it at different sections. So it's about how you slice up that data, especially on an API where you can paginate, but it's not always great to paginate on a client experience like that. You don't want to paginate through a seat list. I mean, when I'm on a browser, I want to see all the seats in the area. I don't want to have to press “load more” or something. So it's about tying in your user experience and your UI with how you're going to load that data.
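
Here is a rough sketch of that chunking idea: the arena overview returns availability aggregated per block, and individual seats are only loaded for the section the user drills into. The data shapes and block names are invented for illustration.

```python
from collections import defaultdict

# Each seat record: (section, row, seat_number, is_available)
seats = [
    ("Block A", "1", n, n % 3 != 0) for n in range(1, 501)
] + [
    ("Block B", "1", n, n % 2 == 0) for n in range(1, 501)
]


def overview(all_seats):
    """Aggregate view for the arena page: counts per block, not 100,000 seat objects."""
    totals = defaultdict(lambda: {"available": 0, "total": 0})
    for section, _row, _num, available in all_seats:
        totals[section]["total"] += 1
        totals[section]["available"] += int(available)
    return dict(totals)


def section_detail(all_seats, section):
    """Full seat list only for the block the user has drilled into."""
    return [s for s in all_seats if s[0] == section]


print(overview(seats))
print(len(section_detail(seats, "Block A")))
```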

The other thing we have to think about is that a large percentage of the tickets we sell are sold through third parties over an API. So we're transacting a lot through the API with third-party sellers. So it's also thinking about how you can structure an API in a way that people can't break it, or can't say, “You know, I want to load 100,000 things.” That's an important problem. So I see that as a slightly different scaling problem: a data architecture problem rather than an infrastructure problem.

 

Jose

And you mentioned the third parties that you're integrating with on the purchase side. How does that work?

 

Barnaby

So other services reach out to us to use our API to sell tickets. From our perspective, a lot of West End tickets especially are sold through third-party ticketing agents. So they sell through other platforms, whether that's mobile apps or other things that are using our API.

So I mean, I think just within the ticketing industry, there's usually one primary system that sits at the venue, and in the places where we operate, that tends to be us. So we would be the one that's the source of truth at the venue in terms of who is sitting where.

But that means that if somebody's booking through a third-party system, that booking has to come into us. So a lot of the traffic and a lot of the transactions that we do are API-based. The majority is through our booking flow, which is white-labeled, but we do have a significant portion through the API.

And one of the decisions we took early on, rather than having a segregated API or a separate API, was to maintain a single API, partly because I think you get a lot of engineering benefits from having everyone transact over the same API. It can scale in sync, which makes sense.

But that does mean we have to think about not just how we're using it, but how other people might use it, and making sure that there are protections in place if somebody decides to drive a lot of traffic to that API. And that means rate limiting at different levels. So we operate rate limiting at the overall level, but then we have protections around rate limiting on the write side. So again, making sure that whoever is interacting with that API, whether it's us ourselves or third parties, there are systems and protections in place to make sure it scales.
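
As an illustration of layered rate limiting, here is a minimal token-bucket sketch that gives each client a much smaller budget for write calls than for reads. The specific limits and the idea of keying by client ID are assumptions for the example, not Line-Up's actual policy.

```python
import time


class TokenBucket:
    def __init__(self, rate_per_second, burst):
        self.rate = rate_per_second
        self.capacity = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


class ApiLimiter:
    """Per-client limits, with writes allowed a much smaller budget than reads."""

    def __init__(self):
        self._buckets = {}

    def allow(self, client_id, is_write):
        key = (client_id, "write" if is_write else "read")
        if key not in self._buckets:
            self._buckets[key] = TokenBucket(
                rate_per_second=5 if is_write else 50,
                burst=10 if is_write else 100,
            )
        return self._buckets[key].allow()


limiter = ApiLimiter()
print(limiter.allow("agent-123", is_write=False))  # discovery call
print(limiter.allow("agent-123", is_write=True))   # basket / transaction call
```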

And I also don't think scaling is a binary thing. It's not either you scale or you don't. I think there's an element of what happens while a system is scaling up to capacity. So obviously you can put queues and things in place, but there's no guarantee that someone will have done that. So it's: what protection do you have for the system while it's scaling into traffic?

 

Jose

Do you have any external dependencies that you need to take into consideration as well?

 

Barnaby

We do. Yeah. Mainly payment providers. And I think one of the interesting things there is, I mean, we work with the three big ones, Square, Stripe and Adyen, and we've never had any issues with them in terms of scale.

But what is interesting about the way payments have gone is that a lot of it is now asynchronous. So, especially with 3D Secure and things like that, your payment completion moves your transaction flow to be asynchronous as well. So again, that's a slightly separate scaling consideration: if you're now relying on webhooks coming in to complete payments and complete transactions, how do you think about scaling all the parts of the service?

So it's great, you've scaled the reads, you've scaled the writes, you can create the transactions and everything else, but then you've got these asynchronous tasks that might be taking place to process the webhooks that are coming in, to send out the confirmation emails, and things like that.

Again, it's about thinking about every step of the journey, because if you have a really large onsale and you sell lots and lots of tickets, but then no one gets a confirmation email for two hours because the task queues or the services that are sending out emails or generating PDFs didn't scale alongside the rest of the system, they get backed up. That's then a separate problem.

So again, thinking about all the parts of the system that need to scale. And a big part of that for us is monitoring and making sure we've got sufficient tooling in place to make sure we can monitor when something might become a problem.

Because if an API is not scaling, it becomes evident pretty quickly. You get delays on response times, you get timeout errors, et cetera. People tend to notice that pretty quickly. If an email is taking two hours to go out when it should go out instantly, that's a more subtle thing. You don't really want to wait until the customer is complaining about it. So it's thinking about monitoring the lengths of asynchronous queues, and other things that you might need to do.
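
A minimal sketch of that kind of monitoring: alert when an asynchronous queue's backlog or the age of its oldest message crosses a threshold, so a two-hour email delay surfaces before customers complain. The queue names and thresholds are illustrative assumptions.

```python
import time

# Illustrative thresholds per queue: (max backlog, max age of oldest message in seconds)
THRESHOLDS = {
    "payment-webhooks": (500, 60),
    "confirmation-emails": (2000, 300),
}


def check_queue(name, backlog, oldest_enqueued_at, now=None):
    """Return a list of alert strings for one queue's current state."""
    now = now or time.time()
    max_backlog, max_age = THRESHOLDS[name]
    alerts = []
    if backlog > max_backlog:
        alerts.append(f"{name}: backlog {backlog} exceeds {max_backlog}")
    oldest_age = now - oldest_enqueued_at
    if oldest_age > max_age:
        alerts.append(f"{name}: oldest message is {int(oldest_age)}s old (limit {max_age}s)")
    return alerts


now = time.time()
# A two-hour-old confirmation email sitting in the queue triggers both alerts.
print(check_queue("confirmation-emails", backlog=12000, oldest_enqueued_at=now - 7200, now=now))
```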

 

Jose

And talking about external dependencies, email is also tricky, even beyond the scope of what you were just sharing, right? Your services might be scaled up for it.

But then if it's peak traffic and you're sending a lot of emails to a specific email provider in a specific country, that whole system can become the bottleneck as well.

 

Barnaby

Yeah, and I think that's why it's crucial to look carefully at what partners and what third-party systems you want to work with. And I think more and more now, SaaS systems are calling out to other services and other systems. So making sure that those systems are capable of scaling and have the right things in place.

And I think one of the things for us that's quite interesting is when we think about how you test these things. Because you can load test your own system, but then if you're depending on a third-party system as part of the flow, how do you build that into the load testing or how do you build that into your scenario planning to make sure that that's not going to become an unexpected bottleneck in a real-life scenario?

 

Jose

Yeah. And what have you found to be more effective when approaching that? Have you been working together with some third parties to coordinate load testing? You did mention that with payments you've never had any issues, so maybe in that case it hasn't been needed.

 

Barnaby

So some third-party services are good, and they’ll document how you can load test with them or give you recommendations on how to simulate that. Others you have to make certain assumptions and effectively mock them. So you have to create realistic mock experiences of what that might be, whether that's building in delays or other things. But that tends to be service by service. Some of them are good and give you guidance on what to expect.
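
As a rough illustration of mocking a third party for load testing, here is a stub payment provider that injects plausible latency and an occasional decline. The latency figures and failure rate are assumptions you would calibrate against the real service's documentation or observed behaviour.

```python
import random
import time


class MockPaymentProvider:
    """Stand-in for a real payment API during load tests."""

    def __init__(self, p50_ms=300, p99_ms=2500, failure_rate=0.01):
        self.p50_ms = p50_ms
        self.p99_ms = p99_ms
        self.failure_rate = failure_rate

    def charge(self, amount_pence, token):
        # Crude latency model: mostly around p50, occasionally near p99.
        latency_ms = self.p50_ms if random.random() < 0.95 else self.p99_ms
        time.sleep(latency_ms / 1000.0)
        if random.random() < self.failure_rate:
            return {"status": "declined", "token": token}
        return {"status": "succeeded", "amount": amount_pence, "token": token}


provider = MockPaymentProvider()
print(provider.charge(4500, token="tok_test_123"))
```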

And again, this is getting really into the weeds now. But if you've got production and development environments, if those third parties have also got production and development environments, and if your development environment is linked to their development environment—you can fall into the trap of doing a lot of testing in a development environment to then discover that the production environment is quite different.

 

Jose

That's a widely known, or at least widely felt, problem: things work well in dev and staging, and then once you get into production, there's a certain percentage of issues that you only see in production.

 

Barnaby

Exactly. And some things are just virtually impossible to test. I mean, you obviously can't test what an end-to-end payment flow would look like with production cards unless you're willing to spend a lot of money.

So I think some things you just have to accept. But then it's working out what the risk is and how you mitigate it.

 

Jose

And on that point of mitigation, because I think that's big and I understand some of your focus has been on that area as well. It's one thing to have scaling that's as quick and as efficient as possible and to be able to predict, but it's another to be prepared for any issues, edge cases, anything unexpected. Am I right in thinking you've been focusing on that as well?

 

Barnaby

Yes, absolutely. And I don't want to pretend that we've got all the answers here or that we've solved all the problems. It's something we will continue to work on. I think the key is to be continually evolving it.

But for us the key things are looking at disaster recovery scenarios: looking at what happens when things go down, what happens when parts of the infrastructure break. I mean, I think most people are dependent, to some extent, on third-party systems, whether that's hosting providers or DNS services or reverse proxies and things like that. So thinking about how you can mitigate risks there.

But then also within our own application, thinking about what happens. The worst-case scenario to me is that you've got an influx of traffic, and then everything stops working and the recovery is really slow, because you've overwhelmed a bunch of pods, or you've overrun the core application, or the database is going into a restart or something like that.

So a lot of the focus we've had has been on things like adaptive rate limiting to make sure that we're stopping traffic before it gets to that point, thinking about rate limiting different parts of the service. So we rate limit both at the database side, but we also rate limit at the edge, and then we rate limit the applications.

Then the next point for us to think about is traffic shedding. So thinking about what the core priority traffic is that we should definitely be handling, versus what's maybe lower-priority traffic, in those extreme examples.
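
A minimal sketch of priority-based shedding, under the assumption that checkout traffic matters most: once reported load climbs past capacity, the least important request types are turned away first. The priority classes, thresholds, and load signal are all illustrative.

```python
# Priority classes: lower number = more important to keep serving.
PRIORITIES = {
    "complete-payment": 0,
    "hold-seats": 1,
    "seat-map": 2,
    "browse-calendar": 3,
}


def should_shed(request_type, current_load, capacity):
    """Shed the least important traffic first as load climbs past capacity."""
    utilisation = current_load / capacity
    if utilisation <= 1.0:
        return False                           # healthy: serve everything
    if utilisation <= 1.2:
        return PRIORITIES[request_type] >= 3   # shed browsing only
    if utilisation <= 1.5:
        return PRIORITIES[request_type] >= 2   # shed seat-map reads too
    return PRIORITIES[request_type] >= 1       # keep only payment completion


for request_type in PRIORITIES:
    print(request_type, "shed?", should_shed(request_type, current_load=1300, capacity=1000))
```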

So this is why I say I don't see scaling as a binary thing. It's kind of how do you protect the system as much as possible when it is under load and stress so that even if things aren't going 100% according to plan, you're not kind of either available or not available. 

And I think that's a really important part here is how do you think about each part of the application and how you can keep as much of it running as possible even in the most extreme situations where things aren't going as you expect.

 

Jose

I fully agree. I think it gets pretty interesting when you get to the chaos engineering side of things. We did have Kolton Andrus, the CEO of Gremlin, on the podcast the other day, also talking about chaos engineering and knowingly starting to test the system by making some parts of it fail on purpose, to see how it reacts as a whole.

And the goal, which is probably very hard to achieve a hundred percent but I think is a very interesting one, is to start small, start running those experiments, see how the system reacts, and then try to improve it.

 

Barnaby

100% exactly that. I think chaos engineering is a great thing and that kind of continuously trying to think about what might happen and then actually running those scenarios and seeing how the system can adapt.

 

Jose

Very good. I think it's super interesting topics and different perspectives on the work that you do at Line-Up. I appreciate you sharing it.

One thing: no discussion on scaling and the work that you're doing at Line-Up would be complete without talking about bots. So is there any perspective, anything that you want to share on how you're fighting them? What are the things that you're looking at in that area?

 

Barnaby

Yeah, I mean, we're seeing a lot more of them. That's not that surprising. I think there's been a big increase in things like residential proxies, which are a lot more available.

So I think ticketing again slightly stands out as an industry that suffers from this more than maybe some other industries. And part of that is that a lot of the time we're dealing with inventory that may not be priced at market value.

So what I mean by that is often you'll have very high-profile bands or artists who are selling tickets to events at a price that is not the highest price they could achieve on the open market. So there is an incentive there then for people to pick up those tickets and resell them at higher prices. And then there's lots of different discussions going on around that. I don't want to get too much into that now. But the reality is that there is an incentive for people to find ways to buy those tickets and resell them.

So we're seeing that the bots are quite sophisticated, to the extent that there are bots that will buy tickets and list them for resale simultaneously. So they can be in the basket and then they can appear on a resale site before the transaction has even been completed.

And then there's just the volume that we see. I would say it's becoming increasingly hard to filter out that traffic and detect it.

So yeah, it's a big challenge. I think there's a lot that can be done, and things that we can do, but it's a little bit of an arms race. You can introduce things like CAPTCHAs and other measures to make the barrier a little bit higher and make the process a little bit more expensive. I mean, there are CAPTCHA farms and things where people can fill these out, but it does at least reduce the economic incentive slightly. And I think there are models that can be put in place in terms of how tickets are sold, whether that's through ballots and things like that, that again can reduce the risk.

I think, again, waiting in queues can do an awful lot here in terms of restricting people's access into the flow, and then also shuffling up people who are in the queue. So making it harder for those people to get into the booking.

Again, there's not one solution. I think it's looking at each part of the journey. It's very customer-specific, because some customers won't have a problem with it and some customers will have a really big problem. So it's finding a solution that works in each scenario, and then also looking at each part of that booking journey and thinking about how you can mitigate the risk there.

 

Jose

Yeah. And I'm actually also very curious to see the evolution over the next few years, because with the more agentic web we're moving towards, until now we've been trying to separate bots from people. I think maybe soon it will be between bots and other bots that people sent to the queue to get the tickets for them. So I think it's not getting any easier, right?

 

Barnaby

A hundred percent. I was talking to someone about this the other day: the notion of identity is becoming a really hard thing to define. In terms of online presence, whether that's in videos or images and things like that, this notion of identity is becoming blurred.

So, yes, I think you're completely right. As we see more and more agents making purchases for people and things like that, then that question of how you filter out legitimate and not legitimate is going to become harder.

So yeah, I think it's not going anywhere, and it will be a challenge as we go forward. But it's also an opportunity, I think, to do some interesting things, to stand out, and to think about how we can solve these problems in different ways.

 

Jose

All right. And we're close to wrapping up. I would like to finish it off with a few rapid-fire questions.

So short questions that ideally get a short answer as well.

You don't have to think too much about it. We would love to get your input or your answer.

To you, scalability is:

 

Barnaby

Challenging.

 

Jose

Second question. Is there a book, podcast, thought leader, anything that you would recommend to anyone? Could be in this area, could be anything else.

 

Barnaby

I mean, lots of books. I find engineering blogs very interesting, Stripe's engineering blog in particular, and Shopify's engineering blog, I think they've done some great stuff.

I'm also a big fan of Hacker News, which can have some great articles about these things, some more technical deep dives. So, yeah, those things in this sector.

 

Jose

Just two more. Is there a technology that you're excited about right now? And bonus points if you don't say AI, but it's also accepted.

 

Barnaby

It's not particularly new. I continue to be excited about two things.

One, how the SaaS offering is being broken up into different functional pieces that are then being offered as really good, optimized versions of what they do. So what I mean by that is, you know: authentication, you've got people like WorkOS doing a really good authentication product; billing, you've got something like Lago doing really good billing products for how you bill for SaaS.

So there's this continued kind of trend of thinking about focused parts of an application and how you can build a really good solution for that and then build that out in a way that people can use it. I think there's going to be more of that.

And why I think that's interesting is these focused products… Embedded analytics is another area where really interesting things are happening. We've gone from these big analytics products to stuff that you can embed into applications. I think that's an interesting trend.

And I think part of that trend is, as well, the increased commercialization of open-source technology. You have these really great open-source projects that seem to have found a good way of commercializing, so that you now have some really big, established open-source companies that are very profitable and have a commercial offering. I think that's an interesting way to build software. And I think we can see that across other sectors.

Sorry, this was meant to be a quick-fire answer. So, yeah, if I'm excluding AI, and I think there's a lot of exciting stuff happening in AI around this as well in terms of anomaly detection and those kinds of things, but if I'm excluding that, I would say those things are quite interesting.

 

Jose

Thank you for sharing. Last question. What advice would you give your younger self or someone starting right now a career in this area?

 

Barnaby

I think, specifically in terms of starting companies and being an entrepreneur, my advice would be that it takes practice, so do it as many times as you can. I mean, we've been around for a while, but we've been through a few iterations. It does take practice, and if you can do it a few times and get started early on, that helps.

 

Jose

That's a good piece of wisdom to wrap this up with.

Thank you so much, Barnaby, for your time and for joining us.

We went through quite a bit on a topic that I'm very passionate about, so thank you for that.

 

Barnaby

Absolutely. It was really nice to speak to you.

 

Jose

And that's it for this episode of the Smooth Scaling podcast. Thank you so much for listening. If you enjoyed it, consider subscribing and perhaps share it with a friend or colleague. If you want to share any thoughts or comments with us, send them to smoothscaling@queue-it.com.

This podcast is researched by Joseph Thwaites, produced by Perseu Mandillo, and brought to you by Queue-it, your Virtual Waiting Room partner. I'm your host, Jose Quaresma. Until next time, keep it smooth, keep it scalable.

 

[This transcript was generated using AI and may contain errors.]
