In this episode, Angela Timofte, former VP of Global Engineering at Trustpilot, shares the decade-long journey of evolving Trustpilot’s architecture from a monolith to an event-driven, serverless-first platform. She reflects on the technical and organizational shifts that made it possible—from early trade-offs and nano-services to guardrails, templating, and chaos engineering. Angela also discusses the role of AI in engineering productivity, why staying small matters, and what scalability really means across tech, teams, and leadership. A thoughtful, candid look at modernizing systems for long-term resilience.
Angela Timofte is a technology leader known for transforming organizations for scale and impact. As former VP of Global Engineering & Applied AI at Trustpilot, she led both the engineering and data science functions, driving the company's shift from monolithic systems to a scalable, event-driven, cloud-native architecture and a broader transformation from maintenance to value creation across the engineering organization. An AWS Serverless Hero and international speaker, she's recognized for her work on scalability, data infrastructure, and high-performance engineering culture. Today, Angela advises companies through her consultancy, Atim Advisory, and is building a new tech venture.
Episode transcript:
Jose
Hello and welcome to the Smooth Scaling Podcast, where we're speaking with industry experts to uncover how to design, build, and run scalable and resilient systems. I'm your host, Jose Quaresma, and today we had a great conversation with Angela Timofte, who was the VP of Global Engineering at Trustpilot, where she worked for over 10 years.
We talked mostly about the journey she led—from monolith to event-driven architecture at Trustpilot—and about the considerations and trade-offs along the way.
If you like this podcast, please subscribe and leave a review. Enjoy.
Welcome, Angela.
Angela
Thank you.
Jose
It's great to have you on the podcast.
Angela
Yeah, it's a pleasure.
Jose
We have a few different topics, right? But mostly, or at least I'd like to start with the journey that you’ve been involved in at Trustpilot—specifically around going from monolithic to event-driven architecture.
Could we maybe start by you sharing with us a little bit—maybe not a strict definition, but at least an overview of what a monolith means to you, and on the other hand, what an event-driven architecture is?
Angela
Well, I’ll start by saying that I’ve been at Trustpilot for over ten years. When I joined the company, we very much had a monolithic architecture.
And what I mean—or at least what it meant at Trustpilot by monolithic architecture—was that we had one machine holding all of our code and one database holding all of our data. So it was all in one place. That’s what we had, and how I would describe monolithic architecture.
Now, of course, an evolution of that monolith is to have multiple monoliths, and that kind of breaks the definition I just gave—like one place—but again, it’s more about the size of your application. That kind of defines whether it’s a monolith or not, in my mind.
Then, when it comes to event-driven architecture, you can have some monoliths in that type of architecture, depending on how you’re structuring your code and your data. But in our case, it was all about how you handle the data.
So the data is handled through events, and you react to events. That’s how you build your architecture—and that’s what we did at some point in that journey.
Jose
So I think usually—and I’d love to hear your thoughts on this—when talking about monolith versus event-driven or versus microservices, there's also this idea that it's sometimes okay to start with a monolith if you're just in the very early phases. That if you start already with a full microservice architecture, you might be overcomplicating things.
Is that also how you see it? Maybe—I don’t know if it's exactly what happened at Trustpilot—but was it okay in the very beginning that it was a monolith, but then you got to a point where you had to break it apart?
Angela
I mean, many times I've said that about Trustpilot and where it started. I was constantly saying it was okay the way they approached the infrastructure because they didn’t have clients, they didn’t have users.
So of course, they started with something small—on-prem and all of that. Also, this was 18 years ago when the company was founded, so technology wasn’t where it is today.
Today, I might approach things differently, also because of the skills people have now. I might even go with an event-driven architecture from the start, because of the cost and because it prepares you for the future.
So I might approach things differently now, but back then, I would say the company did the right thing. That’s what people knew. That’s where technology was at the time. So perfectly valid.
I would say it depends on what you're doing and the type of business you have. Do you expect to have a lot of load at some point in time? Or actually, is your goal to handle a hundred users max? Then don’t overcomplicate.
Don’t get an engineer that’s all about event-driven architecture—because you don’t need it. So it all depends on the type of business you have.
Jose
And at Trustpilot, when was it that you got to the point where the monolith was... when you actually thought, “Oh, we should go ahead and start breaking it apart and move toward an event-driven architecture”?
Can you tell us a little bit about when that happened, and what were the challenges or symptoms you started seeing?
Angela
Yeah, so it was actually at the time when I joined Trustpilot that we started to break down the monolith.
There were a few reasons for that. One was scale—more users and companies using the product, which created quite a few issues handling the load.
But also internally—I joined the company in March 2015, and a few months after I joined, the company received a big investment. It was like $75 million, which was huge at the time. Even today it’s huge.
So then they started hiring a lot more people. And the problem we were having with the monolith was that when we were trying to deploy something—I actually had to physically go to the SRE guy and ask him to please prioritize my deployment. And I was hoping I wasn’t going to break production, because then they had to revert everything, and that wasn’t easy.
Of course, everything was impacted. Making changes to the same piece of code—we constantly had to align. So as we were adding more people, even developing became very difficult.
And then also the load—we had to constantly increase the size of the machine running our application, and the same with the database.
So yeah, a few months after I joined—I joined in March, and in September we formed a new team. I was part of that team, and our responsibility was to break down the monolith, both on the data side and on the infrastructure side, so we could prepare for more load.
Jose
And how did you go about breaking apart the monolith? Was it kind of—did you do it piece by piece? I think—is it called the strangler pattern? Where you start pulling pieces out of the monolith and moving them into microservices?
How was it that you went about it, and how long did it take?
Angela
So the first part was like that. We just had one piece, and then we broke it into multiple. We kind of said, “Hey, this is for the B2C, this is for the B2B, this makes sense as internal admin,” and so on.
So we broke it down into smaller pieces, but not all the way to microservices. Remember, this was 2015—the technology wasn’t really there to help us go all the way to an event-driven architecture. So it was just smaller pieces of that big monolith.
It probably took us a year or two to do some of that work.
And during this time, we also had to develop new products—because with the new investment came a lot of new requirements, as everyone knows. So we had to develop new products at the same time.
Then I would say, moving to event-driven started around 2018, if I remember correctly, when AWS Lambda functions were launched. And someone can check my dates here, but I believe it was 2018—they announced it at re:Invent in December.
And that’s when we were like, “Okay, this is perfect for us—the type of load, the type of usage we have—let’s go all in on event-driven architecture,” with all the supporting technology from AWS.
Jose
And that initial breaking down into—I don’t know, I’ll call them “baby monoliths,” so not quite microservices—was that mainly driven by the business requirements?
You mentioned both the need to scale for load and also the development pain of coordinating changes across teams. Were those the two main drivers? Was one more important than the other?
Angela
It was a combination of both, to be honest—because everything was happening at the same time.
We were hiring—we doubled the number of people internally almost every month. And then the load, in terms of usage, users, businesses—that was also growing exponentially.
So yeah, we had to do all of this at the same time, which is why some of this re-architecture took longer than you might expect. It was because everything was happening at once.
And most of us were new. I had to catch up on everything about the business in three or four months, and then I was moved into this new team that we created. And I was the one who knew the most—after four months in the business, I was the veteran.
I had to explain things to the others, and I was like, “I actually don’t know much myself, but we’re going to make it work.”
So yeah, everyone was new. Technology was developing super fast, and we had to learn while all these changes were happening around us.
Jose
And from a technology perspective, how did you approach this transformation? Were there any specific guiding or design principles you had in mind to help define the target state?
Angela
To be honest, I can't really remember the principles from when we broke the big monolith down into baby monoliths. At that point, I think it was more about business context and keeping each context intact. That was one of the guiding principles.
But then when we moved to event-driven architecture, that’s when we made a very clear principle. A little bit later than 2018—I can’t remember the exact year, probably 2019—we established this principle in engineering that we apply serverless first.
So we said: you should first go to serverless technology. If that’s not the answer, then go to containers. If that’s not the answer, then the last option is a virtual machine, like an EC2.
That became our guiding principle to push people—because, of course, this was new to everyone, and no one had the skills yet on how to develop with an event-driven architecture.
Most people had the mindset, “But I know how to do things this other way. I’m much faster—why should I do it differently?” But that’s where, as a function, we believed this was preparing us not just for the next year—which is how we used to think—but for the future.
And that’s what happened—we didn’t really have to do any big rearchitectures since.
Jose
That’s true. And how did that work in practice? So, if I came to you and said, “I have this new service, and it's running in a virtual machine—I want to deploy it this way,” would you then have a discussion about it?
Would you ask, “Can you show me why it didn’t work as serverless, and why it didn’t work as a container?” So, were you kind of asking for proof or at least a discussion on why it didn’t fit the earlier options?
Angela
We were a startup, and I’d say we were like the classic startup where some of the things we did were pretty scrappy.
One thing we did was set up an alert when someone created a virtual machine. And then the whole function would be like, “So... why? What are you doing?”
And then we’d gather—we were still quite small—so we could actually gather and be like, “Okay, let’s brainstorm. Let’s see—why did you choose this solution?”
And multiple engineers would come around the whiteboard and start drawing boxes to figure out, is that really the best solution?
And most of the time, we’d go back to, “No—you can solve this with serverless architecture.”
So that was one of the funny little things we did—alerting. And then once we got better, we started to show in all-hands meetings how many services we had in each type of infrastructure. And we’d celebrate when teams moved fully from virtual machines to containers or to serverless.
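For readers who want a concrete picture, here is a minimal sketch of what that kind of alert could look like today, written as an AWS CDK stack in TypeScript: an EventBridge rule matches CloudTrail RunInstances calls and publishes them to an SNS topic the engineering function subscribes to. The stack, rule, and topic names are invented for illustration; this is a sketch rather than Trustpilot's actual setup.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as sns from 'aws-cdk-lib/aws-sns';

// Hypothetical stack: notify engineering whenever someone launches an EC2 instance.
export class Ec2CreationAlertStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Topic the engineering function subscribes to (email, chat webhook, etc.).
    const alertTopic = new sns.Topic(this, 'Ec2CreationAlerts');

    // Match CloudTrail records for RunInstances API calls.
    // Assumes a CloudTrail trail is already recording management events.
    const rule = new events.Rule(this, 'Ec2RunInstancesRule', {
      eventPattern: {
        source: ['aws.ec2'],
        detailType: ['AWS API Call via CloudTrail'],
        detail: { eventName: ['RunInstances'] },
      },
    });

    rule.addTarget(new targets.SnsTopic(alertTopic));
  }
}
```

Subscribing a chat channel or mailing list to that topic is enough to trigger the "so... why?" conversation Angela describes.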
Jose
Did you also have any discussions with engineers at the time who didn’t believe that serverless was the right way? Was that something that came up often?
Angela
I’d say it was mostly at the beginning, when no one really understood how this new technology worked. And of course, it had some limitations.
One big challenge was that, when Lambda functions launched, they didn’t support .NET—and all of our code was in .NET. Which meant people knew .NET.
So, by going to a different type of architecture, we had to change the programming language as well.
Jose
Okay. What did you have to change it to?
Angela
To JavaScript, I believe—and later on, TypeScript. So that was a limitation that led to even more debate—like, “Why would we go for this?”
I remember when they launched support for .NET Core—it was a big win for us. We were like, “Oh, now we can go back to .NET.”
Jose
Did you go back to .NET?
Angela
Actually, no. But the programming language had been the main point people raised against moving. Then .NET Core support came in, and I remember I tested it out and thought, “Yeah, we can move, we can do all this.”
But by then, everyone had kind of gotten used to not using .NET, and it was like, “Why would we go back?” It was still a little bit more complex to set up and everything.
So it became more of a discussion point—an argument that had been used earlier against going all in on serverless.
Jose
So it sounds like you didn’t mandate that everyone had to use TypeScript or JavaScript in the beginning. Was it then kind of up to each of the teams to choose? How much freedom did they have when picking their tech stack or the language their Lambdas were running on?
Angela
In the beginning, there was huge flexibility. We paid for it. I felt some of that myself.
Yeah, there was a lot of flexibility early on, and each team was choosing what they wanted to use. One of the teams even chose F# as their programming language.
Jose
Is that still alive?
Angela
Not anymore, right? Well, I had to learn F#—because the team that decided to use that language... they all left. And then I had the privilege, let’s say, of inheriting all of that.
Jose
And migrating it, then.
Angela
And migrating it all, obviously—because no one else wanted to touch it.
Jose
I think that’s a very clear example of the trade-offs between giving people flexibility to make their own choices and running a business where people need to have a shared set of skills and be able to help across teams.
Angela
Yeah, and that was a pivotal point for us. We learned that lesson the hard way. With the flexibility we allowed early on, we had to pay that back.
After that, we agreed on a defined tech stack. It was approved, and if you wanted to go beyond it, you had to have a clear argument.
Even today, I don’t believe in strictly limiting anyone—but you do need a good case for why you want to go outside the guardrails.
Jose
So you were setting the guardrails or the framework for what was kind of the default or the first approach, but still allowing for some emerging changes.
Nice, very good.
You talked a little bit about the tech stack, right? So can you tell us a little more about what the overall AWS tech stack looked like in the event-driven architecture?
Angela
Yes. So we started—and even today, most of our architecture is built around SNS, SQS, Lambda functions, and DynamoDB. That’s the core setup.
And then, as AWS introduced new services, we started using a bit of EventBridge, and we had to bring in some other types of databases as well—like Aurora and a bit of DocumentDB.
And we also use ECS. We do not use Kubernetes. That was a big wave that came through Trustpilot—as in many other companies—with engineers being very passionate about Kubernetes. We tested it out and decided we didn’t need the complexity or what Kubernetes offers, so we stopped pursuing it.
So no, we don’t use Kubernetes.
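To make that SNS, SQS, Lambda, and DynamoDB core concrete, here is a minimal TypeScript sketch of a Lambda function consuming events that SNS fanned out to an SQS queue and writing them to DynamoDB. The table name, environment variable, and event fields are invented for the example; it illustrates the pattern rather than Trustpilot's actual code.

```typescript
import { SQSHandler } from 'aws-lambda';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';

const dynamo = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE_NAME = process.env.TABLE_NAME ?? 'reviews-table'; // hypothetical table name

// Consumes messages that SNS delivered to an SQS queue and persists them.
export const handler: SQSHandler = async (event) => {
  for (const record of event.Records) {
    // SNS wraps the published payload in an envelope; the original event
    // lives in the "Message" field of the SQS message body.
    const envelope = JSON.parse(record.body);
    const domainEvent = JSON.parse(envelope.Message);

    await dynamo.send(
      new PutCommand({
        TableName: TABLE_NAME,
        Item: {
          pk: domainEvent.id,             // invented attribute names
          createdAt: domainEvent.createdAt,
          payload: domainEvent,
        },
      })
    );
  }
};
```

A real service would also handle partial batch failures and retries, but the point is how little code sits between an event arriving and the data being stored.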
Jose
And was that—were you already well into the Lambda work and the event-driven architecture by the time that Kubernetes discussion came up? So it wasn’t really, “Should we go Lambda or Kubernetes?” It was more that you were already on the Lambda journey, and then Kubernetes was discussed afterward?
Angela
Yeah, exactly. We were already deep into the event-driven, serverless architecture. Then Kubernetes came in—EKS was launched—and we had the discussion. But by then, it didn’t make sense for us to go that way.
And yeah, EC2s—we were using those, and I’m pretty sure there are still a few running here and there.
Jose
Were there any key trade-offs you had to think about? We already talked a bit about trade-offs from a development perspective—like flexibility—but on the infrastructure or architecture side, were there any that come to mind when deciding which AWS services to use?
Angela
I would say, in terms of architecture, one of the big things was that we went a little too hard on breaking things down into the smallest unit. That’s where today we have parts that aren’t just microservices—they’re nano services.
Which means the number of services we now have to maintain is very, very high.
Also, since we started fairly early with AWS services, we didn’t introduce some of the newer tools that would have made orchestration easier—like EventBridge and EventBridge Pipes.
Because of how our engineering was set up early on, everything was built around what I mentioned earlier: SNS, SQS, Lambda, DynamoDB. We had all the templates ready to go that way, which also contributed to the number of services we now have to orchestrate and maintain.
So that’s maybe another trade-off we made.
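One way to picture the templating Angela mentions is a reusable infrastructure construct: every new service gets the same SNS topic, SQS queue, Lambda handler, and DynamoDB table, wired up and named consistently. The sketch below uses AWS CDK in TypeScript with invented names and defaults; it illustrates the idea, not Trustpilot's actual template.

```typescript
import { Duration } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as sns from 'aws-cdk-lib/aws-sns';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import * as subs from 'aws-cdk-lib/aws-sns-subscriptions';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

export interface EventServiceProps {
  serviceName: string;      // drives the naming convention, e.g. "review-ingest"
  handlerCode: lambda.Code; // the compiled handler bundle
}

// Hypothetical "service template": topic -> queue -> Lambda -> table.
export class EventService extends Construct {
  constructor(scope: Construct, id: string, props: EventServiceProps) {
    super(scope, id);

    const topic = new sns.Topic(this, 'Topic', {
      topicName: `${props.serviceName}-events`,
    });

    const queue = new sqs.Queue(this, 'Queue', {
      queueName: `${props.serviceName}-queue`,
      visibilityTimeout: Duration.seconds(60),
    });
    topic.addSubscription(new subs.SqsSubscription(queue));

    const table = new dynamodb.Table(this, 'Table', {
      tableName: `${props.serviceName}-table`,
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    const fn = new lambda.Function(this, 'Handler', {
      functionName: `${props.serviceName}-handler`,
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: props.handlerCode,
      environment: { TABLE_NAME: table.tableName },
    });
    fn.addEventSource(new SqsEventSource(queue));
    table.grantReadWriteData(fn);
  }
}
```

Standardizing on one template like this is also what makes the trade-off visible: it keeps teams fast and consistent, but every new use of it adds one more service to orchestrate and maintain.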
Jose
So in the end—was it worth it?
I mean, we started this journey with a monolith, which brought challenges from both a business and development perspective—like constantly scaling for demand and struggling with code coordination.
Now, ten years later, when you look back, how would you rate the return on investment?
Angela
A hundred percent worth it. It shouldn't even be a question, I would say.
Before moving to event-driven architecture—and that migration itself took years—it was a constant struggle. Every new product feature required a massive effort to implement. There was always a trade-off between what we could offer to our customers versus what the technology could support.
But once we moved to event-driven architecture, we didn’t have to have that conversation anymore.
The way I always talked to Product and to Designers was: “Go dream as big as you want—and I’ll set the limitations if needed.” But I never had to come in and say, “We can’t do that because it’ll crash this service or that system.”
Before, that was always the first question: “Can our infrastructure handle this kind of usage?”
So yes—it was a good investment. And I’m happy we did it early, even though it was tough.
Had we waited, we would have had to do other migrations—probably one every year. Like, “Oh, another migration,” and then another one, and then, “Oh, we hit the limit of this technology.”
But with this type of architecture—it scales as you grow.
So yeah, it’s perfect.
Jose
A big topic we’ve been talking about—and we wouldn’t be doing this podcast justice if we didn’t bring it up—is traffic peaks, right? That’s a lot of what we focus on here.
Did you have traffic peaks at Trustpilot? And were there any specific use cases that led to them?
Angela
Yes. I mean, like most companies, Black Friday is always a moment in time that impacts Trustpilot. In the beginning, we had to sit down two months ahead of time and prepare for the load we knew was coming—we had to scale up all of our services and so on.
Another peak for us was usually January. The reason for that is—well, people receive their products, and then they want to leave a review. That usually happens in January, so that’s another period when we’d typically see higher traffic.
And another one—and this kind of became a bit of a joke—was whenever we had the Trustpilot Christmas party. Almost every year in the beginning, we had some sort of outage during the party. All the engineers had to be ready to jump back in and fix things.
We used to laugh about it, like, “I think people are watching when we schedule our Christmas party—and then they go use Trustpilot.” It kind of became a tradition.
But after we moved to event-driven architecture, that was the third reason I was really happy we did it—I could actually enjoy our Christmas parties.
So yeah, those were the recurring spikes we saw each year.
Another big one was during COVID-19—so in 2020, when everything shut down and people had to move to ordering online. And it wasn’t just the typical stuff—people were buying everything online. They had to find new businesses, and that’s when our traffic really boomed.
Jose
A lot of people bored and upset with a lot of things. Yeah.
So those are more of the predictable traffic spikes, right? And I guess you mentioned that you were preparing for those.
Did you also experience any that were rather unpredictable?
Angela
We did. I mean, now looking back, it kind of makes sense that people would come during that period to Trustpilot to see who they could trust—but at the time, the COVID spike came as unexpected traffic for us. We didn’t prepare for it at all.
But it was a good exercise—it showed us that even without preparation, we didn’t have any outages. We just scaled to meet the demand.
So yeah, even now, even for the expected traffic spikes, we don’t have to change anything in our infrastructure—it just scales with demand.
Jose
Were there any cases where you thought, “Okay, we have this under control. Everything’s auto-scaling. The system’s in a great place”—and then you actually found a new bottleneck somewhere?
Does anything like that come to mind?
Angela
I’m trying to think... If I say, “No, everything auto-scales perfectly,” I’m sure someone will go and try to break it.
Jose
But you can say you don’t remember—that’s fair.
Angela
No—um, to be honest, we’ve had lots of unexpected events, and our infrastructure just worked. I’m very proud of the stage we’ve reached.
But of course, we got here through many unexpected events that we had to adjust to. We also run a lot of different programs where people try to break our systems, and that’s part of how we manage and prepare for those situations.
So I’d say we are in a position now where everything auto-scales. I’m sure there’s some forgotten place in the internal infrastructure where someone would say, “Hey, remember this thing with the weird name that was developed ten years ago? It’s still running.”
But for our customers and consumers, we’re pretty safe in terms of scalability.
Jose
And you mentioned some programs that tested things out—was that also from a resilience perspective? Did you go into chaos engineering or similar practices to stress-test the systems?
Angela
Yes, yes—those were some of the principles we set up. We did chaos engineering. When Netflix introduced Chaos Monkey, that was the first thing our infrastructure engineer came in with—they said, “We’re doing this.”
The product engineers weren’t very happy about it—having their work tested like that—but yeah, we did it.
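Chaos Monkey itself terminates instances, but the same idea carries over to serverless: inject failures on purpose and check that retries, dead-letter queues, and alarms behave. Purely as an illustrative sketch, not Trustpilot's actual tooling, a handler could be wrapped with a probabilistic fault injector, with CHAOS_FAILURE_RATE and CHAOS_LATENCY_MS as invented knobs:

```typescript
// Hypothetical fault-injection wrapper for an async Lambda-style handler.
type AsyncHandler<E, R> = (event: E) => Promise<R>;

const failureRate = Number(process.env.CHAOS_FAILURE_RATE ?? '0');  // e.g. 0.05 = 5%
const extraLatencyMs = Number(process.env.CHAOS_LATENCY_MS ?? '0'); // e.g. 2000

export function withChaos<E, R>(handler: AsyncHandler<E, R>): AsyncHandler<E, R> {
  return async (event: E): Promise<R> => {
    // Add artificial latency to surface timeout and retry behaviour.
    if (extraLatencyMs > 0) {
      await new Promise((resolve) => setTimeout(resolve, extraLatencyMs));
    }
    // Fail a fraction of invocations so dead-letter queues and alarms get exercised.
    if (Math.random() < failureRate) {
      throw new Error('Injected chaos failure');
    }
    return handler(event);
  };
}

// Usage: export const handler = withChaos(realHandler);
```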
Jose
I’m sure they were happier later, once things improved as a result of that testing, right?
Angela
Of course. And the same with security—we’ve been running different programs to test things from a security perspective and make sure our application is ready for almost anything.
Jose
One thing I’d like to touch on is that from a re-architecting perspective, there’s also the team structure itself, right?
You alluded to this earlier—having more and more teams, but a big dependency on the monolith made coordination difficult.
Was that something you also considered during the transformation?
And I guess this ties into Conway’s Law—the idea that your system architecture tends to reflect your organizational structure. I hope I’m not misquoting it too badly!
Did that come into play during the process? Or put another way, what were some of the more unexpected things you learned from an engineering organization perspective during this?
Angela
Good point. Looking at the size of most companies—even today—we’ve been on the smaller side. And I’d say that was actually part of the decisions we made around our architecture and the technology we chose. It allowed us to run with small teams.
When I first joined, we were two teams. That was the size of the product and tech organization. Then we doubled to four teams, then six.
Today, in my area—which is Engineering and Applied AI—we’re about 90 people, including leadership. That’s still relatively small compared to other organizations at this scale.
And that’s because we applied those principles: reuse, templating, and standardization.
Even how we name services might seem like a small detail, but it really helps—especially for onboarding. It determines how fast someone becomes productive, and how many people you actually need to run the business.
If you ran all of that infrastructure in-house, the team needed just for that might be half the size of our entire engineering org.
So these small details—how you design infrastructure, tech choices, programming languages—they all impact scale. The more variety you have, the more people you need, because you can't rely on just one person knowing a system. You’ll need at least three or four, just to be safe.
But when you use the same technology everywhere, it's much easier to stay flexible and adapt.
We also chose technologies that are well-known in the industry. Sometimes something newer might look more interesting or “better,” like a new programming language or AWS service. But we often decided against it because we asked, “Do we now need to educate every new person who joins?” It would have made hiring and onboarding much harder if we had gone with something very niche.
All of those decisions played a role in our goal to stay small—because the bigger you get, the harder it is to move fast and develop effectively.
Jose
How have you seen AI—and generative AI in particular—impacting engineering work in the last few years?
Angela
Yes... I knew that was coming. I knew it.
Jose
It had to come up.
Angela
Yeah, fair.
So, even with GenAI—trying to think of the timeline now—I’ll just say, when GitHub Copilot came out…
Jose
GitHub Copilot?
Angela
Yes, GitHub Copilot. When that was launched—well, even before it was fully released—we said, “We want to try this. We want to test it.”
I believe it was either 2022 or 2023, can’t remember exactly. But I know it was fairly new, and we decided to introduce it to the team.
There was quite a bit of discussion from a security point of view—like, who owns what, what are you giving away, are you sharing code, and so on.
But we got through all of those discussions, and we enabled everyone to use GitHub Copilot. That was during a period when we needed to become much faster at developing new products for the business, so we introduced it fairly early.
At least for the repetitive areas, it could handle those really well—even back then. So yeah...
Jose
So was it mostly around auto-completion and that kind of thing?
Angela
Yeah, exactly—that’s what it was doing. It was great for creating tests, or for generating the structure of tests and so on.
We introduced it from the beginning, and we’re still using it. And when I look at the data—because I always try to benchmark where we are compared to others—we’re actually at the top in terms of how much engineers are accepting the suggestions, compared to other companies.
Jose
Okay—so not from the perspective of percentage of code that’s AI-generated, but more around the subjective experience of the people using it?
Angela
Yeah, exactly. It’s more like: how much value did engineers feel they got?
If someone gets a hundred lines of code suggested, but they only accept one line—that’s probably not great. They still had to look at all of it.
But if they accept 50% of those lines, it probably means the suggestions were useful. So we’re looking at it from that perspective, especially with GitHub Copilot.
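In other words, the signal is the share of suggested lines that engineers actually keep. A trivial sketch of that calculation, with invented field names:

```typescript
// Hypothetical shape of an aggregated Copilot usage record; field names are invented.
interface SuggestionStats {
  linesSuggested: number;
  linesAccepted: number;
}

// Acceptance rate: accepted lines divided by suggested lines, across all records.
function acceptanceRate(stats: SuggestionStats[]): number {
  const suggested = stats.reduce((sum, s) => sum + s.linesSuggested, 0);
  const accepted = stats.reduce((sum, s) => sum + s.linesAccepted, 0);
  return suggested === 0 ? 0 : accepted / suggested;
}

// 100 lines suggested, 1 kept -> 0.01; 100 suggested, 50 kept -> 0.5.
console.log(acceptanceRate([{ linesSuggested: 100, linesAccepted: 50 }]));
```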
Jose
So more like a confidence level in the usefulness of the tool?
Angela
Yeah, exactly.
And then as more AI technologies came in—at Trustpilot, I was also leading the Applied AI team, and that team has actually been quite advanced. They were the first ones in.
So when people ask, “What have you done with GenAI?”—that team is like, “We’ve been doing this for years,” way before ChatGPT and everything else came out.
In terms of products we’ve offered to businesses, we’ve been using GenAI technologies for many, many years.
Internally, teams have been leveraging that as well—but from a productivity perspective, the journey is still ongoing. Even now, we’re looking at what’s next.
GitHub Copilot was the first step. Now we’re asking, “What’s next to make us even more productive?”
Jose
And would you like to tell us a little more about your next step? I understand your career is going through a bit of a change now.
Angela
Yes, absolutely.
I’ve decided to leave Trustpilot and start something on my own.
The plan is to take some of the ideas and practices I developed at Trustpilot and bring them into other companies—through consultancy and advising. That way I can keep my skills sharp, continue learning from others, and apply what I’ve learned in new contexts.
Besides that, I’m also looking at starting my own tech company. That’s always been the dream, and I feel like now is the perfect time—especially with everything that’s happening in our industry with GenAI.
I always say: companies that started out as cloud-native had a huge advantage over those that came before. And I think the same is true now—companies that start out as AI-native will have a big edge over those that don’t.
The older companies will need to go through transformation—and I’ve led many of those. They're hard. They take time. You need to change how people think and how the organization works.
But when you start with a blank page and build for AI from the start, it’s so much easier—and you can move so much faster.
So that’s the plan for the future. I’m very excited about what’s coming next. There are already so many new skills I’m learning, and that’s what I’ve always loved—learning, growing, and developing myself as a person and as a leader.
Jose
I’ll be following it—I’m looking forward to seeing what comes out of it.
And as we wrap up, are you ready for a couple of rapid-fire questions?
Angela
Yeah.
Jose
Don’t overthink—just go with your first thought.
Is there a book, podcast, or thought leader you’d recommend people check out?
Angela
Yes—I’ll combine the book and the thought leader.
Someone I’d recommend is Chester Elton. He wrote the book Leading with Gratitude. He’s also a coach—I had the opportunity to meet him last year and have a coaching session. He’s amazing.
And now he’s started creating more content online, so I’d definitely recommend people follow him.
Jose
So we get two for one there?
Angela
Yeah—two for one.
And then, in terms of a podcast, I would say Lenny’s Podcast. I really like that—it combines engineering, product, and leadership. So it’s perfect for the kind of content I like to digest.
Jose
And another question: is there any professional advice you would give your younger self?
Or maybe someone just starting out in this area?
Angela
I would say—something that would’ve helped me—is: don’t be afraid to make mistakes. And don’t assume the people around you know everything.
When I was starting out, I had this impression that everyone else was so much better, and that they had all the answers. So I was scared to ask questions.
So—ask questions. Don’t be afraid.
The only advantage others may have is that they’ve been doing it a bit longer than you. But that doesn’t mean they know everything either. So just don’t be afraid.
Jose
Last question for you.
Scalability is...?
Angela
Scalability is growing without breaking what makes you excellent.
And the way I’d put it is:
In tech, it means architecture that can handle the number of users you have.
In teams, it means having processes that don’t hinder creativity.
And in leadership, it means knowing when to step out—so you let others step in.
Jose
Very good.
Angela
That’s what I would say scalability is—for me.
Jose
That’s a wonderful answer—and a great way to wrap up.
Thank you so much for joining us.
Angela
Yeah, thank you for having me. It was a pleasure.
Jose
And that’s it for this episode of the Smooth Scaling Podcast. Thank you so much for listening.
If you enjoyed it, consider subscribing—and maybe share it with a friend or colleague.
If you want to share any thoughts or comments with us, send them to smoothscaling@queue-it.com.
This podcast is researched by Joseph Thwaites, produced by Perseu Mandillo, and brought to you by Queue-it—your virtual waiting room partner.
I’m your host, Jose Quaresma. Until next time—keep it smooth, keep it scalable.
[This transcript was generated using AI and may contain errors.]