I’d like to highlight some things we’ve discovered at Queue-it when setting up and maintaining an automated UI test suite. We used Selenium, xUnit, and .NET (C#), but my suggestions are broadly scoped and could easily be relevant for other tool/framework set-ups.
Our story begins a few years ago when we decided that having automated UI tests would be beneficial. Why? Well, we do not have the manpower to do manual testing on every release and, even if we did, manual testing is time-consuming. An automated approach lets the machines do all the trivial work so we can concentrate our efforts on more relevant and exciting matters.
We didn’t have much experience with existing UI testing frameworks, but I had worked a little with Selenium in a Java set-up. I found it worked pretty well for this project and because Selenium also supported .NET, we decided to give it a go.
Fast forward to today, and I can now share three tips to help guide you if you are setting up an automated UI test suite for the very first time. (Note that the descriptions contain code examples in C#, so basic knowledge of object-oriented programming will help you get the most out of them.)
Use expert knowledge to get you started
Ensure good solution structure with separation of concerns
A test suite is a complex structure on its own – in our case, a .NET solution in Visual Studio. A solution needs good organization and structure to be maintainable and readable.
We had an external expert join us to set up the initial solution structure: a model project containing the pages and the interaction with Selenium components, and a separate test project that uses the classes from the model project and has no knowledge of Selenium.
With this set-up, the underlying UI test framework (Selenium) could, if needed, be replaced with another UI framework without changing any of the tests. Besides that, the tests read just like any other unit test, clearly visualizing their intent. Here’s an xUnit test example for creating a new layout:
[Fact]
public void CustomLayouts_ListPage_Add()
{
    var expectedLayoutName = "NewCustomLayout" + DateTime.UtcNow.Ticks;
    var listPage = new SelfService_CustomLayoutListPage();
    var detailsPage = GetDetailsPage();            // helper returning the details page model
    detailsPage.CreateLayout(expectedLayoutName);  // illustrative: fill in and save the new layout
    Assert.Contains(listPage.Layouts, l => l.Name == expectedLayoutName);
}
The above test is pretty self-explanatory. When reading it, you don’t even know if it’s a website, Windows application, or even a mobile app being tested. And why should you bother? The technicalities are abstracted away so you can focus on writing the actual test.
Design for failure using wait/polling mechanisms.
We started writing our tests so that they read like any other unit tests. However, remember that when working with UI, timing is important. Here are some differences between UI tests and other tests:
- If you go to a page and then click a button before the page has completed loading, you get an exception
- If you click a tab control which shows a section (which was hidden before) and then afterwards click some element in this section before it’s actually visible, you get an exception
- If you go to a page that then loads its data via a separate AJAX call, you first see an empty page; assert that there is data too early and you get an exception (the assertion should run after the data has loaded)
To get around these problems, we added many wait/polling mechanisms. We do not want to get an exception if something is missing – we just try again. Eventually we time out and conclude that it will never happen, i.e. the test is actually broken.
For example, a lookup wrapped in Selenium’s WebDriverWait keeps polling instead of failing on the first attempt (the selector here is illustrative):

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
wait.Until(d =>
{
    var result = d.FindElements(By.ClassName("layout-list-item")).FirstOrDefault();
    return result != null;
});
Another example uses Polly: when we want to assert that a redirect happens at some point in time, we use a Polly policy to handle retries for specific exception types:
var retryPageLoad = Policy
    .Handle<WebDriverException>()  // retry only on Selenium exceptions
    .WaitAndRetry(10, (retryCount) => TimeSpan.FromSeconds(retryCount));

// Constructing the page asserts that the redirect has happened; retry until it does
var inQueuePageAfter = retryPageLoad.Execute(
    () => new QueueFront_InQueuePage(driver));
All of our pages have a base class where we assert some general expectations like “has the page loaded completely” or “have all AJAX calls finished”. These mandatory expectations should always be met before continuing with anything else.
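As a simplified sketch of that idea (Selenium specifics left out; the class and member names here are illustrative, not our actual code), the base class can poll a mandatory "is loaded" expectation in its constructor and time out if it is never met:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Illustrative base page: every concrete page must say when it is "loaded".
// In a real Selenium set-up, IsLoaded would check things like
// document.readyState and whether all AJAX calls have finished.
public abstract class BasePage
{
    protected BasePage(TimeSpan timeout, TimeSpan pollInterval)
    {
        var stopwatch = Stopwatch.StartNew();
        while (!IsLoaded())
        {
            if (stopwatch.Elapsed > timeout)
                throw new TimeoutException($"{GetType().Name} did not finish loading.");
            Thread.Sleep(pollInterval); // wait, then poll again
        }
    }

    // Mandatory expectation: "has the page loaded completely?"
    protected abstract bool IsLoaded();
}

// Example page that pretends to become ready on the third poll.
public class ExampleDashboardPage : BasePage
{
    private int polls;

    public ExampleDashboardPage()
        : base(TimeSpan.FromSeconds(5), TimeSpan.FromMilliseconds(10)) { }

    protected override bool IsLoaded() => ++polls >= 3;

    public int Polls => polls;
}
```

Because the check runs in the constructor, no test (or page method) can proceed until the page’s general expectations hold.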
Scope your tests, focusing on different UI functionalities in each test.
Use an API to meet test pre- and post- conditions.
When writing a test, you normally scope it. For example, you want to test creating some entity in one test and deleting that entity in another. You could have one test doing both, but when it fails, it isn’t clear whether creation or deletion was the root problem until you investigate.
Writing a delete test without something to delete will not work. You could say the test has a precondition that is not met. Normally you could mock this out, i.e. fake that there is data, but UI tests run against a “real” test or production environment with “real” data, most likely in some database somewhere. Our solution to this problem was to use our own Queue-it API. It exposes creation of entities, so preconditions can be met outside the UI, and the tests stay nicely scoped. Let me explain with the following xUnit test for deleting a layout:
[Fact]
public void CustomLayouts_ListPage_Delete()
{
    var layout = TryCreateLayoutViaApi();  // Arrange: precondition met via the API
    var listPage = new SelfService_CustomLayoutListPage();
    var layoutListItem = listPage.Layouts.FirstOrDefault(l => l.Name == layout.DisplayName);
    layoutListItem.Delete();               // Act: delete through the UI (illustrative method)
    layoutListItem = listPage.Layouts.FirstOrDefault(l => l.Name == layout.DisplayName);
    Assert.Null(layoutListItem);           // Assert: the layout is gone
}
In the Arrange section, you see the layout being created, i.e. the precondition is met. The test can then continue using the actual UI to delete the layout, finishing off by asserting that it’s actually gone. One good thing about this is that the UI test only tests the UI, and only the delete flow (not the create flow), so the test is nicely scoped.
Another thing to notice here is the name TryCreateLayoutViaApi. Inside this function, we try to create the layout by calling a RESTful method on some web server. That call could potentially fail, so we design for this, doing a reasonable number of retries before failing the test completely. The goal is to make your tests as robust as possible.
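A sketch of what such a retrying helper might look like (the actual Queue-it API client is not shown; RetryHelper and its parameters are our illustration, not the real implementation):

```csharp
using System;
using System.Threading;

public static class RetryHelper
{
    // Runs an action that may fail transiently (e.g., a REST call),
    // retrying a fixed number of times with a delay before giving up.
    public static T TryWithRetries<T>(Func<T> action, int maxAttempts, TimeSpan delay)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(delay); // transient failure: wait, then try again
            }
        }
    }
}
```

TryCreateLayoutViaApi would then wrap its REST call in something like TryWithRetries, so the test only fails once all attempts are exhausted.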
The API can also be used for postconditions, ensuring that test entities are removed from the database in a clean-up or dispose call.
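In xUnit, each test class instance is disposed after its test runs, so clean-up fits naturally into Dispose. A minimal sketch of that pattern (ILayoutApi, LayoutTestCleanup, and the recording fake are our illustrations, not the actual Queue-it code):

```csharp
using System;
using System.Collections.Generic;

// Illustrative API client interface; the real client would call the REST API.
public interface ILayoutApi
{
    void DeleteLayout(string layoutName);
}

// An xUnit test class implementing IDisposable gets Dispose called after
// each test, so registered entities are cleaned up even if the test fails.
public class LayoutTestCleanup : IDisposable
{
    private readonly ILayoutApi api;
    private readonly List<string> createdLayouts = new List<string>();

    public LayoutTestCleanup(ILayoutApi api) => this.api = api;

    public void RegisterForCleanup(string layoutName) =>
        createdLayouts.Add(layoutName);

    public void Dispose()
    {
        // Postcondition: remove the test's entities via the API.
        foreach (var name in createdLayouts)
            api.DeleteLayout(name);
    }
}

// Recording fake, included only to show the mechanism without a live API.
public class RecordingLayoutApi : ILayoutApi
{
    public List<string> Deleted { get; } = new List<string>();
    public void DeleteLayout(string layoutName) => Deleted.Add(layoutName);
}
```

Tests then only need to call RegisterForCleanup when they create an entity; the dispose call guarantees the database is left clean.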
Overall, having an automated UI test suite is a good idea; it saves a significant amount of time compared to manual testing and it also helps to find bugs before a release, which makes it a very important QA tool.
Do ensure that your tests run as part of your build pipeline so that you always know the state of your current release candidate. Also remember that it takes time to get it right, but following our suggestions will give you a head start.
By Frederik Williams, Queue-it Software Developer