Low-Code & Cloud: Creative Solutions to Modern Problems

Technology

May 14, 2020 - 9 minutes read

Maciej Wyrodek

Website: http://thebrokentest.com/


We live in exciting times. Technology is rapidly evolving and the things which were merely a pipe dream just a few years ago are now within our reach.

For example, a Rapid Software Development (RSD) platform (or, in other words, a Low-Code Application Development Platform) such as Mendix has changed the way we write applications. If we accept certain technical and user experience constraints, we can write applications much faster than ever before. Things that took weeks to develop can now be done in days.

But as I mentioned before, with new possibilities come new constraints.

For example, you have significantly less control over what’s going to go under the hood of your product, making performance improvements much harder (if not impossible).  Of course, there are some guidelines on how to tackle this problem, but since this technology is quite new there’s still a long road ahead of us.

In the past, the most important questions relating to performance were:

  • Will the servers handle the load coming their way?
  • What will happen if they don’t?

Now, thanks to Cloud Platforms like Amazon Web Services and Microsoft Azure, there’s another question worth asking: "How much will it cost me to have the performance I want?".

In fact, this topic is at the core of the case study described below, as it’s the exact question one of our customers was looking to answer.

In the following sections, we’ll discuss our approach and the different challenges we faced while searching for the answer.

The Standard Process for Performance Testing

In general, the approach to performance testing is as follows.

  1. First, we need to learn about the customer’s needs. The team tries to estimate:
  • What type of load will be typical for the application?
  • What will be the maximum load?
  • What are a typical user’s user stories when interacting with the application?
  2. Based on this information, testers prepare test cases representing typical usage scenarios.
  3. Next, tools such as JMeter or Gatling are used to record those scenarios.
  4. The recordings then need to be parametrised. Since they will be run many times in separate, parallel runs, the runs cannot conflict with each other – for example, we don’t want all of the 10,000 users to be editing the same document (see the sketch after this list).
  5. Create or secure an environment with performance parameters similar to the production environment.
  6. Run the tests on that environment.
  7. Next, the dashboard and report are generated and analysed.
  8. Lastly, if there are any issues, they are shared with the team.
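
To make the parametrisation step more concrete, below is a minimal, hypothetical sketch (in Java, not taken from the project) of what it can look like: each virtual user gets its own data, so parallel runs don’t collide. The request template and field names are invented for illustration only.

```java
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

/**
 * Minimal sketch of the "parametrise the recording" step: every virtual
 * user gets its own data, so parallel runs do not all edit the same document.
 * The template and placeholders are hypothetical.
 */
public class ScenarioParametriser {

    // Hypothetical request payload captured during recording, with placeholders.
    private static final String REQUEST_TEMPLATE =
            "{\"documentId\":\"${DOC_ID}\",\"userName\":\"${USER}\",\"comment\":\"${TEXT}\"}";

    /** Builds a unique payload for one virtual user. */
    public static String buildPayload(int virtualUserIndex) {
        String docId = UUID.randomUUID().toString();            // unique document per user
        String user  = "loadtest_user_" + virtualUserIndex;     // unique login per thread
        String text  = "comment " + ThreadLocalRandom.current().nextInt(1_000_000);
        return REQUEST_TEMPLATE
                .replace("${DOC_ID}", docId)
                .replace("${USER}", user)
                .replace("${TEXT}", text);
    }

    public static void main(String[] args) {
        // Print a few samples to show that each virtual user gets distinct data.
        for (int i = 0; i < 3; i++) {
            System.out.println(buildPayload(i));
        }
    }
}
```

In JMeter itself, a similar effect is usually achieved with variables and external data sets fed into the recorded requests.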

Low-Code Development vs. Traditional Development

In the introduction, we mentioned that although rapid application development has a lot of benefits, it also has certain constraints, and the team has less control over what makes it into the code.

Now it's time for another one. With RSD, developers mostly build the application from blocks. Consequently, they have much less control over how the user interface (web page) interacts with the backend.

This type of code is generated dynamically by Mendix, according to its own algorithms. Developers can create a more typical communication process, but this takes time – which basically undermines the biggest advantage of rapid software development.

For small, simple forms, this isn’t a problem and the changes are minuscule and easy to manage. But, for a larger application with long forms, this isn’t so straightforward.

This leads to exciting challenges – for instance, a seemingly insignificant UI change could lead to severe changes in the sent request and response.

As mentioned in the previous sections, two steps are part of preparing the performance test:

  • Record scenarios,
  • Parametrise them.

As you can guess from the above, this quite quickly turns into a Sisyphean task.

To not dwell too much on the technical details, here are some numbers:

  • After cleaning the payload, there were more than 1000 parameters per request that had to be collated.
  • A huge portion of them had both dynamic names and values.
  • Error messages were meaningless (e.g. "HTTP 1.1/560 560").

On its own, the above would be challenging enough, but there’s one more aspect that made it much harder to deal with. Rapid Software Development lives up to its name: developers’ delivery speed is high, leading to frequent deployments. After putting all this together, the bigger picture became clear: tried and true performance tools wouldn’t suffice.

If we used them, we would have to rerecord or rewrite each test after every single deployment. This would take too much time and effort. We needed to search for a new, more efficient solution.

Cloud Solution for Performance Problems

Objectivity prides itself on being a company that is open to innovation and experimentation. And, in situations like this, it helps that we have many communities which put a lot of effort into discovering and researching the tools available on the market.

Thanks to this, we were able to quickly assess our options. Different tools and solutions were rejected either due to their cost, low efficiency, or unreliability.

After a short Proof of Concept, it turned out that the best option would be to use Selenium Web Driver.

As an automation expert, I cringed at this. It’s common knowledge that Selenium wasn’t designed for such a task – so much so that this is stated outright in the “worst practices” section of Selenium’s documentation.

There are many issues with this approach. The two main ones are:

  • It’s difficult to assess the performance penalty that comes with using Selenium Web Driver.
  • The measurements will also include “noise” from many elements beyond our control.

But, in our case, the situation was much more straightforward: our customer wanted to know how much it would cost to run an application on the cloud, while maintaining acceptable performance. Essentially, they were looking for a recommendation regarding what kind of Azure nodes they’d need.

We believed that with proper preparation, we could give them quite a precise answer.

Fortunately for us, we also realised that we would need to use Selenium for only 1 of the 3 scenarios we had to cover. The other two, although with some difficulty, could still be done in JMeter.

So, as a safety measure, we decided to split the work between two separate teams. One team would take care of the JMeter cases, and the other would build a solution with Selenium.

Selenium Performance Test: The Implementation

The first problem we had to tackle was scalability. A single JMeter instance on a powerful computer can easily simulate up to a thousand users (more with clustering). However, the same cannot be said of Selenium. The average browser is resource intensive. Depending on the computer, having 4-8 instances running could visibly affect performance.

If you usually have many tabs open, you may doubt this. But in practice, you have at most one or two pages displayed at a time, and you’re not actively clicking on both of them. Additionally, drops in performance (which usually go unnoticed) could seriously affect our measurements.

One advantage in our case was that the newsletter module which we had to test with Selenium had a relatively low number of users compared with the rest of the application – around 200 users.

An attentive reader might ask whether it even makes sense to test this module since it has only 200 users.

We believe that it does make sense. Those users are generating and sending newsletters to hundreds of customers – those operations are both resource intensive and one of the most critical functionalities in our customer’s system.

Gridlastic: Selenium Grid for Running Multiple Browsers Simultaneously

Fortunately, Selenium enables running multiple browsers simultaneously on remote machines using Selenium Grid.

Again, the scalability of the cloud comes in handy here, together with a properly dockerised environment, which makes it easy to create containers for running browsers.

The problem is that building such an infrastructure takes time.

Fortunately, there is already a product on the market that addresses this kind of need – Gridlastic. In short, this is a service which can quickly and inexpensively create a vast number of nodes for running test automation, using the AWS cloud.
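
As a rough illustration of what running tests against such a grid involves, here is a minimal Selenium (Java) sketch that points a test at a remote hub. The hub URL is a placeholder; a real Gridlastic or self-hosted Selenium Grid setup supplies its own endpoint and credentials.

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

/**
 * Sketch of connecting a test to a remote Selenium Grid hub.
 * The hub URL below is a placeholder, not a real Gridlastic endpoint.
 */
public class RemoteBrowserFactory {

    public static WebDriver createRemoteChrome() throws Exception {
        ChromeOptions options = new ChromeOptions();
        URL hubUrl = new URL("https://hub.example-grid.com/wd/hub"); // placeholder
        return new RemoteWebDriver(hubUrl, options);                  // browser runs on a grid node
    }

    public static void main(String[] args) throws Exception {
        WebDriver driver = createRemoteChrome();
        try {
            driver.get("https://example.com");
            System.out.println("Title: " + driver.getTitle());
        } finally {
            driver.quit();                                            // always release the grid node
        }
    }
}
```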

How to Measure UI Performance?

JMeter and similar tools simply send a request to the API and get the response – measuring the time from sending the request to receiving the response.
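
For comparison, this is roughly what that protocol-level measurement looks like: a hypothetical Java snippet that times a single HTTP request from send to response (the endpoint is a placeholder).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Illustration (not project code) of the protocol-level measurement that
 * JMeter-style tools perform: start the clock on send, stop it on response.
 */
public class ApiTimingExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/health")) // placeholder endpoint
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("HTTP %d in %d ms%n", response.statusCode(), elapsedMs);
    }
}
```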

In the case of Selenium, the situation is much more complicated:

Our code sends information to Selenium Web Driver, which sends information to the browser, which performs an action that sends the request – and, finally, the browser gets the response.

In the meantime, we have to send another request to the Web Driver to check whether the response we expected has arrived (or, to be more exact, whether the expected change has happened on the page).

So, as you can see, there are many more points of failure and many more places exposed to non-deterministic behaviour risks.

The perfect solution would be to make our own implementation of the driver, which would allow us to do better measurements. However, although perfection is great, sometimes "good enough" is more valuable – especially if it can give you feedback much faster.

In our case, we decided to run tests for one user repeatedly to create a performance baseline against which we could compare the results under a heavier load. Such a solution isn’t perfect, but it helps us measure trends.
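
A simplified sketch of that measurement approach is shown below: the clock starts when the action is triggered and stops when the expected change becomes visible on the page. The locators and timeout are illustrative, not the project’s actual code.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

/**
 * Sketch of timing a UI action through Selenium: trigger the action, then
 * wait until the expected change appears and record the elapsed time.
 */
public class UiActionTimer {

    /** Measures how long it takes from clicking an element until a result appears. */
    public static long measureActionMillis(WebDriver driver,
                                           By actionLocator,
                                           By expectedResultLocator) {
        long start = System.nanoTime();
        driver.findElement(actionLocator).click();
        new WebDriverWait(driver, Duration.ofSeconds(30))
                .until(ExpectedConditions.visibilityOfElementLocated(expectedResultLocator));
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Running such a measurement repeatedly for a single user gives the baseline; taking the same measurement under load then shows how response times trend.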

Tool Features

JMeter and similar tools are tried and true, which means they are the result of years of development and usage. They are great at what they were designed to do, providing many vital functionalities.

To name a few:

  • the abovementioned ability to run tests in parallel,
  • the ability to measure and record each action,
  • the ability to create graphs, dashboards, and reports,
  • the ability to randomise start times so all tests aren’t started simultaneously.

All these features had to be implemented in our tests. Fortunately, in some cases, we were able to take shortcuts. Our tool saved results in the format used by JMeter, so we could use JMeter’s reporting to draw graphs and interpret the results.
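
As an illustration of that shortcut, here is a hypothetical sketch of writing measurements as JMeter-style CSV (JTL) rows. The column set shown is a simplified subset; the exact columns a given JMeter dashboard expects depend on its configuration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/**
 * Sketch of saving measurements as JMeter-style CSV (JTL) rows so that
 * JMeter's reporting can consume them. The columns here are a simplified,
 * assumed subset of a typical JTL file.
 */
public class JtlResultWriter {

    private static final String HEADER =
            "timeStamp,elapsed,label,responseCode,success,threadName";

    private final Path outputFile;

    public JtlResultWriter(Path outputFile) throws IOException {
        this.outputFile = outputFile;
        Files.writeString(outputFile, HEADER + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
    }

    /** Appends one sample, mirroring what a JMeter sampler would log. */
    public void record(String label, long elapsedMillis, boolean success, String threadName)
            throws IOException {
        String line = String.join(",",
                String.valueOf(System.currentTimeMillis()),
                String.valueOf(elapsedMillis),
                label,
                success ? "200" : "500",   // assumed mapping of pass/fail to a response code
                String.valueOf(success),
                threadName);
        Files.writeString(outputFile, line + System.lineSeparator(),
                StandardOpenOption.APPEND);
    }
}
```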

Machines for Testing Performance

Last but not least, the computing power of the AWS nodes also affected us. Gridlastic allows us to use 3 types of AWS nodes as a base for running the browsers used in the tests.

Modern applications have a heavy front-end. This was especially noticeable while using weaker machines to run the tests. The difference in execution time between a weaker machine and a more powerful one was drastic. And this is just in terms of the machines used to run the tests.

Results

In the end, the tests were performed, and the test results and server logs were analysed.

The first performance runs found that one part of the system was working much slower than expected. The generation of email newsletters turned out to be the bottleneck. Fortunately, after analysing how the templates were generated, this was relatively easy to fix. Then, another series of tests was run to find the sweet spot in terms of the “hardware” options which should be selected for the cloud to make the project work with the expected performance. We found out that the application wasn’t splitting the load very well between different cores.

This information was crucial in our recommendations for what type of Azure nodes should be used for the application.

Conclusion

As a result of today’s changing and evolving technologies, it turns out that many of the old, tried and true approaches that were once favoured don’t work so well anymore.

In the past, companies had to worry about servers and how to maintain them – namely, the high cost of owning and running them. With the cloud, however, the question is no longer, “What servers will I need?”, but, “Which cloud and what nodes should I use?”. The cloud and low-code software development have broadened our horizons – contributing to the emergence of many creative solutions as well as new problems in need of new solutions.

With these new software development approaches come new ways of testing. Tests now need to answer different questions – “What kind of nodes do I need?” instead of “What kind of servers do we need?” – and be performed in new ways.

Another issue which testing has to resolve is testability, i.e. how to make the new tools and technologies testable – and especially how to do so quickly and efficiently. As the project case study described in this article shows, with the proper dose of experience, knowledge, and creativity, all challenges and problems can be tackled.

If you’d like to learn more about the cloud, download our “Cloud Done Right: Effective Cost Management” eBook.

