Performance Testing with Open Source Tools – Myths and Reality

Some time ago Federico Toledo published Performance Testing with Open Source Tools – Busting The Myths. While Federico definitely has good points there, there is some truth in these myths too. Otherwise we wouldn’t see so many commercial tools built on top of open source, including BlazeMeter (it is ironic that the article is posted on the BlazeMeter site), Flood, and OctoPerf. Open source load testing tools have definitely advanced a lot in both maturity and functionality, but there are many areas where they lag behind the best commercial tools (whether that matters for a specific user is another question – for some it may not). Instead of diving into arguments about specific points (which I partly did in my earlier post – start from The Future of Performance Testing if you are interested), I decided to talk to people who monetize these “myths”. So here is a virtual interview with Guillaume Betaillouloux, co-founder and Performance Director of OctoPerf.

How did you get into performance engineering?

As with most things in life, by chance. I was working for a service company in Paris and looking for a technical job. Most jobs I was offered were related to older technologies like Prolog/COBOL, but there was one that piqued my interest. That’s when I encountered Henrik Rexed and joined his team of testers in a big French insurance company. I was given a computer with LoadRunner and an hour of explanation, and that was it: I had to figure out everything else by myself. I remember really liking the technical side of these tests. But I must confess I was not too fond of having to report the results to stakeholders or deal with the political/personal issues related to (poor) test results.

What load testing tools did you use?

At first, we were using LoadRunner, but we quickly moved to Performance Center.

After a year I was put in charge of the Performance Center platform (9.52 at the time) because, as a “tool”, it was not considered a proper “production” environment, so we had to maintain it ourselves. During this time I learned a lot, but the price was having to deal with random infrastructure issues and unhelpful support from HP. I mostly figured out my own workarounds, sometimes even dirty hacks, to get it to work properly.

In the meantime, Henrik left to work for Neotys, which was, I believe, four years old at the time and located in the south of France – we’re talking Californian weather compared to Paris, where you’re lucky to get two sunny days in a row.

A year later, after my service company had me certified on NeoLoad, I joined him. I must confess I was feeling so high and mighty about using LoadRunner, the “best” tool on the market, that it took that three-day training to make me realize that viable alternatives existed. A lesson I still try to remember today.

After this I spent almost four years at Neotys: demos, proofs of concept, training people – the usual turf of a pre-sales engineer.

Why did you decide to create OctoPerf?

To put things in perspective, I was really fond of NeoLoad, to the point that I invested a lot of myself in the work. I remember several occasions of working 60-hour weeks on customer premises to make sure everything went smoothly. But after a while I started to disagree with the route NeoLoad was taking: the product was becoming more and more expensive, slowly driving customers away.

That’s around the time I met two former colleagues from Neotys who had left basically because they were bored in their jobs. They had come up with a new tool that ran entirely in the browser – a feature I had wanted in NeoLoad for quite some time. It was called Jellly at the time (mid 2015), but it was an early build of what OctoPerf would become. The tool looked young but promising, and I was looking for a change and a challenge, which is why I joined them, along with Quentin, to develop the business plan and strategy.

I remember one of the first discussions we had was about the name of the tool: jelly.io was owned by someone who had purchased it a few years earlier in the hope of reselling it (apparently, he succeeded). So instead we used jellly.io, and people kept misspelling it because of the three ‘l’s. That’s when we decided to go for a name that would, first, be available, but that would also avoid the “load-something” pattern (I know ten of these, not even counting LoadRunner/NeoLoad) while still conveying the notion of performance. We decided the octopus was a nice upgrade from a jellyfish, and we ended up with OctoPerf.

What is your business model?

We quickly figured out that the market was quite crowded, with BlazeMeter spearheading the “JMeter in the cloud” offer. So we aimed for a “SaaS load testing tool in your browser” approach, barely mentioning JMeter in the beginning. Since you can import HAR files from any browser/proxy into OctoPerf and design your script in our UI, we thought the focus on JMeter wasn’t required.

We quickly realized this was a mistake, and we shifted our strategy to better attract JMeter users:

  • adding generic support for any JMeter action/plugin we had not yet implemented in our UI,
  • communicating around a “JMeter performance center in the cloud”,
  • writing blog posts to share our knowledge of JMeter and other technologies we’ve learned along the way.

And at the same time, we also created a version of OctoPerf that you can install entirely on premises, because a lot of companies are not ready or willing to move all their business to the cloud. Security is one of the main concerns, and it is almost impossible to address completely in a load testing tool, since anything in a test could be customer or business-critical data that requires encryption.

So with this we can address Performance Center/LoadRunner customers but also “JMeter in the cloud” requirements. In a lot of situations this is a strong argument, because both groups can use the same tool while keeping an exit door to JMeter at any time.

Why did you decide to build on top of JMeter?

That’s a big topic for us and one we often discuss. The first important point is that if you’re building a business you need to come up with an MVP quickly. A lot of our competitors tried to build a tool from the ground up, but that requires a huge R&D effort simply to compete with other entry-level tools.

I think one example of this is Nouvola. It was one of the tools on my radar, and I watched their progress with a mix of curiosity and fear that they would make a breakthrough. They started two years before us, but our MVP already had more features than they had managed to add to their own custom tool. In the end I can only surmise what happened to them, but my best guess is that the money they got from investors demanded more turnover than they could make with a fresh new tool. And I don’t want to sound patronizing, because starting a company requires as much luck as careful planning (if not more), and I do realize we had a lot of it.

My point is that we were always going to use one of the open source solutions as the basis for our tool. And when we started in 2015, JMeter was undoubtedly the best tool around. And by best I mean reliable, but also feature-wise (already 16 years of R&D and still active). It’s easy to like one tool or another because of a single feature, but when you build a company you have to think about most, if not all, of the possible situations and outcomes.

JMeter could handle this because it has a large library of samplers and controllers, plus plugins. And on top of that, you can script your way out of almost any difficult situation using JSR223 scripts.
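
One reason this matters for a tool vendor: JMeter is an embeddable Java library as much as a desktop application, so a product can assemble and run test plans entirely in code. A minimal sketch of that idea (the path, host, and numbers are placeholders; it assumes a local JMeter install for the properties file – not how OctoPerf itself is implemented):

    import org.apache.jmeter.control.LoopController;
    import org.apache.jmeter.engine.StandardJMeterEngine;
    import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
    import org.apache.jmeter.testelement.TestPlan;
    import org.apache.jmeter.threads.ThreadGroup;
    import org.apache.jmeter.util.JMeterUtils;
    import org.apache.jorphan.collections.HashTree;

    public class MinimalJMeterRunner {
        public static void main(String[] args) {
            // JMeter needs its properties file from a local install (placeholder path).
            JMeterUtils.loadJMeterProperties("/path/to/jmeter/bin/jmeter.properties");
            JMeterUtils.initLocale();

            // One HTTP sampler: GET https://example.com/
            HTTPSamplerProxy sampler = new HTTPSamplerProxy();
            sampler.setProtocol("https");
            sampler.setDomain("example.com");
            sampler.setPath("/");
            sampler.setMethod("GET");

            // Each virtual user runs the sampler 10 times.
            LoopController loop = new LoopController();
            loop.setLoops(10);
            loop.initialize();

            // 5 virtual users, ramped up over 5 seconds.
            ThreadGroup users = new ThreadGroup();
            users.setNumThreads(5);
            users.setRampUp(5);
            users.setSamplerController(loop);

            // Assemble the plan tree and run it in-process.
            HashTree plan = new HashTree();
            HashTree tg = plan.add(new TestPlan("programmatic plan")).add(users);
            tg.add(sampler);

            StandardJMeterEngine engine = new StandardJMeterEngine();
            engine.configure(plan);
            engine.run();
        }
    }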

Did you consider other open source tools? How would you compare the existing open source tools?

We considered Gatling for a while but due to a combination of factors we didn’t move forward:

  • It uses Scala, which can make development more challenging, as you will find fewer engineers/resources available,
  • It was quite basic at the time, with only a handful of protocols supported,
  • A much smaller community, with almost no tutorials or forums online,
  • They were launching their own paid version of Gatling,
  • Code-based scripting, which could go against the “ease of use”, UI-driven mindset of OctoPerf (see the sketch after this answer).

You have to remember this was in 2015; a lot has changed since, and Gatling is definitely a stronger tool now. But since the mindset is very different, we decided to address it through a separate tool rather than OctoPerf: we called it Kraken, and it is open source.
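
To make the code-based contrast concrete: a Gatling test is a program you compile and run. In 2015 that meant Scala only; current Gatling versions also ship a Java DSL, in which a minimal simulation might look like this (the URL and numbers are placeholders, not from the interview):

    import static io.gatling.javaapi.core.CoreDsl.*;
    import static io.gatling.javaapi.http.HttpDsl.*;

    import io.gatling.javaapi.core.*;
    import io.gatling.javaapi.http.*;

    public class BasicSimulation extends Simulation {
        HttpProtocolBuilder httpProtocol =
            http.baseUrl("https://example.com"); // placeholder target

        // One scenario: hit the home page, then think for a second.
        ScenarioBuilder scn = scenario("browse")
            .exec(http("home").get("/"))
            .pause(1);

        {
            // Ramp 10 virtual users over 30 seconds.
            setUp(scn.injectOpen(rampUsers(10).during(30)))
                .protocols(httpProtocol);
        }
    }

Everything – target, scenario, load profile – lives in code rather than in a UI, which is exactly the mindset difference described above.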

What does OctoPerf add to basic JMeter?

The base idea is that everything you do should be intuitive. Typically, in a 20-minute OctoPerf demo we can record, replay, and analyze a basic script. We do a lot of one-hour sessions with our customers to get them up to speed, and that is usually enough time to run a first basic test on their application.

As you can guess if you try OctoPerf, we’ve spent an enormous amount of effort on the user experience, and we keep doing so (every proof of concept is an occasion to see someone using it in action). For instance, if you’re designing a regex in OctoPerf, we have a visual selector that lets you create it in a few clicks. You can also automate these correlations, much like in LoadRunner/NeoLoad but with even more options.

You can drag and drop blocks to create your scripts, design your load policy in a few clicks, and run even a very large test with a single button. You do not need to trouble yourself with datasets, hardware, reporting, or any of the usual pain points of JMeter – we’ll handle them for you.

Speaking of reporting, we’ve also spent a lot of time and effort on a report engine that is entirely configurable from your browser, with templating for your custom reports to minimize repetitive interactions as much as possible.

And all of that is in a web UI, online or on premises (using Docker), so you can work from anywhere without installing anything on your computer (you do NOT need JMeter). You can achieve the same things through our API, since the UI is just a layer on top of it. We also provide plugins for all the CI/CD tools.

There is a widespread opinion that there are a lot of available plug-ins and additional products (such as InfluxDB and Prometheus), so you don’t really need this functionality in the core product – you just pick whatever plug-in you need for extended functionality. How do you answer that?

Yes, that’s true, but the way we see it, you pay us because we are experts at handling common JMeter issues like scaling a test or aggregating the report data, and what you pay will be several orders of magnitude less than the time it would take you to do it. On top of that, we have become JMeter experts along the way, and our support team answers all customer requests in a matter of minutes. And as with most very complete tools, there are times when JMeter can be counterintuitive.

I’d like to ask this question of our competitors that are 5-10 times more expensive, because against their prices I can understand why using a hard-to-maintain combination of open source tools makes sense. In fact, that’s usually where our new customers are when we start interacting, and we show them they could be focusing on more relevant topics than, say, maintaining a database for their test results.
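
For a sense of what the do-it-yourself route looks like: JMeter ships a Backend Listener that can stream live results to an InfluxDB 1.x instance, which you then have to host, size, and retain yourself. A hedged sketch, continuing the plan tree from the earlier example (the URL and names are placeholders):

    import org.apache.jmeter.config.Arguments;
    import org.apache.jmeter.visualizers.backend.BackendListener;

    // Point JMeter's bundled Backend Listener at an InfluxDB 1.x instance.
    BackendListener influx = new BackendListener();
    influx.setClassname(
        "org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient");

    Arguments args = new Arguments();
    args.addArgument("influxdbUrl", "http://localhost:8086/write?db=jmeter"); // placeholder
    args.addArgument("application", "my-app");                                // placeholder
    args.addArgument("measurement", "jmeter");
    influx.setArguments(args);

    // Attach it under the thread group of the plan tree from the earlier sketch:
    tg.add(influx);

This works, but the database, its dashboards, and its retention policies all become your problem – which is the trade-off being described here.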

Open source tools and plug-ins do advance in both functionality and maturity. Do you see that as a threat to OctoPerf? What if they added most of your advanced features to the core product?

Yes, it can feel threatening at times, but we have learned to look at it differently. The addition of the Boundary Extractor in JMeter 4.0 is a typical case, since we had similar functionality built on top of the regex. It pushed us to rework ours and make it even easier to use, and in the end everyone won something.
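
For readers who haven’t used it: a boundary extractor pulls a value out of a response by matching the literal text to its left and right, with no regex to write. A rough illustration of the idea in plain Java (simplified; not JMeter’s actual implementation):

    // Grab the text between a left and a right boundary, no regex required.
    static String extractBetween(String body, String left, String right) {
        int start = body.indexOf(left);
        if (start < 0) return null;            // left boundary not found
        start += left.length();
        int end = body.indexOf(right, start);
        return end < 0 ? null : body.substring(start, end);
    }

    // Typical correlation use case:
    // extractBetween("<input name=\"token\" value=\"abc123\"/>", "value=\"", "\"")
    // returns "abc123".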

To me these advances, by reducing our competitive advantage, force us to move forward and find new ways to improve OctoPerf. Sometimes they just give us new feature ideas, or we watch them to see if there’s any traction before integrating them. That’s what is really cool about JMeter: many people and companies try to bring something to the table, and there’s a lot of experimentation. If you’re willing to take the time, you can learn a lot from these successes and failures.

I’m also quite confident in the core features we’ve developed, simply because the amount of work we have put into them over the years can hardly be reproduced by an open source project or a team of testers. Our auto-scaling engine, for instance, is something we work on all the time: making it resilient to startup issues, accounting for large tests with simple scripts as much as small tests with large scripts, adding metrics to our automatic monitoring and alerts, and so on.

How did it happen that so many load-testing-related companies are located in France? Neotys, Gatling, OctoPerf, Ubik…

Well, in our case we learned how to build a load testing tool while working at Neotys, so there’s an obvious explanation. Something similar happened around LoadRunner in Tel Aviv, if I remember correctly.

A lot of open source projects are actually largely maintained by French developers – JMeter through Ubik, but also VLC, the video player. I think it has more to do with an open source culture than anything else.

And it’s also related to the fact that in France you can use your two years of unemployment benefits to work on your own company. I know that’s how a lot of projects start here. That is both a good and a bad thing: you can have a poor idea and keep it alive for two years, but it also allows a lot of companies to come up with a viable MVP. In our case, so much so that we did not have to go looking for funding – a decision we have no regrets about, because it gives us so much liberty: liberty to make OctoPerf easy to maintain and to aim for the long term, but also, when adding people to the team, to take the time to handpick people we like working with, since we are under no pressure.

Thank you very much! I really enjoyed our virtual conversation, which brought in many great points beyond the original issue. As for open source products, I guess you summarized it best saying, “what you pay will be several orders of magnitude less than the time it would take you to do it”. So, when evaluating a tool, it is up to you to decide whether you want to pay for the functionality you need or invest your time and energy in implementing it yourself. A pitfall here is that you will probably drastically underestimate both the functionality you need and the effort required to implement it, unless you know it for sure.
