Follow-Up on the Test Automation Discussion – Bringing in the Performance View

There was a rather heated discussion around A Context-Driven Approach to Automation in Testing by James Bach and Michael Bolton (referred to below as the article and the authors; other references will be explicit). Here are a couple of places where it got fiercely attacked: Reviewing “Context Driven Approach to Automation in Testing” by Chris McMahon
and Open letter to “CDT Test Automation” reviewers – and it was attacked in many other places as well. Other posts were more reasonable, such as The A Word returns (sort of) by Alan Page.

The discussion touched on a lot of very important points from several different areas, so the topic sat in my backlog for a while as I contemplated how to approach it. Well, since I haven’t found any good way, I have finally given up on writing a connected text and am just sharing some [almost independent] thoughts here in random order. I am, of course, looking from a somewhat different – performance – point of view, so I am not going to jump too deeply into the details of the discussion.

1) I wonder why testing discussions get so heated and why they can’t be conducted in a more professional way – especially considering that everybody who participates in these discussions deeply cares about the profession. Those who don’t care find better things to do than write long posts and articles. Sometimes it is even difficult to see the real issues behind these very intense (to put it mildly) discussions. I re-read the article again after all this fire and don’t quite see any reason for such a level of emotion. And it looks like the main point of the article (as I see it) wasn’t actually discussed in any meaningful way.

2) There is an interesting issue with writing anything except a textbook, and this article illustrates it well. The authors basically say that automation is good, but that it has its limits and can’t replace an intelligent tester. They say several times that automation is good and needed. Yet the majority of the content is devoted to various automation issues and limitations. Does that mean the authors are against automation (leaving the terminological discussion aside)? I don’t see it that way. The authors explicitly say that they are all for it (insofar as it helps in testing). Still, critics say that the article is against automation (and it is quite possible that some beginners get the same impression). So how are you supposed to write an article that says automation is good but has its limitations? Is it always necessary to present a full textbook on automation first and put the limitations in the last chapter to keep the balance? That is not actually what an article is for – an article discusses a specific topic, which may be quite advanced, and just refers to the context at the beginning. The article doesn’t concentrate on automated regression testing (which will be discussed later here) and perhaps skips quite a lot of other topics – but it is not supposed to be a comprehensive textbook on the subject.

3) There is a huge difference between testing a system in the best way at a given moment (a “consultant” coming to a project to test a system that is almost ready – actually the way most performance testing was done in the past) and setting up the best process to test the system from inception to release, especially during agile / iterative development (an “engineer” working with the team from the very beginning). We, of course, have a lot of consultants who specialize in setting up agile / DevOps / Continuous Integration / etc. processes – and a lot of engineers doing deep system testing and analysis – but here I mean the approach rather than the title or specialization. If you are a consultant coming to test a system at a specific point in time, you don’t need to automate regression testing at all; it simply has no value (unless you are also asked to set up long-term processes). You need only the type of automation described in the article – so Alan Page is probably right in describing it as “exploratory automation”.

4) If we talk about a long iterative development process, it looks like everybody agrees that we need automated regression testing (or checking, as it is referred to in the article; a minimal sketch of a check follows below). It doesn’t look like anybody seriously argues against it for functional testing nowadays (and I definitely don’t see such an argument in the article, despite the critique – even if more attention is paid there to “exploratory automation”). But I do see quite a few people saying, in one way or another, that automated regression testing is the only thing we need. That is my main concern here (and, I guess, the main point of the article): no, it is NOT enough. However, what can be automated (checked), what can’t (or shouldn’t), and what that depends upon should be the subject of discussion and further elaboration – and I don’t see that actually happening. By the way, it still remains unclear to me whether the people attacking the article are saying that automated regression testing is the only thing we need, or whether they just believe that more stress should be put on automation (while still agreeing that not everything can – or should – be automated).
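To make the checking/testing distinction concrete, here is a minimal sketch of what a check is. The calculate_discount function and its business rule are hypothetical, invented purely for illustration:

```python
# A minimal sketch of an automated regression check (pytest style).
# calculate_discount is a hypothetical function used for illustration.

def calculate_discount(order_total: float, is_member: bool) -> float:
    """Hypothetical production code: members get 10% off orders over $100."""
    if is_member and order_total > 100:
        return round(order_total * 0.10, 2)
    return 0.0

def test_member_discount_over_threshold():
    # The check verifies exactly this encoded expectation and nothing else.
    assert calculate_discount(200.00, is_member=True) == 20.00

def test_no_discount_for_non_members():
    assert calculate_discount(200.00, is_member=False) == 0.0
```

The point of the sketch is that a check verifies exactly the expectations encoded in it and nothing else – it will never notice a slow response, a confusing workflow, or a problem nobody thought to assert on. That is the gap that intelligent testing is supposed to cover.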

5) A side note about GUI automation. It is interesting that the authors – “we generally avoid doing that” – get quite close to contradicting the basic principle of the Context-Driven Approach: that everything depends on context. Unfortunately, there are too many contexts in which you have to test through the GUI (or use protocol-level recording in performance testing, which has the same issue). Either the API is so complicated that nobody really knows what is behind the GUI, or there is a thick layer between the API and the GUI (Rich Internet Applications, fat clients, etc.), or there is no API to speak of at all. Yes, working with the GUI (as well as protocol-level recording) is a real pain as soon as you get to automation. Yes, it may be much better to work with reliable APIs – if you have them, if you know the exact way to use them, and if you are not missing anything on top of them (or at least the risk of missing something is small enough). The trade-off is sketched below.
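To illustrate the trade-off, here is a rough sketch contrasting the two levels for a login scenario. The URL, endpoint, and element IDs are hypothetical, and the GUI part assumes Selenium with a local Chrome driver:

```python
# A rough sketch contrasting API-level and GUI-level checks.
# The URL, endpoint, credentials, and element IDs are hypothetical.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://example.com"  # hypothetical application under test

def check_login_via_api():
    # Fast and stable, but exercises only what lies below the GUI layer.
    resp = requests.post(f"{BASE_URL}/api/login",
                         json={"user": "alice", "password": "secret"})
    assert resp.status_code == 200
    assert "token" in resp.json()

def check_login_via_gui():
    # Slower and more brittle, but covers the layer users actually see --
    # the only option when there is no usable API behind the GUI.
    driver = webdriver.Chrome()
    try:
        driver.get(f"{BASE_URL}/login")
        driver.find_element(By.ID, "user").send_keys("alice")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```

The API check is shorter, faster, and far less brittle – but it exercises only what sits below the GUI layer. When a thick client-side layer exists (or there is no API at all), the GUI-level check, for all its pain, is the one that reflects what users actually get.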

6) We get into an even more interesting area when we switch to performance testing. We have the same difference in approaches: between the need to test a system for performance and the need to establish a process that verifies performance as part of the SDLC. And here we get an even wider spectrum of opinions: from posts and articles talking about it as something rather self-evident (for example, Building Faster Experiences with Continuous Performance, or almost all Velocity talks on the subject) to posts and articles challenging the idea itself (for example, The Myth of Continuous Performance Testing by Stephen Townshend). It is the same two different views (“engineer” and “consultant”) – but aggravated by the fact that it is much, much more difficult to automate regression performance testing. If you just come in to test a system for performance and reliability (as is exactly the case in a traditional load test), you don’t need any regression automation. And, vice versa, if you need to test the system every iteration, you soon start to think about what may be done there…

7) Because for a long time the “consultant” approach was practically the only option in load testing, not much tool support for performance testing automation is currently available. That has started to change – you see announcements of enhanced and improved support for almost every load testing tool – but these are rather first steps. So if you want to do automated performance regression testing, you need to figure out what exactly should be done – and probably do a lot of plumbing yourself, which makes it quite a challenge in non-trivial cases (and, by the way, good performance testers/engineers are not necessarily experts in plumbing; a sketch of what such plumbing may look like follows below). However, there is one consideration that significantly increases the value of regression performance testing: as performance is cumulative across all the code and components involved, the chances of catching performance issues even with minimal, limited performance tests are pretty good (assuming good analysis – which is far from a given).
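As an illustration of the plumbing involved, here is a minimal sketch of a baseline comparison step, assuming the load tool has already produced per-transaction response-time summaries as JSON files. Everything here (file names, metric names, the 20% tolerance) is a made-up example, not any particular tool’s format:

```python
# A minimal sketch of performance-regression "plumbing": compare the
# current run's response times against a stored baseline and fail the
# build on significant degradation. File names, metric names, and the
# 20% tolerance are illustrative assumptions, not any tool's format.
import json
import sys

TOLERANCE = 0.20  # flag anything more than 20% slower than baseline

def load_metrics(path):
    # Expected shape: {"login": {"p90_ms": 240}, "search": {"p90_ms": 850}}
    with open(path) as f:
        return json.load(f)

def compare(baseline, current):
    regressions = []
    for txn, base in baseline.items():
        cur = current.get(txn)
        if cur is None:
            regressions.append(f"{txn}: missing from current run")
            continue
        if cur["p90_ms"] > base["p90_ms"] * (1 + TOLERANCE):
            regressions.append(
                f"{txn}: p90 {cur['p90_ms']}ms vs baseline {base['p90_ms']}ms")
    return regressions

if __name__ == "__main__":
    problems = compare(load_metrics("baseline.json"),
                       load_metrics("current.json"))
    for p in problems:
        print("REGRESSION:", p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the CI stage
```

In real life this gets much harder: results vary from run to run, so the comparison needs statistics rather than a fixed threshold – which is exactly where the plumbing effort (and the analysis skill) goes.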

8) Automated performance regression testing is far from an established practice in the general case. While it may appear so from many presentations and publications, if you look more closely it is usually either a simple case (many are just single-user; see the sketch at the end of this post) or a special case (where great engineers worked hard to build system-specific plumbing around one particular system). And, with performance testing, it is probably easier to see the areas that should remain outside of regression testing (although I meet more and more people who believe that we should automate performance testing completely, one way or another). The question of what, how, and when should be automated in performance testing – and what shouldn’t – looks the most interesting to me at the moment, and it definitely depends heavily on the context. My talk Continuous Performance Testing: Myths and Realities was accepted by the CMG Impact performance and capacity conference (held November 6-9 in New Orleans) – I am looking forward to discussing it with other performance professionals there. Of course, I have many more questions than answers – but the area is so vague at the moment that I hope good questions may trigger a productive discussion.
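For contrast, here is what the “simple case” mentioned above typically amounts to: a single-user timing check against a hard threshold. The URL and the 500 ms budget are illustrative assumptions:

```python
# A sketch of the "simple case": a single-user timing check with a hard
# threshold. The URL and the 500 ms budget are illustrative assumptions.
import time
import requests

def check_search_latency():
    start = time.perf_counter()
    resp = requests.get("https://example.com/api/search", params={"q": "test"})
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert resp.status_code == 200
    # Catches gross single-user slowdowns, but says nothing about
    # concurrency, resource contention, or behavior under real load.
    assert elapsed_ms < 500, f"search took {elapsed_ms:.0f}ms (budget 500ms)"
```

Checks like this are easy to run every build, which is why they dominate the published examples – but they tell you nothing about how the system behaves under concurrent load, which is where most real performance problems live.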
