Problems with Test Automation and Modern QA
Amir Ghahrai has written a fantastic article about Test Automation, and says something I've been thinking about a lot:
Manual testing has lost its virtue, thanks to development practices and cultures such as agile and DevOps, which have created a divide in the QA space - those who can code and those who can't.
I urge you all to read it; it's a well-thought-out post about the pitfalls of test automation and the process it creates.
My view is that the recent DevOps/CI/CD culture of "write code, commit it, hit build, and let automation check the code works before it automatically goes live" is dangerous.
Sure, automated tests can be good (if done well). But there's far, far more to QA than that.
Technology vs Phone Calls
Earlier today, Health Minister Lord Bethell said, referring to the UK Government Coronavirus tracing app:
It is the human contact that is the one most valued by people. And in fact there is a danger of being too technological, and relying too much on text and emails and alienating or freaking out people because you're telling them quite alarming news through quite casual communication.
I'm not denying that this is what the trials have shown. I'm sure that Lord Bethell is not wrong in his assertions.
Putting politics and my thoughts on the contact tracing app aside, I do, however, believe that this is the wrong conclusion: that people feel alienated by technology instead of empowered by it, and that people prefer to speak to somebody on the phone rather than to a cold, complicated bit of technology.
I haven't seen the app, nor am I privy to any of the findings spoken about, so I'm going to talk in very general terms here.
There are people out there who prefer to use an app
It's not unusual for people to dislike phone calls, especially unexpected calls from an unknown number. In fact, I've heard many stories of people who refuse to answer the phone if a call is unexpected or comes from an unknown number.
It might be for any number of reasons: people may be too busy. They might struggle with anxiety on the phone. They might be hearing impaired. They might be doing something important that you're disturbing. Or they might be asleep.
I'm not denying that there are people who struggle with technology and would prefer to use the phone; what I am saying is that there are also people who struggle with the phone and would prefer to use technology.
An app is often better than a phone call, anyway
I don't think an app can be easily dismissed as an electronic version of a phone call. If you want to give people important information, especially when it comes to health messages, an app can be better than a phone call in a few ways.
You can control the message
Humans are… human. They make mistakes, and the message they give to your customers may not be the one you expect. By sending an email, a text message, or an in-app alert, you know exactly what the message said.
You may not be able to control how the person receiving the message feels, but you can certainly make sure the message is delivered in the best way possible. Indeed, I'm sure the people making the phone calls are reading off a script anyway. So why not control this better by making it a pre-prepared email?
You can give tailored information
An app message doesn't have to be just "You have had significant contact with someone who has tested positive for Covid-19. You should stay at home."
It can contain all sorts of information. It can link to an FAQ page, or give you up-to-date health information. It can provide details of local places to contact if you need support. It can give you an opportunity to get in touch, through email, an in-app messaging system, or even a phone call, if there's an unanswered question.
A mobile device is such a powerful tool, one that can be tailored to user preference, location, and accessibility settings. It can show video; it can show long-form text. It can remind you of things. It can give you directions on a map. A mobile device can do so much that even a phone call can't.
A phone call can come from a scammer
A mobile device can also offer better security. Phones are notoriously easy to use for scamming: you can fake the phone number, and if you sound convincing enough, you can make people believe you.
An app can't be 100% secure, but it's more secure than a phone call or text message. If you get a message through an app that you've already installed, you know it probably came from that app.
It gives people time to process the information, and come back to it later
When someone tells you something over the phone, unless the call is being recorded, that information has gone. It's in your (fallible) memory, and it can't be referred back to.
Particularly if the information is distressing, it's very likely that you will forget something important.
If a message is sent in an app, it can be kept and referred to later on. If you need time to process it, you can take that time and come back to it. You can show it to your partner, your friend, or someone else.
Give people the choice
Some people might prefer getting a phone call, and that's fine. Phone calls absolutely still have a place in our society.
But people should be given a choice. If I want to contact my healthcare provider and ask them about something, then I should be able to do that through an app if I want to. I bank with a challenger bank: I can phone them if I want to, but I can also send them an email or use their in-app chat support system.
It's that choice that's best. It's that choice which is more inclusive.
Updating PyDicom and missing DICOM tags?
As part of a bit of work updating PyDicom from 0.x to 1.x, I came across a number of AttributeErrors:
AttributeError: 'FileDataset' object has no attribute 'ImagePathFilterTypeStackCodes'
The DICOM files are the same, and the error messages are always in the same format (a missing DICOM tag that it could find before), so the problem must be with PyDicom. (Note: this assumes the tag isn't a private tag. It's also new behaviour to me that you can look an element up by its tag ID instead of its name.)
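As a quick aside, and a sketch of my own rather than anything from this migration (the filename is hypothetical): pydicom lets you look an element up by its (group, element) tag as well as by its keyword, which is handy when a keyword has changed between versions.

import pydicom

ds = pydicom.dcmread("example.dcm")  # hypothetical file

# Lookup by keyword...
print(ds.PatientName)

# ...and the same element, looked up by its (group, element) tag.
print(ds[0x0010, 0x0010].value)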
Turns out there's a change in PyDicom which removed a specific bit of code:
# Take "Sequence" out of name (pydicom < 0.9.7)
# e..g "BeamSequence"->"Beams"; "ReferencedImageBoxSequence"->"ReferencedImageBoxes"
# 'Other Patient ID' exists as single value AND as sequence so check for it and leave 'Sequence' in
if dictionaryVR(tag) == "SQ" and not s.startswith("OtherPatientIDs"):
if s.endswith("Sequence"):
s = s[:-8] + "s"
if s.endswith("ss"):
s = s[:-1]
if s.endswith("xs"):
s = s[:-1] + "es"
if s.endswith("Studys"):
s = s[:-2] + "ies"
return s
(inside datadict.py, method CleanName(), lines 134-146 in the version we were using, for those who want to find it)
That means that if our code was looking for the attribute ReferencedImageBoxSequence, you could also search for ReferencedImageBoxes, and they're both copies of the same DICOM tag data.
The fix? Unfortunately it's a laborious one: you need to go through the test code, see which lookups work and which don't, and change them. This might be a task you can automate using grep, but it might also just be quicker to do it manually.
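If a gradual migration helps, here's a minimal sketch of a fallback helper (my own illustration, not part of pydicom; the name get_sequence and the usage are hypothetical) that tries the old 0.x "plural" attribute name before the modern "...Sequence" one:

def get_sequence(dataset, old_name, new_name):
    # Try the pydicom 0.x "cleaned" name first, then the 1.x keyword.
    for name in (old_name, new_name):
        if hasattr(dataset, name):
            return getattr(dataset, name)
    raise AttributeError("Neither %r nor %r found" % (old_name, new_name))

# Usage:
# boxes = get_sequence(ds, "ReferencedImageBoxes", "ReferencedImageBoxSequence")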
Checkbox Testing
Checkbox testing (n.): the act of running a pre-defined set of tests to confirm that the software is working.
But there's a lot wrong with the act of checkbox testing. As part of a wider testing strategy, it does have its place. But it's often treated as the only method of testing a new feature, and this is where the problem lies. Let me explain why.
"A pre-defined set of tests"
Having some kind of plan for how to start testing a feature or a product is never a bad thing, especially if there's a lot of testing to be done and there are things that you really must check.
The problem arises when you forget to go off this happy path and wander around the app. Because the times you ask yourself "I wonder what happens if…?" are the times you often find those hidden bugs.
Going through the same list isn't necessarily a bad thing: it reminds you of the important parts of the software that need to be checked in your regression testing. However, you need to let yourself try things out. If you see a button you haven't pressed in a while, or you hear about something a colleague found the other week, you should let yourself go off on a tangent and see what happens.
"Confirm that the software is working"
This might sound like arguing semantics, but I really think there's a difference between saying that a tester "confirms the software is working" and that a tester "looks for any problems in the software".
The latter gives the tester the mindset that there must be problems in the software, and that they only need to be found. By saying that the tester confirms the software is working, on the other hand, you're already suggesting that the software is bug free, and you're just looking to prove it. Because of human bias, we're more likely to miss a bug if we don't really want to find any.
Beware of automated checkbox testing
Originally, I thought that checkbox testing was a trap you can only fall into when doing manual testing. But I've since realised that you can do automated checkbox testing, too.
Just as with manual testing, you need to avoid the trap of "confirming that it works". If your automation is quick enough, it can be used not only to check the edge cases that you would cover in regular exploratory testing, but also the edge cases that you can't easily check by hand. With scripting, you can do things that would normally be tricky or time-consuming to do manually, be it checking a large set of test data or changing the environment under test: configuration, network connectivity, even the timezone.
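To give a flavour of what that can look like, here's a minimal sketch (my own illustration; format_appointment() and the offsets are hypothetical, and it assumes pytest) that sweeps a set of timezones, an environment change that would be tedious to do by hand:

from datetime import datetime, timedelta, timezone

import pytest


def format_appointment(dt, tz):
    # Hypothetical code under test: render a UTC timestamp in a local timezone.
    return dt.astimezone(tz).strftime("%Y-%m-%d %H:%M")


# Sweep a spread of UTC offsets, including the awkward fractional ones.
@pytest.mark.parametrize("offset_hours", [-12, -4.5, 0, 5.75, 14])
def test_rendering_round_trips(offset_hours):
    tz = timezone(timedelta(hours=offset_hours))
    dt = datetime(2020, 6, 1, 23, 30, tzinfo=timezone.utc)
    rendered = format_appointment(dt, tz)
    # Parsing the rendered string back in the same timezone should
    # give exactly the same instant we started with.
    assert datetime.strptime(rendered, "%Y-%m-%d %H:%M").replace(tzinfo=tz) == dt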
My TestBash Brighton takeaways
I was incredibly lucky to be able to attend TestBash Brighton as a speaker at TestBash Essentials. I listened to some really intelligent people talk there, so I decided to write up some small takeaways. No long prose here, just bits and pieces I think were interesting.
(Don't see your talk here? Don't worry. It probably only means I've been unable to easily write about it here without just paraphrasing your talk!)
What is testing?
I try not to worry too much about definitions and names of things, but there was an excellent talk, and lots of discussion, on what testing is and what testing isn't.
Some key quotes that I noted down:
- "A user would never do that." "Yeah, but what if they did?"
- We demonstrate that we need to look at the blind spots.
- We bring uncertainty into a safe space where it's OK to talk about it.
Security
We had a joint talk by a tester and a pen tester who, by giving the talk together, delivered the message that security testing isn't just the responsibility of pen testers. It's something the whole team can get involved in, as early as possible. Knowing things like the OWASP Top 10, and knowing the tools, makes it easier for us.
And something that was said almost as an aside: the idea that we can include security checks in our build process and/or automation. I think this is a really good idea, and it's something I'll be thinking about in the future.
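To make that concrete, here's a tiny sketch of the sort of thing I mean (my own illustration, not from the talk; it assumes the bandit static analyser is installed and your code lives in src/):

import subprocess
import sys

# Run bandit over the source tree; it exits non-zero when it finds issues.
result = subprocess.run(["bandit", "-r", "src/"], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    sys.exit("Security issues found; failing the build.")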
Claire coined a really good term: security smells. Like code smells, but with security. I would really like to hear more about this.
The testing periodic table
This is an interesting one. Ady Stokes has created the Periodic Table of Testing. There's a lot to this, and it's something I need to read a lot more about.
Pair testing
Having a partner makes you accountable.
I really like the idea of working with the community, doing something together, to improve your craft. Not only does it make you accountable (which, if you look at the dates of my blog posts, is something I need!), but it also improves ideas.
I'll be keeping this one in the back of my mind. I'm not sure I want to embark on a journey of pair testing with people, but I don't know what I want to do yet.
Culture in a ginormous company
Some takeaways here are:
- We should be working ourselves out of a job: the endgame should be that everyone becomes more test minded.
- If we use the right levers, we can start to do great things.
- If there are restrictions, try to identify the fears behind them. To get high-level support, try to speak their language and know their targets. Find something to tie onto.
- Recognise that it can take even a year to get things done. It's important to recognise the scope.
- Some useful books: Emotional Intelligence 2.0, Driving Technical Change, Peopleware, The Trusted Advisor.
- Don't ask, just demonstrate.
Organising community in a company
As someone who set up a community of practice of sorts where I work, and who is always looking for ideas to improve it, I found this one interesting. While it wasn't a "how to set up a community of practice" talk, it still had some useful hints about creating a community.
Things I'll be trying: putting together a proper purpose (not just a few sentences that go into the email invite), and making sure it's a community owned by everybody, not just me. And making it something that isn't just a regular meeting at the same time every week: by moving things around a little and doing different things, different people can get something out of our little community.
Heck, the speakers did mention things like socials and game nights. That's certainly something that could work in our company!
Speaking at PHP Yorkshire 2019
I'm thrilled to be speaking at PHP Yorkshire this year. After helping to organise the conference for the last couple of years, it's nice to be speaking there.
Those teams working in an agile fashion will usually bring the tester in as early as possible in the development cycle, often during the planning stages, to find potential problems before they create work to fix. But checking for potential technical problems is only a small part of what the QA team can do at this stage.
The QA team has a wide scope to make the product as good as it can be. This allows the tester to use not just their technical knowledge, but their non-technical knowledge, in their quest for quality.
In this talk, we will outline the non-technical disciplines a tester has, from historian to lawyer, and even spy. Testers will come away from this talk full of ideas for questions to ask of their product, while other members of the team will come away with a greater understanding of the knowledge a good tester can bring to the table.
Items covered will include accessibility, data protection, misuse of a product, and cultural sensitivity.
Speaking at TestBash Essentials, Brighton
I'm pleased to finally be able to say that I'll be speaking at TestBash Essentials, at TestBash Brighton next year.
I'm honoured to be part of such an amazing lineup, in what is the best testing conference around.
Those teams working in an agile fashion will usually bring the tester in as early as possible in the development cycle, often during the planning stages, to find potential problems before they create work to fix. But checking for potential technical problems is only a small part of what the QA team can do at this stage.
The QA team has a wide scope to make the product as good as it can be. This allows the tester to use not just their technical knowledge, but their non-technical knowledge, in their quest for quality.
In this talk, we will outline the non-technical disciplines a tester has, from historian to lawyer, and even spy. Testers will come away from this talk full of ideas for questions to ask of their product, while other members of the team will come away with a greater understanding of the knowledge a good tester can bring to the table.
Items covered will include accessibility, data protection, misuse of a product, and cultural sensitivity.
It should be a great conference. And if you don't fancy TestBash Essentials, there's always the main conference day later in the week.
See you there?
Testers, get away from your desk
I'm writing this blog post as part of the Ministry of Testing Blogger's Club. The subject is "What's the non-technical skill that every tester should have, but most don't seem to?". Most answers in the thread seem to involve communication in some form. As important as communication is, most testers already have reasonable skills in it.
I think there's something else we're missing. It's an easy skill to pick up, and it's the least technical one you can think of. We've been doing it all our lives, yet we forget to do it while testing.
It's to get away from your desk while testing.
Most of the time, we're testing on a high-speed network connection, on a good-quality computer or device, in a room with minimal screen glare and good lighting, in an office environment. But our users don't always use our products that way.
In my time as a tester, I've tested several different products: web games, mobile games, casino games, corporate apps, and internal tools. Few of these will be used in an office setting, so why am I testing them in one?
Of course, I'm not perfect at this. I should definitely follow my own advice a lot more. But getting away from the desk can help with a lot of test cases, including but not limited to:
- Network connectivity is poor
- Network connectivity is non-existent
- Network connectivity is non-existent, but the phone reports that there is one
- User is in motion (on a train, or a bus), which could cause network problems
- User is in motion, which could make it harder for the user to read or use the touchscreen
- User can't use sound because it will disturb others (and there are no headphones), or there's background noise
- Screen glare from the sun
- Screen is being used in a dull/dark environment
- User is distracted by environment
- User is in a confined space (e.g. on a bus), so has limited use of gestures
- User is using an old computer
- User isn't using a top-of-the-range desk/chair
- User is using a laptop/tablet on a sofa/in bed
- User is using a mobile phone while lying on their side in bed
So, next time you're doing some testing, have a think: is this how the user is going to be using the product? Are there other ways people will be interacting with it? Is there somewhere else I can go, or something else I can do?
(As an example, I once found a few bugs in an iPhone app by taking public transport into town and using the app en route. It turns out HTTP requests were failing with the poor network connectivity on the route, which caused some interesting behaviour.)
Using PHP Composer with multiple versions of PHP
This problem has hurt me a few times while updating a Drupal website, so I'm mostly posting this braindump for myself. But it might help somebody else.
The main cause of my problem is that I commit Composer's /vendor directory into git. (Why do I do this? Here's an article which explains, better than I could, why you may want to commit the vendor directory. In short, I find it more helpful to have all the code in git, for easier deployment. But I may change my mind in the future, given these recent problems I've been having.)
Anyway. My computer (which I use to update Drupal for my site) runs PHP 7.1, and my server is still running PHP 7.0. This causes dependency problems: Composer assumes I'll be using PHP 7.1, so I inevitably get code errors on the server.
I solved this problem by adding a few lines to my composer.json:
"config": {
"sort-packages": true,
"platform": {
"php": "7.0"
}
},
This worked fine, until a dependency update required a newer PHP patch release: Composer treats a platform version of "7.0" as exactly 7.0.0, so as far as it was concerned I no longer had the correct PHP version. This caused Composer to resolve to an older version of Drupal (8.4.8 rather than 8.5.3), and to give this output when I forced it to update to Drupal 8.5.x:
Problem 1
- drupal/core 8.6.x-dev requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- drupal/core 8.5.x-dev requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- drupal/core 8.5.3 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- drupal/core 8.5.2 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- drupal/core 8.5.1 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- drupal/core 8.5.0-rc1 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- drupal/core 8.5.0-beta1 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- drupal/core 8.5.0-alpha1 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- drupal/core 8.5.0 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
- Installation request for drupal/core ~8.5 -> satisfiable by drupal/core[8.5.0, 8.5.0-alpha1, 8.5.0-beta1, 8.5.0-rc1, 8.5.1, 8.5.2, 8.5.3, 8.5.x-dev, 8.6.x-dev].

(As a side note, my initial reaction was to try `composer update --with-dependencies --ignore-platform-reqs`, which worked, but of course meant that Composer wasn't downloading dependencies for PHP 7.0, which is what I actually needed.)
Turns out, as I had PHP 7.0.30 installed on my server, I could just update the platform tag in my composer.json and do a regular composer update. That did the job, for now; I'll inevitably need to update my server to PHP 7.2 eventually.
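For reference, the updated block would look something like this (assuming, as in my case, a server on PHP 7.0.30; use whatever version php -v reports on your own server):

"config": {
    "sort-packages": true,
    "platform": {
        "php": "7.0.30"
    }
},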
So, tl;dr, there are two things I need to remember:
- If I commit my Composer vendor folder, I need to ensure that my work computer and my server are running exactly the same version of PHP
- If not, then to ensure the code works properly, I need to add a platform tag to my composer.json file and make sure it is accurate, right down to the patch version. And remember never to use --ignore-platform-reqs: it will mean that Composer updates, but to the wrong versions.
Speaking at UK Northwest VMUG
I'll be speaking at UK Northwest VMUG tomorrow, doing my code review talk. Come along?
So, you do code reviews, and that's great. But there's always more that you can check during the review: more places to look for potential bugs or problems before deployment, before you find yourself with technical debt. Or worse: unforeseen downtime.
In this talk I will be going through the things that you should be checking to ensure confidence for developers, project owners, and stakeholders. We'll be looking at documentation, commit messages, and common code problems, with examples and tips along the way.