Introducing the mobile testing wiki

For quite some time I've wanted a resource to help testers with mobile testing: details on every operating system and device, covering features, known issues, hardware specs, and hardware support.

I've decided to create a wiki for this resource. Of course, I can't do this all by myself; I need the help of others. If you think something is missing, please help! Don't worry about spelling, or about putting things in the right place. I can worry about that.

Visit the Mobile testing wiki here.

Testers, get away from your desk

I'm writing this blog post as part of the Ministry of Testing Blogger's Club. The subject is "What’s the non-technical skill that every tester should have, but most don’t seem to?". Most answers in the thread seem to involve communication in some form. As important as communication is, most testers already have reasonable skills in it.

I think there's something else we're missing. It's an easy skill to pick up, and it's the least technical one you can think of. We've been doing it all our lives, yet we forget to do it while testing.

It's to get away from your desk while testing.

Most of the time, we're testing on a high-speed network connection, on a good-quality computer or device, in a room with minimal screen glare and good lighting, in an office environment. But our users don't always use our products in that way.

In my time as a tester, I've tested several different products: web games, mobile games, casino games, corporate apps, and internal tools. Few of these will be used in an office setting, so why am I testing them in one?

Of course, I'm not perfect at this. I should definitely follow my own advice a lot more. But getting away from the desk can help with a lot of test cases, including but not limited to:

  • Network connectivity is poor
  • Network connectivity is non-existent
  • Network connectivity is non-existent, but the phone reports that it's connected
  • User is in motion (on a train, or a bus), which could cause network problems
  • User is in motion, which could make it harder for the user to read or use the touchscreen
  • User can't use sound because it will disturb others (and there are no headphones), or because there's background noise
  • Screen glare from the sun
  • Screen is being used in a dull/dark environment
  • User is distracted by environment
  • User is in a confined space (e.g. on a bus), so has minimal use of gestures
  • User is using an old computer
  • User isn't using a top of the range desk/chair
  • User is using a laptop/tablet on a sofa/in bed
  • User is using a mobile phone while lying on their side in bed

So, next time you're doing some testing, have a think: is this how the user is going to be using the product? Are there other ways people will be interacting with it? Is there somewhere else I can go or something I can do?

(As an example, I once found a few bugs in an iPhone app by taking public transport into town and using the app en route. It turned out that HTTP requests were failing because of the poor network connectivity along the way, which caused some interesting behaviour.)

Using PHP composer with multiple versions of PHP

This problem has hurt me a few times while updating this (Drupal) website, so I'm mostly posting this braindump for myself. But it might help somebody else.

The main cause of my problem is that I commit composer's /vendor directory into git. (Why do I do this? Here's an article which explains, better than I could, why you may want to commit the vendor directory. In short, I find it more helpful to have all the code in git, for easier deployment. But I may change my mind in the future, given the problems I've been having recently.)

Anyway. My computer (which I use to update Drupal for my site) runs php 7.1, and my server is still running php 7.0. This causes dependency problems: composer assumes the code will be running on php 7.1, so it happily installs packages that then throw errors on the server.

I solved this problem by adding a few lines to my composer.json:

"config": {
    "sort-packages": true,
    "platform": {
        "php": "7.0"
    }
},

This worked fine, until a dependency update meant the pinned php version was no longer sufficient: Drupal 8.5 requires php 7.0.8 or above, and composer treats the platform value "7.0" as 7.0.0. This caused composer to update to an older version of Drupal (8.4.8 rather than 8.5.3), and to give this output when I forced it to update to Drupal 8.5.x:


Problem 1
    - drupal/core 8.6.x-dev requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - drupal/core 8.5.x-dev requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - drupal/core 8.5.3 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - drupal/core 8.5.2 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - drupal/core 8.5.1 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - drupal/core 8.5.0-rc1 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - drupal/core 8.5.0-beta1 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - drupal/core 8.5.0-alpha1 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - drupal/core 8.5.0 requires php ^5.5.9|>=7.0.8 -> your PHP version (7.1.7) overridden by "config.platform.php" version (7.0) does not satisfy that requirement.
    - Installation request for drupal/core ~8.5 -> satisfiable by drupal/core[8.5.0, 8.5.0-alpha1, 8.5.0-beta1, 8.5.0-rc1, 8.5.1, 8.5.2, 8.5.3, 8.5.x-dev, 8.6.x-dev].

(As a side note, my initial reaction was to try composer update --with-dependencies --ignore-platform-reqs, which worked, but of course meant that composer was no longer resolving dependencies for php 7.0, which was the whole point.)

It turns out that, as I actually had php 7.0.30 installed on my server, I could just update the platform value in my composer.json to match and do a regular composer update. That did the job, for now. I'll inevitably need to update my server to php 7.2 eventually.
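
For reference, here's roughly what the updated config section looks like. The only change from the earlier snippet is the full patch version; 7.0.30 happens to be what my server runs, so substitute whatever php -v reports on yours:

"config": {
    "sort-packages": true,
    "platform": {
        "php": "7.0.30"
    }
},

With the patch version pinned, the >=7.0.8 requirement from drupal/core is satisfied, while composer still refuses anything that genuinely needs php 7.1.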

So, tl;dr, there are two things I need to remember:

  • If I commit my composer vendor folder, I need to ensure that my work computer and my server are running exactly the same version of php
  • If not, then to ensure the code works properly, I need to add a platform setting to my composer.json file and keep it accurate, right down to the bugfix version. And remember never to use --ignore-platform-reqs: it means composer will update, but to the wrong versions for the server.

Speaking at UK Northwest VMUG

I'll be speaking at UK Northwest VMUG tomorrow, doing my code review talk. Come along?

 

So, you do code reviews, and that's great. But there's always more that you can check during the review. More places you can check for any potential bugs or problems before deployment, before you find yourself with technical debt. Or worse: unforeseen downtime.

In this talk I will be going through the things that you should be checking to ensure confidence for developers, project owners and stakeholders. We'll be looking at documentation, commit messages, and common code problems, with examples and tips along the way. 

 

When is a wifi connection not a wifi connection?

So, when building apps, we always check that the app fails gracefully, with the correct error message, when the phone is offline. But what if the phone reports that it's online, yet the data coming back isn't what you expect it to be?

This is a common state, and one of the most common causes is a captive portal: the phone is connected to wifi, but requests return the portal's login page instead of the data the app expects, usually when first connecting to a public wifi network. We need to ensure that the app doesn't behave in unexpected ways in this case.

There's a way you can check this without needing to visit your local cafe: Charles Proxy. (If you don't have this tool, then you should go and get it. It's pretty awesome.)

Once you've got Charles Proxy up and running, and your device talking to the proxy (there are instructions elsewhere on how to connect your phone to Charles Proxy), you can use the Map Local tool to test this scenario.

  • First, you'll need to create your "portal" to return. This can be something as simple as a text file, or as complex as a full webpage. The tool will return any file you give it.
  • Then, in Charles Proxy, go to Tools -> Map Local
  • Ensure that Enable Map Local is switched on
  • Click Add to create a new rule
  • You don't need to fill in all of the fields: I've got this feature working by completing the "Protocol" and "Host" fields inside the Map From section, and the "Local path" field inside the Map To section.
    • Inside Protocol and Host I enter the scheme and domain of the URLs I want to intercept (e.g. "http" and "thatdamnqa.com")
    • Inside Local path I enter the path to the local file I want to return instead

This blog post was written as part of Ministry of Testing's Blogger's Club.

Testing on previous versions of browsers

This is something that comes up quite a lot, so I decided to write down once and for all my opinion on supporting old versions of those browsers that are updated often.

This was a ten minute thread (and reply) that I posted to the Ministry of Testing Club.

 

In recent years, Chrome and Firefox have moved to regular release cycles, with the browsers automatically updating every 6 weeks or so.

But the major problem is that you can't assume everyone has restarted their browser to ensure they're using the latest version, and you can't easily download an old version of the browser and stop it automatically updating while you're testing.

How does everybody go about browser testing, keeping older versions in mind?

 

We depend on Acceptance Criteria too much

In the last year or so, I've realised that we might be relying on acceptance criteria too much. Doing so can be dangerous; it can easily lead to checkbox testing, and it can lead to the tester (and even the developer) not thinking at all.

That's not to say that we shouldn't use ACs: let me explain.

We're writing down things that we don't need to write, just for the sake of writing them down

I think that ACs should be high level. They're the things that the product owner (or customer) cares about. For example, they want to increase sales of a widget, so in order to do that they want a hero image on their homepage to advertise their sale of widgets. So the acceptance criterion would be that there's a hero image (widget-sales.png) on the homepage which links to the widgets page.

If there are already hero images on the site (or even if this is replacing an existing hero image), then there are certain things that can be taken for granted and simply don't need to be stated.
We don't need to be told what size the hero image should be if it's the same as every other hero image on the site, because that's assumed knowledge.
The same goes for the behaviour if the site is responsive (does it resize, or is a different image displayed on smaller screens?).
And the same goes for the alt text of the image, if this is something provided by the copywriter alongside the image itself.

If we include all of this information, we've fallen into the trap of checkbox testing. We're writing down specific things to check, when really they're things that get checked as part of testing the wider ticket. We're writing things down that are already obvious, that will be provided as part of design docs, or that are part of domain knowledge.

I'm not saying "don't write anything down". I'm saying don't write down things that are already part of domain knowledge, already provided elsewhere, or are obvious.

We're taking decisions away from developers and testers and expecting the product owner to make them all, even when it's not their area of expertise

As I said, ACs are things that the product owner actually cares about. If we return to our widget hero image example, all the product owner cares about is "I want to sell more widgets, so I want a hero image on the homepage that will take the user to a page where they can buy widgets".

They probably don't care how it looks: that's for the designer to decide. They don't care how you make the image accessible, as long as it's accessible: that's for the developer/UX people to decide. They don't care whether the hero image uses JavaScript to load the page, or whether the image should be hosted on a CDN: these are decisions best made by the developer.

If we expect the product owner to answer these questions while writing the ACs, then we expect the product owner to have the answers, when usually they aren't in a position to know them (or don't have any strong opinions). If we leave them out of the acceptance criteria, we're giving the designers, UX people, developers, and testers the freedom to do what they feel is best.

That's of course not to say that the product owner can't have opinions (these can be expressed in the work ticket), but they shouldn't be part of the formal acceptance criteria.

That's also not to say that the people implementing the body of work can't ask the product owner if they aren't sure. As a tester, if I'm unsure how something should behave, I can always go to the product owner and ask, because ACs aren't a replacement for communication.

The acceptance criteria aren't a list of tests

Once we write down acceptance criteria, we fall into the trap of a non-tester thinking "well, this is what I need to do in order to have a working solution". Developers will stop thinking of edge cases. And the people testing the feature, especially if they aren't testers, will test nothing else. They won't read between the lines, and they'll only follow the happy path. They'll start checkbox testing.

They should also never be used as some kind of log of "what's been tested", because such a log will always be incomplete. There are things that we, as testers, do without even thinking about them, and there are always extra things we check outside of the acceptance criteria. That's not to say we shouldn't make a note of things we've checked: that can be an appropriate thing to do. But if we do, those notes aren't acceptance criteria.

The acceptance criteria aren't documentation

We shouldn't treat ACs as documentation either, and it would be dangerous to think otherwise. Acceptance criteria describe what a ticket is (or at least, should be) at a given point in time, independent of every other feature (implemented or planned).

The feature might later change, or the site might later change. For example, if the AC for calculating the price of a shopping cart says "It should be the price of the product, plus 20% VAT, plus a £5 shipping charge", it stops being documentation as soon as the VAT rate changes or the shipping charge is removed.

You can never have a full list of acceptance criteria, anyway

To expect to have a full list of acceptance criteria would be a fool's errand. There are always more things that you can test. There are always things you can ask. There are always assumptions that can be made. To write down all of these assumptions and questions would be a waste of time, especially when the answer is already known.

So let's start being selective

Let's start being selective about what our ACs say. Heck, let's start being selective about whether a ticket has ACs at all. If we're writing things in a ticket just because process says we should, are we really following an agile workflow, or are we just blindly following process without questioning it?

Talk: There's more to code review than you might think at PHP-Usergroup Hamburg

Not content with speaking at the local testers' group, I'll also be speaking at the local PHP usergroup next week. I'll be giving a 20-minute version of my code review talk.

So, you do code reviews, and that's great. But there's always more that you can check during the review. More places you can check for any potential bugs or problems before deployment, before you find yourself with technical debt. Or worse: unforeseen downtime.

In this talk I will be going through the things that you should be checking to ensure confidence for developers, project owners and stakeholders. We'll be looking at documentation, commit messages, and common code problems, with examples and tips along the way.

Meetup

Note that there will be two talks: one in German, and mine in English.

Talk: There's more to code review than you might think at Software Tester Usergroup Hamburg

I'm currently on a contract in Germany, so what better excuse to give a talk at the local testers' usergroup?

So, you do code reviews, and that's great. But there's always more that you can check during the review. More places you can check for any potential bugs or problems before deployment, before you find yourself with technical debt. Or worse: unforeseen downtime.

In this talk I will be going through the things that you should be checking to ensure confidence for developers, project owners and stakeholders. We'll be looking at documentation, commit messages, and common code problems, with examples and tips along the way.

Links: xing, meetup