When is a wifi connection not a wifi connection?

So, when building apps we always check that the app fails gracefully, with the correct error message, when the phone is offline. But what if the phone reports being online, but the data isn’t what you expect it to be?

This is a common state, and the classic case is a captive portal, usually encountered when first connecting to a public wifi: the connection itself works, but every request returns the network’s login page instead of the data the app expects. We need to ensure that the app doesn’t behave in unexpected ways in this case.
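
As a minimal sketch of how an app might spot this (in Ruby, with a purely illustrative URL and expected body): request a resource whose content you know in advance, and treat anything else (including a redirect) as suspect:

    require "net/http"

    # Request a resource whose content we know in advance. On a captive
    # portal network we'll typically get back a redirect, or the portal's
    # login page, even though the device reports being "online".
    uri = URI("http://example.com/")  # illustrative known-content URL
    response = Net::HTTP.get_response(uri)

    if response.is_a?(Net::HTTPRedirection) || !response.body.to_s.include?("Example Domain")
      puts "Connected, but not to the internet we expected (captive portal?)"
    else
      puts "Genuine connectivity"
    end

This is essentially the same trick operating systems use for their own captive portal detection: fetch a known URL, and flag any unexpected answer.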

There’s a way you can check this without needing to visit your local cafe: Charles Proxy. (If you don’t have this tool, then you should go and get it. It’s pretty awesome.)

Once you’ve got Charles Proxy up and running, and your device talking to the proxy (there are instructions elsewhere on how to connect your phone to Charles Proxy), you can use the Map Local tool to test this scenario.

  • First, you’ll need to create your “portal” to return. This can be something as simple as a text file, or as complex as a full webpage: the tool will return any file you give it (there’s an example after this list).
  • Then, in Charles Proxy, go to Tools -> Map Local
  • Ensure that Enable Map Local is switched on
  • Click Add to create a new rule
  • You don’t need to fill in all these fields: I’ve got this feature working by completing the “Protocol” and “Host” fields inside the Map From section, and the “Local path” field inside Map To.
    • Inside Protocol and Host I enter the scheme and domain of the URL I want to override (e.g. “http” and “thatdamnqa.com”)
    • Inside Local path I enter the path to the file I want to send instead
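
As an example of that first step, my “portal” can be nothing more than a hand-written HTML file. Here’s a throwaway Ruby script that creates one (the wording is purely illustrative; any file will do):

    # Create a fake "portal" page for Map Local to serve.
    # Charles will happily return any file you point it at.
    File.write("portal.html", <<~HTML)
      <html>
        <head><title>Network Login</title></head>
        <body>
          <h1>Welcome to Cafe Wifi</h1>
          <p>Please log in to use this network.</p>
        </body>
      </html>
    HTML

Point the “Local path” field at portal.html, and every matching request will come back looking like a captive portal.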

This blog post was written as part of Ministry of Testing’s Blogger’s Club.

Testing on previous versions of browsers

This is something that comes up quite a lot, so I decided to write down, once and for all, my opinion on supporting old versions of browsers that update often.

This was a ten minute thread (and reply) that I posted to the Ministry of Testing Club.

In recent years, Chrome and Firefox have moved to regular release cycles, with their browsers automatically updating every six weeks or so.

But the major problem is that you can’t assume everyone has restarted their browser to ensure they’re using the latest version, and you can’t easily download an old version of a browser and stop it automatically updating while you’re testing.

How does everybody go about browser testing, keeping older versions in mind?

★  We depend on Acceptance Criteria too much

In the last year or so, I realised that we might be relying on acceptance criteria too much. Doing so can be dangerous; it can easily lead to checkbox testing, and it can lead to the tester (and even the developer) not thinking at all.

That’s not to say that we shouldn’t use ACs: let me explain.

We’re writing down things that we don’t need to write, just for the sake of writing them down

I think that ACs should be high level. They’re the things that the product owner (or customer) cares about. For example, they want to increase sales of a widget, so in order to do that they want a hero image on their homepage to advertise their sale of widgets. So the acceptance criterion would be: there’s a hero image (widget-sales.png) on the homepage which links to the widgets page.

If there are already hero images on the site (or even if this is replacing an existing hero image), then there are certain things that can be taken for granted which simply don’t need to be stated.

We don’t need to know what the size of the hero image should be if it’s the same as every other hero image on the site, because it can be assumed knowledge. As can the behaviour if the site is responsive (does it resize, or are other images displayed on smaller screens?). As can the alt text of the image, if this is something provided by the copywriter alongside the image itself.

If we include all of this information, we’ve fallen into the trap of checkbox testing. We’re writing down specific things to check when they should instead be checked as part of the greater ticket. We’re writing things down that are already obvious, that will be provided as part of design docs, that are part of domain knowledge.

I’m not saying “don’t write anything down”. I’m saying don’t write down things that are already part of domain knowledge, already provided elsewhere, or are obvious.

We’re taking decisions away from developers and testers and expecting the product owner to make them all, even when it’s not their area of expertise

As I said, ACs are things that the product owner actually cares about. If we return to our widget hero image example, all the product owner cares about is “I want to sell more widgets, so I want a hero image on the homepage that will take the user to a page where they can buy widgets”.

They probably don’t care how it looks: that’s for the designer to decide. They don’t care how you make the image accessible, as long as it’s accessible: that’s for the developer/UX people to decide. They don’t care whether the hero image uses JavaScript to load the page, or whether the image should be hosted on a CDN: these are decisions best made by the developer.

If we expect the product owner to answer these questions while writing the ACs, then we expect the product owner to have the answers, when usually they aren’t in a position to know them (or don’t have any strong opinions). If we leave them out of the acceptance criteria, we’re giving the designers, UX people, developers, and testers the freedom to do what they feel is best.

That’s of course not to say that the product owner can’t have opinions — these can be expressed in the work ticket — but they shouldn’t be part of the formal acceptance criteria.

That’s also not to say that the people implementing the body of work can’t ask the product owner if they’re not sure. As a tester, if I’m unsure how something should act, I can always go to the product owner and ask. Because ACs aren’t a replacement for communication.

The acceptance criteria aren’t a list of tests

Once we write down acceptance criteria, we fall into the trap of a non-tester thinking “well, this is what I need to do in order to have a working solution”. Developers will stop thinking of edge cases. And the people testing the feature — especially if they aren’t testers — will test nothing else. They won’t read between the lines, and they’ll only follow the happy path. They’ll start checkbox testing.

They should also never be used as some kind of log of “what’s been tested”, because it will always be incomplete. There are things that we, as testers, always do without even thinking about them. There are always extra things we check, outside of the acceptance criteria. That’s not to say we shouldn’t make a note of things we’ve checked: this can be an appropriate thing to do. But if we do, they aren’t acceptance criteria.

The acceptance criteria aren’t documentation

We shouldn’t treat ACs as documentation, either; it would be dangerous to think otherwise. A ticket is (or at least should be) a snapshot at a given point in time, independent of every other feature (implemented or planned).

The feature might later change, or the site might later change. For example, if the AC for calculating the price of a shopping cart says “It should be the price of the product, plus 20% VAT, plus £5 shipping charge”, this will no longer be documentation once the VAT rate changes, or after the shipping charge is removed.

You can never have a full list of acceptance criteria, anyway

To expect to have a full list of acceptance criteria would be a fool’s errand. There are always more things that you can test. There are always things you can ask. There are always assumptions that can be made. To write down all of these assumptions and questions would be a waste of time, especially when the answer is already known.

So let’s start being selective

Let’s start being selective about what our ACs say. Heck, let’s start being selective about whether a ticket has ACs at all. If we’re writing things in a ticket just because process says we should, are we really following an agile workflow, or are we just blindly following process without questioning it?

Slides: DrupalCamp Dublin 2017

I gave my code review talk at DrupalCamp Dublin today. Here are the slides.

Talk: There's more to code review than you might think at Software Tester Usergroup Hamburg

I’m currently on a contract in Germany, so what better excuse to give a talk at the local testers’ usergroup?

So, you do code reviews, and that’s great. But there’s always more that you can check during the review. More places you can check for any potential bugs or problems before deployment, before you find yourself with technical debt. Or worse: unforeseen downtime.

In this talk I will be going through the things that you should be checking to ensure confidence for developers, project owners and stakeholders. We’ll be looking at documentation, commit messages, and common code problems, with examples and tips along the way.

Links: xing, meetup

Note that there will be two talks: one in German, and mine in English.

Talk: There's more to code review than you might think at PHP-Usergroup Hamburg

Not content with speaking at the local testers’ group, I’ll also be speaking at the local php usergroup next week. I’ll be giving a 20 minute version of my code review talk.

So, you do code reviews, and that’s great. But there’s always more that you can check during the review. More places you can check for any potential bugs or problems before deployment, before you find yourself with technical debt. Or worse: unforeseen downtime.

In this talk I will be going through the things that you should be checking to ensure confidence for developers, project owners and stakeholders. We’ll be looking at documentation, commit messages, and common code problems, with examples and tips along the way.

Meetup

Hmmm... _there's_ a bug

I spent some time tonight speaking with Michael Bolton about automation. Something he said could be done really struck me:

He set things up to alert him whenever there was a significant variation in how long functions took. At least half the time, a change in timing would cause a developer to say “hmmm… there’s a bug.”

That’s totally going into my bag of tricks.
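
I haven’t seen his setup, so here’s only a minimal sketch of the idea in Ruby (the 50% threshold, the names, everything here is my own guess, not a reconstruction of what he actually built): record how long each named operation takes, and shout when a new measurement strays too far from the average.

    # Keep a history of timings per named operation, and warn when a new
    # measurement deviates significantly from the running average.
    BASELINES = Hash.new { |hash, key| hash[key] = [] }

    def timed(name)
      start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      result = yield
      elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start

      history = BASELINES[name]
      unless history.empty?
        average = history.sum / history.size
        if (elapsed - average).abs > average * 0.5  # arbitrary 50% threshold
          warn "hmmm... #{name} took #{elapsed.round(3)}s (average #{average.round(3)}s) - there's a bug?"
        end
      end

      history << elapsed
      result
    end

    # Usage: timed("calculate_cart") { calculate_cart_total(cart) }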

REST and Ruby on macOS

If you’re getting the following error in Ruby, and no matter what you do you just can’t fix it:

/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:921:in `connect': Connection reset by peer - SSL_connect (Errno::ECONNRESET)

Try using a non-system version of Ruby. I used brew to install it, but it doesn’t seem to matter how you install it.
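
For reference, this is the shape of script that was failing for me (the URL is illustrative, not what I was actually calling). The exact same code works once ruby resolves to the Homebrew install instead of the system’s /usr/bin/ruby; you can check which one you’re running with which ruby and ruby -v.

    require "net/http"
    require "uri"

    # A plain HTTPS GET: the kind of request that kept dying with
    # ECONNRESET under the system Ruby on macOS.
    uri = URI("https://api.example.com/things")  # illustrative URL
    response = Net::HTTP.get_response(uri)
    puts "#{response.code} #{response.message}"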

★  On making a conference lineup more diverse

(Or, why I didn’t submit to the PHP Unicorn conference)

I don’t usually blog about internal community struggles. They’re often important discussions to have, sure, but I usually don’t like to get involved. But this one is important to me, because I had an opinion on it before it was a problem. But I decided to keep silent at the time. In hindsight, maybe I shouldn’t have done.

So. The problem: (another) conference has a speaker lineup which is 10% female, 0% PoC. This problem is far too common; I’ve called out conferences for this in the past. But the main difference in this case is that the organiser, Peter, has held his hands up, admitted he could’ve done better, and openly asked for (and, importantly, listened to) advice on how to do better next year.

[By the way, Peter, if you’re reading this (and I hope you are): hats off for making this a learning experience for both you and other conference organisers. I have no beef with you or your conference, I want to watch but sadly I cannot get the day off work. I just wanted to share my experience to add to the pool of knowledge that’s out there.]

From what I can tell, the generally accepted reason why the conference doesn’t have a very diverse speaker lineup is that there wasn’t a very diverse pool of talk submissions. Which was caused by who the CfP was marketed to.

I saw the tweets advertising the CfP, but deliberately chose not to submit. I spent the last couple of days putting into words why that was. Here’s my attempt.

The tweet that I saw advertising the CfP was very close to this one:

Watch 8 of the top PHP experts (unicorns) in the world streaming live (or access the videos later) for just $50. https://t.co/fRWxIoGuZW

— PHP Unicorn Conf (@PHPUnicornConf) March 14, 2017

The three important words are “top PHP experts”.

There were also two initial (I assume invited) speakers already announced: I guess this was to drive ticket sales. They are undoubtedly two well respected names in the community. And they’re both incredibly smart and accomplished people. Between them they have contributed to Core, written books, written very popular development tools, and keynoted at conferences.

So, that’s the level of this conference. Experts (or unicorns: legendary creatures known to be hard to capture). People who have contributed to php in a big way. Like, for example, writing books or contributing to Core.

Which is why I didn’t submit. I have never written a book (the most I’ve done is write articles for other blogs), and I wouldn’t even know where to begin when it comes to contributing to the php language itself.

And that is, probably, why many other people – especially underrepresented people – thought the same as me: that the CfP probably isn’t for them. The conference wants more intelligent people. More accomplished people. What am I doing even considering I deserve to share a stage with these giants of php?

Which is a lesson to all other conference organisers. I see the same faces on conference websites all the time. Some of these faces I saw at the very first conference I attended, almost 10 years ago. I literally learned my craft from these people.

If we want to bring in more speakers – more diverse speakers – we need to say that yes, you can submit to a CfP. It’s not for the elite chosen few. You don’t have to look, sound, or act like the other speakers you’ve been watching and learning from for the last ten years.

We need to say that even if you’ve only been developing for a few years, you absolutely have something to say. Something which the most experienced developer can learn from. You bring something to the table, and we want to hear it. We promise.

We need to say that we don’t need unicorns on stage. We need YOU.

On Apple buying Workflow

I’m sure over the next few days, there will be lots of opinion pieces written and published by lots of different people over the news that Apple have purchased Workflow.

My opinion, however, isn’t that special. It’s little more than “oh, that’s pretty cool. They wrote a great app, so good that Apple wanted in. They did pretty well there!”.

But there’s one part of the Techcrunch article announcing the deal which stuck with me. It’s near the bottom, and it’s a couple of paragraphs about some words Apple said about the app:

Apple confirmed the deal, and has said the following about Workflow:

“The Workflow app was selected for an Apple Design Award in 2015 because of its outstanding use of iOS accessibility features, in particular an outstanding implementation for VoiceOver with clearly labeled items, thoughtful hints, and drag/drop announcements, making the app usable and quickly accessible to those who are blind or low-vision.”

The accessibility features of Workflow are super impressive, especially for an app that is a tool for building complicated macros. It would have been much easier to say “hey, this is for heavy users, maybe we don’t need to make sure it’s 100% accessible” — but they didn’t, and they won a bunch of awards (and an exit) for their trouble.

It’s all too easy to say that in an advanced app for pro users, with a lot of complicated features, accessibility isn’t at the top of the priority list. But Workflow has proven that it’s still an important thing for app developers to worry about.

And Apple, apparently, agrees.