Tuesday, 25 October 2016

Test Bash Manchester 2016

Usually Test Bash events in the UK are held in Brighton, making them a bit inaccessible to people living in the North. However, this changed on Friday 21st October when I was lucky enough to attend Test Bash Manchester, a software testing conference held at the Lowry in Salford Quays. Organised by Richard Bradshaw @friendlytester and Rosie Sherry @rosiesherry, this was the first time a Test Bash event had been held in the North West.

I woke up bright and early at 5:30am on Friday morning and made my way to the station to catch a train to Manchester. Travelling on the day of the conference unfortunately meant that I missed the first speaker, James Bach @jamesmarcusbach. I was, however, able to follow the live tweets during his talk about critical distance and social distance. There were some interesting slides. Sadly I had not heard the term 'critical distance' before, and Google only revealed a reference to an acoustics calculation. Lacking that context and a definition, I found the slides cryptically undecipherable. I heard the talk was very good, but I am really not in a position to comment on it.

I arrived at the venue just in time to sit down and listen to the second talk.

"Psychology of Asking Questions" by Iain Bright

The main thing I took from this talk was that when encountering resistance in the workplace, keep asking "Why not?" in a loop until no more objections exist. I have heard of this tactic before in sales environments to handle customer objections. I did feel that the message within this talk could have been stronger. Credit to Iain, however; it takes guts to get up in front of a large room of your peers and deliver a talk. I really liked his slide with Darth Vader on it that described the dark side of asking questions.

"On Positivity – Turning That Frown Upside Down." by Kim Knup @punkmik

Kim's talk really connected with me. She said that as humans we are all hard-wired to look for negativity because recognising it was a key survival mechanism. She spoke about the testing wall and throwing things over it. She used the word "Wagile" and described how this usually resulted in lots of overtime for testers. Kim explained how her testing job had made her start hating people and this negativity had manifested itself in logging as many bugs as possible. Essentially, in Kim's job software development had turned into a warzone. Her description really stirred a lot of memories of the games testing I did in the early days of my career. Kim mentioned that during these dark days her record was 102 translation bugs logged in one day. This was very impressive, higher than my personal best of 63.

Kim told us not to start the day by complaining and explained that happiness gives us an advantage because dopamine invigorates the brain and turns on its learning centres. She went on to explain that being grateful for three things a month can help re-program our brains so that they are more likely to scan for positive things. Happy brains have a wider perspective, increased stamina and creativity. A thoroughly enjoyable talk that left me feeling very positive about testing.

"Listening: An Essential Skill For Software Testers" by Stephen Mounsey @stephenmounsey

Stephen set about trying to prove that we don't listen hard enough. He asked us to listen and then to really listen. We eventually heard the sound of a buzzing fridge in the background, something which we had previously been blocking out. He told us that the amount of stuff we block out and ignore in everyday life is amazing.

Stephen went on to explain that listening is not purely skills based and that we have different listening modes, such as critical or empathetic. He said that men and women tend to listen in different ways and that we should evaluate our listening position. He reminded us that listening is not about us, it's about the speaker, so we shouldn't interrupt them. It was an interesting talk that gave me a lot to think about.

"Testers! Be More Salmon!" by Duncan Nisbet @DuncNisbet

Duncan told us that shared documentation was not the same thing as shared understanding. He said that testing is asking questions to squash assumptions. He went on to explain that even though test first development tries to understand the need, it could be the wrong software that is being created.

As testers, Duncan wants us to ask questions of designs before code gets written and talk about testability. Duncan wanted us to not only question the idea, but also question the need and the why.

"The Four Hour Tester Experiment" by Helena Jeret-Mäe @HelenaJ_M and Joep Schuurkes @j19sch

The Four Hour Tester Experiment was inspired by a book called The Four Hour Chef, which attempts to teach someone to cook in just four hours. Helena and Joep wanted to know if it would be possible to teach someone to test software in four hours. As for what to test, they knew it needed to be a familiar concept, something not too hard to learn yet sufficiently complex, so they decided to use Google Calendar. If you know someone who would be interested in trying to learn testing in four hours, the four hour tester activities can be found online at http://www.fourhourtester.net/

The four hour tester talk concluded that while it was possible to illuminate testing to some degree, it's not possible to learn software testing in just four hours. The question and answer session afterwards echoed that being unable to teach someone to test software in four hours should not be viewed as a failure: it demonstrates how complex testing is, which in turn proves that it is skilled work.

"The Deadly Sins Of Acceptance Scenarios" by Mark Winteringham @2bittester

Very early on Mark informed us this would be a rant about BDD scenarios. A show of hands revealed that a significant number of people were using Cucumber-style "given, when, then" scenarios. Mark explained that we write scenarios, then try to implement them, then realise that we missed bits and that we need more scenarios. He reminded us not to go too far. He wrote each of the deadly acceptance scenario sins in given, when, then format.

Mark told us that you can't specify love, but you can ask a loved one for declarative examples e.g. 'what could I do to make you feel loved?'. He continued that a loved one might say "Well you could email me sometimes or send me flowers". Mark explained that if we were to set up a cron task to automate emailing and ordering flowers online for a loved one, the love would be lost. He warned us that we shouldn't let scenarios become test cases and reminded us to put the human back in the centre of our automation efforts.

"A Road to Awesomeness" by Huib Schoots @huibschoots

Huib was exceptionally confident and chose not to stand behind the microphone stand but instead stood further forwards and addressed us directly. "I am awesome" he proclaimed, "Take note and be awesome too!" A video then played of The Script featuring Will.I.Am singing "Hall Of Fame".

Huib told us about his background. He had started out automating, then became a tester, then became an ISTQB instructor. He went on to say that you wouldn't teach someone to drive a car just by telling them how, without letting them do any actual driving. Yet ISTQB does exactly this with testing. He said the most value comes when you put someone in front of software and let them test it.

Huib said there was a difference between the kind of testing a business person might do and professional testing. He confirmed that professional testers are professional learners and explained that if we do the same work for 10 years, we might not have 10 years’ experience, we might just have 10 lots of the same 1 year experience. During his talk, Huib said he really liked feedback so I tweeted him with a tip for the data in his pie chart. Huib wanted us to ask ourselves the following questions: Who am I? What are my skills? What do I want? What do I need? How do I get there?

Huib's passion was so strong there were times I wasn't sure if I was listening to a tester or a motivational speaker. His talk was delivered with huge amounts of energy. It reminded me that there is always something new to learn and that receiving feedback is very important.

For part of the conference, I sat just behind Huib with some testers from Netherlands and Belgium. During this time I learned that his name is pronounced like 'Hobe'.

"Is Test Causing Your Live Problems?" by Gwen Diagram @gwendiagram

Gwen asked us if we can do load and performance testing in our test environments and reminded us that there is lots of room for error when humans carry out manual deployments. She dropped the f-bomb, repeatedly. Gwen spoke passionately about monolithic test environments that do more harm than good. She talked about deployments and the inevitable OMG moments which followed them. During her talk, Gwen reminded us that monitoring is a form of testing. She also said to keep in mind that even when a company does monitoring and logging well, it can still get liquidated if its products don't sell.

Gwen's desire to make things better and do a good job was infectious. So much so that the first question asked after her talk concluded was "Would you like to come work for us?". My mind was blown.

"Getting The Message Across" by Beren Van Daele @EnquireTST

Beren spoke from experience about one particular test role he had held. They had called in the cavalry and enlisted the help of a consultancy, but it soon turned into an 'us and them' situation. It was September and they had to finish their project by December. He was a junior tester at the time and they had hired a QA manager with a strict, inflexible way of working. None of the bugs were getting fixed, so the testers decided to print out all the bugs, add pictures to them and cut them out. They then created a 'Wall of Bugs' in the most visible place in the office, the entrance way. This was an extreme measure, but management saw the problem and gave the developers more bug fixing time.

Beren's story continued and went to some pretty dark places, like how the QA manager mysteriously disappeared and how the testers tried their best to cope with increasing levels of negativity in their workplace. Beren told us that he eventually left that job but stayed in touch with some of the people who worked there. He said that exploratory testing is still not accepted as valuable there and the testers have to hide the exploratory work that they do. Beren said that he felt like he had failed, and then he did something incredibly brave: a slide titled "My Mistakes" appeared and he told us where he thought he had gone wrong. Even though Beren is a new speaker I was enthralled by his story. I really hope he continues sharing his experiences with others as stories like his deserve to be told.

Test Bash Manchester was a resounding success.

It felt really good to finally meet so many of the brilliant people in the testing community that I have only ever spoken to online. The event left me recharged, re-energised and brimming with positivity. Test Bash left me feeling like I was part of a giant, global testing family. I have so much love and respect for the software testing community right now. I'm really looking forward to Test Bash 2017 next year.

This post was also published on my company's blog Scott Logic Blog

Tuesday, 27 September 2016

Learning to talk - Finding your voice and telling a story

At the start of August I attended the North East Agile Testing (NEAT) meet-up at Campus North in Newcastle. While I feel I am active within the software testing community (through Slack, Twitter, blogging etc.) this was actually the first time I had ever attended a face to face testing event. While at NEAT I found it easy to express my views about testing (possibly helped by the free beer) and tried to share my experience by answering some of the questions being asked.

After the meet-up I followed the group to the pub where a few people told me they would be really interested to hear me talk about testing. I said that I hadn't really given any talks before but they assured me it was no big deal. Anyway long story cut short, I agreed to give a talk at the next meet up which was scheduled for October.

I initially thought I should give a talk based on the survey of software testers which I carried out over the summer. I tried to write down some ideas but I struggled to come up with something interesting enough to turn into a talk. I was also concerned that, having already written a number of lengthy blog posts on the topics of testers, surveys and data analysis, I would be repeating old content that everyone had already read about. I didn't want to sound like a broken record and put my audience to sleep.

Even though I knew it would take much more effort, I decided I was going to write an original talk based on my personal experience of testing.

I also decided that I was going to completely commit to completing and delivering this talk. After all this was something I had agreed to do and it's not in my nature to disappoint or let people down.

I adopted an agile approach to writing my talk in that I treated it like a "steel thread". It was the most important project that I had outstanding and it involved venturing into territory which I had not previously explored. A steel thread, in software engineering terms, identifies the most important path and reinforces it, making it stable and strong. I knew it would be better to have one steel thread, one project seen through to the end and completed well, than a number of half-finished projects that individually didn't have much value. So I put all my other projects on hold and made delivering a talk on software testing my number one priority.

It's ok to fail, but fail quickly and learn from it

The first mistake I made when I tried to write my talk was that I tried to write it in the same way that I write blog posts. I wrote lots of long passages of text and kept editing and tweaking them. This didn't go very well. I realised that if I wrote out the whole talk and stood in front of a group of people to deliver it, I would simply be reading, rather than talking. I didn't want my talk to feel like an awkward best man's speech nervously read out loud.

So I switched from writing out large passages of text to making slides instead. I knew one of my co-workers had given a number of technical talks, and one lunch time while I was working on my slides I casually asked if they had any tips. They rummaged around their desk then handed me a book called Presentation Patterns: Techniques for Crafting Better Presentations. I was told that this book was awesome and that I needed to read it. So I stopped making slides and spent the rest of lunch time reading the book.

Make a map to see where you are going

The Presentation Patterns book was certainly an eye-opener and I made notes while I read it. It listed lots of common traps and mistakes and provided helpful advice on how to avoid falling into those bad patterns. It said that one of the most important things in a talk is its structure and story.

Like a good movie, a talk has to have a direction. It needs to take the audience on a journey. I decided I was going to draw a mind map of my ideas and try to explore all the directions and possibilities. I use mind maps at work to record exploratory testing and it felt like a good idea to try mapping out my talk in the same way. I used a Chrome plug-in called Mind Mup to draw my map. I improved my map and refined it as new ideas occurred to me. My final mind map eventually looked like this.

Once my map was finished, I returned to my slides. I started re-arranging them using my map as a guide, improving them as I went. I thought about the story I wanted to tell and the key points that I wanted to put across.

Check and double check your slides

I showed my slides to other people so I could start getting feedback. During this process I discovered that presentation slides are not immune to bugs!

If you use a certain term or word to refer to something, stick with that word or term throughout the presentation. Changing to something else halfway through is really confusing.

Spell check everything and then have it read by another human to be sure that there are no mistakes. On one of my presentation slides I wrote 'texting' instead of 'testing' and this was not picked up by a spell checker. As someone who works in software testing, it would have been quite embarrassing if spelling errors had slipped through, especially as the primary audience for this talk is other software testers, the kind of people who are quick to notice any mistakes. Watch out for text which is too small to read. Also watch out for contrast issues between text colour and background colour. Be aware some people in the audience could be colour blind.

If you have any transitions or animations on your slides, play them through in order and make sure they work as expected. Some of mine weren't perfect first time and needed adjustments.

When I was finally happy with my slides, there was still something important I needed to do before standing up in front of my peers.

Get comfortable with your voice

I found that the more I practised my talk, the more confidence I built up. I would practise my talk by putting on some fairly loud music (so no-one in my house could actually hear me talking), sitting at my computer and talking through my slides to myself. If I said something I thought sounded bad, I would immediately say it again in a different way. I'm lucky enough to have two monitors at home so I used PowerPoint presenter view to practise. This shows the active slide on one screen while the other screen shows a preview of the next slide. Presenter view also has a timer which I used to get a feel for how long my talk was. I knew roughly the length of the slot I had been given, but I made sure that my talk was a little bit longer because I had a feeling I would naturally try to rush through it when I gave it. As a safety net, I worked out which sections of my talk could be expanded or contracted based on time.

After I had gone through my talk a few times in presenter view, I knew the key points that I needed to mention for each slide. I made a small list of ordered bullet points and printed this out to have in hand while I was actually talking. I did this mainly to make sure I didn't forget anything important and also so that I would be able to recover quickly if my mind went completely blank.

Seize the opportunity to practice

I was still preparing and practising my talk for NEAT when a surprise opportunity came up at work to give my talk. Once a month on a Friday, short lunch time tech talks happen in the library at work. This month, I heard that there had been a shortage of speakers coming forward. I thought it would be good practice to give my talk and I agreed that I would speak. The audience would be a small group of developers with only one or two testers present. I was initially slightly concerned about giving a testing talk to developers but the beauty of a talk is that because it exists mostly in your head, it is very easy to make minor adjustments to it. I was going to assume that my audience had no prior knowledge of testing and make extra effort to explain any niche terminology or concepts that I had used.

I also decided that I was going to record my talk using my Android smart phone. I thought it would be good to listen to myself afterwards and see how it sounded and also find out if I was subconsciously doing any umming, erring or repeating the same word over and over again. These were all things the Presentation Patterns book had told me to watch out for.

My first thought when I heard the audio recording back was "OMG, I don't actually sound that bad!".

When I first started learning to play the violin I would regularly video myself practising so I could learn from the videos and also look back on them and see the progress I had made. I decided I was going to edit the audio and the slides together and share my talk on YouTube. This way I would be able to keep my first talk, learn from it and also look back on it to see how I am progressing. If you would like to listen to me give my talk, you can find the video I made, Both Sides of the Coin - Exploratory Testing vs Scripted Checking, on YouTube.

The actual talking part I found stressful and uncomfortable at first. However like getting into a really hot bath, I found that I slowly got used to the discomfort and started to relax.

To anyone who is considering giving a talk or presentation, but is currently undecided (and possibly feeling daunted or scared about it), my advice would simply be to jump in with both feet and go for it. Remember you have nothing to lose and everything to gain. I have learned a great deal from the whole process. The experience has really helped me grow and develop some more advanced communication skills. I also feel like I now have another channel that I can use to express my views and thoughts.

I am certainly ready to give my talk to a larger audience next month at NEAT.

This post was also published on my company's blog Scott Logic Blog

Wednesday, 3 August 2016

Exploring Data - Creating Reactive Web Apps with R and Shiny

Back in May I taught myself a programming language called R so that I could solve the problem of analysing large amounts of data collected as part of a survey of software testers.

After writing some R code to analyse the data from my survey and blogging about the findings I realised something. I was sharing my findings with other people mainly through static images, graphs and charts. I felt like there were a large number of combinations and queries that could be applied to the data and I wasn't able to document all of them. I was also aware that the target audience of the survey would likely be unable to write R code to explore the data themselves. I wanted to find a way for non-technical people to be able to explore the data created by my survey.

I decided I was going to solve this problem and the solution I chose was Shiny. Shiny is a web application framework for R that turns data analyses into interactive web applications. Shiny also lets you host your completed web app in the shinyapps.io cloud so it can be shared with other people.

I made a Shiny web app to explore a sample of software testers. It can currently be found at: https://testersurvey.shinyapps.io/shiny_testers/

The user is able to interact with check boxes and radio buttons to define a group of software testers. Data for the defined group is then displayed across multiple tabs. As the inputs are changed, the data displayed changes at the same time.

The web app makes it possible for a user to ask their own questions of the data. For example, defining the group of testers as those who responded "No" when asked if they were happy in their current job (setting option 4 to the "No" group) and looking at the 'Positive 2' tab reveals that only 41.7% of testers in this group feel that their judgement is trusted. Now if option 4 is changed to the "Yes" group, the percentage of testers who say they feel their judgement is trusted jumps up to 91.7%, a big increase.

While I have written a lot about the findings of the survey I conducted, I am hopeful that the creation of this Shiny web app will allow anyone interested in exploring the collected data to do so independently without the need for technical skills.

I want to take a different direction from my previous blog posts (where I have been discussing the data discovered) and instead share the process of creating a Shiny web app with R.

Getting started with R

I would highly recommend using RStudio to write and execute R code. RStudio is an open-source, integrated development environment (IDE) for R that can be downloaded for free. Once RStudio is downloaded and installed, R code can either be typed in at the console or saved in an R script file.

R works slightly differently to other programming languages I have used (Python & Golang). The main difference with R is that it is built around vectors. A vector is simply a sequence of data elements which share the same basic type, a bit like a one-dimensional array.

R has a special function called c() which can be used to make vectors.

The assignment operator is <- and this is used to assign values to names in R.

The following code snippets can either be typed line by line or saved as an R script and executed in RStudio.

The snippet below shows how to make a vector which contains numerical values 1,2,3,4 & 5, name this vector 'numbers' and print it to the console.
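A minimal version of that snippet, using c() and the assignment operator described above, could look like this:

numbers <- c(1, 2, 3, 4, 5)  # combine five numerical values into a vector named 'numbers'
print(numbers)               # print the vector to the console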

[1] 1 2 3 4 5

Note: R prefixes each line of output with the index of the first element shown on that line, which is why the output starts with [1].

In R, when a transformation is applied to a vector, it is applied to each component in the vector. So if numbers is transformed by adding 3, the addition takes place on every component of the vector.
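A minimal sketch of that transformation, reusing the numbers vector created above:

print(numbers + 3)  # the addition is applied to every element of the vector

Output: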
[1] 4 5 6 7 8

This vectorisation, where operations are automatically applied to each component in a vector, makes loop statements largely redundant in R. While it is possible to force R into loop statements, this is widely considered bad practice; it's always better to try to do things in a vectorised manner instead of forcing R into a loop.
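As a small illustrative sketch (the doubled vector here is a hypothetical example, not from the survey analysis), here is the same result written first as an explicit loop and then in the vectorised style:

# explicit loop: works, but is not idiomatic R
doubled <- numeric(length(numbers))
for (i in seq_along(numbers)) {
  doubled[i] <- numbers[i] * 2
}

# vectorised equivalent: the multiplication is applied to every element at once
doubled <- numbers * 2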

Data frames are created by combining vectors.

An important data structure for importing and analysing data in R is the data frame. A data frame is a rectangular structure which represents a table of data. It is essentially a list of vectors which are all of equal length.

The following R code snippet creates four vectors of equal length, combines them into a data frame named hurricanes and then prints hurricanes to the console.
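A sketch of that snippet, reconstructed from the console output shown below:

name <- c("Abigail", "Barney", "Clodagh", "Desmond")
date_of_impact <- as.Date(c("2015-11-12", "2015-11-17", "2015-11-29", "2015-12-04"))
highest_gust_mph <- c(84, 85, 97, 81)
power_outages <- c(20000, 26000, 16000, 46300)

# combine the four vectors of equal length into a data frame
hurricanes <- data.frame(name, date_of_impact, highest_gust_mph, power_outages)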

> hurricanes
name date_of_impact highest_gust_mph power_outages
1 Abigail 2015-11-12 84 20000
2 Barney 2015-11-17 85 26000
3 Clodagh 2015-11-29 97 16000
4 Desmond 2015-12-04 81 46300

Data can be selected within a data frame by referencing rows and columns. Typing hurricanes[1,2] on the console will return "2015-11-12". This is the data held in row 1, column 2 of the data frame.

It is also possible to select a row without a column value or a column without a row value. For example, hurricanes[,3] will return all the values in column 3, the highest gust in mph.

Queries can be applied to data using indexes.

The which() function can be used to make an index of values which match an expression.

The following code snippet uses which() to create an index called outages_index. This index is a vector which contains the row numbers of the data frame where column 4, power_outages, is greater than 25,000. The R script prints this index to the console. This index of row numbers is then applied to the data frame by assigning the data held only in those rows to a new variable named over_25000_outages. This over_25000_outages is then also printed to the console.

> outages_index <- which(hurricanes[,4] > 25000)
> outages_index
[1] 2 4
> over_25000_outages <- hurricanes[outages_index,]
> over_25000_outages
name date_of_impact highest_gust_mph power_outages
2 Barney 2015-11-17 85 26000
4 Desmond 2015-12-04 81 46300

Data can be imported into RStudio from .csv and .xlsx files and held in a data frame. R code can then be written to query and explore this data.
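As a sketch, assuming a hypothetical file called hurricanes.csv with the same columns as the data frame above:

weather_events <- read.csv("hurricanes.csv")  # read the .csv file into a data frame
head(weather_events)                          # inspect the first few rows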

If you are interested in learning more basic R functionality, the interactive lessons at Try R will let you practise writing real R code within a few minutes.

Creating Reactive data driven web applications

All Shiny apps consist of two basic components that interact with each other, a user-interface script (ui.R) and a server script (server.R).

The user interface script ui.R lists all the front end inputs that the user can manipulate, things like radio buttons, check boxes, drop down selection lists. It also contains the names of outputs which will be displayed and the location of inputs and outputs on the page.

The server script server.R is passed input values from ui.R, executes R code using those input values and generates outputs. Outputs can be anything from a text string to graphical plot of data.

Shiny stores all the input values in a list named input and the values of outputs in a list named output. As soon as a value in the input list changes, every output that depends on it is immediately recalculated.

This means as soon as the user changes a front end input, by selecting a check box or an item from a drop down list, all of the output elements on the page update to immediately reflect the user's selection.

This is very powerful because R code is executed on demand and the results are displayed to the user as soon as they are requested.

Continuing with our example hurricane data frame, let's take a look at how this data could be turned into a simple Shiny web application.

Here is the ui.R script.
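A minimal sketch of such a ui.R, assuming the four hurricane names from the data frame above as the drop-down choices:

library(shiny)

shinyUI(fluidPage(
  # drop down box for choosing a hurricane by name
  selectInput("name", "Choose a hurricane:",
              choices = c("Abigail", "Barney", "Clodagh", "Desmond")),
  hr(),                # horizontal rule
  htmlOutput("data")   # HTML output generated by server.R
))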

The ui.R script has been intentionally kept minimal. It consists of a select drop down box, a horizontal rule and some HTML output.

This is the corresponding server.R script, which sits in the same directory as ui.R.
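A sketch of a matching server.R, assuming the hurricanes data frame from earlier is created at the top of the script (or in a shared global.R file):

library(shiny)

shinyServer(function(input, output) {
  output$data <- renderUI({
    # map the selected name to a row number in the hurricanes data frame
    row <- switch(input$name,
                  "Abigail" = 1,
                  "Barney"  = 2,
                  "Clodagh" = 3,
                  "Desmond" = 4)
    # build a small block of HTML from the values held in that row
    HTML(paste("Date of impact:", hurricanes[row, 2], "<br/>",
               "Highest gust (mph):", hurricanes[row, 3], "<br/>",
               "Power outages:", hurricanes[row, 4]))
  })
})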

The server.R script receives input$name from ui.R and generates output$data, which ui.R displays. The output$data is generated by the renderUI() function. The renderUI() function was chosen because the output generated is HTML which contains line breaks. If the output was plain text without HTML then renderText() could have been used instead. Inside the renderUI() function, input$name is received from ui.R and a switch statement sets a variable called 'row' to the row number containing the data which matches the chosen name.

HTML is then generated using 'row' as an index on the hurricanes data frame. This HTML output is displayed by the ui.R script

The web application created by this code can be seen running at: https://testersurvey.shinyapps.io/shiny_demo/

Final thoughts

I found the Shiny framework highly effective and flexible as it enabled me to create a complex interface that interacted with and displayed my data. The input & output system for reactivity did the majority of the hard work, making it easy for me to concentrate on the queries and results I wanted to display. Development time was pretty quick and the handful of bugs found during testing (mostly edge cases) turned out to be solvable with some very straightforward changes.

I would highly recommend the detailed tutorials at shiny.rstudio.com/tutorial/ for anyone wishing to explore Shiny in more detail.

This post was also published on my company's blog Scott Logic Blog

Monday, 4 July 2016

A Snapshot of Software Testers in 2016

Back in May I carried out a survey of Software Testers and I have been continuing to analyse these survey results. My previous blog post about the survey was well received and focused on experience in the workplace. One of the objectives I set out to achieve with the survey was to examine the backgrounds and experiences which have led testers to be where they are today. I wrote another R script to help me interpret the survey results data. For transparency my R script that crunched the numbers and generated the charts used in this post can be found here on github.

Exploring the data captured by my survey may give us clues as to how and why people are entering software testing as a career. The results could help dispel myths around hiring testers such as what the average tester looks like on paper and if we share any similar traits. For some people, the results of this survey may open their eyes to any bias or prejudice they may hold.

There was one person who responded to my survey and stated they had never actually worked in testing. Either a very curious tester decided to test the form to find out what would happen if they said they did not work in testing (if you are reading this, you know who you are), or someone stumbled on the survey from social media without understanding what it actually was. Either way, the person that did not work in testing has been excluded from this analysis.

Testers in Industry

186 people who had held testing jobs were asked which industries they had tested in. The tree plot below shows the industries where testers work or have worked. The colours of the boxes map to the number of testers that have worked in that industry. Keep in mind that it is possible for a single tester to have worked in multiple industries (which is why the percentages on the tree plot add up to more than 100%).

Business was the most popular industry with 95 out of 186 testers having worked within Business at some point in their careers. I did however feel that the Business category was also by far the broadest category which could explain this result.

Things start to get a bit hard to read down in the bottom right corner as the very small boxes show industries where only 1 out of 186 testers surveyed have worked. So I made a REALLY BIG version of the tree plot which can be found here

For each industry, the lower the %, the harder it may be to find someone with experience of testing in that specific industry. For example the tree plot shows that it's harder to find someone with experience testing social media software than it is to find someone with experience of testing financial software.

But does specific industry experience actually matter? I wanted to see the % of testers that had tested in multiple industries vs testers which had only tested in one industry.

Given that such a large % of the sample have tested in multiple industries, this indicates that testing skills are highly transferable between industries and gaps in specific domain knowledge can be bridged. Roughly 4 out of every 5 testers have already adapted and moved between industries.

Testers in Education

I wanted to know about the education levels of testers. The sample of testers which responded to the survey had a wide range of education levels which can be seen on the bar plot below.

The most common level of education is a bachelor's degree, with 46.24% of testers achieving this level of education. Testers with PhDs are incredibly rare and make up only 1.08% of the sample. There are also some testers (5.8%) who have no formal qualifications at all.

Overall, I wanted to know the proportion of graduate to non-graduates working in testing.

In the sample of testers approximately 7 out of 10 testers had graduated university (a ratio of 7:3).

Some of the testers in the sample did not graduate. I wanted to know if these testers were early or late in their careers. I wanted to see if the industry currently had a trend to only hire graduates.

On the following plots, because the numbers of testers in the "less than a year" and "one to two years" groups were very small, I chose to combine them into a single 'less than two years' group.

The plot below compares number of years testing experience for Graduates and Non-graduates.

Once this data was plotted it revealed that the most experienced testers in the sample were not graduates. However the number of testers with 20+ years experience is very small. The fact that none of the testers with 20+ years experience have a degree may not be particularly significant due to the sample size being small. Non-graduate testers were dispersed throughout all the experience groups. It certainly seems that experience can replace a degree and there are testers out there which have had careers lasting over twenty years without graduating university.

Before I carried out my survey, I had previously held a belief that the vast majority of testers were falling into testing 'by accident' without actively choosing a testing career path while in education. This was one of the reasons I included the question 'While you were studying did you know you wanted to work in testing?'. The response to this question is shown below.

So 88.6% of testers did not actively want to work in software testing while they were studying. It seems that testing software is a fairly non-aspirational career choice among students. I was curious to see if this was a recent trend or if aspiration levels had always remained low. I did this by grouping the responses by number of years experience which produced the following plot.

None of the testers which had started their careers in the last two years aspired to be Software Testers while they were students. Between 2 to 20 years experience there were some people which had known they wanted a testing career while in education.

Testers After Education

I wanted to find out how many testers were entering the industry straight from education without any previous work experience. I also wanted to know if this was different for new testers compared to experienced testers. I created a stacked percentage bar plot to illustrate this. Testers were divided into groups based on number of years experience. Each group was then divided based on the percentage which had held a different job before testing and the percentage which had entered a testing job straight from education.

It appears that as the years have gone by, fewer testers have entered testing with no previous work experience. Only 20.83% of testers with less than 2 years experience had entered testing straight from education without holding a previous job. In the 10 - 20 year experience group, 31.11% had entered testing without holding a previous job. I think this shows that most companies are looking for people with at least some previous work experience for entry level testing jobs. A shortage of graduate testing opportunities may also be the reason that the percentage of testers entering testing straight from education is low.

Given that such a low percentage of people (11.4%) had known that they wanted to be a tester while they were a student I wanted to find out why people were applying for their first testing job. The survey presented a list of possible reasons for applying for first testing job and allowed respondents to select all that applied. The chart below shows these reasons in order of frequency selected.

Coming second from last, jobs and careers fairs don't seem to be an especially effective way of recruiting testers. Unemployment seems much more of a motivator to apply for a testing job.

Testers in Computing

I wanted to know if testers were studying computing, and also if computing knowledge was viewed as necessary to be a tester. Again, I grouped the testers by number of years experience and divided these groups based on the percentage of each group which had studied computing against the percentage which had not. This created the stacked percentage bar plot below. Keep in mind that the 20+ years experience group is very small, so the data for this group may not be a good representation of the whole population of testers with 20+ years experience.

The most experienced testers had all studied computing; however, in the group of the newest testers (less than 2 years experience), two out of every three testers (66.66%) had not studied computing or a computer related subject. In the last couple of years it looks like the requirement for testers to have studied computing has relaxed and computing knowledge is no longer a barrier to entering the software testing industry.

Testers in Training

My survey also investigated training courses. Each person completing the survey was asked if they had participated in any of the following training courses:

  • Rapid Software Testing
  • AST BBST Foundations
  • AST BBST Bug Advocacy
  • AST BBST Test Design
  • ISEB/ISTQB Foundation
  • ISEB/ISTQB Advanced
  • ISEB/ISTQB Expert

I started by assessing the percentage of testers which had attended at least one of the above training courses.

Given that 29.6% of testers surveyed did not have a degree, I wanted to see if testers were undertaking training courses to qualify for entry into a software testing career instead of graduating from university. The following bar plot shows numbers attending the above training courses grouped by education level.

The foundation group stands out as being very different to all the other groups. In this group 95.24% have attended one of the named training courses. This is significantly higher than in all of the other groups. The Masters degree group had 67.44% attending one of the named training courses and the Bachelors degree group had 59.3% attending one of the courses. Maybe graduates see themselves as not needing as much training. The PhD, None and GCSE groups are small, so the results for those groups may not be as accurate a representation of the whole population of testers as the results for some of the larger groups.

For each training course testers were asked if they had attended, and if they had, to rate the course for effectiveness.

Each training course was scored based on the total number of responses received.

  • Did not attend = 0 points
  • Very ineffective = 1 point
  • Ineffective = 2 points
  • Average = 3 points
  • Effective = 4 points
  • Very effective = 5 points

I named the total points the course rating. The course rating reflects how many people attended and also how effective those people thought the course was. A course attended by five people that all believed it was very ineffective would have the same course rating as a course attended by just one person that thought it was very effective.
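As a small sketch of that scoring in R, with a made-up vector of responses rather than the real survey data:

# points awarded for each possible response
points <- c("Did not attend" = 0, "Very ineffective" = 1, "Ineffective" = 2,
            "Average" = 3, "Effective" = 4, "Very effective" = 5)

# hypothetical responses for one course
responses <- c("Effective", "Very effective", "Did not attend", "Average")

course_rating <- sum(points[responses])  # 4 + 5 + 0 + 3 = 12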

The following bar plot shows course rating for all the courses named in my survey.

Rapid Software Testing (RST) was the highest rated with a score of 229. Second place was ISEB/ISTQB Foundation Level with a score of 220. Third place AST BBST Foundations scoring 132.

The RST course is regarded as belonging to the modern context-driven testing school, while the ISEB/ISTQB is an old-style traditional course. We still see many recruiters list ISEB/ISTQB Foundation as a necessary requirement for some jobs. I personally think this is the only reason that this particular qualification is popular.

Testers in Summary

Software Testers come from a huge variety of different backgrounds. We are a diverse group of people who took a variety of paths into our careers. There is no one thing that testers specifically do to become testers but a lot of us are drawn to the profession because we find it interesting. Most testers are graduates but quite a few of us don't have a degree. There isn't a single degree or training course that churns out good testers. Hands on experience is very important because once a tester has gained some experience, formal education no longer matters quite so much. There has certainly been a trend in the last couple of years to hire new testers which do not come from a computing background. Most testers move into testing from a non-testing job rather than from education. Levels of testers entering the profession straight from education in the last two years are the lowest they have ever been.

Whether you are a tester or not I hope you have enjoyed reading about how testers fit into the big software development picture. If any of the findings here have surprised you or challenged your existing beliefs then see this as a good thing. The gift of knowledge is best when it is shared. It makes me very happy that this project has allowed me give something back to the testing community that I am so proud to be part of.

This post was also published on my company's blog Scott Logic Blog

Sunday, 5 June 2016

A Study of Software Testers

Why is it really difficult to hire testers?

A few months ago I found myself involved in a number of discussions not about testing, but about testers.

All the conversations I had revolved around the following questions:

  • Why is it difficult to hire testers?
  • How do people actually become testers?
  • Does anyone actually choose to be a tester or do they just fall into testing by accident?
  • Is it possible to persuade computer science students to pick a testing career over development?

I had my own thoughts and opinions about the answers to these questions so I immediately drafted a blog post. It was all about testers and based on my own personal experiences and conversations with some of my friends that work in testing.

It got me thinking about testers in the workplace and the challenges we face every single day. I thought about the best and worst testing roles I had previously held. I considered how these jobs had changed my outlook on testing.

I considered publishing this draft blog post, but I felt something was missing. I had spoken to some tester friends on Facebook to get their opinions and compared their experiences to mine, but I still felt uneasy. I asked myself how I could make sweeping statements such as "most testers do not have a degree in computer science" based solely upon my own subjective personal experience and opinions. The simple answer was I couldn't. So with a heavy heart I didn't publish that post. Even so, I still found myself unable to stop thinking about these questions that I really wanted objective answers to.

I decided to do some research into testers and the challenges we face in the workplace to find some answers. I created a survey on Google Forms and distributed it to testers via Twitter, Slack chat, social media and word of mouth. I knew from the start that I wanted this project to benefit the testing community, so I also decided that I would make all the raw data available on Github. I wanted everything to be transparent so that other people could also potentially use the data for their own analysis.

It was very important that anyone responding to this survey could not be personally identified as sensitive questions about work were asked. I found that because the survey was conducted in an anonymous way, a lot of testers were actually very happy to help fill it in for me. But what was even more amazing was that testers also wanted answers and passed the link to the survey on to other testers they knew.

The survey ran from 4th May 2016 until 20th May 2016 and was completed by 187 people. The number of testers that responded honestly astonished me. I am very proud and thankful to be part of such a helpful and supportive community. My sincerest thanks go out to everyone that helped by taking part.

If you are interested in the raw data collected by my survey, this can be found on Github as survey_results_raw.csv

The whole time the survey was running and collecting data, I knew that I was going to need to carry out analysis and crunch the numbers to see if the data collected answered any questions. I studied Maths with Statistics at A-level so had some pretty basic knowledge of statistical analysis. I was however concerned about having to manipulate large amounts of data in Excel. This led me to investigate learning a programming language called R. The book I worked through and used as reference was Learning R by O'Reilly. The R script I wrote, named survey_analysis_1.R, is also available on Github to support my research and make my findings reproducible. I have included my commented R code as some people may find it interesting or useful to see how the charts and graphs were generated.

The actual survey results contained so much data that I could probably write volumes about testers in the workplace. Instead, I thought it was wiser to try to use the survey data collected to answer one question per blog post.

The first question that I specifically wanted to tackle was "Why is it so difficult to hire testers?"

Our survey says: not all testing jobs are equal

In the survey I presented testers with 12 positive statements about testing in their workplace, such as "When I make a decision, I feel my judgement is trusted.", and 12 negative statements, such as "I am usually excluded when decisions are made." I asked respondents to answer true or false to this set of 24 questions. One point was scored for answering true to a positive question and one point subtracted for answering true to a negative question. This resulted in each tester achieving a score ranging from -12 to +12. I named this score a tester's "Workplace Happiness Index". To score +12 a tester would need to say all the positive questions were true and none of the negative questions were true.
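As a sketch of that scoring for a single respondent in R, using made-up TRUE/FALSE answers rather than real survey responses:

# TRUE means the respondent said the statement was true of their workplace
positive <- c(TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE)        # 12 positive statements
negative <- c(FALSE, FALSE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, FALSE)  # 12 negative statements

# one point for each true positive statement, minus one for each true negative statement
happiness_index <- sum(positive) - sum(negative)  # 9 - 3 = 6, on a scale from -12 to +12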

The frequency histogram below shows the Workplace Happiness Index of 181 testers.

  • The minimum Workplace Happiness Index score is -8 and the maximum is +12. This certainly proves that not all testing jobs are equal.
  • Some have many positives and few negatives, some have many negatives with few positives. The mean (average) score for a tester's Workplace Happiness Index is 4.6 so the average testing job has more positive traits than negative.
  • Out of all the testers currently working in testing, only 5.5% had jobs where all the positive statements were true and all of the negative statements were false i.e. scored +12.

One of the questions the survey asked was "Are you happy in your current testing job?".

I wanted to compare whether testers said they were happy (or not) against their Workplace Happiness Index. I wanted to know if there were happy testers in jobs where lots of negative things happened. Or if there were lots of miserable testers in jobs where lots of positive things happened.

I used a box plot to visualise the relationship between a tester's happiness and their Workplace Happiness Index. A box plot is a way to show patterns of responses for a group. The groups in my box plot are Happy and Not Happy testers. The thick black lines inside the coloured boxes show the median or mid point of the data. The coloured boxes represent the middle 50% of scores. The circles show outliers, which are observations that lie an abnormal distance from all the other values in the group. For the Happy tester group there is one outlier: a single tester that said they were happy despite their workplace ranking -7 on the Workplace Happiness Index. There were also two testers which said they were not happy despite their workplaces scoring +10 and +11 on the Workplace Happiness Index.

The box plot does show a strong connection between how positive or negative a tester's workplace is and whether the testers working there say they are happy or not.

  • The median Workplace Happiness Index for a tester that says they are happy is 7
  • The median Workplace Happiness Index for a tester that says they are not happy is -1

One theory I had in my original draft blog post was that it was difficult to hire testers because testers were very unlikely to change jobs for a number of reasons. I thought some testers would be reluctant to change job in case they found themselves jumping out of the frying pan into the fire by moving from a workplace with positive traits to one with more negative traits.

I needed to know if the Workplace Happiness Index had an influence on whether or not a tester was likely to change job. Would testers only look for new testing jobs if they were currently working in a negative workplace? Would testers working in positive workplaces be unlikely to leave?

I divided testers into groups based on their likelihood to look for a new testing job and measured these groups against the workplace happiness index. The box plot below shows this data.

There definitely was a pattern between negativity in the workplace and testers looking for new testing jobs:

On the Workplace Happiness Index from -12 to +12 The median values were as follows:

  • For a tester very likely to look for a new testing job, 1
  • For a tester likely to look for a new testing job, 2
  • For a tester not sure about looking for a new testing job 5.5
  • For a tester unlikely to look for a new testing job, 8
  • For a tester very unlikely to look for a new testing job 10

So let's summarise the data that has been crunched so far:

  • Approximately 3 out of every 4 testers say they are happy.
  • Happy testers are, for the most part, unlikely to look for new testing jobs.
  • It's harder to hire testers with experience because only 1 in 4 testers are not happy in their current testing job, and it is those testers who are likely to look for a new one.

I wanted to know about the new testers that were coming into the profession to fill the jobs that experienced testers were not moving into. The survey asked "How much testing Experience do you have" and the responses to this question have been plotted below.

The graph above shows that there is certainly a lack of new testers within the profession.

I chose to split the data on experience level at 2 years experience. I did this because many testing job adverts in the UK specifically ask for 2 years previous experience. The pie chart below compares numbers of testers with more than 2 years experience with testers that have not yet reached 2 years experience.

I found these numbers a little disturbing. Perhaps the "you must have two years experience" barrier imposed in most job adverts is very difficult to overcome and currently serves as a gate blocking new people from a testing career path. It feels like as an industry we are not providing enough opportunities for people to move into testing. What will happen when all the testers that have been testing 10+ years start to retire? I can only see it becoming increasingly more difficult to hire testers in the future if the number of people entering the profession does not increase.

I feel very proud because I can honestly say that my current employer actively recruits graduates with no previous experience and provides training for them. This training is not just for the developer path; there is also a graduate software testing training path too. Another initiative at my company which started this year was to launch a paid 12-week software testing internship, designed to give a "taste" of software testing.

More companies need to be aware that when the testing jobs they seek to fill are not ranked highly on the workplace happiness index, they simply won't be able to hoover up the best existing talent in the industry. Employers which are struggling to hire testers will need to either start growing their own testing talent or improve testing in their workplace to the level where working there is better than working elsewhere. I certainly think that providing in-house training and mentoring is one way the difficulty of hiring testers can be eased.

Retaining testers starts to become as important as hiring new testers

A company can mitigate against the fact that it's hard to hire testers by making testing in their workplace a more positive experience. The survey shows that once the positivity in a workplace reaches scores of 8+ on the Workplace Happiness Index, testers become unlikely or very unlikely to leave.

The following actions contribute to a higher workplace happiness index:

  • Do not place unachievable expectations on your testers
  • Make your testers feel like they are part of the team
  • Include your testers when decisions are made
  • Get people outside of the testing department to start caring about the quality of the project
  • Implement automated testing
  • Do not measure testers using metrics (e.g. bug count or number of production rollbacks)
  • Give your testers access to the tools and resources they need
  • Value the technical skills your testers possess
  • Share important information with your testers
  • Let your testers work collaboratively with others
  • Address technical debt and don't let it build up
  • Provide opportunities for your testers to progress and take on more responsibility
  • Appreciate the role testing plays in development
  • Take steps to reduce the number of employees (not just in testing) leaving the company
  • Trust the judgement of your testers
  • Stabilise foundations, avoid building on broken production code, broken infrastructure and broken services
  • Stop forcing testers to work unpaid hours
  • Start viewing testing work with equal importance as development work
  • Stop forcing testers to sign disclaimers stating code "contains no bugs"
  • Support your testers to attend training courses, workshops or conferences
  • Allow your testers to feel like they are making a positive difference
  • Educate management to understand what testing is and is not
  • Permit testers more time to test, this may mean testing needs to start sooner
  • Do not blame testers for missed bugs

If a company wants to be above average in terms of positivity and happiness, it would need to apply at least 17 of the 24 actions above to its workplace (based on the mean Workplace Happiness Index of 4.6).

So far I feel like I have only scratched the surface with this data set and I intend to continue exploring this data in future posts.

This post was also published on my company's blog Scott Logic Blog

Thursday, 10 March 2016

The Lonely Tester's Survival Guide - How to stay fresh, focused and super effective when testing alone

Modern software testing has become agile

Anyone that cares about making good software has moved away from the old waterfall ways of *"throw it at QA when it's finished"*. One recent trend is to embed a single skilled tester within a small development team to test early, test often and add as much value as they possibly can.

In the old days, before test automation was as common as it is today, large numbers of human testers were required to carry out large quantities of laborious repetitive checking. Fortunately, in these modern times, test automation takes care of simple, boring, repetitive checking. This has significantly reduced the need for large numbers of human testers.

So as testing has evolved, the test team has also evolved. Traditional test teams were large; now they are much smaller. It's common to have only a single tester working within a small group of developers. At companies with multiple testers, it is likely that each tester will be working in isolation from the others. Most companies put different testers on different products or projects, and it's a rarity to have two testers testing exactly the same thing.

Now we test alone

When you are the only dedicated tester within a small development team it's easy to start feeling overwhelmed. The responsibility of testing everything and establishing a good level of confidence that it 'works' is on your plate. You may have pressurised people trying to shift some of the pressure that's on them onto you. It's essential to get as many people as possible involved with testing efforts and create a culture within the team where everyone cares about quality.

But even when everyone does care about quality and untested code is not thrown in your general direction, things can still get really tough. You will be staring at the same piece of software day in, day out, constantly trying to generate and execute test ideas which attempt to cover as many paths through the software as possible. Assumptions can start to creep in, which is very dangerous. If the save button worked yesterday, is it less urgent to test it again today?

The lonely tester is limited to their own ideas and strategies. Every software tester will test in a different way, with different ideas and different reasoning for those ideas. The lonely tester won't naturally experience any opportunities to learn from other testers. The lonely tester will be missing out on the kind of learning that testers working co-operatively experience every single day. Once a lonely tester becomes familiar with the software they are testing, they will test it in a completely different way to a tester who is unfamiliar with it.

I used to work in very large teams, frequently working with at least six other testers. Then in 2014 I became a lonely tester. I've learned a lot since making the switch from co-operative testing to testing alone. This is my survival guide, written especially for all the other lonely testers out there.

Create as many opportunities as you can to interact with as many other testers as possible

Take charge of your situation and be proactive. If there are testing meet-ups or conferences near you, go to them. Meet other testers and hear what they have to say. If you can't attend in person, watch some YouTube videos of respected software testers talking at conferences. Sign up for Twitter and follow some other software testers. Search for some blogs on software testing and read them. Start forming your own opinions about what other testers have to say.

You might agree with them, you might disagree with them. It doesn't matter. It's the exposure to other testers' thoughts, experiences and ideas which is valuable. The lonely tester will be lacking this kind of exposure. Slowly you will find that things you have heard about testing will help spark your own ideas about how to test. You can even borrow other people's ideas and see if they work for you.

Join forces with another (possibly lonely) tester

Recently an opportunity came along for me to be less lonely. A new project was due to start which had some similarities to a project I had been working on. I was asked to share some knowledge with the tester due to start work on the new project. So I set aside an hour to team up and do some pair testing.

I had done pair testing before and knew it would be useful for both of us, but the experience was still remarkable.

It's already known that there are massive advantages for both parties in pairing an unfamiliar tester with a familiar tester. We have all heard the mantra *"fresh eyes find failure"* (as made famous by *Lessons Learned in Software Testing*). The unfamiliar tester won't be making any assumptions about the system or product and will be more likely to interact with it in a different way to the familiar tester. The familiar and the unfamiliar will both be looking at the software from different angles, from different vantage points. Working as a pair helps keep ideas fresh and stops testing from becoming stale and repetitive.

I was familiar with the software we were testing and the other tester was completely unfamiliar with it. We worked together sharing a single keyboard and mouse. I let the unfamiliar tester take control of the software first while I observed, explained and took notes.

I described out loud how the software worked, the purpose of each input box and how they linked together as a whole. As I was describing, the other tester used the keyboard and mouse to manipulate the inputs and started fiddling with the application. The software fell under intense focus and lots of scrutiny was applied. Lots of questions were asked out loud by both of us:

"Why is it doing that?"

"Is that what you would expect to see?"

"Try doing this instead, is it the same as before?"

We found a few inconsistencies which warranted further investigation. After 30 minutes, we swapped around and I took the keyboard and mouse. Then, and I'm not entirely sure why, I said...

"Let me show you what used to be broken."

And I started trying to demonstrate some of the previous issues we had encountered, which I knew we had already fixed.

Guess what happened then? The application behaved in an unexpected way and we found a valid bug. It was a Hi-5 moment. The issue was new; it had only been introduced in the last couple of weeks. I knew straight away that I hadn't seen this bug because I was suffering from some kind of perceptual blindness. My subconscious was making assumptions for me. It had been overlooking areas I had recently tested and observed working correctly.

I learned a lot during that hour. I learned that no matter how good the lonely tester is at testing, a second opinion and someone to bounce ideas off is essential. Afterwards we both agreed that pair testing was an activity which we should continue doing.

As a lonely tester you may be able to negotiate an exchange of testing with some of the other lonely testers within your organisation. At an agreed date and time, set aside a window to spend time with another tester. Allow them to come and test by your side. In exchange, agree that you will do the same and sit and test alongside them.

Every tester tests in different ways, and through pairing with others we can learn different approaches to testing. Through these sessions we can learn about the existence of things we were previously unaware of, such as tools, tips and tricks. Having someone to discuss ideas with will help keep your testing fresh, and you will learn about different testing styles.

Share your experiences, ideas and problems with others.

The lonely tester receives less feedback than testers working in small co-operative teams. When a lonely tester tests, the testing ideas happen in their head and they apply them to the software. This process is completely invisible to other people. No-one can question, critique or give any kind of feedback about invisible testing. This is especially true if the lonely tester is weak at recording or documenting the testing they have performed. So the lonely tester needs to make sure they are communicating well with others, at all levels.

If you are lucky enough to sit with the developers on your team, talk to them about the testing you are doing. This might be as simple as a casual conversation about what you are testing, what you have observed or whether you have got stuck and aren't sure how to test something specific. Developers genuinely care about testability. They want to make it as easy as possible for you to test their code and they might have some ideas that can help. Suffering in silence is the worst thing a lonely tester can do. Don't sit on your problems, talk about them.

If there are other lonely testers in your office, engage them. Talk about what, when, why, where and how you are testing around the water cooler. Share stories about testing on forums, tweet about testing or start a blog and write about your testing experiences. When you are a lonely tester, sharing your experiences is essential so you don't end up facing really hard problems alone and can get helpful feedback to react to.

Above all else, the most important piece of advice for the lonely tester is this...

If you don't want to be alone any more, you don't have to be.

This post was also published on my company's blog Scott Logic Blog

Monday, 8 February 2016

Data Mocking - A way to test the untestable

Some of the biggest challenges when testing software involve getting the software into very specific states. You want to test that the new error message works, but this message is only shown when something on the back-end breaks, and the back-end has never broken before because it always "just works". Maybe the software you have to test is powered by other people's data, data you have no direct control over, and you really need to manipulate this data in order to perform your tests.

Imagine you are testing a piece of software which displays the names of local businesses as values in a drop-down list.

This software might look something like this...

There are only three items on this list at the moment, but this may not always be the case.

There is currently no option within the software itself to change or manipulate the text displayed on the list, because the software retrieves this list of data from someone else's API. We have no control over the data returned by the API; our software under test just displays it.

You have been asked to test the drop-down box. What would you do?

Well, most testers would start by looking at it. It appears to work correctly. Items can be selected, the Submit button can be clicked. But how would this drop-down behave with a different set of data behind it? We don't know (yet), but it is possible that it could appear or behave differently.

One solution which would allow more scenarios to be tested would be to force the drop-down list to use some fake made-up data. This approach is commonly referred to as testing with mock data or simply "mocking".

Mock data is fake data which is artificially inserted into a piece of software. As with most things, there are both advantages and disadvantages to doing this.

One of the big advantages of mock data is that it makes it possible to simulate errors and circumstances that would otherwise be very difficult to create in a real world environment. A disadvantage, however, is that without a good understanding of the software it is possible to manipulate data in ways which would never actually happen in the real world.

Let me give an example. Suppose an API is hard-coded to always respond with 0, 1 or 2 as a status code, and you decide to mock this API response to return "fish". As soon as the software asks "what's the status?" and gets the reply "fish", it might explode because it wasn't expecting "fish". Although this explosion would be bad, it might not be a really big problem, because it was your mock data that caused the fish explosion and "fish" is really not a valid status code. You could argue that in a real world environment this would never happen (famous last words).

Mocking is essentially simulating the behaviour of real data in controlled ways. So in order to use mock data effectively, it is essential to have a good understanding of the software under test and more importantly how it uses its data.

To start using mock data, the software under test needs to be "tricked" into replacing real data with fake data. I'm sure there are many ways to do this, but one way I have seen this successfully achieved is through the addition of a configuration file. This configuration file contains a list of keys and values: the keys are paths to various API endpoints and the values are the names of files containing fake API responses. The application code checks the config file and, if it contains any fake responses, uses those instead of the real responses. A rough sketch of this idea is shown below.
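This is only a minimal sketch; the config file name, endpoint path, base URL and helper function are all hypothetical and not taken from any real application:

// mock-config.json (hypothetical) - maps API paths to files containing fake responses
// { "/api/local-businesses": "mocks/local-businesses.json" }

import { readFileSync } from "fs";

// Load the mock configuration once at start-up.
const mockConfig: Record<string, string> = JSON.parse(readFileSync("mock-config.json", "utf8"));

// Fetch data for an API path, substituting a mock file when one is configured.
async function fetchWithMocks(path: string): Promise<unknown> {
  const mockFile = mockConfig[path];
  if (mockFile) {
    // A mock exists for this endpoint, so return the fake response instead.
    return JSON.parse(readFileSync(mockFile, "utf8"));
  }
  // No mock configured, so fall back to the real API.
  const response = await fetch("https://api.example.com" + path);
  return response.json();
}

// Usage: const businesses = await fetchWithMocks("/api/local-businesses");

With something like this in place, removing the entry from the config file (or the config file itself) puts the application straight back onto real data.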

Collecting data to make mocks from is a fairly straightforward process if the application can be opened inside a browser. Opening the browser developer tools (F12), inspecting the Network tab and then interacting with the software (e.g. changing the value in the drop-down box) will usually reveal the API requests made and display the associated responses received.

Let's continue with the example of our software which displays the names of local businesses as values in a drop-down list. To keep things simple I'm going to say that this software uses a REST API with the following request and response.

A request URL might be:
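https://api.example.com/local-businesses (a hypothetical example endpoint; the real URL isn't important here)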


And a response might be:

[{"id":"0000001","name":"Tidy Town Taxis" },
{"id":"0000002","name":"Paul's Popular Pizzeria" },
{"id":"0000003","name":"Costalotta Coffee Shop" }]

So to set up some mock data for this app, we could copy and paste the response into a file and tell the software to use that data instead of the data at the real API endpoint.

And this is where the fun begins. Once the software has been tricked into using mock data we have direct control over the data used by our application and we can start manipulating it.

If we wanted to test what happens when the list has many values, we could just change the mock data by adding more values to the file so it looks like this...

[{"id":"0000001","name":"Tidy Town Taxis" },
{"id":"0000002","name":"Paul's Popular Pizzeria" },
{"id":"0000003","name":"Costalotta Coffee Shop" },
{"id":"0000004","name":"Hey guess what, this is fake data" },
{"id":"0000005","name":"And this is also fake data" },
{"id":"0000006","name":"This data was made up" },
{"id":"0000007","name":"But the app thinks it's real" }]

Once this new mock is fed back into the application, it might look something like this...

When there are 7 items on the list, the list now covers the Submit button. We may also find that application performance is degraded when a larger number of items is displayed.

It is now possible to test lots of new ideas. These could be things like...

  • Many values
  • Duplicate values
  • Long strings
  • Short strings
  • Accented characters
  • Asian characters
  • Special characters
  • Alpha-numerical values
  • Numerical values
  • Negative numerical values
  • Blank values
  • Values with leading spaces
  • Values with multiple spaces
  • Reserved words "NULL", "False" etc.
  • Code strings
  • Comment flags e.g. "//"
  • Profanity
  • False positive profanity e.g. "Scunthorpe"

Test ideas are now only limited by your imagination, not the application!
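As a small illustration (a sketch only; the file path and these example values are made up), a throwaway script can generate a mock file covering several of these ideas at once:

import { writeFileSync } from "fs";

// Hypothetical edge-case business names covering a few of the ideas above.
const awkwardNames = [
  "A".repeat(500),               // long string
  "x",                           // short string
  "Café Zürich Crème Brûlée",    // accented characters
  "日本食堂",                      // Asian characters
  "Bob's <Bar> & \"Grill\"",     // special characters
  "NULL",                        // reserved word
  "  Leading Spaces Ltd",        // leading spaces
  "",                            // blank value
  "// comment flag",             // comment flag
  "Scunthorpe Screwdrivers",     // false-positive profanity
];

// Build mock entries in the same shape as the real API response.
const mockData = awkwardNames.map((name, i) => ({
  id: String(i + 1).padStart(7, "0"),
  name,
}));

writeFileSync("mocks/local-businesses.json", JSON.stringify(mockData, null, 2));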

Mock data can also be used to see how an application handles API responses which are not "200 OK". We can start testing error states by tricking the software into thinking the API endpoint returned an error when it didn't. Testing error handling becomes especially important when the software reacts in different ways to the different types of errors which can occur.

Imagine an application that handles each of the following error codes in a different way:

  • 400 - Bad Request
  • 401 - Unauthorised
  • 404 - Not Found
  • 408 - Request Timeout
  • 500 - Internal Server Error
  • 503 - Service Unavailable
  • 504 - Gateway timeout

Without mock data it would be very difficult to force each of the above error states manually. Testing error handling is where mock data really shines and becomes a very powerful tool.
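Sketching how this could work with the hypothetical configuration approach from earlier (the status field, file name and helper are assumptions, not any real application's format), a mock file could describe the status code to simulate, and the fetch wrapper could build a failed response from it:

// mocks/local-businesses-error.json (hypothetical):
// { "status": 503, "body": { "message": "Service Unavailable" } }

import { readFileSync } from "fs";

interface MockResponse {
  status: number;  // HTTP status code to simulate, e.g. 503
  body: unknown;   // response body to return alongside it
}

// Build a simulated Response from a mock file, so the application's error
// handling runs exactly as it would for a real failed request.
function mockResponseFromFile(mockFile: string): Response {
  const mock: MockResponse = JSON.parse(readFileSync(mockFile, "utf8"));
  return new Response(JSON.stringify(mock.body), {
    status: mock.status,
    headers: { "Content-Type": "application/json" },
  });
}

Swapping the file referenced in the config is then all it takes to walk the application through each of the error codes listed above.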

If you're looking for ways to improve the 'testability' of applications that you are building, consider adding a way to launch the application using mock data. You might be surprised how creative testers can be with data and you could start to spot issues that otherwise would have been missed.

This post was also published on my company's blog Scott Logic Blog