Trevor Wolter

07/15/2011

Prioritizing Defect Reports

Trevor Wolter // in Technology

Most defect management systems have a field called Priority. It's usually a simple field with three to five possible settings that indicate how important an issue is. However, like Lady Gaga's wardrobe, the logic involved in choosing the best setting can be quite complicated.

I like to think of there being four components to determining priority: Severity, Exposure, Business Need, and Timeframe. Evaluating each component in sequence is a good process for reaching a logical conclusion. However, each issue is unique and should also be weighed as a whole.

Severity

Severity addresses the prevalence and penetration of the issue. Prevalence is how widespread the issue is—perhaps it occurs in multiple modules in the application. Penetration is how much the issue hurts—can the user easily work around it or does it cause a system failure?

Exposure

Exposure indicates who can see the issue. Is it restricted to just the internal development environment? Is the client exposed to the issue? Are end users able to experience the issue?

Business Need

Business need is an evaluation of the effect of the issue upon productivity, relationships, and reputations. Does the issue cause lost productivity for one person or many or the client? Does the issue pose a potential for damage to the developer-client relationship; perhaps through missed deadlines? Does the issue pose a potential for damage to the client-end user relationship through the exposure of the issue to the end user?

Timeframe

Considering Severity, Exposure, and Business Need, Timeframe is how soon the gears should start turning to resolve the issue. A good rule of thumb is to establish a timeframe that correlates directly with each priority. In the context of pre-production environment, think of natural milestones in the development lifecycle to establish timelines such as in time for a build, within an iteration, or before release to production.
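To make this concrete, here's a little sketch in Python of how the first three components might roll up into a priority, with Timeframe following from the result. The scales, thresholds, and labels are purely illustrative assumptions on my part; your defect tracker and team will have their own.

```python
# Illustrative only: rate each component 1 (low) to 3 (high) and fold
# them into a priority. The thresholds and labels are made up for the
# example, not taken from any particular defect management system.

def suggest_priority(severity, exposure, business_need):
    """Each input is an integer from 1 (low) to 3 (high)."""
    score = severity + exposure + business_need  # ranges 3..9
    if score >= 8:
        return "Critical"  # Timeframe: fix before the next build
    elif score >= 6:
        return "High"      # Timeframe: resolve within the iteration
    elif score >= 4:
        return "Medium"    # Timeframe: resolve before release to production
    return "Low"           # Timeframe: fix as time permits

print(suggest_priority(severity=3, exposure=3, business_need=2))  # Critical
```

A formula like this is only a starting point; as noted above, each issue still deserves a look at its sum total before the field gets set.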

 


Trevor Wolter

03/29/2011

Who is your Tester?

Trevor Wolter // in Technology

When you need someone to write a program, who does it? A developer that knows how to program.

When you need someone to manage a project, who does it? A person who is well-organized, shows leadership, is able to motivate, and capable of coordinating all of the different pieces.

When you need someone to design a logo, who does it? A graphic designer that understands visual principles and is able to transform an abstract concept into something visual.

When you need someone to test a product, who does it? Any one of the above when they have some free time. They’ve all worked on the project so now they should be able to make sure it works, right? Wrong.

Testing is a highly complex process, just like development, management, and design. You wouldn’t think of having someone with no knowledge of a programming language write your program. The same should be true for testing: someone with no knowledge of testing should not test. So when you need someone to test a product, who does it? This is where it gets confusing.

There is no holy grail of how to become a software tester. A tester needs to be logical, inquisitive, fearless, daring, rebellious, thorough, and organized, to list a few qualities. This sounds a lot like the qualities you might look for in a developer or a project manager or a graphic designer. I told you it would get confusing.

You see, every role on a project performs some variation of problem solving. The difference is that project managers, developers, and designers all know what their problem is (deliver a website, write some code for a widget, design a logo) and the challenge is to find the best solution. On the other hand, testers do not know what their problem is. It isn’t possible to make a list of all of the defects before they are found. Based on past experience, you can describe types of defects, scenarios under which defects commonly occur, and various methods for finding defects.

Random non-tester team members often fail miserably at the job because they don’t know how to find the unknown. Finding the unknown requires going off script and it is not a linear process. It’s almost paradoxical because a good tester is simultaneously logical and irrational.

It’s not that a developer, project manager, or designer can’t be a good tester. They can; I’ve witnessed several examples firsthand. But that’s not what they’ve been trained to do. Software testers have a unique role and perspective that brings great value to your team, value that can easily be lost when someone else “fills in.” Is that a risk you really want to take?

 


Trevor Wolter

02/18/2011

Ode to Software Testing

Trevor Wolter // in Vodori Culture

Spring cannot come soon enough as far as I’m concerned. While it’s been a relatively mild winter, I’m looking forward to warmer temperatures, longer days, and more sunshine. In spite of the winter doldrums, I’ve decided to lighten things up on the Vodori Blog and do something never before seen here. I’ve written a little song about the awesomeness of software testing! I’ve kept the novice singer in mind too. The melody is from Beethoven’s Ninth Symphony, Ode to Joy; so feel free to sing it yourself. I’m thinking about having the Vodori Choir prepare it. Perhaps in a few weeks there will be some video to go along with this.

Software testing is the greatest
discipline there ever was;
keeping programs working smoothly,
free of bugs and other flaws.
Testing is not about blaming
but rather happy end users.
Who else gets to look for problems?
Code that works means no losers.

Software testing is a skilled trade;
many do not make the mark.
Thorough, focused, always curious,
gentle soul but with a bark;
not afraid to raise a red flag
when the quality does lack;
questioning and always pushing
to make sure it’s on the track.

Software testing happens more ways
than just using a test script.
Exploration of the program
helps keep things from being skipped.
Learning all we possibly can
about the program under test.
We do all that’s necessary
to assure it’s at its best.

Software testing never ends though
because programs are so huge.
There are countless possible ways
that a bug can hide its kluge.
So we plan and prioritize
to cover all that might impede.
But we strive to be efficient
while we balance time and need.

Software testers clear the bases
loaded by the devs before;
hitting home a long-worked project
bringing in the winning score.
But we know that we are a part
of a larger family.
Nonetheless you’re always welcome
to thank us with jubilee.

 


Trevor Wolter

12/17/2010

Top Secret Testing Process Revealed

Trevor Wolter // in Technology

Christmas is coming early at Vodori! I am revealing my top secret personal approach to testing. I’ve blogged about testing theory but haven’t shared my specific methodology. This isn’t a “How To” nor is it a list of what I’m looking for. This is my immersion process for software testing.

I test websites, and that specifically influences my approach. To date, the majority of sites I work with are informational rather than ecommerce or social media, which is also important when considering the forthcoming list. When I’m given a new site to test, my strategy usually has ten components. These components tend to mix together and overlap.

Testing Process for Informational Websites

  1. If I have access to them, I look through requirements documentation, which I keep handy to reference later on.
  2. I crawl through the site and look at every page I can access from the navigation tree. This crawl isn’t so much for testing as it is for learning. I want to find out how the content is arranged and discover all the unique features baked into the user experience.
  3. I crawl through the site again, this time using in-text links, because oftentimes there are pages of the site that are intentionally not shown in the navigation tree. My focus switches to testing and applying what I’ve learned from the first crawl and the requirements. I also start incorporating elements of components 4-9.
  4. Check each and every header and footer link.
  5. Try out the page tools and utilities.
  6. Tinker with whatever search functionality there is.
  7. All-out test the Membership/Restricted Access section. I sign up/sign in during my initial crawl, but I reserve thorough testing of the registration and login processes until this step.
  8. Test other forms – Contact Us, Send More Information, etc.
  9. Try to break all the fun little features on the site (like calculators, flash rotators, etc.). Again, generally I “poke” them as I see them during my initial crawl to make sure they work. In this step, I look at them more intensely, trying to break them.
  10. The last thing I do is look at any customizations made in our content management system, Pepper. It’s not that verifying this functionality is less important. Rather, these customizations get tested practically when the content loaders USE them and this happens before my principal involvement.
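As an aside, the spirit of steps 2 and 3 can be sketched in code. This toy crawler runs against a made-up in-memory "site" so it works without a network (swap in real HTTP fetching for actual use), and it shows why the second crawl matters: following in-text links reaches pages the navigation tree never exposes.

```python
# Toy crawler over a hypothetical in-memory site. The /hidden page is
# linked only from body text, not from the nav -- exactly the kind of
# page a nav-only crawl (step 2) would miss and a link crawl (step 3)
# would find.
from html.parser import HTMLParser

SITE = {
    "/": '<nav><a href="/about">About</a></nav><p><a href="/hidden">fine print</a></p>',
    "/about": '<p>About us</p>',
    "/hidden": '<p>Not in the nav tree!</p>',
}

class LinkCollector(HTMLParser):
    """Collects every href from the anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

def crawl(start="/"):
    seen, queue = set(), [start]
    while queue:
        page = queue.pop()
        if page in seen or page not in SITE:
            continue
        seen.add(page)
        parser = LinkCollector()
        parser.feed(SITE[page])
        queue.extend(parser.links)
    return sorted(seen)

print(crawl())  # ['/', '/about', '/hidden']
```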

There you have it. It’s not rocket science, but it has worked well for me. Please feel free to share your process in the comments. I’m always eager to learn what other people are doing in order to help improve my own process.

 


Trevor Wolter

10/22/2010

Props to Rapid Reporter

Trevor Wolter // in Technology

I have recently come across* a nifty little tool called Rapid Reporter, for which I absolutely must give a shout out. Rapid Reporter, created by Shmuel Gershon, is an easy to use interface for taking notes while you test. If you’re not a tester, Rapid Reporter could easily be adapted for taking notes about your work throughout the day. The data is saved to a comma-separated values (CSV) file which, if you open it in Excel, can be easily sorted, filtered, and formatted however you like.

[Screenshot: Rapid Reporter]

The Rapid Reporter tool itself is basically a little text entry field that floats on top of your desktop. You can go to Gershon’s website to read the full instructions about how it works. There are nine default prompts that indicate the type of note you’re entering: Name, Charter, Setup, Note, Test, Check, Bug, Question, and Next Time. After you enter the first two, you can cycle through the others to select the appropriate prompt as needed. Also, the prompts other than Name and Charter can be easily customized. Just click next to the prompt, type your note and press Enter. The note is saved with a time stamp in the CSV.
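Since the output is just CSV, post-session triage is easy to script. Here's a small Python sketch that filters a session file down to its Bug notes. The column layout I assume here (timestamp, type, note) and the sample rows are my own invention for illustration, so check your actual output file first.

```python
# Hypothetical Rapid Reporter-style session notes; the real file's
# columns may differ. We filter out just the Bug entries for follow-up.
import csv
import io

raw = """2010-10-22 09:15:03,Note,Search page loads
2010-10-22 09:17:41,Bug,Pagination skips page 3
2010-10-22 09:20:12,Question,Should empty search show a message?
2010-10-22 09:23:55,Bug,Login form accepts blank password
"""

bugs = [row for row in csv.reader(io.StringIO(raw)) if row[1] == "Bug"]
for timestamp, kind, note in bugs:
    print(f"{timestamp}: {note}")
```

In practice you'd point `csv.reader` at the saved session file instead of an in-memory string, but the sorting and filtering is the same idea as doing it in Excel.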

Additional handy Rapid Reporter features

Clicking the S button will take a screenshot, while clicking the N button opens a window to add an extended rich text note.

[Screenshot: Rapid Reporter]

Rapid Reporter is always on top of whatever programs you have open which makes it easy to find. If you’re pressed for screen real estate, the transparency can be adjusted to make it less obtrusive.   

One limitation, though it doesn’t bother me: Rapid Reporter is built on the .NET 3.5 framework and therefore won’t run on a Mac.

The benefits of Rapid Reporter

I’ve used Rapid Reporter on several test cycles and have been quite pleased with how convenient and easy it makes keeping test notes. I never have to hunt around for it. Recording fragile information, like URLs, is much easier than doing so on paper. After testing, it’s simple to review the data, annotate it for defect reports, or distill the information to pass on to the project manager.

*Special thanks to The Tester’s Headache, uTest, and especially James Bach for the tip!

 


Trevor Wolter

09/07/2010

Predator, Schmedator

Trevor Wolter // in Technology

In 1987, Major Alan "Dutch" Schaefer landed with his team in the rainforests of Guatemala to rescue a kidnapped presidential cabinet member. The situation quickly soured, however, after he discovered that his mission had been a ruse to destroy a rebel encampment. That wasn't the end though. One by one, Dutch's comrades were being mysteriously and heinously slain. Dutch didn't know what his enemy looked like. He didn't know where he hid. He didn't know his weaknesses, and he didn't know how to defeat him. He just knew the enemy was there, causing havoc, and out to get him. What was Dutch to do?

Here's my advice, because I do this every day.

The Mission

As a software tester, I have two missions: assert the unknown problems and verify the known problems. Verifying that the known problems don't exist is relatively straightforward and seems to be what upper command likes to focus on. It looks tidy, seems efficient, and it lends itself to tracking because we can compile a list of everything we know can go wrong, then simply run down the list and check items off as we make sure they don't happen. While checking off the list, we can time how long each item takes, which command loves because it can be used for future estimates.

I don't know, I don't know, I don't know

Asserting the unknown problems is not neat and tidy (though that does not equate to inefficient). It's generally everything that command doesn't like – difficult to report and impossible to fully estimate. How do you hunt a defect when you don't know where it exists or how it manifests itself? You know they're there, WAITING. But unlike the Predator, they don't shoot at you; they wait until you step on them to explode. All you can do is start testing and, while doing so, learn everything you can about the application and its bugs, because there are a bazillion ways to use the software and there are a bazillion defects (times two) that could occur.

Enter Spiraling

I test in a pattern that I call a test spiral. Imagine each component of the software as a wedge in a circle. The critical aspects of each component are at the center, and less critical aspects further and further away. When I test, I start at the center, exercising the software in ways I know it will be used and needs to work. Then I gradually transition to exercising more obscure parts of the system in less common ways. When I discover a problem, I dip back towards the center to make sure I didn't miss something similar, closer in. Using this pattern, I can be sure that the critical aspects are covered first, which command likes. Because there are a bazillion ways to execute the software, I know I can't catch every bug, but I don't stop until I run out of time or at least have had sufficient time (an ambiguous thing that command doesn't like) to feel really comfortable with the stability and usability of the product. To stop sooner creates a false sense of security and doesn't maximize the client benefit.
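For the programmatically inclined, the spiral can be sketched as an ordering: visit the center of every wedge first, then sweep outward ring by ring. The components and test names below are invented for illustration; they're not from any real project.

```python
# Illustrative spiral ordering. Each component's list runs from most
# critical (center of the wedge) to least critical (outer edge).
components = {
    "search":   ["basic query", "empty query", "unicode query"],
    "checkout": ["happy path", "expired card", "double submit"],
}

def spiral_order(components):
    """Interleave tests so every component's center is covered before
    any component's outer rings."""
    order = []
    max_depth = max(len(tests) for tests in components.values())
    for ring in range(max_depth):  # ring 0 = the center of every wedge
        for name, tests in components.items():
            if ring < len(tests):
                order.append((name, tests[ring]))
    return order

for component, test in spiral_order(components):
    print(component, "->", test)
```

Run out of time halfway through this list and the critical center of every component has still been exercised, which is the point of the spiral.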

Dutch Confronts the Predator (Spoiler Alert)

All that was left of Dutch's team was Dutch himself and the girl (because what epic tale doesn't have a beautiful woman, and what epic tale dares to kill off the beautiful woman?). She manages to escape to safety but Dutch remains behind to confront the enemy. In the final showdown between the two, Dutch gets beat down, but ultimately gains the upper hand and mortally wounds the Predator with one of his traps. As awe-inspiring as this story is, it is but a work of fiction. On the other hand, software testing is real, and there isn't just one predator/defect to search and destroy. Every day, I'm hot on their trail, making the world a safer place.

 


Trevor Wolter

07/09/2010

Sapience, the Engine of Software Testing

Trevor Wolter // in Technology

Over the last couple months, I've written articles about software testing to frame this craft in a metaphor that anyone can relate to. (Read Part 1, Part 2, or Part 3.) You could say that I've been showing you different types of cars to get from Point A to Point B. Today, I'd like to "pop the hood" on software testing and talk about the driving force in testing: sapience.

Sapience is synonymous with wisdom. When I was in school, I remember being asked the difference between knowledge and wisdom. Scholars throughout the centuries have tried to explain. Democritus said, "There are many who know many things, yet are lacking in wisdom." Later, Leonardo da Vinci said, "Wisdom is the daughter of experience." And more recently, Will Durant said, "Knowledge is power, but only wisdom is liberty." Simply put, knowledge is facts but wisdom is the ability to apply facts to a given situation.

In testing, we usually have lots and lots of FACTS

The requirements document is one big encyclopedia of facts. Bug reports are facts. Test scripts are facts. That blog article you read three weeks ago has facts. Your experience over the last five years of using Facebook has a wealth of facts. Your observation that a certain developer often forgets to check in code before sending you a defect report for testing is a fact.

Fact application

So we have oodles of facts, but how do we apply them in a wise, meaningful way? Your first thought might be to make a list of them and go through the software/website to make sure the software and the facts line up. The problem is that this is checking, not testing, because there is no application, just verification. For example, suppose our fact is that all tables must have a border of 1px. What happens when we come to a page that uses an invisible table to better organize various elements on the page? Through a simple check, the page fails evaluation. However, a wise application of the fact says that tables must have a border of 1px when the table is being used to organize a set of related data, not to lay out various elements on a page.
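Here's a little sketch of that difference in Python. The field names are made up for illustration, and the `is_data_table` flag is a stand-in for the judgment a human tester makes by looking at the page.

```python
# A naive check applies the 1px-border fact blindly; the sapient
# version first asks whether the fact applies to this table at all.
def naive_check(table):
    return table["border_px"] == 1

def sapient_check(table):
    if not table["is_data_table"]:  # layout table: the rule doesn't apply
        return True
    return table["border_px"] == 1

layout_table = {"border_px": 0, "is_data_table": False}
print(naive_check(layout_table))    # False -- flagged, a false alarm
print(sapient_check(layout_table))  # True  -- correctly passes
```

The code can encode the wiser rule only after a person has worked out what the rule should be; the sapience stays with the tester.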

Sapience at work

The bug report you've been assigned to test states that a critical error occurs when you click on "Next Page" after running a site search that yields multiple pages of results. Your sapience comes into play when you relate that to the fact that the product search has a similar feature, and you decide to test that search as well.

Software testing is a lot like everyday decision-making. The key is to make intelligent decisions based on the collection of knowledge that we have. We do this by making use of our whole collection of experience and that which makes us unique, our intuition. It is this last part that makes software testing an activity that is truly human.

Sapience is a requirement in software testing, just as an engine is a requirement in an automobile. Sure there are different types of engines – gasoline, diesel, hydrogen, etc. but a car won't go without one. So too, software testing doesn't go without an intelligent, intuitive, sapient being behind it.

"A wise man will make more opportunities than he finds." – Francis Bacon

 


Trevor Wolter

06/25/2010

3 Advantages of Small Consulting Companies

Trevor Wolter // in Vodori Culture

It wasn't that long ago that I worked for a 300-person company. I never met the CEO, and my input was lost in the sea of voices. Vodori does things a little differently. We're a web boutique: our clients get the personalized service that is harder to come by from larger consulting companies. Here's why:

Our dedication comes from within. It circulates amongst the team and is evident in every design, every piece of code, and all the content we create.

Advantage One: An open door policy

Every day I am able to interact with the company leadership. While we have a structured management system, there is no disconnect between management and employees, which often occurs between layers of middle management. Every Friday, we get together for lunch and get updated on the status of every project on our desks and on the horizon. This enables true transparency in operations because business and project plans are relayed directly to me. I can respond directly with questions and concerns.

Advantage Two: Everybody knows my name

I know everybody in the office. Transitioning from one team to another is seamless because I already have a feel for the personalities and working styles of my teammates. We have camaraderie without cliques. We collectively celebrate our successes and collaboratively work through difficulties.

Advantage Three: I have ownership in what I do

The combination of transparency and cooperation creates the perfect conditions for ownership. It has been incredibly easy for me to take personal ownership in my work, my position, and the overall success of the company. I can see how my work directly contributes to the company, and I know that my work is recognized by my peers and by our founders.

I value my time in the cubicled trenches of a large consulting company, but at a small interactive agency, I feel I am valued.

 


Trevor Wolter

06/11/2010

Software Testing at the Museum - Part 3

Trevor Wolter // in Technology

We’re going to explore the website "museum" as if we were conducting exploratory testing. If you need a little review first, you can also read my posts on how websites are like museums and using test scripts.

Exploratory Testing (ET) is best defined as simultaneous learning, test design, and test execution.

It’s a very simple concept but detailed in execution.

Divide to Conquer

One of the biggest misconceptions about this approach is that there’s no planning or strategy involved. Rather, we generally want to divide our software or website into pieces that we think someone can test in 1-2 hours. For simple applications or websites, this might be the entire thing. For larger applications and websites, it might be an individual function or module.

Before we begin our museum testing, let’s list some of the ways we could divide the museum into smaller pieces. What are some individual components or functions that would need to be tested? HVAC, lighting, water dispensers, doors, security cameras, fire alarm, public address system, restroom fixtures, audio tour headphones, and the list goes on.

Example Website Components

Home Page, flash objects, page tools, shopping cart

For our testing today, we'll test the lighting. We don’t have any specific instructions about how to do this, but our objective is to make sure the gallery lighting is working. This is what makes this approach exploratory. While we do our testing, we’re going to take notes. (You can find my completed notes further down in this entry.) These notes include the objective, information about the session, and findings during testing.

Start the Adventure

Here it’s time to put on our logical thinking hats. What does it mean to make sure that the lighting is working? The lights can be turned on and off. No light bulbs are burned out.

So we head to the Central Gallery and find that there are no light switches. We look in the East and then the North Galleries and find the same thing. This leads me to suspect that they are controlled from a central location—possibly the front desk or more likely the security room.

In the security room, we talk to the head of security, Steve Wilkos, who confirms that all the gallery lighting is controlled from a lighting panel. Not only can we turn the lights on and off in each gallery, we can adjust the intensity of the light. We flip the lights on and off in each gallery and notice on the security monitors that things went as expected.

Testing Your Website

What would you do if your objective was to test page tools? Identify what page tools your site uses (perhaps Print Page, Email Page, and Increase Text Size). Next, HAVE AT IT! Navigate to a page and try the Print Page function. What happens? Do you see a print dialog box or does the page rerender? What does it look like out of the printer? Does text get cut off? Does it seem like there's wasteful use of color ink? What happens in black and white? Learn how the feature works and determine if it's working as intended. If you see something else that might not be working, it's okay to check it out!

Next, we try adjusting the intensity. First we notice that the South Gallery is set at 65%, which is at least 15% lower than the other galleries; this may be a problem. While adjusting the settings, we notice that some of the lights don’t change in the East Gallery but stay bright no matter what the setting is. There is obviously a problem with the dimmer controller on one bank of lights.

We’ve reset all the intensity settings to their originals and made sure all the lights were turned on. We head back out to the floor to look for burned out bulbs. In the South Gallery, we find one burned out bulb and also notice that it seems a bit dark. In the West Gallery, we notice that some of the bulbs are casting a cool light, which is different from every other bulb. This may or may not be a problem.

Review Your Findings

At this point, we finish writing down our findings and submit them to the building manager. (Here are my complete notes.) We discuss the day’s tests. We identify missing pieces, if any. We also determine if the two issues we found with the lighting intensity in the South Gallery and the cool lights in the West Gallery are problems.

Here's the Key

Record and discuss all your findings with your manager. If you're the test manager, review with the PM or designer.

 


Trevor Wolter

04/30/2010

Software Testing at the Museum - Part 2

Trevor Wolter // in Technology

There are many approaches in software testing, but in general, all approaches fall on a continuum between two extremes. At one end of the spectrum is exploratory testing while at the opposite end is scripted testing. Everywhere in between is a combination of the two, and in practice, no approach is purely one or the other. 

In this session, we will focus on scripted testing.

A scripted test instructs the test executioner exactly what to do. There is some variation in how this can be done but generally, the instructions tell you what to do and how to know if the test passed or failed. Sometimes you can keep going after a failed step and sometimes you can’t, depending upon the nature of the failure.

Using the analogy of the museum I created in part one, here is a short procedure to make sure a special exhibit is ready to be opened.

Step | Intent | Procedure | Expected Outcome | Pass/Fail
1 | Verify accessibility | Enter the Victor Vodori Gallery | The Victor Vodori Gallery can be entered |
2 | Verify the absence of construction equipment | Visually scan the gallery for paint supplies, ladders, and tools | No paint supplies, ladders, or tools are present |
3 | Verify the room is clean | After putting on a white glove, run your gloved hand over a two-foot section of the floor and a two-foot section of the north wall | The glove remains white with no visible dust or debris collected |
4 | Verify the content is present | Observe the west wall of the gallery | A photograph approximately 6’ x 4’ depicting a cute little puppy is displayed |
5 | Verify correct positioning | Using a measuring tape, measure the distance from each wall to the center of the painting | The measurement is exactly 72 inches |

Scripted testing has some really positive features. For one, we can use this to verify extremely precise and detailed requirements just as long as they are specified in the script. For another, we can run this test as many times as we want in the exact same way. If something ever fails, we can go back and repeat the exact same procedure to confirm the findings or check a fix. Also, someone who is new to testing or your organization can use this script easily because it guides each step of the way.
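Because a scripted test is really just data, it's easy to sketch what a simple runner might look like. This Python snippet is my own illustration, not any real tool: each step carries a blocking flag that decides whether a failure stops the run (as noted above, sometimes you can keep going after a failed step and sometimes you can't), and the pass/fail results are canned for the demo.

```python
# Hypothetical scripted-test runner. In real use, each step's "passed"
# value would come from a human or automated check, not canned data.
steps = [
    {"intent": "Verify accessibility",          "blocking": True,  "passed": True},
    {"intent": "Verify the room is clean",      "blocking": False, "passed": False},
    {"intent": "Verify the content is present", "blocking": True,  "passed": True},
]

def run_script(steps):
    results = []
    for step in steps:
        results.append((step["intent"], "Pass" if step["passed"] else "Fail"))
        if not step["passed"] and step["blocking"]:
            break  # a blocking failure ends the session here
    return results

for intent, outcome in run_script(steps):
    print(f"{intent}: {outcome}")
```

Note how the non-blocking failure in step 2 is recorded but doesn't stop step 3 from running, exactly the judgment call a scripted test has to specify up front.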

On the other hand, this approach is quite rigid. It takes a good deal of preparation to formulate a test script before it can be used because we need to know the expected outcomes before testing. If you want something to be tested, it must be specified, which can add up to a lot of steps. No matter how many steps are added, you’ll never cover every possibility. This is where we start getting into exploratory testing. While exploratory testing also will not cover everything, it allows the tester the freedom to follow their intuition to discover problems that we might not have thought of earlier. Next time we’ll see what happens when we visit the museum without a script.

 
