
Locate a Bug Based on a Bug Report

Game of Throws: What is a Bug Report?

When software fails, inconvenienced users may complain. To make sure you can figure out what went wrong and fix it, gather as much information as possible about the failure. This is a bug report.

For example, imagine you're assisting the maintainers of an application that helps sales teams calculate dragon saddle sizes based on the dragon's age. Here's a quick description of their product:

As everyone knows, dragons are mythical creatures that don't exist. 🐲 Not naturally, that is. It is a little known fact that in the year AD 1, the wise alchemist Lou Tan Dey-ta created a small batch of dragons using unnatural means and some fizzy potions. Those dragons still fly the skies today. 🐉

Dragons are extremely long-lived and grow at a constant, but slow rate. Buying the right saddle is a tricky business. To help those lucky enough to one day ride one, Lou Tan published an algorithm for calculating the size of a saddle based on the dragon's age. Centuries later, this algorithm was hurriedly re-written in Java to help those fortunate enough to require dragon saddles.

The Dragon Saddle Size Guesser has also become an unlikely party favorite!

The code leaves a lot to be desired. Sadly, a user raised a bug report on the application! The sales team has recently lost an important customer, Princess Drag'on, who bought a saddle that was too small due to faulty estimations on the app. 

Have a look at the bug report:

A bug report showing that the sales team is struggling because the Dragon Saddle Size Guesser application is failing. It shows how to reproduce the bug by running ./gradlew run
A bug report from Sander in sales

A good bug report should include:

  • The person who reported the bug.
    You need a contact to help reproduce the bug and answer any further questions. The person who reported it can also help verify that it's fixed. 

  • The date (when the bug occurred).
    Knowing when a bug was reported can help you verify if it relates to other known issues. For example, there might have been a network outage on that day.

  • A description or explanation of the bug.
    Until you can talk to the user, encourage them to provide as much relevant information as possible in the bug report, so you can reproduce and understand the issue. For example, it doesn't hurt to have them also report the version of the software or the build number (if they have access to this information).

  • A set of steps to reproduce the bug.
    To help you reproduce the bug, have the user detail the steps involved. They can describe exactly what they did and screenshot the result. This also encourages them to make sure they have not made any mistakes when documenting the process preceding the bug.

  • Severity and impact of the bug.
    Is the bug impacting users directly? Is it costing money? Is it damaging your brand? The consequences of a bug help determine its importance and priority. Working on a bug takes time away from improving code, adding features, and responding to other issues.
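
Pulling those elements together, Sander's ticket could be summarized like this (a paraphrase of the report shown above, not an official template):

Reported by: Sander, sales team
Date:        2019
Description: The Dragon Saddle Size Guesser crashes with an InvalidSaddleSizeException instead of printing a saddle size. We lost a customer, Princess Drag'on, because of a wrong estimate.
Steps:       Check out the project and run ./gradlew run
Severity:    Urgent - we're losing customers and sales.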

How Do I Respond to a Bug Report?

In the example, the person in sales wants it fixed ASAP, so it must be urgent! 😱 Get moving!

Wait! Stop! 🛑 First, to make sure it's worth spending any further time on, check the severity field on your bug report. If you're working on a team with a product owner and others who care about the software, always ask if it's more important than other work.

Next, slow down a little to understand the bug. That way, you can methodically fix it and go fast again. Even fixing the simplest issue can result in an incomplete fix or cause another bug to raise its head elsewhere. Remember, if the fix were obvious, you (or another coder) probably wouldn't have broken it in the first place.

To methodically reproduce this bug, you need to:

  1. Understand the bug report (what you need to observe).

  2. Talk to the user.

  3. Find a suitable version of the app to test against.

  4. Reproduce the bug.

Fortunately, the user has shown how to reproduce the bug in the report above, so that last part will be a bit easier. 😉 On to Step 1!

Step 1: Understand the Bug Report

First, make sure you read the bug report and try to understand what each section is telling you. From the report above, you can see that the sales team lost a customer and that the saddle estimations don't work properly. This means there are faults and failures that directly impact users, including the sales team!

They reported the bug in 2019, and the output suggests that the program failed due to a negative and unexpected saddle size.

InvalidSaddleSizeException and a stack trace. The output indicates the program was run in 2019 and failed by calculating a negative saddle size.
InvalidSaddleSizeException and stack trace

We'll get into the details of how to read the rest of that exception later as we investigate the issue more deeply. First, we need to make sure we can reproduce the issue - as it might not even be real!

What do you mean by not real? 

That's a good question, and one that will be answered in the next section! 🙃

Step 2: Talk to the User

A bug report may come from real users, testers, or anyone else who has used your code. These are real people, and it's good to chat with them first to understand the bug. The reporter has described how the bug was triggered; to help them, you need to be able to reproduce it.

It's often a good idea to visit the user in person and see if they can demonstrate the issue for you. Many software bugs are caused by PEBKAC (Problem Exists Between Keyboard and Chair). In other words, user error. As much as you appreciate and trust bug reporters, you know how easy it is to get frustrated and irrationally upset with a program that is not behaving. I certainly do!

How can you be sure that this is not a case of PEBKAC? A short chat can usually verify if it's a real issue. The user will be grateful if you can demonstrate that the problem has magically gone away with no effort at all. Perhaps the user didn't have a network cable plugged in or was running the wrong version of the software.

Chatting with the user can provide other insights to use in your investigation. It can also be useful to have someone to show your solution to when you're done and respond to your questions.

What if I don't have access to my users?

If the user is not accessible, ask them for a short screen recording of the issue! You can then watch the user's behavior and reproduce exactly what they do. A recording is worth having even when you visit in person: the nuances of the moment can be lost, and the described steps may not capture every detail.

Here's a video which a developer might have captured when sitting alongside a user who raised the bug.

Imagine that the user showed you the bug, and it plays out just as in the ticket. It's a software issue.

That user has to get on with other work, so you'll need another environment in which to reproduce the bug and dig deeper. It's time to sleuth out the root cause, that is, the underlying reason why it's happening.

Step 3: Set Up a Representative Environment

You need a version of this code to run and test against. The first thing to do is git clone the repository and git checkout its production branch (master in this project). This will vary from project to project, but start by answering the question, "How do I run a version of our production code locally?"

To verify the code and fix it, you need to clone the repository, import it into your IDE, and ensure that all of this has worked by running the tests in the project.

Let's see this in action before breaking it down:

As you saw, we've just learned a few things about the bug:

  • The user can reproduce it.

  • The user was using the latest version of the code.

  • The user just follows simple instructions to check out the code from Git and run it with gradlew run.

Here's a recap:

  • Clone the repository. 
    We cloned the Git repository at https://github.com/OpenClassrooms-Student-Center/Java-debugging.git. This gives us the Java source code of the project we're debugging so we can build, modify, and run it locally.

    • This project uses its master branch to represent software which has been released to production. This is the branch we'll be investigating. 

    • Although we used IntelliJ for this, you can also do so from the command line if you prefer. Remember to import your project into IntelliJ before you start debugging. 

git clone https://github.com/OpenClassrooms-Student-Center/Java-debugging.git
  • Run the tests. 
    We used both Gradle and IntelliJ to run the existing unit tests in the project. Run these before starting, so you know there are no pre-existing issues in the code that would cause the tests to fail.

    • If the code doesn't build, you're not looking at the correct version for this bug. Since the user is running a copy of the code, buggy as it is, it must have compiled at some point.

    • You can eliminate any broken builds from your initial investigation. Avoid going down a rabbit hole of unknowns, and make sure you start with the version of the source code that matches the one the user is running. Elementary, my dear student! 🕵️

Step 4: Reproduce the Bug

If you have another look at the bug report, you'll see the steps to reproduce the bug involve typing:

./gradlew run

Let's do that and see what happens! I'll show you how to do it the way the user did and also through IntelliJ's Gradle integration:

As you saw, we validated that the bug is real. By reproducing it, we know that there is an underlying issue that needs addressing and that it had nothing to do with the local machine of the user who reported it.

Ok! We know the bug is real. An exception is shown to the user. But what should the code really do?

If you're going to fix a bug, it's important to know how the code should behave. A review of the ticket doesn't tell us. Next step: look at the tests.

Look at the tests in this project under the src/test/java folder and open up com.openclassrooms.debugging.DragonSaddleSizeEstimatorTest. This contains tests with expected values for a selection of different years.

The test for the year AD 2 (don't forget to read the backstory in the project's README) is titled estimateSaddleSize_shouldReturnASizeOfOne_forEarlyEraTwoAD and looks like this:

@DisplayName("Given we have a saddle size estimatorUnderTest spell")
@ExtendWith(MockitoExtension.class)
class DragonSaddleSizeEstimatorTest {
...
    @DisplayName("When estimating for a saddle size in the year 2 AD then the size is 1 centimeter")
    @Test
    public void estimateSaddleSize_shouldReturnASizeOfOne_forEarlyEraTwoAD() throws Exception {

        double estimatedSaddleSize = estimatorUnderTest.estimateSaddleSizeInCentiMeters(2);
        // A one year old dragon has a 1 cm saddle size
        assertThat(estimatedSaddleSize, is(equalTo(1.0)));
    }
...
}

As you saw earlier, the tests passed. This means that the saddle size for AD 2 is correctly calculated, at least in our tests. We also know that there is a DragonSaddleSizeEstimator class with an estimateSaddleSizeInCentiMeters(int year) method, which is used to estimate the size of a saddle when provided with a year.

Line 9 calls the estimateSaddleSizeInCentiMeters method, and Line 11 validates that the expected result is 1 cm. There is also another test, which checks the saddle size for dragons living in the year 2021. If you look at the DragonSaddleSizeEstimatorTest, you'll see that there are two expected values in the tests:

Year | Saddle Size Estimate
---- | --------------------
2    | 1.0 cm
2021 | 2020.0 cm

You can assume that the test is correct as it represents behavior that business experts and warlocks who predict dragon sizes would have agreed with. 😎
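
As an aside, those two expected values could even be captured in a single parameterized test. Here's a rough sketch only, assuming the junit-jupiter-params artifact is available on the test classpath; it is written as if it lived inside DragonSaddleSizeEstimatorTest, reusing the estimatorUnderTest (and its mocked verifier) that the class already sets up:

// Additional imports that would go at the top of DragonSaddleSizeEstimatorTest:
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
...
    @ParameterizedTest(name = "When estimating for the year {0} then the size is {1} cm")
    @CsvSource({
            "2, 1.0",      // early era: one-year-old dragons
            "2021, 2020.0" // present-day dragons
    })
    public void estimateSaddleSize_shouldMatchTheAgreedValues(int year, double expectedSizeInCm) throws Exception {

        double estimatedSaddleSize = estimatorUnderTest.estimateSaddleSizeInCentiMeters(year);

        assertThat(estimatedSaddleSize, is(equalTo(expectedSizeInCm)));
    }

This isn't required to fix the bug; it just shows how the table above maps directly onto test cases.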

There is another clue as to how the program should behave. If you dig a bit deeper and look at the README file, it tells you that you can run the application to calculate the saddle size for any year. We're going to do just that.

Instructions from the README telling us that we can use ./gradlew run --args 2019 to run the app for the year 2019.
Instructions from the README

Try it Out for Yourself!

See if you can run the application to figure out the saddle size for 2019 by specifying the year.

On Linux and macOS, use the following command:

 ./gradlew run --args 2019

On Windows that should be:

gradlew.bat run --args 2019 

Did it crash or did you get a value back?

What happens if you run it for the years 2 and 2021? Do you get back the same values as the tests expected?

Write a Failing Test to Check Your Theory

As a responsible engineer, you reach out to the user who raised the bug and point out that they can pass in the year (2019) explicitly as a workaround. The user is delighted that there's a workaround but would prefer not to specify the year; it's a distraction from their usual workflow. They confirmed that 2018.0 is the correct value for 2019.

You're ready to write a failing test! Be careful: this bug already crept into production once, and as you've seen, all of the existing tests pass. You need to duplicate how the program runs in the real world to get this test case right!

Let's have a look at the main(String[] args) method, Java's entry point for any program. 

    public static void main(String[] args) throws Exception {
        DragonSaddleSizeEstimator estimator = DragonSaddleSizeEstimator.INSTANCE;
        // Loop needlessly to replicate an ancient ritual
        for (int i=0; i<Integer.MAX_VALUE; i++) {
        }
        // Estimate the saddle size of a dragon,
        // relative to the year One when all extant dragons were born.
        int targetYear = Calendar.getInstance().get(Calendar.YEAR);
        // Take the year from the command line arguments
        if (args.length != 0) {
            targetYear = Integer.parseInt(args[0]);
            estimator.setCopyOfUniversalConstant(42); // The universal constant
            estimator.setYearOfBirth(1); // All dragons were spawned in 1 AD
        }
        
        System.out.println("Calculating saddle size for a dragon in the year " + targetYear);
        
        // Calculate Saddle Size
        double saddleSize = DragonSaddleSizeEstimator.INSTANCE.estimateSaddleSizeInCentiMeters(targetYear);

        // Report
        new SaddleSizeReporter(targetYear, saddleSize).report();
    }

Let's process this!

  • Line 2: The main class obtains an instance of the DragonSaddleSizeEstimator class from a static variable called INSTANCE.

  • Lines 4 to 5: Strange rituals recommended by the mystical specialists, which don't affect execution. This is a little like business requirements. Don't judge them - yet. 😉

  • Line 8: The current year is obtained using Java's Calendar.getInstance().

  • Lines 10 to 14: Allow the user to pass in a specific year using arguments from the command line.

  • Line 19: Estimate the saddle size for a given year.

  • Line 22: Report on the saddle size.

If you ignore Lines 10 to 14, the remaining code is all that would be required to run the app without the user passing a year to it. What if we turned that into a test?

Why write a test?

Remember, tests offer a fast way to reproduce a bug repeatedly. If the test expects the correct behavior, it will keep failing until you put in a fix that corrects the bug. The scientific method suggests running multiple experiments to prove theories; as you attempt to fix the issue, use the test to validate them. Accurately reproducing the bug in each run also helps you examine what the software is doing.

What is our first theory?

Here are some questions for you. How did the code behave when passed a year on the command line? How did this compare with running the application without a year passed in?

Do you see where this is going? The first theory is that you can write a failing test demonstrating that if you run the application without passing a year, it will throw the exception raised in the bug report.

If you look at the exception in the bug report (reproduced below), you can follow the chain of lines starting with "at com.openclassrooms.debugging" on Line 3 right down to the last such occurrence.

Calculating saddle size for a dragon in the year 2019
Exception in thread "main" com.openclassrooms.debugging.exception.InvalidSaddleSizeException: Unexpected saddle size:-49.0
	at com.openclassrooms.debugging.DragonSaddleSizeVerifier.verify(DragonSaddleSizeVerifier.java:13)
	at com.openclassrooms.debugging.DragonSaddleSizeEstimator.estimateSaddleSizeInCentiMeters(DragonSaddleSizeEstimator.java:60)
	at com.openclassrooms.debugging.DragonSaddleSizeGuesser.main(DragonSaddleSizeGuesser.java:34)

Can you see that the last of these is in com.openclassrooms.debugging.DragonSaddleSizeGuesser? The exception is thrown during its call to estimateSaddleSizeInCentiMeters(..). Look at the main method above: this corresponds to Line 19, where we called estimateSaddleSizeInCentiMeters().

The bug has something to do with how DragonSaddleSizeEstimator is called when you don't provide a value for the year on the command line. The test is going to duplicate how the main class calls the DragonSaddleSizeEstimator.

The existing unit test for the DragonSaddleSizeEstimator uses mocks for the DragonSaddleSizeVerifier. That is, it uses fake versions of this class to speed up the tests and make them more predictable. This is a normal testing practice.
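
To make "uses mocks" a little more concrete, here's a minimal sketch of the pattern, assuming Mockito with JUnit 5 (as the @ExtendWith(MockitoExtension.class) annotation in the real test suggests). How the mock is actually wired into the estimator is project-specific, so treat this as an illustration of the idea rather than the project's real test:

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

import com.openclassrooms.debugging.DragonSaddleSizeVerifier;

@ExtendWith(MockitoExtension.class)
class MockedVerifierSketchTest {

    // Mockito supplies a fake verifier: by default its methods do nothing and never throw,
    // which keeps a unit test fast and predictable.
    @Mock
    private DragonSaddleSizeVerifier fakeVerifier;

    @Test
    public void fakeVerifierNeverComplains() {
        // In the project's real unit test, a mock like this stands in for the real verifier
        // so the estimator's calculation can be tested in isolation.
    }
}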

As DragonSaddleSizeVerifier is the class that throws the exception in the snippet above (Line 2 of the stack trace), you want to use a real version of it in the new test. Stealing straight out of the main method, create a new DragonSaddleSizeEstimatorIntegrationTest. This test will create a real instance of the verifier and any other classes it needs to collaborate with. It will prove that they integrate properly, which makes it an integration test.

How do we turn what our code is doing into a failing test case?

Let me show you how I can take the behavior in our main method and turn it into a failing integration test:

As you saw, I was able to create a test that duplicates the behavior of the main method above. You don't need to replicate every line of the main method, just its normal flow. As it's a test, I've skipped the print statements and the handling of parameters passed to the program.

Here's our test:


@DisplayName("Given that we have a DragonSaddleSizeEstimator")
public class DragonSaddleSizeEstimatorIntegrationTest {

    @DisplayName("When the year is 2019 Then the saddle size should be 20.18 meters")
    @Test
    public void estimateSaddleSizeInCentiMeters_shouldReturnTwentyPointEighteen_whenCalculatingTheSizeIn2019() throws Exception {
        // Arrange
        DragonSaddleSizeEstimator estimator = DragonSaddleSizeEstimator.INSTANCE;

        // Act
        Double expectedSaddleSize = DragonSaddleSizeEstimator.INSTANCE.estimateSaddleSizeInCentiMeters(2019);

        // Assert
        assertThat(expectedSaddleSize, is(equalTo(2018.0)));
    }
}

The test was written in three steps:

  • Line 9: Arrange to use an instance of the DragonSaddleSizeEstimator. This is done exactly as the main method does it.

  • Line 12: Act on it by calling the estimateSaddleSizeInCentiMeters() method with the value 2019.

  • Line 15: Assert that the correct estimate is 2018.0.

Unlike the main method, the test doesn't use Calendar.getInstance() to obtain the year; instead, it passes 2019 directly at Line 12. With that, the thrown exception has been successfully reproduced.

But it's still failing. What can we investigate now?

Always read the exception. 😇 The first few lines are:

com.openclassrooms.debugging.exception.InvalidSaddleSizeException: Unexpected saddle size:-49.0
	at com.openclassrooms.debugging.DragonSaddleSizeVerifier.verify(DragonSaddleSizeVerifier.java:13)
	at com.openclassrooms.debugging.DragonSaddleSizeEstimator.estimateSaddleSizeInCentiMeters(DragonSaddleSizeEstimator.java:60)
	at ...

If you run the test with the debug option, you can ask the JVM to pause when it throws an exception. The topmost line of the stack trace shows where the exception was thrown from. You can then call methods on the exception and interact with it to see details at the point it was thrown.

I'm going to run the test one more time using the debugger and tell the app to stop when the exception is thrown, and then we'll look at it more closely.

Congratulations, you've just seen the debugger in action. Have a go yourself with the repository you've already cloned.
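
If you'd also like a code-based way to poke at the exception, here's a sketch using JUnit 5's assertThrows to capture it, based on the same call our integration test makes (assuming the classes are visible from your test package):

import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

import com.openclassrooms.debugging.DragonSaddleSizeEstimator;
import com.openclassrooms.debugging.exception.InvalidSaddleSizeException;

class InvalidSaddleSizeExplorationTest {

    @Test
    public void captureTheExceptionForACloserLook() {
        // Reproduce the failure the same way the integration test does...
        InvalidSaddleSizeException thrown = assertThrows(InvalidSaddleSizeException.class,
                () -> DragonSaddleSizeEstimator.INSTANCE.estimateSaddleSizeInCentiMeters(2019));

        // ...then interact with the captured exception, much as you would in the debugger.
        System.out.println(thrown.getMessage()); // the bug report showed "Unexpected saddle size:-49.0"
        thrown.printStackTrace();
    }
}

This doesn't replace the debugger, but it's a handy way to keep the exception around and query it from test code.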

Let's Recap!

  • A bug report gives details about a bug such as who reported it, its impact, and how to reproduce it.

  • You can write a test that demonstrates what the application should do when the bug is not present. This is called a failing test, as it will fail in the presence of the bug.

  • Using a debugger, you can run a test and suspend the JVM when a specific exception is thrown.

 In the next chapter, we'll investigate that top line of our stack trace!
