
Identify the Advantages of Concurrency and Parallelism

Understanding Serial Processing: How Humans Think

Have you ever had so many things to do that you wished you could maximize the power of your mind and do them all at once? Attempting to prepare breakfast while also working out and completing your tax returns could save you so much time! Sadly, we mere humans don't fare too well at sharing our main focus between multiple activities. Whether it's creating lists or breaking down complex problems, we usually end up working on one thing at a time. We call this style of working serial, as it involves a serialized set of steps, one after the other. While not the most efficient way of working, it is explicit and sequential, reducing the risk of leaving you with a wrecked kitchen. 😉

Is it a surprise then, that developers write code sequentially? Many Java programs do one thing at a time. Serial execution is a journey from method to method, and statement to statement, where you gradually work through a problem, remembering things and working out new ones.

For example, let's have a look at a simple program which, when called with a list of numbers, prints out their sum:

public class ArgumentSummer {

    public static void main(String[] numbers) {
        int sum = 0;
        // Visit each argument in turn: convert it to an integer and add it.
        for (String number : numbers) {
            sum = sum + Integer.parseInt(number);
        }

        System.out.println("Sum: " + sum);
    }
}

This code works in steps. It sets the variable sum to zero and then loops through each argument in the numbers array, one after the other. With each iteration of its loop, it converts that value to an integer and adds it to sum. Serial execution is great, because:

  • You can usually follow these journeys from beginning to end, and ultimately see the final solution and how it was calculated.

  • You can easily follow code like this or debug step by step. 

  • Its behavior is more likely to be deterministic or predictable. 

Perfect! I should keep coding all my programs that way, right? 

Well, not necessarily. Imagine that each number represented votes in a country like India, with a population of just over a billion people. If every vote could only be counted one at a time, at a rate of one vote per second, it would take just under 32 years to decide who won an election. 😱 While the governing parties might love this, I don't think it would do much for democracy. 😝
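As a quick sanity check on that figure: 1,000,000,000 votes at one second each is 10⁹ seconds, and 10⁹ ÷ 31,536,000 seconds per year ≈ 31.7 years, so "just under 32 years" holds up.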

So, how do you avoid this slowdown in the real world? In the election example, each electoral station has multiple people counting votes at the same time. Their subtotals get added together and published as a subtotal for a district, which is in turn added to the subtotals of other districts. Because all the stations count at approximately the same time, the full result arrives in a tiny fraction of the time.

Thinking about all the people involved in counting votes might hurt your head. It certainly hurts mine! It's far easier to think of a sequential workflow, like the code above.

Guess what? Computers, being our natural overlords 😉, are far superior and have no trouble doing multiple things at once! By serially writing your code, you're not making the best use of your overpowered and expensive computers.

Explaining Processors and Cores: How Computers Think

So how do I put my computer to work and prevent it from getting depressed like Marvin? 

OK, your computer is not going to become under-motivated, let alone depressed; however, it can easily be underutilized. Computers may not have brains, but they have lightning-fast chips called processors (or CPUs, central processing units), which can do many things at once thanks to their cores.

A processor (CPU) on the left; an arrow reveals its inside on the right, showing four cores represented as squares.

A core is a sophisticated electronic component that performs calculations and operations in response to instructions (i.e., it performs the actual programmatic steps that make your code run). The JVM sends instructions to the processor so it can run the bytecode compiled from your Java programs. Most processors have multiple cores, which can all process instructions at the same time. As you can imagine, more cores allow you to make your software faster: the more cores a computer has, the more a program using them can do at the same time.

Think of cores as regions of your brain which can work together to solve a problem. A program can do many things at once by using multiple cores. It's like having the power to use the different regions of your brain at will!

Doing Many Things at Once: How Does It Work in Hardware?

To understand how this works for programming, you'll have to think hard - as in hardware. 🤓 Do you know what processor powers the device you're currently using? Whether it's a desktop, laptop, or mobile device, it certainly has one! Why don't you take a moment to find out and see how many cores it has?
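If you'd rather ask the JVM than dig through your system settings, one method call will do it. Here's a minimal sketch using Runtime.availableProcessors(), which reports the number of logical cores available to the JVM (the class name CoreCounter is just for illustration):

public class CoreCounter {

    public static void main(String[] args) {
        // Ask the JVM how many logical cores it can schedule work onto.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available cores: " + cores);
    }
}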

OK, cool scavenger hunt. But how does what I write in code relate to these cores?  

When you run a serial Java process that only does one thing at a time, it will only ever be scheduled onto a single core, one task at a time. That leaves the rest of the cores free to do other things:

Serial execution of two tasks on a single core

As you can see, Task 1 is completed before Task 2. What if Task 2 could complete without waiting for Task 1? You would finish it so much faster. That extra core is just sitting idle!

In the real world, not using your cores efficiently translates to writing programs that take far longer to run than they otherwise could. Whether that matters depends on the problem you're solving and how quickly it needs to be solved.

Super! How do I use all my cores then?

You must design your software to work on two or more tasks at the same time. There are two main patterns typically used to write software that better utilizes your cores: parallelism and concurrency.

Using Your Cores More Efficiently With Parallelism and Concurrency

Option 1: Parallelism

In his book, The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications (O'Reilly, 2009), Clay Breshears defines parallelism as creating software "which can support two or more actions (tasks) executing simultaneously." These tasks are performed at the same time, starting and ending together. For example, parallelism for you might be reciting Shakespeare while patting your head and rubbing your belly at the same time. Computers would have no issue with this!

These tasks would typically be long-running and start at the same time. Although long-running is a relative term, think of it in terms of how much of your program's total runtime a task spans: as a rule of thumb, a task that runs for 70-100% of the program's lifetime counts as long-running. You can see this in the diagram below, where both tasks start together and are identical in size.

The parallel execution of two tasks: both start together, fully utilizing both cores

Parallelism describes a style of software design you can apply to ensure that your code makes use of multiple cores by executing tasks at precisely the same time on different processors. One of the easiest ways of ensuring your Java program uses several cores is to run numerous versions of your program as independent processes (i.e., a program running on your computer) using different input data. 
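To give you a taste (you'll work with ProcessBuilder properly in the next chapter), here's a minimal sketch that launches two independent JVM processes, each running the ArgumentSummer class from earlier on its own slice of the input. It assumes ArgumentSummer has been compiled and is on the classpath:

import java.io.IOException;

public class ParallelLauncher {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Each ProcessBuilder starts an independent JVM process; the
        // operating system can schedule them on different cores at once.
        Process first = new ProcessBuilder("java", "ArgumentSummer", "1", "2", "3")
                .inheritIO()
                .start();
        Process second = new ProcessBuilder("java", "ArgumentSummer", "4", "5", "6")
                .inheritIO()
                .start();

        // Wait for both subtotals to be printed before exiting.
        first.waitFor();
        second.waitFor();
    }
}

Each process gets its own memory and its own time on a core, so the two subtotals can be computed at precisely the same moment.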

That said, parallelism has great power, which comes with great responsibility. 🕷️ For example, a long time ago, in a galaxy far away, I wrote many programs using the fork system call in C, which duplicates the current process into a new child process, copying all the data in the memory of the original. Can you imagine what happens if you do this too often? Say, a few thousand times? It's called a fork bomb, and sadly I've let one loose in production.

You can use up all the memory and CPU on your computer, going from efficiency to a prolonged wait for each process. Once all your memory is used, each new process must wait until it can be dealt with by one of the limited number of cores. You've probably experienced this before when trying to start multiple programs on an old computer - excruciatingly slow!

It's the same for our program: while it is waiting for a resource, it cannot progress in a timely fashion. We call this starvation. 💀

Option 2: Concurrency

Concurrency is software that supports two or more actions (tasks) in progress at the same time. These actions are broken down and mixed up, so they may stop, start, and run in unpredictable orders. The tasks happen to be in progress at the same time, each working away on its own, like two joggers in a park.

Wait, how is this different from parallelism? 

The distinction is between parallelism's executing simultaneously and concurrency's in progress at the same time. In concurrency, the tasks are typically short-running, smaller parts of the overall solution. Additionally, for two tasks to be in progress, a program has to have started working on them, but it might temporarily switch between them and other essential tasks.

Two cores concurrently executing many tasks, scheduled non-deterministically

Notice that on the right of the diagram, multiple tasks are scheduled onto each core in a hard-to-predict, non-deterministic fashion. 
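To make this concrete, here's a minimal concurrency sketch using Java's built-in ExecutorService (just one of several ways to write concurrent Java): a pool of two worker threads shares six small tasks, and the order in which the tasks print can change from one run to the next.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentGreeter {

    public static void main(String[] args) throws InterruptedException {
        // A pool of two worker threads (think: two cores) shares six tasks.
        ExecutorService executor = Executors.newFixedThreadPool(2);

        for (int i = 1; i <= 6; i++) {
            int taskId = i;
            // Tasks may start, pause, and finish in an unpredictable order.
            executor.submit(() ->
                    System.out.println("Task " + taskId + " ran on " +
                            Thread.currentThread().getName()));
        }

        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
    }
}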

Concurrent vs. Parallel

Naturally, you may wonder if the execution of your software can be both concurrent and parallel. The short answer is yes, although this is rarely intentional.

Let's compare all three types of situations:

Parallel execution runs multiple tasks at the same time; concurrent execution runs multiple tasks in chunks. Concurrent execution can be parallel, but not the other way around!
Parallel vs. concurrent vs. concurrent-and-parallel execution

If you look at the diagram above, you can see that the concurrent and parallel scenario is just a special case of the concurrent one, where the ordering happens to be equivalent to parallel execution.

Defining a Method for Optimizing Your Code

As you've seen, there are benefits to writing serial software that allows you to follow each step. If you plan to write a program that makes good use of your underlying hardware, be sure that the benefit outweighs the cost of a less deterministic solution.

Step 1. Consider the Approaches You Can Use to Optimize Your Code

Are you going to write your solution to be concurrent or parallel?

There is no right answer, as it comes down to personal preference and the problem you are trying to solve. A good rule of thumb is that if your program can be broken into smaller tasks that can work away without having to wait too long for one another, you should probably go with concurrency.

In most cases, when scaling up Java programs, developers tend to pick concurrency as their go-to paradigm, mostly since Java is so good at it!

Step 2. Deconstruct Your Program's Task so That It Can Work on Multiple Cores

Do you have large tasks, or can you break your code into smaller ones? This comes down to the coding solution that makes the most sense for solving your particular problem.

Concurrent solutions perform better than parallel solutions when the problem can be deconstructed into many small tasks that can be solved together. Think about your problem and focus on what needs to happen for its hard tasks to be made simpler.

It's like recognizing that you need local voting stations to avoid waiting almost 32 years for a vote count.
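As an illustration of that deconstruction, here's a sketch using Java's parallel streams (one of several ways to split work across cores): each chunk of the input is summed on its own core, like a local voting station, and the subtotals are combined automatically.

import java.util.Arrays;

public class ParallelSummer {

    public static void main(String[] numbers) {
        // The stream splits the array into chunks, sums each chunk on its
        // own core, and combines the subtotals into the final result.
        int sum = Arrays.stream(numbers)
                .parallel()
                .mapToInt(Integer::parseInt)
                .sum();

        System.out.println("Sum: " + sum);
    }
}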

Step 3. Identify if There Is a Benefit in Optimizing Your Code

Sure, you might be able to modify your program to use more than one core, but before you start to complicate or over-engineer it, you need to be sure that doing so is worth the extra risk. What benefit are you getting from making your program more efficient? Consider each design option in turn.

Be sure that the extra work and complexity involved in making your code efficient are worth the effort. If they aren't, then don't even start! To help you remember this, here is a famous saying by one of the founding fathers of computer science, Donald Knuth.

Premature optimization is the root of all evil, or at least most of it.

When you improve the efficiency of code that may not need to be made more efficient, it's called premature optimization and generally results in an unnecessarily complicated solution; if that isn't the root of all evil, what is?

Knuth, who has himself optimized several algorithms and won the Turing Award, reminds us that since we write software as part of creating a product, we should acknowledge the risk in the cost and time invested in optimizing or speeding up code.

Step 4. Implement It

Now that you've designed and selected your solution, it's time to implement it!

That’s easier said than done. How does this methodology work in practice, in a real situation? 

Great question. Actually, that’s what you'll learn in the next chapter. 😎

Let's Recap!

  • Serial processing is doing tasks one after another, in a series.

  • Parallelism is when two or more tasks start and end together.

  • Concurrency is when actions are broken up and mixed up so they may stop, start, and run in unpredictable orders.

  • Concurrent programs can also be parallelized, though usually by accident. 

  • Parallel and concurrent programs are made possible by having multiple cores within processors. Each core can manage one or more tasks, and multiple cores can run at the same time.

In the next chapter, we will execute tasks in parallel within separate JVM processes using ProcessBuilder.

Additional resources:

Check out the book The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications by Clay Breshears (O'Reilly, 2009).
