• 4 hours
  • Hard





Last updated on 2/21/22

Choose the Right Communication Style for Your Application

In our Dairy Air application, the communication between layers is very straightforward. We are using internal Java function calls. However, if we were to expand this application to run on multiple servers (for example, the database portion to run on a dedicated server or servers), what communication options do we have available?

Sending messages from one layer to another can take several forms. Some popular communication options are:

  • Synchronous 

  • Asynchronous 

  • Polling 

  • Callback

  • Message queuing/publish-subscribe 

We’ll take a look at the pros and cons of each type, and how we might use them.

Before we dive in, there are two terms you'll need to know:

  • Consumer: This is the side of the communication that makes the request. If you go to a restaurant and place your order, you are the consumer.

  • Producer (also called a provider): This is the side of the communication that responds to the consumer’s request. In our restaurant example, this is the waitperson. 

A producer in one situation may be a consumer in another. For example, after the waitperson takes our order, it is sent to the kitchen to be prepared. Sending this message to the kitchen makes our waitperson into a consumer for that part of the whole meal process.

Got it?  Now let's dive in.


Synchronous

Let’s start with a straightforward one: call and response. The consumer makes a request (“Can I have a glass of water?”), and the provider fulfills it (“Here is your glass of water.”). The consumer can’t do anything until the producer fulfills the request. (Yes, the person could continue a conversation with their dinner guest, but the point is they can’t drink any water until the producer brings it to the table.)

This is how Dairy Air is currently implemented. All the internal function calls work this way. And, all of our external endpoints work this way as well. The biggest advantage of this approach is simplicity. The request is made, and the consumer waits for the response.

But what if something goes wrong along the way? Internally, we could throw and catch exceptions. But what about our external customers? They might be waiting a long time if we didn’t send back an error response. Therefore, the major disadvantage of this approach is waiting. The consumer can’t do anything else while the request is being processed.  
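The blocking call-and-response pattern can be sketched in a few lines of Java. The `ReservationService` class and its `book` method below are hypothetical stand-ins for Dairy Air's internal calls, not the actual application code:

```java
// Minimal sketch of a synchronous (blocking) call.
// ReservationService and book() are hypothetical examples.
public class SyncDemo {
    static class ReservationService {
        // The producer: does the work and returns a result, or throws
        // (the internal equivalent of sending back an error response).
        String book(String reservation) {
            if (reservation == null || reservation.isEmpty()) {
                throw new IllegalArgumentException("no reservation given");
            }
            return "CONFIRMED: " + reservation;
        }
    }

    public static void main(String[] args) {
        ReservationService producer = new ReservationService();
        // The consumer blocks on this line until the producer returns.
        String result = producer.book("DA-101");
        System.out.println(result);
    }
}
```

Notice that the consumer's next line of code simply cannot run until `book` returns: that waiting is exactly the disadvantage described above.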


Asynchronous

So, what if the consumer needs to do other things in the meantime? In this situation, the consumer makes the request but continues doing something else while waiting.

Within Java, we can solve this problem by launching requests in separate threads. For our small application, this is a bit of overkill. Our consumer thread doesn’t have anything else to do. But what happens when we deploy this to a web server, and we get multitudes of requests simultaneously?

At that point, it would be worth investigating a multi-threaded approach to handle all of the incoming requests. Spring Boot is doing this behind the scenes. Each request gets its own thread. So it is synchronous per request. 
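One way to launch a request on a separate thread in Java is `CompletableFuture`, which draws a thread from a common pool. The `slowLookup` method below is a hypothetical stand-in for a slow producer:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    // Hypothetical slow producer: simulates a request that takes a while.
    static String slowLookup(String id) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "result for " + id;
    }

    public static void main(String[] args) {
        // The request runs on a separate thread from the common pool.
        CompletableFuture<String> future =
            CompletableFuture.supplyAsync(() -> slowLookup("DA-101"));

        // The consumer is free to do other work in the meantime.
        System.out.println("consumer keeps working...");

        // join() blocks only at the point the result is finally needed.
        System.out.println(future.join());
    }
}
```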


Polling

But if the consumer is doing other things instead of waiting, we have a problem: how does the provider indicate that the result is now ready? We need to introduce a little more coordination between the consumer and the provider.

In this situation, the consumer continues to ask the provider if the information is ready. A separate call is made to the API to check on how it’s going. If you have ever driven with a small child in a car, you are familiar with this approach: “Are we there yet?” Over and over.

This is a simple option to implement. The majority of coordination is on the consumer side. It just keeps asking to see if the provider is finished. The tricky part is if multiple consumers are making similar requests. The provider needs to give the consumer an ID or token. The consumer then passes this token back to the provider, so it can see if that request is finished.

It's kind of like the coat check at a fancy theater: you give them your coat and get a card back with a number on it. At the end of the show, handing the card back lets the attendant know which coat is yours.

The drawback of this approach is resources. There can be a lot of asking if you are “there yet” before the answer is "yes." Each question (and response) takes bandwidth (networking, CPU, etc.)  from both sides of the interaction. So how can we limit the number of interactions?
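The token-based polling loop can be sketched as follows. The `Provider` class here is hypothetical: `submit` hands out a token (the coat-check card) and finishes the work on a background thread, and `poll` returns `null` until the result is ready:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class PollingDemo {
    // Hypothetical provider: hands out a token, finishes the work later.
    static class Provider {
        private final Map<String, String> results = new ConcurrentHashMap<>();

        String submit(String request) {
            String token = UUID.randomUUID().toString();
            new Thread(() -> {
                try {
                    Thread.sleep(50); // simulate slow work
                } catch (InterruptedException e) {
                    return;
                }
                results.put(token, "done: " + request);
            }).start();
            return token; // the numbered coat-check card
        }

        // null means "not there yet"
        String poll(String token) {
            return results.get(token);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Provider provider = new Provider();
        String token = provider.submit("book DA-101");

        String result;
        while ((result = provider.poll(token)) == null) { // "are we there yet?"
            Thread.sleep(10);
        }
        System.out.println(result);
    }
}
```

Every pass through the `while` loop is one more "are we there yet?" exchange, which is precisely the bandwidth cost described above.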


Callback

A more efficient approach is to stop the consumer from asking. In this approach, a single request is made, and when the provider finishes, it calls the consumer back. Only two messages are sent (the request and the response), which significantly reduces the bandwidth.

However, there needs to be one more communication path opened up. The provider needs some way to call back the consumer. 

Therefore, the consumer needs to pass some mechanism as a parameter in the original message. In our endpoint approach, the consumer’s endpoint can be sent in the JSON message. The producer needs to save this endpoint, so it knows where to call when the response is ready.

Also, the consumer has no idea when the provider will send the result. The consumer will need to dedicate a resource (like a thread) to wait and listen for the response.

Similar to polling, there is some coordination needed. Many requests may have been sent to the provider by the consumer. When a result comes back, the consumer needs to know which request is being responded to. Again, an ID or token approach is used in this case.
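In plain Java, the "mechanism passed as a parameter" can simply be a function the provider invokes when it finishes; in the HTTP setting it would be the consumer's endpoint from the JSON message. The `bookAsync` method and the `"req-42"` request ID below are hypothetical:

```java
import java.util.function.Consumer;

public class CallbackDemo {
    // Hypothetical provider: takes the request plus a callback to invoke
    // when the result is ready. The requestId is the token that tells the
    // consumer which of its outstanding requests is being answered.
    static void bookAsync(String reservation, String requestId,
                          Consumer<String> callback) {
        new Thread(() -> {
            String result = "CONFIRMED: " + reservation;
            // Exactly one call back, tagged with the original request's ID.
            callback.accept(requestId + " -> " + result);
        }).start();
    }

    public static void main(String[] args) throws InterruptedException {
        bookAsync("DA-101", "req-42", result -> System.out.println(result));
        Thread.sleep(200); // keep the JVM alive long enough for the callback
    }
}
```

The `Thread.sleep` at the end stands in for the dedicated resource mentioned above: something on the consumer side has to stay alive and listening until the callback arrives.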

Message Queuing, Publish/Subscribe

What about when many consumers are interested in the result of a provider’s response to another consumer? Or maybe the other way around: a consumer submits a request but doesn’t care which provider fulfills it, as long as it’s handled quickly. For example, I don’t care which cook makes my food; whoever isn’t busy should handle it.

In this approach, there is a central queue (or holder) of all requests, and similarly a queue of all responses. A controller makes sure all the messages (requests and responses) go where they need to.

The big advantage of this approach is efficiency. Messages are only sent when needed, and only stored as long as necessary. At a medical data company I worked for, we had several publish/subscribe queues. Some were set up to process within minutes, deleting sent messages immediately; others were configured to keep messages for a month.

For the Dairy Air application, this is more horsepower than we need. However, if requests for reservations were to really “take off,” we could use this approach to help distribute the load across multiple servers.
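Production systems use a third-party broker for this, but the core idea — a central request queue that any free worker can pull from — can be sketched with Java's built-in `BlockingQueue`. The "cooks" and orders below are hypothetical, following the restaurant analogy:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    // What each "cook" does with an order it takes off the queue.
    static String handle(int cook, String order) {
        return "cook " + cook + " made " + order;
    }

    public static void main(String[] args) throws InterruptedException {
        // Central holders of all requests and all responses.
        BlockingQueue<String> requests = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> responses = new ArrayBlockingQueue<>(10);

        // Two providers (cooks) compete for work from the same queue;
        // whichever is free takes the next order.
        for (int i = 1; i <= 2; i++) {
            final int cook = i;
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        responses.put(handle(cook, requests.take()));
                    }
                } catch (InterruptedException e) {
                    // interrupted: shut down quietly
                }
            });
            worker.setDaemon(true); // let the JVM exit when main finishes
            worker.start();
        }

        // Consumers drop requests without caring which cook handles them.
        requests.put("pancakes");
        requests.put("omelette");

        System.out.println(responses.take());
        System.out.println(responses.take());
    }
}
```

A real broker (such as a publish/subscribe messaging system) adds the pieces this sketch omits: persistence, delivery across servers, and message-retention policies like the minutes-versus-a-month configurations mentioned above.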

So depending on the resources available and the speed of responses, you have several options to handle messaging between layers and between your system and external customers.

Let's Recap! 

  • There are multiple options for managing communication between layers, most of which involve consumers and providers. 

    • Synchronous calls to a provider can be implemented with REST.

    • Asynchronous mechanisms should be used if the provider takes a long time to process.

      • Polling is easy to implement, but potentially slow.

      • Callbacks are efficient.

      • Message queuing, or publish/subscribe, is very efficient, but usually requires a third-party implementation to handle all the details.

Now that you've got a handle on communication, let's wrap up the course!
