Hi random reader!
I’m glad you chose to read this article about the fundamentals of asynchronous programming. Along with many interesting topics, we’ll cover the general idea and computational model of async programming in Java. In future articles we’ll dive deep into a framework called Vert.x. To make the topic clearer we’ll walk through several practical examples, written mostly in Java.
Apart from many interesting facts and details, the main questions this article answers are:
- What is async Java programming, and how does it work?
- When should you use async Java programming?

What is asynchronous programming?

So what is this reactive and async programming all about? What are its pros and cons? Who uses it?
As with everything in life, to understand something deeply and answer all the questions around it, you should dig into its past, its history.
So let’s first review Java’s traditional threading model, the original threading solution for Java web applications, in the hope that it will help us better understand the state of modern Java threading models.

It’s a widely known fact that the traditional threading model for server-side Java web applications was the “thread per request” model. This mechanism is implemented with the regular Socket API and embodies a simple idea: each request from a unique user is served in its own thread. (We should mention that a typical implementation also involves a thread pool, so there is a limit on how many threads can be allocated, to avoid exhausting server resources.)
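A minimal sketch of this model might look like the following. The class and handler names are my own for illustration; the essentials are a `ServerSocket` accept loop and a bounded thread pool that serves each connection on its own thread. A tiny in-process client is included so the example is self-contained.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

public class ThreadPerRequestServer {
    public static void main(String[] args) throws Exception {
        // Bounded pool: the typical safeguard so a flood of requests
        // cannot exhaust server resources with unlimited threads.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        try (ServerSocket server = new ServerSocket(0)) { // 0 = pick a free port
            int port = server.getLocalPort();

            // Simulated client, so we can run the whole example in one process.
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", port);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()))) {
                    System.out.println("client got: " + in.readLine());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            client.start();

            // Accept loop (one iteration here): each accepted connection
            // is handed off to its own pool thread.
            Socket connection = server.accept();
            pool.submit(() -> {
                try (PrintWriter out = new PrintWriter(
                        connection.getOutputStream(), true)) {
                    out.println("handled by " + Thread.currentThread().getName());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });

            client.join();
        }
        pool.shutdown();
    }
}
```

Note that the thread handling the connection would normally block on reads and writes for the whole lifetime of the request, which is exactly the property the rest of this article questions.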

The idea seems interesting and sufficient at first sight. But let’s consider edge cases and IO-intensive workloads. What if too many requests arrive at the server, say millions? Will millions of threads be created?

Millions of requests? Huh... not long ago, that many requests to a single server were simply unimaginable. But in the modern world, where everything and everyone is connected to the internet, this very idea has become reality.
Thread per request seemed very reasonable in the recent past, but when unimaginable cases become reality, new solutions must emerge.

As a popular quote says: “Modern problems require modern solutions”

Nowadays, web applications need to scale and serve thousands or even millions of requests every minute. Let’s agree that the original “thread per request” solution won’t work here, since I doubt anyone wants millions of threads on their servers. The real reason too many threads are suboptimal is that creating and operating threads carries overhead.
This overhead shows up in several areas:

  • Memory — each thread needs additional memory for its stack and the metadata used to manage it.
  • Scheduling — for the server, more threads mean more scheduling work to be done, whether it’s OS threads with OS scheduling or JVM threads with the JVM’s internal scheduling.
  • Context switching — no matter how many cores a real-world machine has, intensive context switching takes place when many threads operate at the same time. And let’s agree that’s definitely not optimal: instead of spending computational power on valuable computations and business logic, we waste it on deciding which thread to execute next.

To address these problems of the original threaded model, a new solution emerged, called the “event loop model”, well visualized in the next image:

Event loop

In contrast to the “thread per request” model, we don’t use a separate thread for each incoming request. We put incoming requests into a queue and have one “always on” thread called the event loop, or main thread.
This thread takes events from the queue one by one and executes the callback attached to each. If a million requests arrive at the server, all million of them are placed into the queue as request events (which we definitely can do, in contrast to creating a million threads), each with a callback attached, and the event loop executes those callbacks when their time comes. One special point we should definitely call out is that this concurrency model uses non-blocking IO sockets (in contrast to the blocking sockets of the “thread per request” model). That means at the edges of our application (edge: a module that communicates with the outside world), threads aren’t held idle waiting for some process outside the application to finish and respond.
For example, if our application sends a network request to a database to query some data, the application thread executing that piece of code doesn’t hang there waiting for the network call to return. It delegates the waiting to lower, kernel-level mechanisms and itself, at the user-space/process level, continues to process other events in the queue. When the database call finishes and sends its response back over the network, those kernel-level mechanisms notify our application, and the corresponding events/callbacks are placed into the event loop queue so they get processed in turn.
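The mechanics described above can be sketched in a few dozen lines. This is a deliberately tiny, hypothetical event loop (the class and method names are mine, not from any framework): one thread drains a queue of callbacks, and a simulated “database call” runs elsewhere, re-entering the loop only through its callback.

```java
import java.util.concurrent.*;

public class MiniEventLoop {
    // Queue of ready-to-run callbacks; the loop thread drains it one by one.
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    public void post(Runnable callback) { queue.add(callback); }

    public void stop() { post(() -> { running = false; }); }

    public void run() throws InterruptedException {
        while (running) {
            queue.take().run(); // one callback at a time, on a single thread
        }
    }

    public static void main(String[] args) throws Exception {
        MiniEventLoop loop = new MiniEventLoop();

        // Simulated non-blocking "database call": the waiting happens on
        // another executor, and only the callback re-enters the loop.
        CompletableFuture
                .supplyAsync(() -> "row-42") // stands in for kernel-level IO
                .thenAccept(result ->
                        loop.post(() -> {
                            System.out.println("db result: " + result);
                            loop.stop();
                        }));

        // Meanwhile the loop is free to process other queued events.
        loop.post(() -> System.out.println("handling other events meanwhile"));
        loop.run(); // blocks until stop() is posted
    }
}
```

The key property to notice: the loop thread never waits on the “database” itself, only on its own queue, so it stays available for other work while the IO is in flight.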
One very intuitive question may arise here.
If the “event loop” abstraction means only one thread, won’t this make our app slow, since only one set of instructions can be executed at a time?
The answer is somewhat specific to the use cases of async programming. We should remember that the event loop concurrency model is excellent for IO-intensive applications, where there are many CRUD operations, intensive socket communications, intensive network calls and so on. It is not meant for computationally intensive applications, like computing substrings and variations of huge genomic strings or processing huge arrays of data.

So, by premise, no CPU-intensive code should live in a callback, and each one should be executed by the event loop snap fast. Our code won’t be delayed at all, and server resources are conserved, since redundant threads aren’t being created and managed.
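We can demonstrate why this premise matters with a small, hypothetical experiment (the class name and the 200 ms figure are mine): a single-threaded executor stands in for the event loop thread, and one CPU-heavy callback delays everything queued behind it.

```java
import java.util.concurrent.*;

public class BlockedLoopDemo {
    public static void main(String[] args) throws Exception {
        // A single-threaded executor stands in for the event loop thread.
        ExecutorService loop = Executors.newSingleThreadExecutor();
        long start = System.nanoTime();

        // A CPU-heavy callback: busy-spins for ~200 ms, hogging the thread.
        loop.submit(() -> {
            long until = System.nanoTime() + 200_000_000L;
            while (System.nanoTime() < until) { /* simulates heavy computation */ }
        });

        // A quick callback queued right behind it: it measures how long it waited.
        Future<Long> delayed = loop.submit(() ->
                (System.nanoTime() - start) / 1_000_000);

        // The quick callback could not start until the heavy one finished,
        // so this prints a wait of at least ~200 ms.
        System.out.println("quick callback waited ~" + delayed.get() + " ms");
        loop.shutdown();
    }
}
```

On a real event loop the “quick callback” would be someone else’s request, which is exactly why CPU-heavy work should be kept off the loop thread.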


The general computational model described above as the “event loop model” is so widely used and popular that there is a whole design pattern associated with it: the Reactor pattern. I think getting to know this generic design pattern will benefit you in many ways in the future and help you fully understand all the details and mechanisms of the event loop concurrency model.


In this article we tried to familiarize you with the general concepts of async programming, without introducing any language-specific implementation details. As you probably know, the first runtime in modern software engineering to introduce and popularize asynchronous programming was Node.js. But our main language of focus will be Java and the frameworks/toolkits that make async programming on the JVM possible. In the next, follow-up articles we’ll start introducing one of the most popular and cool frameworks for reactive Java, called Vert.x. So stay tuned and let’s rock the async world together.

Software engineer. Clean code enthusiast.