Project Loom: Lightweight Java threads

Structured concurrency can help simplify multi-threading and parallel processing use cases and make them less fragile and more maintainable. On my machine, the process hung after 14_625_956 virtual threads but didn’t crash; as memory became available, it slowly kept going. This is because parked virtual threads are garbage collected, which lets the JVM create more virtual threads and assign them to the underlying platform threads. As noted above, virtual threads are not considered active threads in a thread group. Consequently, the thread lists returned by the JVM TI function GetThreadGroupChildren, the JDWP command ThreadGroupReference/Children, and the JDI method com.sun.jdi.ThreadGroupReference.threads() include only platform threads.
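That experiment is easy to reproduce at a smaller scale. The sketch below (JDK 21+; the class name and thread count are illustrative, far below the millions from the experiment above) launches a batch of virtual threads and waits for all of them to run:

```java
// Sketch: launching a large number of virtual threads (JDK 21+).
import java.util.concurrent.CountDownLatch;

public class ManyVirtualThreads {
    public static void main(String[] args) throws InterruptedException {
        int count = 100_000; // illustrative; the experiment above went into the millions
        CountDownLatch latch = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            // Each call creates and schedules a cheap virtual thread.
            Thread.startVirtualThread(latch::countDown);
        }
        latch.await(); // returns once every virtual thread has run
        System.out.println("all " + count + " virtual threads finished");
    }
}
```

On a typical machine this completes in well under a second; the same loop with platform threads would exhaust OS resources long before the count was reached.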

Java 21, the Next LTS Release, Delivers Virtual Threads, Record … – InfoQ.com

Posted: Tue, 19 Sep 2023 07:00:00 GMT [source]

Java, from its inception, has been a go-to language for building robust and scalable applications that can efficiently handle concurrent tasks. Concurrent programming is the art of juggling multiple tasks in a software application effectively. In the realm of Java, this means threading — a concept that has been both a boon and a bane for developers. Java’s threading model, while powerful, has often been considered too complex and error-prone for everyday use. Enter Project Loom, a paradigm-shifting initiative designed to transform the way Java handles concurrency.

Why are some Java calls blocking?

Another possible solution is the use of asynchronous concurrent APIs; CompletableFuture and RxJava are commonly used examples. Project Loom takes a different approach: instead of replacing threads, it gives the application a concurrency construct over Java threads to manage their work.
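For instance, a CompletableFuture pipeline hands each stage to an executor rather than blocking a thread between steps (the class name and values here are illustrative):

```java
// Sketch of the asynchronous style: compose stages with CompletableFuture
// instead of blocking a thread on each intermediate result.
import java.util.concurrent.CompletableFuture;

public class AsyncStyle {
    public static void main(String[] args) {
        CompletableFuture<Integer> total = CompletableFuture
                .supplyAsync(() -> 40)   // e.g. fetch a base price asynchronously
                .thenApply(p -> p + 2);  // e.g. add a surcharge when it arrives
        System.out.println(total.join()); // prints 42
    }
}
```

The cost of this style is that the control flow is inverted into callbacks, which is exactly the complexity virtual threads aim to remove.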

By embracing Project Loom, staying informed about its progress, and adopting best practices, you can position yourself to thrive in the ever-changing landscape of Java development. But before we dive into the intricacies of Project Loom, let’s first understand the broader context of concurrency in Java.

Getting Started With Project Loom

We’re exploring an alternative to ThreadLocal, described in the Scope Variables section. The primitive continuation construct is that of a scoped (AKA multiple-named-prompt), stackful, one-shot (non-reentrant) delimited continuation. To implement reentrant delimited continuations, we could make the continuations cloneable. Continuations aren’t exposed as a public API, as they’re unsafe (they can change Thread.currentThread() mid-method).

Previews are for features set to become part of the standard Java SE language, while incubation refers to separate modules such as APIs. The second of these stages is commonly the last development phase before incorporation as a standard under OpenJDK. Before looking more closely at Loom’s solution, it should be mentioned that a variety of approaches have been proposed for concurrency handling. Some, like CompletableFuture and non-blocking I/O, work around the edges of things by improving the efficiency of thread usage. Others, like RxJava (the Java implementation of the ReactiveX spec), are wholesale asynchronous alternatives.

How the current thread per task model works

Without structured concurrency, multi-threaded applications are more error-prone when subtasks are shut down or canceled in the wrong order, and harder to understand, he said. Loom is a newer project in the Java/JVM ecosystem (hosted by OpenJDK) that attempts to address limitations in the traditional concurrency model. In particular, Loom offers a lighter alternative to threads, along with new language constructs for managing them. Threads are lightweight sub-processes within a Java application that can be executed independently; they enable developers to perform tasks concurrently, enhancing application responsiveness and performance. Structured concurrency aims to simplify multi-threaded and parallel programming.


Why not “simply” use reactive programming for high-throughput Java applications? Well, per Java’s core team, the reactive paradigm is not in harmony with the rest of the Java platform and is not a natural way to write programs in Java. Hence the decision to implement virtual threads which, per Oracle, align with everything that currently exists in Java; going forward, they should be the number-one approach when building high-scale, thread-per-request style programs in Java.

On Project Loom, the Reactive model and coroutines

Virtual threads are not faster threads — they do not run code any faster than platform threads. They exist to provide scale (higher throughput), not speed (lower latency). There can be many more of them than platform threads, so they enable the higher concurrency needed for higher throughput, according to Little’s Law. The vast majority of blocking operations in the JDK will unmount the virtual thread, freeing its carrier and the underlying OS thread to take on new work.
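A rough way to see this (class name and numbers are illustrative; requires JDK 21+): start thousands of virtual threads that each block in a sleep. Because a parked virtual thread releases its carrier, the total wall-clock time stays close to a single sleep rather than the sum of all of them:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class ThroughputDemo {
    // Runs 10,000 blocking tasks on virtual threads; returns elapsed milliseconds.
    static long runDemo() {
        long start = System.nanoTime();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofMillis(100)); // parks; the carrier is freed
                        return i;
                    }));
        } // close() waits for all submitted tasks to finish
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("10,000 x 100 ms sleeps finished in ~" + runDemo() + " ms");
    }
}
```

Run sequentially, the same work would take over 1,000 seconds; on virtual threads it typically finishes in a small fraction of that, because the sleeps overlap rather than occupy OS threads.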


They are managed by the Java runtime and, unlike the existing platform threads, are not one-to-one wrappers of OS threads; rather, they are implemented in user space in the JDK. In the simplest terms, a virtual thread is not directly tied to a particular OS thread, while a platform thread is a thin wrapper around an OS thread. The primary driver of the performance difference between Tomcat’s standard thread pool and a virtual-thread-based executor is contention when adding and removing tasks from the thread pool’s queue. It is likely possible to reduce the contention in the standard thread pool queue, and improve throughput, by optimising the current implementations used by Tomcat. Servlet asynchronous I/O is often used to access some external service where there is an appreciable delay on the response.
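The distinction shows up directly in the thread builder API (JDK 21+; class and thread names are illustrative):

```java
// Sketch: the two thread flavours side by side (JDK 21+).
public class ThreadFlavours {
    public static void main(String[] args) throws InterruptedException {
        // A platform thread: a thin wrapper around one OS thread.
        Thread platform = Thread.ofPlatform()
                .name("platform-1")
                .start(() -> System.out.println("running on an OS thread"));

        // A virtual thread: scheduled by the JDK onto a pool of carriers.
        Thread virtual = Thread.ofVirtual()
                .name("virtual-1")
                .start(() -> System.out.println("running on a virtual thread"));

        platform.join();
        virtual.join();
        System.out.println(platform.isVirtual()); // false
        System.out.println(virtual.isVirtual());  // true
    }
}
```

Both flavours share the same `Thread` API, which is what lets most existing code run on virtual threads unchanged.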

Small-to-none change for developers

Existing JVM TI agents will mostly work as before, but may encounter errors if they invoke functions that are not supported on virtual threads. These errors will arise when an agent that is unaware of virtual threads is used with an application that uses them. The change to GetAllThreads to return an array containing only the platform threads may be an issue for some agents. Existing agents that enable the ThreadStart and ThreadEnd events may encounter performance issues, since they cannot limit these events to platform threads. Despite virtual threads performing more slowly than Kotlin’s coroutines, it is important to remember that the Project Loom code is very new and “green” compared to the Kotlin coroutine library.

Developers will typically migrate application code to the virtual-thread-per-task ExecutorService from a traditional ExecutorService based on thread pools. Thread pools, like all resource pools, are intended to share expensive resources, but virtual threads are not expensive and there is never a need to pool them. We also believe that ReactiveX-style APIs remain a powerful way to compose concurrent logic and a natural way of dealing with streams. We see virtual threads complementing reactive programming models by removing the barriers of blocking I/O, though processing infinite streams purely with virtual threads remains a challenge.
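A sketch of that migration, with both styles side by side (class name and pool size are illustrative; JDK 21+, where ExecutorService is AutoCloseable):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolMigration {
    public static void main(String[] args) {
        // Before: a fixed-size pool sharing expensive platform threads.
        try (ExecutorService pooled = Executors.newFixedThreadPool(8)) {
            pooled.submit(() -> System.out.println("task on a pooled platform thread"));
        }

        // After: one cheap virtual thread per task -- nothing to pool or size.
        try (ExecutorService perTask = Executors.newVirtualThreadPerTaskExecutor()) {
            perTask.submit(() -> System.out.println("task on its own virtual thread"));
        }
    }
}
```

Because both are plain `ExecutorService` instances, the migration is usually a one-line change at the point where the executor is created.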

Virtual threads

The Java runtime knows how Java code makes use of the stack, so it can represent execution state more compactly. Direct control over execution also lets us pick schedulers — ordinary Java schedulers — that are better-tailored to our workload; in fact, we can use pluggable custom schedulers. Thus, the Java runtime’s superior insight into Java code allows us to shrink the cost of threads. The implementation of the networking APIs in the java.net and java.nio.channels  packages have as been updated so that virtual threads doing blocking I/O operations park, rather than block in a system call, when a socket is not ready for I/O.

  • This places a hard limit on the scalability of concurrent Java apps.
  • An important note about Loom’s fibers is that whatever changes are required to the entire Java system, they are not to break existing code.
  • If a virtual thread is blocked by a delay in an I/O task, it won’t block the underlying platform thread, because virtual threads are managed by the Java runtime rather than the operating system.
  • Splitting the implementation the other way — scheduling by the OS and continuations by the runtime — seems to have no benefit at all, as it combines the worst of both worlds.
  • In addition, structured concurrency offers a more powerful API to create and manage virtual threads, particularly in code similar to this server example, whereby the relationships among threads are made known to the platform and its tools.
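The preview StructuredTaskScope API (JEP 453 in JDK 21) expresses those parent-child relationships directly. As a rough approximation using only stable APIs, invokeAll on a virtual-thread-per-task executor already scopes the subtasks’ lifetimes to the enclosing block (class and data names are illustrative):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StructuredSketch {
    public static void main(String[] args) throws Exception {
        try (ExecutorService scope = Executors.newVirtualThreadPerTaskExecutor()) {
            // Both subtasks run concurrently; invokeAll returns only when
            // both are done, so their lifetimes nest inside this block.
            List<Future<String>> results = scope.invokeAll(List.of(
                    (Callable<String>) () -> "user data",
                    (Callable<String>) () -> "order data"));
            for (Future<String> result : results) {
                System.out.println(result.get());
            }
        } // no subtask can outlive the enclosing scope
    }
}
```

The preview API goes further, adding policies such as ShutdownOnFailure that cancel sibling subtasks when one fails, which is the cancellation-ordering problem described above.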
