Vert.x Verticle Scaling
In this tutorial, we discuss Vert.x verticle scaling. In the previous tutorial we saw how to use verticles; now we are going to scale them.
Vert.x Verticle scaling allows you to deploy multiple instances of a verticle to handle higher loads and improve the concurrency of your application. This is especially useful in scenarios where a single verticle instance cannot handle all the incoming events or requests due to limited event loop capacity. Vert.x Verticle Scaling helps distribute the workload across multiple instances, each running on its own event loop thread.
How Vert.x Verticle Scaling Works
- Multiple Verticle Instances: You can deploy multiple instances of a verticle. Each instance will have its own execution context and will be assigned to a separate event loop thread.
- Load Balancing: Vert.x automatically load-balances incoming events (like HTTP requests or messages on the event bus) across multiple verticle instances.
- Concurrency: By deploying more instances, you increase the application’s ability to handle more concurrent events, as each instance runs on its own event loop.
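To make the load-balancing point concrete, here is a minimal sketch (the verticle class, address name, and `run` helper are invented for illustration, and Vert.x 4 is assumed on the classpath): several instances of the same verticle register a consumer on one event-bus address, and Vert.x distributes incoming messages across them.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class LoadBalanceSketch {

    // Each instance registers a consumer on the same (made-up) address.
    public static class EchoVerticle extends AbstractVerticle {
        @Override
        public void start() {
            vertx.eventBus().consumer("work.address", msg ->
                msg.reply("handled on " + Thread.currentThread().getName()));
        }
    }

    // Deploys 3 instances, sends `messages` requests, returns how many replies arrived.
    public static int run(int messages) throws InterruptedException {
        Vertx vertx = Vertx.vertx();
        CountDownLatch latch = new CountDownLatch(messages);
        AtomicInteger replies = new AtomicInteger();
        vertx.deployVerticle(EchoVerticle.class.getName(),
                new DeploymentOptions().setInstances(3))
            .onSuccess(id -> {
                for (int i = 0; i < messages; i++) {
                    vertx.eventBus().<String>request("work.address", "job-" + i)
                        .onSuccess(reply -> {
                            // Replies come from the different event-loop threads
                            // the instances were assigned to.
                            System.out.println(reply.body());
                            replies.incrementAndGet();
                            latch.countDown();
                        });
                }
            });
        latch.await(10, TimeUnit.SECONDS);
        vertx.close();
        return replies.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("replies: " + run(6));
    }
}
```

Running this should print replies tagged with more than one `vert.x-eventloop-thread-*` name, showing the requests were spread across instances.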
Key Concepts
- Event Loop Threads: Vert.x uses event loop threads to handle non-blocking tasks like I/O. Verticles deployed in an event loop share these threads. Typically, there is one event loop thread per CPU core.
- Worker Threads: Worker verticles run on separate worker threads and are used for handling blocking operations.
- Deployment Options: You can configure how many instances of a verticle should be deployed using
DeploymentOptions
.
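As a small illustration of the last two concepts (the class names and the `deployAndReportThread` helper are invented; Vert.x 4 is assumed), DeploymentOptions can mark a verticle as a worker so its start method runs on a worker thread, where blocking is acceptable:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class WorkerSketch {

    static final CompletableFuture<String> STARTED_ON = new CompletableFuture<>();

    public static class BlockingVerticle extends AbstractVerticle {
        @Override
        public void start() throws Exception {
            // On a worker thread, blocking calls like Thread.sleep are acceptable.
            Thread.sleep(100); // simulated blocking work
            STARTED_ON.complete(Thread.currentThread().getName());
        }
    }

    // Deploys the verticle as a worker and reports which thread ran start().
    public static String deployAndReportThread() throws Exception {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(BlockingVerticle.class.getName(),
            new DeploymentOptions().setWorker(true)); // run on the worker pool
        String thread = STARTED_ON.get(10, TimeUnit.SECONDS);
        vertx.close();
        return thread;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("started on " + deployAndReportThread());
    }
}
```

The reported thread name should look like `vert.x-worker-thread-0` rather than an event-loop thread, confirming the worker deployment.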
Returning to our previous verticles example, we will add a verticle that is deployed multiple times. To do this, we copy Verticle B and rename it to VerticleN. It has the same content as Verticle B, and we deploy it from the main verticle.
package com.ashok.vertx.vertx_starter.verticles;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;

/**
 *
 * @author ashok.mariyala
 *
 */
public class VerticleN extends AbstractVerticle {

    private static final Logger LOG = LoggerFactory.getLogger(VerticleN.class);

    @Override
    public void start(final Promise<Void> startPromise) throws Exception {
        LOG.debug("Start {}", getClass().getName());
        startPromise.complete();
    }
}
So let’s open the main verticle class and deploy VerticleN multiple times. After the deployment of Verticle B, we add a new line:
vertx.deployVerticle(VerticleN.class.getName(),
    new DeploymentOptions().setInstances(4));
Notice that we pass the name of VerticleN rather than creating a new instance. Because we are deploying multiple instances, we cannot pass a single object; instead we pass the class name, and Vert.x creates the instances internally. So when deploying multiple instances, make sure to pass the name as the first parameter.
As the second parameter, we define some deployment options. One setting of DeploymentOptions is the number of instances, which we set to 4, meaning VerticleN is deployed 4 times. Why do this? One reason is to utilize our resources as well as possible. Say you have a verticle that needs a lot of CPU and a machine with 4 cores: to use the full power of the CPU, it makes sense to run 4 instances of that verticle. But be careful not to run all your verticles multiple times, as the instances would compete for resources. Think about where you actually need the concurrency and add more instances there.
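The rule of thumb above (one instance per core for a CPU-bound verticle) can be expressed directly, since the JVM reports the available core count. This is a sketch: `CpuVerticle` and `recommendedInstances` are invented placeholders, and Vert.x 4 is assumed on the classpath.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class CoreScaledDeploy {

    // Placeholder for any CPU-heavy verticle (stands in for VerticleN).
    public static class CpuVerticle extends AbstractVerticle {
        @Override
        public void start() {
            System.out.println("started on " + Thread.currentThread().getName());
        }
    }

    // One instance per available core is a common starting point for CPU-bound work.
    public static int recommendedInstances() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(CpuVerticle.class.getName(),
            new DeploymentOptions().setInstances(recommendedInstances()));
    }
}
```

Treat the core count as a starting point, not a rule: measure under load and adjust, since other verticles and the OS also compete for the same cores.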
Now let’s start our example to see it in action.
package com.ashok.vertx.vertx_starter.verticles;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Promise;
import io.vertx.core.Vertx;

/**
 *
 * @author ashok.mariyala
 *
 */
public class MainVerticle extends AbstractVerticle {

    private static final Logger LOG = LoggerFactory.getLogger(MainVerticle.class);

    public static void main(String[] args) {
        final Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new MainVerticle());
    }

    @Override
    public void start(final Promise<Void> startPromise) throws Exception {
        LOG.debug("Start {}", getClass().getName());
        vertx.deployVerticle(new VerticleA());
        vertx.deployVerticle(new VerticleB());
        vertx.deployVerticle(VerticleN.class.getName(),
            new DeploymentOptions().setInstances(4));
        startPromise.complete();
    }
}
5:33:16 pm: Executing ':MainVerticle.main()'...
> Task :compileJava
> Task :processResources NO-SOURCE
> Task :classes
> Task :MainVerticle.main()
Start com.ashok.vertx.vertx_starter.verticles.MainVerticle
Start com.ashok.vertx.vertx_starter.verticles.VerticleA
Start com.ashok.vertx.vertx_starter.verticles.VerticleB
Start com.ashok.vertx.vertx_starter.verticles.VerticleAA
Start com.ashok.vertx.vertx_starter.verticles.VerticleAB
Deployed com.ashok.vertx.vertx_starter.verticles.VerticleAA
Start com.ashok.vertx.vertx_starter.verticles.VerticleN
Start com.ashok.vertx.vertx_starter.verticles.VerticleN
Start com.ashok.vertx.vertx_starter.verticles.VerticleN
Start com.ashok.vertx.vertx_starter.verticles.VerticleN
Stop com.ashok.vertx.vertx_starter.verticles.VerticleAA
Deployed com.ashok.vertx.vertx_starter.verticles.VerticleAB
We deployed 4 instances of VerticleN, so we now also see the start output 4 times. So Vert.x indeed deployed multiple instances of VerticleN for us. Let’s also see how this affects the threading. For this we go into VerticleN and add the current thread name to the log statement.
package com.ashok.vertx.vertx_starter.verticles;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;

/**
 *
 * @author ashok.mariyala
 *
 */
public class VerticleN extends AbstractVerticle {

    private static final Logger LOG = LoggerFactory.getLogger(VerticleN.class);

    @Override
    public void start(final Promise<Void> startPromise) throws Exception {
        LOG.debug("Start {} on thread {}", getClass().getName(), Thread.currentThread().getName());
        startPromise.complete();
    }
}
Now we run the application again.
5:33:16 pm: Executing ':MainVerticle.main()'...
> Task :compileJava
> Task :processResources NO-SOURCE
> Task :classes
> Task :MainVerticle.main()
Start com.ashok.vertx.vertx_starter.verticles.MainVerticle
Start com.ashok.vertx.vertx_starter.verticles.VerticleA
Start com.ashok.vertx.vertx_starter.verticles.VerticleB
Start com.ashok.vertx.vertx_starter.verticles.VerticleAA
Start com.ashok.vertx.vertx_starter.verticles.VerticleAB
Deployed com.ashok.vertx.vertx_starter.verticles.VerticleAA
Start com.ashok.vertx.vertx_starter.verticles.VerticleN on thread vert.x-eventloop-thread-6
Start com.ashok.vertx.vertx_starter.verticles.VerticleN on thread vert.x-eventloop-thread-8
Start com.ashok.vertx.vertx_starter.verticles.VerticleN on thread vert.x-eventloop-thread-5
Start com.ashok.vertx.vertx_starter.verticles.VerticleN on thread vert.x-eventloop-thread-4
Stop com.ashok.vertx.vertx_starter.verticles.VerticleAA
Deployed com.ashok.vertx.vertx_starter.verticles.VerticleAB
Now we again see the start message 4 times, and notice that it runs on 4 different threads: the ones with IDs 6, 8, 5, and 4. As we have seen, this is a fairly straightforward way to achieve concurrency while keeping thread safety inside each verticle instance.
Benefits of Vert.x Verticle Scaling
- Increased Concurrency: More instances means more concurrent events can be handled, improving throughput and responsiveness.
- Load Distribution: Vert.x automatically distributes load across instances, ensuring that no single instance becomes a bottleneck.
- Parallelism: By deploying more instances, Vert.x can make better use of the available CPU cores, maximizing parallelism.
When to Scale Verticles
- When you have high concurrent load, such as many incoming HTTP requests.
- When one verticle instance can’t handle the load due to the volume of events or requests.
- When you want to optimize CPU usage by using more event loop threads to handle work.
Considerations When Scaling Verticles
- Shared State: Verticles should generally avoid sharing state (i.e., shared variables) since each instance runs on its own event loop. If you need shared state, you can use Vert.x’s SharedData (e.g., LocalMap, ClusterWideMap) or an external data store (e.g., Redis, a database).
- Thread Safety: Each verticle instance is single-threaded, but if you scale to multiple instances, be cautious with thread safety, especially when dealing with shared resources.
- Resource Allocation: Deploying too many instances can lead to excessive resource use, such as high CPU or memory consumption. Monitor your application’s performance and adjust accordingly.
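To illustrate the shared-state point, here is a sketch using Vert.x’s SharedData LocalMap (the map name, key, and helper method are invented; Vert.x 4 is assumed): each scaled instance updates the same map safely, instead of sharing a plain field.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.shareddata.LocalMap;

public class SharedStateSketch {

    public static class CountingVerticle extends AbstractVerticle {
        @Override
        public void start() {
            // LocalMap is safe to use from multiple verticle instances in one JVM.
            LocalMap<String, Integer> stats = vertx.sharedData().getLocalMap("stats");
            stats.compute("deployed", (k, v) -> v == null ? 1 : v + 1);
        }
    }

    // Deploys `instances` copies, then reads back how many recorded themselves.
    public static int deployAndCount(int instances) throws Exception {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(CountingVerticle.class.getName(),
                new DeploymentOptions().setInstances(instances))
            .toCompletionStage().toCompletableFuture().get(); // wait for all starts
        int count = vertx.sharedData().<String, Integer>getLocalMap("stats")
            .getOrDefault("deployed", 0);
        vertx.close();
        return count;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("instances that started: " + deployAndCount(4));
    }
}
```

LocalMap only covers instances in the same JVM; for instances spread across a cluster you would reach for the cluster-wide async maps or an external store, as noted above.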
Summary
- Vert.x allows scaling verticles to handle higher concurrency and distribute load across multiple instances.
- You can use DeploymentOptions to specify the number of instances to deploy.
- Worker verticles can also be scaled to handle blocking tasks concurrently.
- Be cautious of shared state and thread safety when scaling verticles.
- Proper scaling, monitoring, and tuning will help optimize the performance of your Vert.x application.
By scaling verticles, you can make full use of the non-blocking, asynchronous nature of Vert.x and build high-performance, reactive systems that handle a large number of concurrent requests efficiently.
That’s all about Vert.x verticle scaling with an example. If you have any queries or feedback, please email us at contact@waytoeasylearn.com. Enjoy learning, enjoy Vert.x tutorials..!!