2 posts tagged with "concurrency"

JavaScript developers may feel at home with Rust's concurrency model, which also uses async/await. In JavaScript, a Promise represents the eventual result of an async operation; in Rust, the equivalent is a Future.

The following example shows async/await syntax in JavaScript and in Rust:


```javascript
async function main() {
  await greet("Hi there!");
}

async function greet(message) {
  console.log(message);
}

main();
/* Output:
Hi there!
*/
```

```rust
#[tokio::main]
async fn main() {
    greet("Hi there!").await;
}

async fn greet(message: &str) {
    println!("{}", message);
}
/* Output:
Hi there!
*/
```

Currently, Rust does not ship with an async runtime out of the box; our examples use the Tokio runtime.
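To run the Rust example above, Tokio must be added as a dependency. A minimal `Cargo.toml` entry might look like this (the `"full"` feature set is a convenient catch-all that enables, among other things, the multi-threaded runtime and the `#[tokio::main]` macro):

```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
```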

Let's imagine a web server that uses multi-threading as its concurrency approach, with a fixed thread pool of 100 threads. Suppose the server's request handler sleeps for 2 seconds on every 10th request: how many requests per second can this server handle?

This question persisted at the back of my mind as I was reading Deepu's series of posts on concurrency in modern programming languages. In that series, Deepu ran several web server benchmarks with the same 2-second delay on every 10th request, and the resulting requests-per-second figures appeared to be capped at a fixed ceiling. This post explores the factors affecting the maximum requests per second that a multi-threaded web server can handle.
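As a first approximation, the ceiling can be sketched with a Little's-law style calculation, under the simplifying assumption (mine, not a figure from the benchmarks) that the non-sleeping requests take roughly zero time, so the 2-second sleep dominates the average service time:

```rust
// Back-of-envelope throughput ceiling for the scenario above.
// Assumption: fast requests finish in ~0 s, so only the sleep matters.
fn main() {
    let threads: f64 = 100.0;   // fixed thread pool size
    let slow_every: f64 = 10.0; // every 10th request sleeps
    let sleep_secs: f64 = 2.0;  // duration of the sleep

    // Average service time per request: 2 s spread over 10 requests = 0.2 s
    let avg_service_secs = sleep_secs / slow_every;

    // Little's law: max throughput = concurrency / average service time
    let max_rps = threads / avg_service_secs;
    println!("theoretical ceiling: {} requests/second", max_rps); // prints 500
}
```

Whether real benchmarks actually reach this figure depends on further factors, which is exactly what the rest of the post digs into.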