As part of the Digital Ocean Kubernetes Challenge, I deployed the Elasticsearch, Fluentd and Kibana (EFK) stack for log analytics. It was my first time deploying a StatefulSet and a DaemonSet, and I encountered several challenges along the way, which gave me the opportunity to practice debugging Kubernetes issues.
How actix-web's application state and Data extractor works internally
When developing web servers, we sometimes need a mechanism to share application state, such as configurations or database connections. actix-web makes it possible to inject shared data into application state using the app_data method, and to retrieve and pass the data to route handlers using the Data extractor.
Let's explore just enough to understand how it works under the hood!
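Before digging into the internals, here is a minimal sketch of the pattern the post examines. The AppState struct, its app_name field, and the route are illustrative placeholders; app_data and the Data extractor are the pieces we will unpack.

use actix_web::{web, App, HttpServer, Responder};

// Illustrative shared state; in practice this might hold configuration
// or a database connection pool.
struct AppState {
    app_name: String,
}

// The handler receives the shared state through the Data<T> extractor.
async fn index(data: web::Data<AppState>) -> impl Responder {
    format!("Hello from {}!", data.app_name)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // app_data stores the wrapped state in the application.
            .app_data(web::Data::new(AppState {
                app_name: "my-app".to_string(),
            }))
            .route("/", web::get().to(index))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}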
One year as a software engineer
One year has passed since I started my first job as a software engineer. The one-year milestone is significant because I've dreamed of building software as a career since I was a teenager. As a teenager, I developed a fascination with how computers worked. I began learning to code after my uncle passed me a copy of The C Programming Language book by Brian Kernighan and Dennis Ritchie. At that young age, I set my mind on pursuing Computing in university. But things did not pan out as my younger self had expected.
Lazy async operations in Rust
JavaScript developers may feel at home with Rust's concurrency model, which uses the async/await concept. In JavaScript we have Promise to represent eventual results of async operations, and in Rust we have Future.
The following shows an example of async and await syntax in JavaScript and Rust code:
JavaScript
async function main() {
  await greet("Hi there!");
}

async function greet(message) {
  console.log(message);
}

/* Output:
Hi there! */
Rust
#[tokio::main]
async fn main() {
    greet("Hi there!").await;
}

async fn greet(message: &str) {
    println!("{}", message);
}

/* Output:
Hi there! */
Currently, Rust does not include an async runtime out of the box. We use the Tokio runtime in our examples.
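One difference the post focuses on is hinted at by the title: unlike a JavaScript Promise, which starts running as soon as it is created, a Rust Future does nothing until it is awaited (or otherwise polled). The sketch below reuses the greet function from above to illustrate this; the extra println! is only there to make the ordering visible.

#[tokio::main]
async fn main() {
    // Calling greet only constructs a Future; nothing runs yet.
    let pending_greeting = greet("Hi there!");

    println!("future created, but nothing printed yet");

    // The body of greet executes only once the future is awaited.
    pending_greeting.await;
}

async fn greet(message: &str) {
    println!("{}", message);
}

/* Output:
future created, but nothing printed yet
Hi there! */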
Effects of thread pool size on concurrency
Let's imagine we have a web server that uses multi-threading as its concurrency approach with a fixed thread pool size of 100. Supposing that the server's request handler sleeps for 2 seconds for every 10th request, how many requests can this server handle every second?
This question persisted at the back of my mind as I was reading Deepu's series of posts on concurrency in modern programming languages. In his series, Deepu performed several web server benchmarks with the same 2-second delay for every 10th request, and the resulting requests-per-second figures appeared to be capped at a fixed ceiling. This post explores the factors affecting the maximum requests per second that a multi-threaded web server can handle.
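As a rough back-of-the-envelope sketch of the question above (assuming the non-delayed requests take negligible time, an assumption the post itself examines more carefully), the ceiling can be estimated like this:

fn main() {
    // Scenario from the question above; these are illustrative inputs,
    // not measured results.
    let pool_size: f64 = 100.0;       // worker threads
    let slow_every_nth: f64 = 10.0;   // every 10th request is slow
    let slow_secs: f64 = 2.0;         // each slow request sleeps 2 seconds

    // Each slow request pins a thread for slow_secs, so at most
    // pool_size / slow_secs slow requests can complete per second.
    let slow_rps = pool_size / slow_secs;

    // If the other requests finish almost instantly, every completed slow
    // request corresponds to roughly slow_every_nth total requests.
    let max_rps = slow_rps * slow_every_nth;

    println!("estimated ceiling: {} requests/second", max_rps); // 500
}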
Evaluating performance on Android devices
Performance refers to a device’s ability to swiftly accomplish a task that matters to the user. When talking about performance, users might think of speedy app launches, high benchmark scores, a smooth gaming experience, or that innate (but hard to describe) feeling of the device not holding them back during usage.