Akka Innovations: Multi-Cloud, Edge, and Security Enhancements

Jonas Bonér, Founder, CTO, and Chairman of the Board, Lightbend, Inc.

We promised to continue investing in Akka and pushing the envelope even further, and looking back on our accomplishments over the last few years makes me proud. Building on the foundation we have created over the previous 15 years, and on our experience helping thousands of companies build mission-critical systems on Akka, our main theme has been extending Akka’s capabilities to the full cloud-to-edge continuum: first-class support for multi-region, multi-cloud, and edge. This also means security and building zero-trust systems, because moving to a more decentralized architecture, with data and processing served from local geographic regions closer to end users, exposes a system to more security threats and to more of the legal restrictions that govern data.

In the quest to meet customers where they are and to satisfy their growing demands for low latency, resilience, and data mobility, companies are inevitably pushed toward global multi-region and multi-cloud deployments, decentralization, locally served data and compute, and efficient, dynamic, and automatic replication of distributed data. The result is applications that gracefully span the entire continuum of on-premise, cloud, multi-cloud, and edge environments. The new features in Akka are aimed at tackling these challenges, exploiting the opportunities that come with them, and laying the foundation for building inherently decentralized, highly performant, resilient, elastic, and scalable applications that thrive in this new and rapidly changing world. What we have accomplished is a testament to Akka's adaptability to our customers’ ever-evolving needs.

Let’s explore the most important features we have added in support of these themes since the license change in September 2022.

Eliminate Messaging Infrastructure With Brokerless Publish-Subscribe

Brokerless Pub/Sub gives you reliable publish-subscribe communication between services while eliminating the burden of operating message broker infrastructure, such as running an Apache Kafka or Apache Pulsar cluster. Its performance compares very favorably with broker-based alternatives; for more details, check out this blog post.

It is implemented with Akka Projections over gRPC, which makes building “brokerless” publish-subscribe communication between services straightforward. “Brokerless” means no message queue or broker is needed as an intermediary for event delivery. Instead, it uses gRPC for point-to-point communication, relying on an event journal on the producer side and Akka Projections event processing and offset tracking on the consumer side. The implementation supports exactly-once event delivery, with automatic retransmission and de-duplication on failure.
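To make this concrete, here is a minimal Scala sketch of what wiring up such a brokerless connection with the Akka Projection gRPC module can look like: the producer exposes its event journal as a gRPC service, and the consumer runs a projection over that stream with local offset tracking (R2DBC is used for offsets here purely as an example). The entity, stream, protobuf, and handler names (ShoppingCart, cart-events, CartEventsProto, CartEventHandler) are placeholders, and details may differ between Akka versions, so treat it as an illustration rather than a reference.

```scala
import scala.concurrent.Future

import akka.Done
import akka.actor.typed.ActorSystem
import akka.grpc.scaladsl.ServiceHandler
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{ HttpRequest, HttpResponse }
import akka.persistence.query.typed.EventEnvelope
import akka.projection.{ ProjectionBehavior, ProjectionId }
import akka.projection.eventsourced.scaladsl.EventSourcedProvider
import akka.projection.grpc.consumer.scaladsl.GrpcReadJournal
import akka.projection.grpc.producer.EventProducerSettings
import akka.projection.grpc.producer.scaladsl.EventProducer
import akka.projection.grpc.producer.scaladsl.EventProducer.{ EventProducerSource, Transformation }
import akka.projection.r2dbc.scaladsl.R2dbcProjection
import akka.projection.scaladsl.Handler

object BrokerlessPubSubSketch {

  // Placeholder event type for the examples below.
  trait CartEvent

  // Producer side: expose the "ShoppingCart" entity's event journal as the public
  // stream "cart-events", served directly over gRPC -- no broker in between.
  def startEventProducer()(implicit system: ActorSystem[_]): Unit = {
    val source = EventProducerSource(
      "ShoppingCart",               // entity type persisted on the producer side
      "cart-events",                // public stream id that consumers subscribe to
      Transformation.identity,      // expose the stored events as-is
      EventProducerSettings(system))

    val service: HttpRequest => Future[HttpResponse] =
      ServiceHandler.concatOrNotFound(EventProducer.grpcServiceHandler(source))

    Http().newServerAt("0.0.0.0", 8101).bind(service)
  }

  // Consumer side: a projection over the remote stream. The gRPC read journal plus
  // local offset tracking is what gives redelivery and de-duplication on failure.
  def startEventConsumer()(implicit system: ActorSystem[_]): Unit = {
    // CartEventsProto stands in for the protobuf descriptor generated for the events.
    val eventsBySlices = GrpcReadJournal(List(CartEventsProto.javaDescriptor))

    val sourceProvider = EventSourcedProvider.eventsBySlices[CartEvent](
      system,
      eventsBySlices,
      eventsBySlices.streamId,
      0, 1023)                      // consume all slices in a single projection instance

    val projection = R2dbcProjection.atLeastOnceAsync(
      ProjectionId("cart-events", "0-1023"),
      None,                         // default projection settings
      sourceProvider,
      () => new CartEventHandler)

    // In production you would run one instance per slice range, typically with
    // ShardedDaemonProcess; a single ProjectionBehavior keeps the sketch short.
    system.systemActorOf(ProjectionBehavior(projection), "cart-events-projection")
  }

  final class CartEventHandler extends Handler[EventEnvelope[CartEvent]] {
    override def process(envelope: EventEnvelope[CartEvent]): Future[Done] = {
      // Update a read model, trigger a workflow, notify another service, ...
      Future.successful(Done)
    }
  }
}
```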

Run Your App on Multi-Cloud and Multi-Region Using Distributed Cluster

Akka Distributed Cluster helps maximize cloud capabilities by allowing you to seamlessly extend an Akka application to multi-region, multi-cloud, or hybrid deployments while preserving efficiency, correctness, and resilience. Building on Brokerless Pub/Sub, it adds:

  • Event Filtering allows dynamic event filters on the producer or consumer side to prevent the sending and processing of unnecessary data. Consequently, network costs are reduced, and hardware resources are freed up by only transferring the required data when needed. This feature is particularly crucial at the Edge (see the sketch after this list).
  • Active-Active Event Sourcing is a key feature that leverages distributed event journal replication. It runs over gRPC and utilizes CRDTs and the new Brokerless Pub/Sub, resulting in ultra-efficient, low-latency replication that ensures strong eventual consistency of event-sourced actors/entities/services across different data centers or Points of Presence (PoPs) at the Far Edge.
  • Durable State Queries allow you to search for data on multiple fields without additional read models, reducing the cost and time spent storing and interacting with duplicate data.
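As an example of the first item, here is a small Scala sketch of a consumer-side dynamic filter using the ConsumerFilter extension from Akka Projection gRPC: it excludes every entity by default and then pulls in only events tagged "vip", so filtered-out events are never transferred in the first place. The stream id and tag are placeholders, and the exact set of filter criteria may vary between Akka versions.

```scala
import akka.actor.typed.ActorSystem
import akka.projection.grpc.consumer.ConsumerFilter

object EdgeConsumerFilter {
  // Narrow the "cart-events" stream dynamically from the consumer side:
  // exclude every entity by default, then include only events tagged "vip".
  // The criteria are propagated to the producer, so unwanted events are not
  // sent across the region boundary at all.
  def restrictToVipCarts()(implicit system: ActorSystem[_]): Unit = {
    val criteria = Vector(
      ConsumerFilter.ExcludeRegexEntityIds(Set(".*")),
      ConsumerFilter.IncludeTags(Set("vip")))

    ConsumerFilter(system).ref ! ConsumerFilter.UpdateFilter("cart-events", criteria)
  }
}
```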

For more information, see this page.

Elastic Data Across the Cloud-to-Edge Continuum Using Akka Edge

Akka Edge builds on Distributed Cluster and extends its capabilities to the far edge. It provides a single programming model, runtime, and data fabric for, and across, the cloud-to-edge continuum. It is a continuation of the work we have done on Akka for the last 15 years, leveraging the power of the Actor Model, and it allows you to write your service once and run it anywhere along the continuum, from the centralized cloud to the far edge.

Where something will run, whether on-prem, cloud, edge, or device, should not dictate how it is designed, implemented, or deployed. The optimal location for a service can change from moment to moment and depends heavily on how the application is being used and where its users are. The guiding principles of Akka Edge therefore revolve around data and service mobility, location transparency, self-organization, self-healing, and the promise of physical co-location of data, processing, and end user: the right data is always where it needs to be, for exactly as long as it is needed, even as the user moves physically in space. For more information, see this page.

Run Akka Natively on Devices Using Akka Edge Rust

Running a single Akka application seamlessly from the cloud to the edge was a significant leap forward, but we didn’t stop there. The introduction of Akka Edge Rust completes the vision of raising the abstraction level, reducing complexity, and simplifying developers' lives through a single unified programming model and runtime for the whole cloud-to-edge continuum. It allows using Akka outside the JVM to run natively on devices, empowering developers to run their Akka applications even closer to where they are used and where the user’s data resides.

Akka Edge Rust provides a subset of Akka implemented in the Rust programming language, offering excellent reliability and efficiency for resource-constrained devices where CPU, memory, and storage are at a premium. This client library runs natively in under 4 MB of RAM on Arm32, Arm64, x86, amd64, RISC-V, and MIPS32. It offers a rich feature set, including an actor model, event sourcing, streaming projections over gRPC, local persistent event storage, WebAssembly (WASM) compatibility, and security through TLS and WireGuard. Using this Akka client, you can extend an application to devices while maintaining its programming model, semantics, and core feature set. For more information, see this page.

Bullet-Proof Security With Zero Trust

Data security is becoming increasingly important; it is at the top of every company’s mind. The problem is that creating a fully secure cloud system is hard: one usually ends up with a large attack surface that is difficult to understand and protect fully, a problem that multiplies as one moves closer to the edge where the end users reside. To address these challenges, we have spent much time researching what it means to build a bulletproof Zero Trust system on Akka, and we now help our customers navigate the complexities of creating such a system with new security features and thorough documentation.
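Zero trust in an Akka system spans many layers, but one concrete building block is mutual TLS between cluster nodes. The Scala sketch below shows what enabling it for Artery remoting might look like; the configuration keys reflect Akka's standard TLS settings as I recall them, the paths and passwords are placeholders, and this is an illustration of the general approach rather than a description of the specific new features mentioned above.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import com.typesafe.config.ConfigFactory

object SecureClusterNode {
  // Mutual TLS for Artery remoting: every node must present a certificate issued
  // by the cluster's own CA, and unauthenticated peers are rejected. Paths and
  // passwords are placeholders; in practice they come from a secret store or a
  // certificate manager that rotates them.
  private val tls = ConfigFactory.parseString("""
    akka.remote.artery {
      transport = tls-tcp
      ssl.config-ssl-engine {
        key-store   = "/etc/akka/tls/node-keystore.p12"
        trust-store = "/etc/akka/tls/truststore.p12"
        key-store-password   = "change-me"
        key-password         = "change-me"
        trust-store-password = "change-me"
        require-mutual-authentication = on
        hostname-verification = on
      }
    }
    """)

  def main(args: Array[String]): Unit = {
    // Layer the TLS settings on top of the node's regular configuration.
    ActorSystem(Behaviors.empty[Nothing], "secure-cluster", tls.withFallback(ConfigFactory.load()))
  }
}
```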

For more information, see this page.

Stellar Database Performance at a Lower Cost with Database Sharding

Akka has always been extremely good at managing distributed data reliably at scale with remarkably consistent low latency (most often sub-20 ms). However, customers sometimes ask what storage layer to use underneath Akka. Using a distributed database frequently causes costs to shoot through the roof and makes the system hard to scale and operate reliably, while a single centralized database usually becomes a bottleneck and a cost concern for applications with high throughput requirements.

We have addressed this issue by adding the capability to spread the data and load over many physical backend databases. This approach requires no partitioning or sharding support from the database itself, so ordinary, cost-efficient, non-distributed databases can do the job. Relying on many small and cheap Postgres databases for storage instead of an expensive and complicated distributed database significantly reduces cost and complexity while ensuring that the application can scale out linearly in terms of throughput and latency. In a recent internal benchmark, we reached 1.4 million requests per second (for a single entity type) with a latency of 11 ms (p99) on 16 standard Postgres instances, scaling linearly as we scaled the application while keeping the cost per 1k IOPS flat.
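To give a feel for the underlying idea, here is a small Scala sketch: Akka hashes every persistence id to one of 1024 slices, and contiguous slice ranges can be assigned to separate, ordinary databases, so no sharding or partitioning support is needed from the database. The helper below only illustrates that mapping using the real slice APIs; the actual assignment of slice ranges to databases is done through the persistence plugin's configuration, and the persistence id and database count are placeholders.

```scala
import akka.actor.typed.ActorSystem
import akka.persistence.Persistence

object DatabaseShardingSketch {
  // Every persistence id is hashed to one of 1024 "slices". Contiguous slice
  // ranges can then be pinned to separate physical databases, each holding only
  // its own range of the journal and snapshots.
  def databaseIndexFor(persistenceId: String, numberOfDatabases: Int)(
      implicit system: ActorSystem[_]): Int = {
    val persistence = Persistence(system)
    val slice = persistence.sliceForPersistenceId(persistenceId) // 0 to 1023
    val ranges = persistence.sliceRanges(numberOfDatabases)      // e.g. 16 equal slice ranges
    ranges.indexWhere(_.contains(slice))                         // which database holds this entity
  }

  // Example: databaseIndexFor("ShoppingCart|cart-42", 16) returns the index of the
  // Postgres instance that stores that cart's events.
}
```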

Learn More and Get Started

Please see the Akka blog to learn more about the newest features, and download Akka to get started. Try them out and let us know your thoughts; we would love to hear from you. Happy Hakking!