
Lightbend and Red Hat discuss Cloudflow, Kubeflow, and Open Data Hub on OpenShift

New Name - Cloudflow is now Akka Data Pipelines

This podcast was originally recorded for the Red Hat X Podcast Series; you can listen to it (10 min) via the link below. What follows is the full transcription of the discussion, lightly edited to improve clarity and readability.

LISTEN TO THE PODCAST


Red Hat: Welcome to the Red Hat X Podcast Series. My name is Brian, joining me today is Karl Wehden, VP of Product Strategy and Product Marketing at Lightbend. Today we'll be discussing how Lightbend and Red Hat are bringing repeatable, reliable, open-source machine learning infrastructure to the market. Karl, welcome to the Red Hat X Podcast Series.

Karl Wehden: Thanks for having me.

RH: Of course. Well, appreciate you being here, and you know we're gonna have fun. We always have fun on this show. We get to talk about some fun stuff. So tell us about Lightbend and what you're doing and also how you're positioned in the market.

KW: Sure, thanks. So, Lightbend is the company that founded a couple interesting open-source projects, principally Akka, which is a distributed, state-based runtime framework for the JVM, and Scala, which is a functional and object-oriented language that marries well with the former framework. We've been around for about eight years in some capacity or another, and we've been working on really pushing the boundaries of how you build systems, especially distributed systems that deal with a lot of state. Lately, we've been doing really interesting things at the intersection of distributed systems and machine learning (ML) and deep learning (DL).

RH: What is it Lightbend does in this whole ML ecosystem and all that? I'm curious about that. And also talk about some of the problems that your company can help us solve as well within that.

KW: Absolutely. From working with a lot of the data processing systems at the core of the machine learning ecosystem, Lightbend has endeavored to deliver a platform we call Cloudflow, which is an open-source framework for quickly deploying and managing streaming applications.

These [streaming applications] are kind of the vascular system, the lifeblood, of the machine learning ecosystem in many ways. What's hard when working in that domain is doing the same tedious things over and over again: serialization, configuration, managing oodles of operational parameters, and worrying about data durability between processing stages. Cloudflow is the system designed to take all those boring day-to-day things out of the way, so you can focus on things like model performance and other interesting capabilities along those lines.
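To make that concrete, here is a minimal sketch of a Cloudflow streamlet in Scala. The `Metric` record and the filter are hypothetical (a real project would generate the record class from an Avro schema), and the exact API varies between Cloudflow versions; the point is that serialization, Kafka wiring, and offset management live in the framework rather than in your application code.

```scala
import cloudflow.akkastream._
import cloudflow.akkastream.scaladsl._
import cloudflow.streamlets._
import cloudflow.streamlets.avro._

// `Metric` stands in for an Avro-generated record class; in a real
// Cloudflow project it would be generated from an .avsc schema rather
// than hand-written like this.
case class Metric(deviceId: String, value: Double)

class MetricFilter extends AkkaStreamlet {
  // Cloudflow owns the serialization and Kafka wiring for these ports.
  val in    = AvroInlet[Metric]("metrics-in")
  val out   = AvroOutlet[Metric]("metrics-out", _.deviceId)
  val shape = StreamletShape(in, out)

  override def createLogic = new RunnableGraphStreamletLogic() {
    // Offsets are committed only after records reach the outlet, which
    // is what provides data durability between processing stages.
    def runnableGraph =
      sourceWithOffsetContext(in)
        .filter(_.value >= 0.0) // the only part you actually write
        .to(committableSink(out))
  }
}
```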

RH: Cool. So yeah, it's making things a little easier. It takes that boring side out. When is it really needed as well?

KW: If you find yourself processing a stream of data, an unending, unabating stream of data, that's a good sign you should start thinking about this. Cloudflow is a cloud-native solution that runs on bomb-proof, reliable infrastructure like Red Hat OpenShift, and it leverages the commonalities in the Kubernetes ecosystem to drive those things.

So when you're in a scenario where you're building applications that really need to stay up consistently over a very long period of time, and you want the flexibility and strength of something like OpenShift under it, Cloudflow is a great solution to be that pump, that way to deliver reliable data to your machine learning processing.

If you're working in that environment, you're probably also working with other technologies like Kubeflow or Seldon, tools that are really good at serving machine learning models and delivering scoring results, and we marry well with that ecosystem.

RH: Excellent. Well, what are some of the ways you can keep your model scoring up-to-date and also accurate?

KW: When you're dealing with models and model serving at scale–where model serving's just the idea of producing an answer as quickly as you can–you have some trade-offs to manage.

You can either do what's thought of as a “model as a service”, which is the idea that you remotely deploy a model server and ask it over the wire, over the network, for the answer when you need it. That works great, and the nice part is that it does what we all want in these types of architectures: it decouples the stream from the processor, so you can change things quickly and adapt or score the models, or measure the efficacy of the model, and change it out if you need to.
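As a rough illustration of the model-as-a-service pattern, here is a sketch of an Akka Streams application asking a remote server for answers over the wire; the endpoint URL and plain-text payload are stand-ins for whatever your serving layer actually exposes (for example, JSON or gRPC against a Seldon deployment).

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.http.scaladsl.unmarshalling.Unmarshal
import akka.stream.scaladsl.{Flow, Sink, Source}
import scala.concurrent.Future

object RemoteScoring extends App {
  implicit val system: ActorSystem = ActorSystem("remote-scoring")
  import system.dispatcher

  // Ask a remote model server for an answer over the network. The URL
  // and plain-text payload are hypothetical placeholders.
  def score(record: String): Future[String] =
    Http()
      .singleRequest(HttpRequest(
        method = HttpMethods.POST,
        uri    = "http://model-server:8080/score", // hypothetical endpoint
        entity = HttpEntity(ContentTypes.`text/plain(UTF-8)`, record)))
      .flatMap(Unmarshal(_).to[String])

  // mapAsync keeps the stream decoupled from the scorer: the model
  // behind the URL can be swapped out without rebuilding this app.
  val scoringFlow = Flow[String].mapAsync(parallelism = 4)(score)

  Source(List("record-1", "record-2"))
    .via(scoringFlow)
    .runWith(Sink.foreach(println))
    .onComplete(_ => system.terminate())
}
```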

When you start to get to higher-speed scenarios, you have a couple of different choices. Typically what you have to do is score the model local to the stream processor that you have, and by stream processor I mean something like Apache Spark, Apache Flink, or Akka Streams. That's what we call "embedding the model". We see a lot of our customers, especially with real-time applications, doing that. That means that the model artifact is just a code dependency that's built directly into the package, and every time you make a new build of the application, it gets updated. Now, that's good because it's really fast; in many cases, orders of magnitude faster, depending on what features you have turned on. But it's got more friction associated with it.
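By contrast, here is embedding in miniature; the toy linear model below is a stand-in for whatever scoring artifact your build bundles into the application.

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}

object EmbeddedScoring extends App {
  implicit val system: ActorSystem = ActorSystem("embedded-scoring")
  import system.dispatcher

  // Hypothetical model baked into the jar as a code dependency:
  // updating the weights means cutting a new build of the application.
  val weights = Vector(0.4, -1.2, 0.7)
  def score(features: Vector[Double]): Double =
    weights.zip(features).map { case (w, x) => w * x }.sum

  // Scoring is a plain in-process call: no network hop per record,
  // which is where the speed advantage comes from.
  val scoringFlow = Flow[Vector[Double]].map(score)

  Source(List(Vector(1.0, 2.0, 3.0), Vector(0.5, 0.5, 0.5)))
    .via(scoringFlow)
    .runWith(Sink.foreach(s => println(f"score = $s%.3f")))
    .onComplete(_ => system.terminate())
}
```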

A third approach, which we often tout as a kind of middle ground, is something we call "model as data". So in this case, the model is treated as data and published independently of the actual application: just as you send messages to apps to get them to do things, you send models to the applications in model-as-data mode. That gives you another form of decoupling, but with a little bit more of the runtime embedded in the app itself, so it can go faster.
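And here is a small sketch of the model-as-data pattern; the message types and the trivial one-weight model are assumptions, with two in-memory sources standing in for a model topic consumed alongside a data topic.

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

object ModelAsData extends App {
  implicit val system: ActorSystem = ActorSystem("model-as-data")
  import system.dispatcher

  // Model updates travel on the same footing as data records.
  sealed trait Msg
  final case class ModelUpdate(version: Int, weight: Double) extends Msg
  final case class Record(value: Double)                     extends Msg

  // In production these would be two Kafka topics; here they are lists.
  val updates = Source(List(ModelUpdate(1, 2.0), ModelUpdate(2, 3.0)))
  val records = Source(List(Record(10.0), Record(20.0), Record(30.0)))

  // Hold the most recent model in stream-local state and score
  // in-process; records arriving before any model are dropped here.
  updates
    .merge(records)
    .statefulMapConcat[(Int, Double)] { () =>
      var current: Option[ModelUpdate] = None

      {
        case m: ModelUpdate => current = Some(m); Nil // swap in the new model
        case Record(v)      => current.map(m => (m.version, m.weight * v)).toList
      }
    }
    .runWith(Sink.foreach { case (ver, s) => println(s"model v$ver scored: $s") })
    .onComplete(_ => system.terminate())
}
```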

RH: You mentioned trade-offs at the beginning. How do you measure these trade-offs?

KW: The beautiful thing about streams is that they come down to throughput, which is effectively the "garden hose" analogy you've heard: you can only push so much water through the hose, no matter how fast you pump.

Measuring things with a stream processor means that you have to decouple your logging facilities from the data processing facility, and that's something that Cloudflow makes really easy. Often what we'll do is take that processing time and pass it on as another attribute to what you would think of as typical model metadata, and send that along to something like Kubeflow or other metadata packages you might use.

RH: Okay, how do I collect the metadata about the model scoring activities?

KW: You really want to just let the model tell you what's interesting about itself: What version are you? What data did you just receive, and what was your answer? How accurate did you think it was?

This is where being part of this ecosystem helps: once we get that metadata, we can simply push it into another queue and associate it with the Kubeflow instance that might have called the model, or with the Seldon target. A lot of that is also built into some of the more industrial-grade model management facilities, like Seldon.
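Here is a sketch of what such a self-describing scoring record might look like, with the processing time attached as an attribute as described earlier; the field and function names are assumptions, not a Kubeflow or Seldon schema.

```scala
import java.time.Instant

object ScoringMetadata extends App {
  // A self-describing record emitted with every answer. All field
  // names are assumptions, not an actual Kubeflow or Seldon schema.
  final case class ScoreReport(
    modelVersion: String,  // what version are you?
    input: String,         // what data did you just receive?
    answer: Double,        // what was your answer?
    confidence: Double,    // how accurate did you think it was?
    scoringMillis: Long,   // processing time, passed along as metadata
    scoredAt: Instant
  )

  // Time the scoring call and hand the report to `emit`, which stands
  // in for pushing onto another queue for Kubeflow/Seldon to consume.
  def scoreWithMetadata(
      version: String,
      input: String,
      model: String => (Double, Double), // returns answer and confidence
      emit: ScoreReport => Unit): Double = {
    val start                = System.nanoTime()
    val (answer, confidence) = model(input)
    val elapsedMs            = (System.nanoTime() - start) / 1000000L
    emit(ScoreReport(version, input, answer, confidence, elapsedMs, Instant.now()))
    answer
  }

  // Usage with a toy model that "scores" by input length.
  val toyModel: String => (Double, Double) = s => (s.length.toDouble, 0.9)
  scoreWithMetadata("v2", "some-record", toyModel, report => println(report))
}
```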

RH: Nice. What are some of the common use cases involving streaming data and machine learning as well, with some of your customers?

KW: Our customers use streaming data and machine learning in cases where there is simply too much data for an analytics application to store it first, then process it, and come out with some insights or answers.

So, a great example of that is the Disney Plus streaming service, which uses us for real-time analytics to understand where load is hot and where usage is important, and essentially to optimize the offers and options that you get as part of the story. It's generally how they keep their finger on the pulse of their audience.

RH: Yeah, they're doing a good job doing that too, so that's nice...

KW: Especially right now. Another example is Capital One, who uses us for automating loan decisioning for auto loans. And this is not just for “What's your credit score?”–it's a complex set of 10 models that they run in parallel to make sure that they have a good understanding of how to give you the best product while keeping their risk low as well.

RH: Excellent. Yeah, very robust. Any other examples?

KW: Sure, there are a lot of other situations where real-time personalization plays an interesting role: in trips and travel and a few other pieces like that, but also in your trip to pick up your coffee in the morning.

We work with Starbucks to understand how long your drink is going to take across different stores all over the country. We also make sure that you get a very similar experience in the app regardless of what country you're in, through a unified commerce platform and a modern ecommerce stack. All those things mean that we're evaluating lots of data and passing it to a complex set of models that are guided by clearer visibility into the business.

RH: Yeah. Wow, that's...

KW: So it's very valuable for maintaining your brand as well as just doing work. You can really get a sense of what you think the customer experience should ultimately be when you can do things at scale.

RH: That's great. Well, thank you for sharing all of this. It's really exciting to see what you're doing, and you're obviously having a huge impact. Now, where can we go for more information about the products, the projects, everything we've talked about, and to learn more about Lightbend?

KW: Absolutely, thanks, Brian. So, you can go to https://cloudflow.io/, which is the product site. I think that's a great place to start, so you can pull that software down and have a good time with it. You can also go to https://lightbend.com, which will lead you to a majority of these use cases and the stories that we've talked about today. And look for Cloudflow, available soon on a couple of the more popular public clouds.

RH: Awesome. Karl, thanks so much, appreciate it.

KW: My pleasure.

RH: Karl, thanks for your time. Karl Wehden, VP of Product Strategy and Product Marketing at Lightbend. My name is Brian and thank you for joining us here on the Red Hat X Podcast Series. Have a most excellent day. We'll talk to you in the next episode.
