Should Kafka act as the source of truth in Lagom?

We are using Lagom with event sourcing and CQRS. Would you say that keeping the source of truth on the Kafka side is an option?

Pragmatism always wins the day, and with that being said, this idea has plenty of merit. The only concern we have with Kafka as the "source of truth" is that it is probably not the place to store business-critical data for many years. Kafka has its place as:

  • a durable queue (with possible multiple subscribers) as with our Cloudflow product
  • an event bus between microservices as in the Lagom usage
  • a data ingestion queue to Akka streams or other fast data processing

So, building on the second bullet, a true database is probably the best single source of truth for the long haul, because that is what it was built and optimized for. Kafka can definitely act as a "secondary" source of truth, one that mirrors the primary except in rare circumstances. In other words, Kafka is best as a temporary source of truth, bounded by the TTL (retention) of the topic. Note too that critical event data should be written as raw, immutable events, for replay reasons that go beyond recovery (at least from a data perspective).
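
The split described above can be sketched as follows. This is a minimal illustration with hypothetical classes (not Lagom's API): a database-backed journal keeps the full, immutable event history, while a Kafka stand-in retains only the newest events, the way topic retention would.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class Journal {
    // Append-only, immutable raw events, kept indefinitely for replay.
    private final List<String> events = new ArrayList<>();

    String append(String payload) {
        events.add(payload);
        return payload;
    }

    List<String> replay() {
        return List.copyOf(events); // full history is always available
    }
}

class RetentionBus {
    // Kafka stand-in: keeps only the most recent events, like topic retention/TTL.
    private final Deque<String> buffer = new ArrayDeque<>();
    private final int retain;

    RetentionBus(int retain) {
        this.retain = retain;
    }

    void publish(String event) {
        buffer.addLast(event);
        while (buffer.size() > retain) {
            buffer.removeFirst(); // older events age out of the topic
        }
    }

    List<String> consume() {
        return List.copyOf(buffer);
    }
}

public class SourceOfTruthDemo {
    public static void main(String[] args) {
        Journal journal = new Journal();
        RetentionBus bus = new RetentionBus(2);
        for (String p : List.of("created", "renamed", "archived")) {
            bus.publish(journal.append(p)); // write to the journal first, then mirror
        }
        System.out.println(journal.replay()); // [created, renamed, archived]
        System.out.println(bus.consume());    // [renamed, archived]
    }
}
```

The point of the sketch: a consumer rebuilding state from the bus alone would miss the `created` event once retention has passed, while the journal can always replay from the beginning.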

Of course, as mentioned, there are those who would argue that the Kafka position is just as valid, so the decision ultimately falls to you. That said, we do not believe Kafka should be relied on in this way at this time (for long-haul records).

Additional Thoughts

Another question implicit in the Lagom design is: who owns the data schema? Any communication channel between services restricts the owning service from making certain kinds of changes. Lagom intentionally decouples the internal representation owned by the service from the published representation(s) in the Kafka topic. This may seem objectionable at first, because it looks like extra boilerplate and redundant data types, but most services quickly run into cases where the two need to differ, or where the internal representation must evolve in a way that isn't transparently compatible for consumers.

One example: as soon as you have some kind of `UserCreated` event with a password hash, you may want your user service to publish the fact that a user was created to other services, but not share the password hash. You could accomplish this with Kafka alone by designating some topics as "internal" and others for external consumers, but the temptation to take the shortcut and have consumers read from the internal topic might prove too great. Using a database as the source of truth doesn't innately prevent this problem either, but with Lagom, the clear delineation between the database as the internal event log and Kafka as the message bus makes it easier to avoid.
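
The `UserCreated` example above can be sketched like this. The type and field names here are illustrative, not Lagom's generated code: the internal persisted event carries the password hash, while the message published to the topic deliberately omits it.

```java
class UserCreated {
    // Internal event, stored only in the service's own event journal.
    final String userId;
    final String email;
    final String passwordHash; // never leaves the service

    UserCreated(String userId, String email, String passwordHash) {
        this.userId = userId;
        this.email = email;
        this.passwordHash = passwordHash;
    }
}

class UserCreatedMessage {
    // Published representation seen by external consumers on the Kafka topic.
    final String userId;
    final String email;

    UserCreatedMessage(String userId, String email) {
        this.userId = userId;
        this.email = email;
    }
}

public class TopicMapping {
    // The mapping applied when producing to the topic strips sensitive fields.
    static UserCreatedMessage toPublished(UserCreated event) {
        return new UserCreatedMessage(event.userId, event.email);
    }
}
```

In Lagom, a mapping like `toPublished` would typically live in the topic implementation that reads the entity's event stream, so consumers only ever see the published type.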
