In this blog, I am going to demonstrate how to maximise actionable insight from data in near real-time and at massive scale. Before I do so, let me start by explaining why speed to actionable data is so critical for businesses.
Moving data from system to system is no longer enough; it also has to be done rapidly, securely and reliably.
According to Gartner, the value of data decreases dramatically as time passes: it is at its highest when it can be either “Preventive or Predictive”, followed by Actionable (within seconds).
Human adults make over 35,000 conscious decisions every single day, and each of them is triggered by impromptu events. As mentioned before, the value of data decreases dramatically as time passes. For example, think of a scenario where you are driving a car down a narrow street in a neighbourhood and suddenly you see a kid’s ball rolling between the parked cars into the middle of the street. That event tells us it is very likely that a child will follow the ball without paying attention to the cars in the street. Yet we don’t wait to see if that is the case; we instantly decide to slow down and prepare for a full stop, so that by the time we actually see the child, the car is almost stopped. What would have been the value of seeing that ball and taking 2 minutes to make a conscious decision to stop the car? By then it would probably have been way too late.
The same happens in business when correlating data from multiple sources and systems. The problem is that this natural style of communication does not follow a traditional synchronous request/response pattern. First of all, in most cases we don’t even know what to ask; we simply “act” on the basis of having received those events. The closest traditional pattern would be based on a polling or scheduling system, which is not only inefficient but also ineffective.
In businesses, the best way to achieve a rapid near real-time data correlation and integration is by following the natural way humans make the most important conscious decisions, that is: working with “Business Events”.
These business events can be generated by any type of system, such as those operated by suppliers, vendors, customers, employees, contractors, facilities, products, etc. These systems can be running together or in completely separate geographical locations, across customer-managed data centres or public clouds.
The biggest challenge when it comes to working on data as a preventive, predictive and actionable measure is doing it at scale: managing massive amounts of events that need to be reliably stored, correlated and integrated into backend systems, without compromising processing latency, message reliability, high availability or security.
The recipe – Using MuleSoft and Solace Event Broker
In order to accomplish this goal, modern applications use a new pattern of communication called “Event Driven Integrations”, or ED-I for short, based on both Event Driven Architecture (EDA) and enterprise iPaaS.
MuleSoft iPaaS is well known and recognised globally for bringing simplicity and agility to integrating data among systems, in a way that promotes faster delivery and increased developer productivity on a single platform. It focuses on both enterprise integration and the full API life-cycle, with strong Governance, Risk Management and Compliance capabilities.
In situations where high volumes of events demand subsecond performance and massive scalability to publish, correlate and generate actionable insight for downstream systems or applications, MuleSoft is well positioned to work together with Solace Event Broker.
Solace Event Broker is the “backbone” of Event Driven Integrations. It determines which applications should know about which events, and facilitates access to that data in a way that is completely non-intrusive for other subscribers that may have a totally different agenda.
In the next section, I am going to show how to implement a simple yet powerful ED-I scenario using MuleSoft and Solace technologies. This scenario ensures guaranteed delivery of events to downstream applications, which is key for mission-critical and disaster-recovery solutions.
In this scenario, we are going to exchange patient test records sourced from an EMR system. The architecture follows this approach:
- Publish events to an Event Broker Topic using REST. These events could come from IoT appliances, systems, applications, mobile devices, etc.
- Enforce Guaranteed Delivery of these events, by subscribing a Queue into the Topic. This will cause the Topic to “copy” the target events into the Queue for guaranteed delivery.
- Configure a REST Delivery Point (RDP) on the Queue, to propagate the events to a MuleSoft application via REST
- The MuleSoft application will then use data from the event to action business logic by integrating into various downstream systems and accomplish a business goal.
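To make the flow concrete, here is a sketch of what a single patient-test event might look like. The field names are purely illustrative (they are not a real EMR schema), and the topic name is the one used throughout this walkthrough.

```python
import json

# Topic used throughout this walkthrough.
TOPIC = "api/patients/tests"

# Hypothetical patient-test event; field names are illustrative only,
# not a real EMR schema.
event = {
    "patientId": "P-00123",
    "testType": "COVID-19 PCR",
    "result": "negative",
    "collectedAt": "2021-03-01T09:30:00Z",
}

# This JSON string is what would travel as the REST POST body.
payload = json.dumps(event)
print(TOPIC)
print(payload)
```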
The great potential of this Event Driven Integration architecture is that we can let multiple types of events flow freely from multiple sources into the Event Broker, and then, completely non-intrusively, without disrupting the architecture or opening costly regression-testing cycles, bring in new subscribers that each get a “copy” of the events to serve new business initiatives and goals.
Also, by subscribing the Queues into the Topic, we can enforce strict guaranteed delivery of messages through a mandatory acknowledgement mechanism, ensuring that no event gets lost. Even in cases where the underlying downstream application involves human intervention, the events will remain in the Queues until they are successfully processed.
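The topic-to-queue fan-out and acknowledgement behaviour described above can be modelled in a few lines of Python. This is a conceptual sketch only; the real broker also handles persistence, ordering, redelivery and much more.

```python
from collections import defaultdict

class MiniBroker:
    """Toy model of topic -> queue fan-out with manual acknowledgement."""

    def __init__(self):
        self.subscriptions = defaultdict(list)  # topic -> subscribed queue names
        self.queues = defaultdict(list)         # queue name -> pending events

    def subscribe(self, queue, topic):
        self.subscriptions[topic].append(queue)

    def publish(self, topic, event):
        # Each subscribed queue gets its own copy of the event.
        for queue in self.subscriptions[topic]:
            self.queues[queue].append(event)

    def ack(self, queue, event):
        # Events stay in the queue until the consumer acknowledges them.
        self.queues[queue].remove(event)

broker = MiniBroker()
broker.subscribe("Q.notifications", "api/patients/tests")
broker.subscribe("Q.analytics", "api/patients/tests")

broker.publish("api/patients/tests", {"patientId": "P-00123"})
print(len(broker.queues["Q.notifications"]))  # -> 1 (each subscriber has its copy)
broker.ack("Q.notifications", {"patientId": "P-00123"})
print(len(broker.queues["Q.notifications"]))  # -> 0 (removed only after ack)
```

Note that acknowledging on one queue does not touch the other subscriber’s copy, which is exactly what makes adding new subscribers non-intrusive.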
In order to complete this exercise, it is expected that you have access to:
- MuleSoft Anypoint platform account – If not, you can subscribe for free here.
- If you are new to MuleSoft, make a stop here first.
- Solace PubSub+ Account – If not, you can subscribe here – In this blog, we are going to be using the Cloud based option.
- If you are new to Solace PubSub+ Cloud, make a stop here first.
Setting up the Event Broker
For this demo, I am spinning up a new PubSub+ Cloud Service in AWS US East (N. Virginia)
- The Event Broker Topic name that I am planning to use for my Patient Test Records use case is: api/patients/tests
- Before setting up the Queue, let’s do a quick test using a REST client to POST an event into the Topic.
- First, we need to grab the REST connection details. Click on the Connect tab and expand the REST section to identify the REST Host of the Event Broker Service, as well as the credentials.
- Use Postman or any other REST client to configure a POST request that follows the previous configuration.
- Now, go back to the Solace PubSub+ Cloud web console and click on the Try Me! tab.
- In the “Subscriber” section, click Connect in step 1. Make sure you were able to connect to the Event Broker service.
- In step 2, enter the name of the Topic: api/patients/tests and click Subscribe.
- While it is listening for events on that Topic name, go back to Postman and send the request that we configured earlier.
- You should be able to see the event coming on the Subscriber section of the page.
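If you prefer code over Postman, the same POST from the steps above can be sketched with Python’s standard library. The host, port and credentials below are placeholders; substitute the real values from your own service’s Connect > REST section.

```python
import base64
import urllib.request

# Placeholders: copy the real values from the Connect > REST section.
REST_HOST = "https://mr-example.messaging.solace.cloud:9443"
USERNAME = "solace-cloud-client"
PASSWORD = "change-me"

topic = "api/patients/tests"
body = b'{"patientId": "P-00123", "result": "negative"}'

# Solace REST messaging uses HTTP Basic authentication.
credentials = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
request = urllib.request.Request(
    url=f"{REST_HOST}/{topic}",
    data=body,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {credentials}",
    },
)

# Uncomment to actually send the event to your broker:
# with urllib.request.urlopen(request) as response:
#     print(response.status)
```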
First we are going to set the REST Service Mode to Gateway, so that incoming REST messages into the Queue get propagated to REST services.
- Click on the Manage Service at the top right of the page.
- Click on the Service tab
- Click on Edit at the top right. Search for the REST section and set the Service Mode to Gateway
- Click Apply.
- Now, let’s create a new Queue. Click on Queues on the left vertical menu and click on + Queue to create a new queue. Give it a name and click Create.
- Now, let’s subscribe the Queue into our Topic. Click Subscriptions on the top menu.
- Click on the + Subscription button at the top right to add a new subscription.
We are going to send the events to the Topic via REST POST calls. Then, based on the Queue’s subscription, the events that match the Topic will be “copied” and sent to the Queue. We can have as many subscribers as we wish, each with its own copy of the messages delivered to its Queue.
- Since we are in REST Gateway Service Mode, we need to prefix our Topic name with the name of the REST Method, in this case: POST/api/patients/tests
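The Gateway-mode naming rule is simple enough to capture in a small helper. This is a sketch of the convention only, assuming the HTTP method is always upper-cased and the topic carries no leading slash:

```python
def gateway_subscription(method: str, topic: str) -> str:
    """Build the queue subscription string for REST Gateway mode:
    the HTTP method becomes the first level of the topic."""
    return f"{method.upper()}/{topic.lstrip('/')}"

print(gateway_subscription("post", "/api/patients/tests"))
# POST/api/patients/tests
```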
Now, we need to create a new REST Client Connection to act as a REST Delivery Point (RDP).
- Click on Client Connections on the left vertical menu.
- Click on REST top tab
- Click on + REST Delivery Point at the top right to create a new RDP. Give it a name and click Create.
- Click on the new RDP entry.
- First, let’s bind the RDP to the Queue. At the top, click on the Queue Bindings tab.
- At the top right, click on the + Queue Binding button. Select your Queue from the drop-down and click Create.
- Make sure the Operational State shows a green Up message.
Finally, we need to create the REST Consumer. This is the REST endpoint to which the Queue will propagate the event messages as they arrive from the Topic into our Queue. For this, we need the actual endpoint of the MuleSoft microservice that will be listening for the events.
Let’s create this MuleSoft microservice first.
Setting up the MuleSoft Applications
MuleSoft is the muscle that will receive the correlated events from the Event Broker and make smart business decisions on them. In this case, given we are dealing with patient test result messages, let’s assume the “business objective” is to create a quick notification mechanism: as soon as events drop into the Queue, we receive them and invoke an SMS notification service… something pretty close to what most state health departments are doing right now to automate and simplify the notification of COVID-19 test results.
In the Mule world, everything “should” start with an API specification. It’s up to you whether you use RAML or OAS. I’m personally a big RAML fan, so I came up with this spec using Design Center; feel free to reuse it or create your own in Anypoint Design Center.
The important thing is to create a POST resource in the API that follows the structure of the Topic we defined previously, in this case: /patients/tests
- Feel free to play with the Mocking Service to get used to how it works.
- Once your API spec is ready, publish it to Exchange.
- Now, let’s open Anypoint Studio and scaffold a Mule project based on the API spec from Exchange.
- Once it scaffolds, the sky’s the limit! In my case, I added logic to my Mule application to invoke an actual notifications service that I had built previously on top of Twilio APIs; however, that falls outside the scope of this blog. For an easy test, simply start by adding a “Logger” processor into the POST /patients/tests flow that shows the incoming payload. Something like this:
- When ready, deploy your Mule application into CloudHub and copy the runtime URL from Anypoint Runtime Manager.
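Stripped of the Mule plumbing, the core business logic of that flow amounts to parsing the event and producing a notification message. Here is a Python sketch of the equivalent logic; the message format and field names are my own invention for illustration, not part of the Mule scaffold or Twilio API.

```python
import json

def build_notification(raw_event: str) -> str:
    """Turn an incoming patient-test event into an SMS-style message.
    Field names are illustrative; adapt them to your EMR payload."""
    event = json.loads(raw_event)
    return (
        f"Hi, your {event['testType']} result is ready: "
        f"{event['result'].upper()}."
    )

# Example event as it would arrive in the POST body from the RDP.
incoming = '{"patientId": "P-00123", "testType": "COVID-19 PCR", "result": "negative"}'
print(build_notification(incoming))
# Hi, your COVID-19 PCR result is ready: NEGATIVE.
```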
Finally, to complete the Event Broker configuration, go back to Solace PubSub+ Cloud console.
We are going to add the REST Consumer now that we have its runtime endpoint.
- Go back to Client Connections and click on the REST Consumers tab.
- Click on + REST Consumer button at the top right of the page. Give it a name and click Create.
- Click on the new REST consumer
- Enter the Host of the Mule runtime application in Host and port 80. Enable the service and click Apply.
- Make sure your REST Consumer is up and running.
Congratulations, we are ready to test our Event Driven Integration.
- Go back to your initial Postman POST request that sent an event directly into the Topic and send it again.
- As happened last time, it should quickly come back with a successful 200 response code. More importantly, go to the Solace PubSub+ Queues > Consumers tab. There should be Messages Confirmed Delivered – in my case, I already have 13 messages delivered.
If you just configured the Mule application to log the incoming payload, let’s make sure that we can see this entry in the Logs.
- Go to MuleSoft Anypoint Runtime Manager and click on your running Mule Application and then click on Logs
- Make sure you see the entry in the log.
- Alternatively, if you did implement a Notification service, like in my case, you can make sure to see the SMS being delivered to the Max The Mule’s mobile phone.
The point is that whatever your Mule application does is up to you and your business requirements. At this point, we have established a framework for Event Driven Integrations, in which we can simply subscribe to the events we need from a stream in the Event Broker and have them delivered to us with guaranteed delivery, simplicity and agility.
As we can see, although traditional synchronous request/response communication patterns remain highly relevant when we know what to ask, the real value of data often emerges when we don’t even know what action to take next, and we simply follow the natural way of human communication: reacting to “events” that determine our conscious decisions.
Event Driven Integrations bring the best of both worlds: they allow events to flow freely from any type of device or application, while providing a rapid and agile way to correlate and integrate them into “predictive, preventive and actionable” business outcomes.
I hope you found this article useful. If you have any questions or comments, feel free to reach me at https://www.linkedin.com/in/citurria/
Additional resources
- MuleSoft Anypoint Platform® integration with Solace® (here)
- MuleSoft® – Introducing the new MuleSoft developer quick start guides (here)
- Solace® White Paper: Event Driven Integration with iPaaS – The Power of Real-time Connectivity (here)
- Solace® White Paper: Event Driven Integration with iPaaS – The Architect’s Guide to Implementation (here)
- Solace® – REST Consumers documentation (here)
- Solace® – REST Delivery Points documentation (here)
- Solace® – REST Examples (here)
- Solace® – The Four Ways to Create and Configure a Message Queue in Solace (here)
- Solace® – How to Event-Enable your REST Architecture with Solace PubSub+ (here)