This blog post is the first in a series of articles that explain different mechanisms to deploy APIs with different requirements for isolation, security, performance and resilience.
In this article, I will explain an implementation using a MuleSoft VPC with the Shared Load Balancer (SLB) to deploy publicly accessible APIs that are secured with policy enforcement.
To see the full series of use cases, go to this article.
Pre-requisites
To complete this exercise, it is expected that you:
- Have access to a MuleSoft Anypoint Platform account – If not, you can subscribe for free here.
- If you are new to MuleSoft and want a full overview first, have a look here.
- Have Anypoint Studio installed – I am using Anypoint Studio 7.9.
Use case: CloudHub VPC – Shared Load Balancer – Public APIs
This is the simplest use case. It is ideal when you aim for simplicity and a quick implementation, with minimal configuration required.
From an isolation point of view, in this use case it should be acceptable to expose the APIs directly on the public internet, provided the right level of policy enforcement is in place – for example around authentication, authorisation, rate limiting, throttling, caching, etc. That is, isolation is achieved at the policy enforcement level, as opposed to the infrastructure level.
Pros:
- Quick implementation, with inherited support for high availability (HA) and multitenant SLA resilience.
- Easy to maintain, with minimal configuration.
- We can still use VPN tunnels to access backend systems and applications running on private networks.
Cons:
- Lack of ability to network-isolate APIs at an infrastructure level
- No ability to apply custom certificates and mutual TLS
- No custom/vanity DNS domain rules available.
Defining the APIs to be used
First, let’s talk about the APIs we are using for the demos. For our use cases, we are going to use 2 APIs:
- 1 x System API: An internal API – For this, we are going to follow the great tutorial by Jordan Schuetz and, in a matter of minutes, configure a MySQL DB connection to retrieve a list of contact details.
- 1 x Experience API: Commonly an external API – It enforces the right level of policy compliance so that end users can invoke it externally. It then connects to our internal System API and returns the list of contact details to end users.
Feel free to bring your own APIs and simply adjust the instructions as needed. However, if you want to use this set of demo APIs, you can download an example from this link – In this repository, you will find 2 Hello World Mule projects:
- mysystemapi
- myexperienceapi
- Open up the “mysystemapi” project. As mentioned, it is based on the Quick Start series and retrieves a list of contact details from a MySQL DB. Very simple stuff, but functional.
- Deploy this Mule project and wire it into a Mule API in Anypoint API Management, by following the standard API management steps:
- Use the provided API specs to create a new API using “API Designer” and publish to Exchange.
- Create a new API in “API Manager” from Exchange (see step 1) and copy the API instance ID.
- Back in Anypoint Studio, replace this API ID in your Global Elements, so that when you deploy your Mule app, it wires itself into the API that you created (see the sketch after this list).
- Then simply deploy your “mysystemapi” project to CloudHub. Make sure that your application is up and running. You might need to alter the name to ensure it is unique.
- Confirm your Mule application is deployed successfully and wired to the API in Anypoint API Management.
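For reference, here is a minimal sketch of what that Global Elements entry looks like in the Mule XML of a Mule 4 project – the api.id property and the flow name below are placeholders based on this demo; use the API instance ID you copied from API Manager and your own main flow name:

<!-- Wires this Mule app to the API instance defined in API Manager -->
<api-gateway:autodiscovery apiId="${api.id}" flowRef="mysystemapi-main" />

The api.id property would then be supplied via your deployment properties (e.g. api.id=1234567), rather than hardcoded.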
NOTE: If you are unfamiliar with any of the previous steps, have another look at the Quick Start series – in particular, Tutorials 2 and 3.
- Now go back to the Experience API and edit the “request” processor to make sure it is wired to the actual runtime location of the System API that you just deployed to CloudHub.
- Test your Experience API locally and ensure it is functional, i.e. make sure that it returns a list of contacts. Then deploy it to CloudHub. If you need the API spec for the Experience API to publish to Exchange and create the logical API in the API Manager console, click here.
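As a quick local sanity check before deploying, you can call the Experience API with curl – the /api/contacts resource path and the default local listener port 8081 are assumptions based on these demo projects; adjust them to your own configuration:

curl http://localhost:8081/api/contacts

This should return the list of contacts as JSON.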
Again, you don’t necessarily have to use these 2 APIs (system and experience APIs). Feel free to use your own APIs and adjust accordingly.
Configuring the CloudHub VPC Deployment Model
At this point we have 2 APIs running in CloudHub: 1 System API that connects to a MySQL DB to retrieve a list of contact details, and 1 Experience API that is hooked into the System API via the Shared Load Balancer.
In this section, we are going to:
- Create a VPC
- Run our applications inside this VPC
- Finally, modify our Experience API to talk to the System API via the Shared Load Balancer (SLB)
Create a VPC
Creating a VPC is a simple process.
- Make sure that you are within the same Business Group as your running applications.
- Also make a note of the region where you have deployed your applications. In my case, I deployed into US East (Ohio).
- Then, click VPCs on the left menu within “Runtime Manager”.
- Click “Create VPC”
- Enter the required fields.
- Make sure to select the same region as where you deployed your application.
- For the CIDR block, you need to select a netmask between /16 and /24. If you need more information about sizing your VPC network, refer to this great article.
- Make sure to select the environment where your applications are deployed.
- Then, click “Create VPC”.
- Restart your 2 applications to make sure that their internal DNS records point to IP addresses in the selected CIDR range.
- It should take a minute or two for them to reload.
Once your applications are back up, let’s go through a quick explanation of how CloudHub Runtime Manager works from a networking perspective – for the full documentation, go here.
- We know that there is by default a Shared Load Balancer (SLB) that will balance the load across any number of worker nodes assigned to our applications. At the moment we have 1 worker node per application, but we could easily add more.
- The DNS record for the Shared Load Balancer follows the form:
HTTP/HTTPS: [app-name].[region].cloudhub.io
For example, since we are deploying into US East (Ohio), the region is us-e2
- That means that the DNS record for my 2 applications are:
- mysystemapi.us-e2.cloudhub.io
- myexperienceapi.us-e2.cloudhub.io
Note: Your links will be different, as they depend on the names that you used to deploy your applications.
- Let’s test our applications to make sure that they are still functional. For this, we can simply point to our Experience API and make sure it still returns a list of contacts:
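For example, with curl against the SLB DNS name – the /api/contacts path is again an assumption based on the demo projects, and your app name and region will differ:

curl https://myexperienceapi.us-e2.cloudhub.io/api/contacts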
Great, it is still functional.
- Now, from a Worker node perspective, we know that “each” worker node is associated with:
- 1 Public IP Address,
- 1 Private IP Address.
Each of these IP addresses is associated with an external DNS name. That is:
- Public IP address: mule-worker-[app-name].[region].cloudhub.io
- Private IP address: mule-worker-internal-[app-name].[region].cloudhub.io
- Notice that both of these DNS names are external; that is, they exist and resolve publicly, even if the underlying IP address is private. This means that we can easily find the IP address associated with each of them with a simple nslookup or even a ping.
- Let’s prove this: do a simple nslookup for each of these DNS records. The theory is that the one associated with the private IP address will fall within the range of our VPC CIDR block (i.e. 10.0.0.0/24).
- First, let’s try the external one:
nslookup mule-worker-myexperienceapi.us-e2.cloudhub.io
Non-authoritative answer:
Name: mule-worker-myexperienceapi.us-e2.cloudhub.io
Address: 3.129.216.71
Note: This IP address is taken from the Amazon EC2 IP pool. However, it is not important for us as part of this test.
- Now, let’s try the internal one:
nslookup mule-worker-internal-myexperienceapi.us-e2.cloudhub.io
Non-authoritative answer:
Name: mule-worker-internal-myexperienceapi.us-e2.cloudhub.io
Address: 10.0.0.8
Note: This IP address is important though, because it proves to be within the CIDR range that we selected as our VPC CIDR block.
- Now, let’s briefly talk about the VPC firewall rules. By default, from a network infrastructure point of view, a VPC is configured with public and private access for both HTTP and HTTPS on different ports. Let’s have a look at these firewall rules:
- Go back to the VPC that we just created.
- Then, click on the Firewall Rules section
- Let’s analyse the 4 rules that come by default:
- Allowing HTTP Access from the Public Internet on port 8081:
- Type: http
- Source: anywhere (0.0.0.0/0)
- Port Range: 8081
- Allowing HTTPS Access from the Public Internet on port 8082:
- Type: https
- Source: anywhere (0.0.0.0/0)
- Port Range: 8082
- Allowing only HTTP Private Access within the VPC on port 8091:
- Type: http
- Source: Local VPC (10.0.0.0/24)
- Port Range: 8091
- Allowing only HTTPS Private Access within the VPC on port 8092:
- Type: https
- Source: Local VPC (10.0.0.0/24)
- Port Range: 8092
- This means that from the Experience API, we could potentially point directly to one of the worker nodes via the internal DNS name, on either HTTP (8091) or HTTPS (8092).
- Let’s quickly prove this.
- Go back to Anypoint Studio and change the listening port for the System API to port 8091
Note: For the purpose of this demonstration and for simplicity purposes, we are hardcoding this port. In a real implementation, we would be using properties and adjusting them as part of an automated CI/CD pipeline.
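As a reference, this is a minimal sketch of what the HTTP Listener configuration in the System API’s Mule XML could look like after the change – the config name is a placeholder taken from the Studio defaults:

<!-- HTTP Listener configuration: port 8091 is only reachable from within the VPC -->
<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config">
    <http:listener-connection host="0.0.0.0" port="8091" />
</http:listener-config>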
- Re-deploy this System API Mule project to CloudHub.
- Now, back in the Experience API, change the HTTP Request configuration to point to the private IP address via the internal DNS name. Also change the request port to 8091, as this is the port on which our System API is now listening.
For me, this is the internal DNS name of the System API, based on the name of the app and the AWS region being used:
mule-worker-internal-mysystemapi.us-e2.cloudhub.io
Note: Once again, for the purpose of this demonstration and for simplicity purposes, we are hardcoding these fields. In a real implementation, we would be using properties and adjusting as part of an automated CI/CD pipeline deployment.
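A minimal sketch of the corresponding HTTP Request configuration in the Experience API – the config name is a placeholder, and note that the protocol is plain HTTP, since 8091 is the private HTTP port:

<!-- HTTP Request configuration: calls the System API worker directly over the private VPC network -->
<http:request-config name="HTTP_Request_configuration" doc:name="HTTP Request configuration">
    <http:request-connection host="mule-worker-internal-mysystemapi.us-e2.cloudhub.io" port="8091" />
</http:request-config>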
- Save and re-deploy the Experience API to CloudHub.
- Test your Experience API again.
- The Experience API is up and running! It still returns the list of contact details; the difference is that now the Experience API is NOT going through the Shared Load Balancer to access the System API.
- In some situations, we may want to keep traffic private and communicate directly with worker nodes within the VPC, but by doing so, we miss out on the load balancing and scalability capabilities that we get for free with the Shared Load Balancer.
- Unless you have strong reasons to keep this configuration, I would recommend changing it back to always communicate via a load balancer, in this case the Shared Load Balancer (SLB).
That is:
- Use the SLB in the form: [app-name].[region].cloudhub.io
- Let the SLB do the native port mapping into: mule-worker-[app-name].[region].cloudhub.io:[port]
Where [port] is:
- HTTP: 80 -> 8081
- HTTPS: 443 -> 8082
- Then, simply keep communicating via the SLB to any other API using the app name, i.e. [app-name].[region].cloudhub.io (see the sketch after this list).
- You can also use a VPN tunnel if you need your System APIs to access backend applications running on a private network.
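As a sketch, assuming the System API listener is back on the default HTTP port 8081, the Experience API’s HTTP Request configuration pointed at the SLB could look like this – the host and config name are placeholders for my demo apps:

<!-- HTTP Request configuration: calls the System API via the Shared Load Balancer -->
<!-- The SLB maps port 80 to 8081 on the worker (and 443 to 8082 for HTTPS) -->
<http:request-config name="HTTP_Request_configuration" doc:name="HTTP Request configuration">
    <http:request-connection host="mysystemapi.us-e2.cloudhub.io" port="80" />
</http:request-config>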
From a policy enforcement point of view, notice that you can apply any type of policy to both the Experience and System APIs. Some policies will make more sense than others; for example, it might be silly to apply IP whitelisting to the Experience API if we expect calls from anywhere on the internet, but we could enforce strong authentication, authorisation, rate limiting, etc. Similarly, for the System API, we can apply any number of policies that we require. Applying policies is out of the scope of this article, but if you want to play with them, have another look at the Quick Start series – Tutorial 3, to understand how easy it is to apply policies to APIs in Anypoint Platform.
And that’s it, congratulations!
As mentioned, this is the first use case / deployment model out of a series of 4. To see the full series of use cases, go to this article.
In this first deployment model, we aim for:
- Simplicity
- High Availability
- Resilience
However, we lack a few things as well, such as:
- The ability to segregate access to our APIs at a network infrastructure level
- Ability to use custom/vanity domains (DNS)
- Ability to separate between public/private APIs
- IP Whitelisting, etc.
As you can guess, as we keep exploring the other deployment models, we will start addressing more of the non-functional requirements that this approach does not cover.
I hope you found this article useful. If you have any questions or comments, feel free to reach me at https://www.linkedin.com/in/citurria/