Deploying APIs can vary a lot, especially around non-functional requirements, such as how much isolation we need to effectively secure APIs at the policy and infrastructure levels.
There are valid use cases where we need our APIs to be publicly available on the Internet, protected only with policy enforcement, such as Authentication and Authorisation policies. In other circumstances, APIs need to be completely isolated at the network infrastructure level.
In this blog, I am going to explore how to implement 4 different typical use cases, in which we apply different mechanisms to deploy and protect our APIs, based on the business requirements.
For each of the use cases, I will also delve into the main pros and cons, as well as the level of security and benefits that each method provides.
These are the 4 use cases:
- Using a MuleSoft VPC (or not), with the Shared Load Balancer (SLB) to expose APIs publicly with minimal effort.
- Using MuleSoft VPC and a Dedicated Load Balancer (DLB) to allow public access to APIs.
- Using a VPC and VPN tunnels to allow only Private/Internal access to APIs.
- Using VPC and VPN tunnels to segregate and allow both Public and Private/Internal access to APIs.
In the coming sections, we will walk through each of these 4 use cases in more detail.
Pre-requisites
In order to complete this exercise, it is expected that you:
- Have access to a MuleSoft Anypoint Platform account – if not, you can subscribe for free here.
- If you are new to MuleSoft and want a full overview first, take a look here.
Use case 1: CloudHub VPC – Shared Load Balancer – Public APIs
The first use case is the simplest one. It is ideal when we aim for simplicity and a quick implementation, with minimal configuration required.
From an isolation point of view, in this use case it is acceptable to expose the APIs directly to the public Internet, protected only with the right level of policy enforcement, for example around authentication, authorisation, rate limiting, throttling and caching. That is, protection is applied at the policy enforcement level, as opposed to the infrastructure level.
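As a quick illustration, here is a minimal sketch of a consumer calling an API exposed through the Shared Load Balancer, which maps https://{app-name}.{region}.cloudhub.io to the application's HTTP listener. The application name (my-experience-api), the region (us-e1), the resource path and the client ID enforcement headers are assumptions for illustration only.

```python
import requests

# The SLB exposes CloudHub applications at https://{app-name}.{region}.cloudhub.io.
# "my-experience-api" and "us-e1" are hypothetical placeholders.
BASE_URL = "https://my-experience-api.us-e1.cloudhub.io"

# Example of protection at the policy level: a client ID enforcement policy
# commonly expects the credentials as HTTP headers (the exact header names
# depend on how the policy is configured).
headers = {
    "client_id": "<your-client-id>",
    "client_secret": "<your-client-secret>",
}

response = requests.get(f"{BASE_URL}/api/customers", headers=headers, timeout=10)
response.raise_for_status()
print(response.json())
```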
Pros:
- Quick implementation, with inherited support for HA and multitenant SLA resilience
- Easy to maintain with minimum configuration.
- We can still use VPN tunnels to access backend systems and applications.
Cons:
- Lack of ability to network-isolate APIs at an infrastructure level
- No ability to apply custom certificates and mutual TLS
- No custom/vanity DNS domain rules
In order to see the implementation in more detail, follow this link.
Use case 2: CloudHub VPC – Dedicated Load Balancer – Public APIs
In this use case, we will also provide public API access, but instead of using the Shared Load Balancer (SLB), we are going to make use of a Dedicated Load Balancer (DLB) within the VPC. This gives us extra benefits that we did not have with the multitenant Shared Load Balancer.
For example, with this approach we can provide additional security protections, such as mutual TLS (mTLS), and vanity domain routing rules to segregate environments (e.g. routing rules for the non-production dev and test environments within the same VPC). Also, using a DLB, we can balance traffic internally within the VPC and all peered VPCs, without ever leaving the private network.
This approach also allows internal clients to communicate privately over VPN tunnels to access highly available APIs within the VPC.
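To make this more concrete, below is a minimal sketch of an external client calling an API through the DLB over a vanity domain with mutual TLS. The domain, resource path and certificate file names are assumptions for illustration; the vanity domain would typically be a CNAME record pointing to the DLB's public DNS name.

```python
import requests

# Hypothetical vanity domain resolving (via a CNAME record) to the DLB's
# public DNS name, e.g. api.mycompany.com -> my-dlb.lb.anypointdns.net
BASE_URL = "https://api.mycompany.com"

# With two-way TLS enabled on the DLB, the client must present a certificate
# signed by a CA that the DLB has been configured to trust.
response = requests.get(
    f"{BASE_URL}/customers/v1/accounts",
    cert=("client.pem", "client-key.pem"),  # client certificate and private key
    verify="server-ca.pem",                 # CA bundle used to validate the DLB certificate
    timeout=10,
)
response.raise_for_status()
print(response.json())
```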
So, in summary, some of the most relevant pros and cons are:
Pros:
- High Availability and resilience. No single point of failure, as even DLBs can run over multiple workers.
- Ability to network-isolate APIs at an infrastructure level, while maintaining internal communication within the VPC and all peered VPCs
- Ability to use custom/vanity DNS domain rules
- Increased security, with the ability to apply custom certificates, SSL termination/propagation and mutual TLS
Cons:
- With just one DLB in the VPC, we are still allowing public access to the APIs. This might be what we want, as in the case of Experience APIs, but if not, we need to strengthen this approach, as we will see in use case 4 later in this blog.
In order to see the implementation in more detail, follow this link (Coming soon).
Use case 3: CloudHub VPC – Dedicated Load Balancer – Private APIs
The third use case in this series applies when we want to completely isolate a group of APIs from external access. The idea is that, even using CloudHub, we can restrict any external access and only allow access to the APIs through a private network over a VPN tunnel.
This approach makes sense when we do not need to expose APIs to external users or systems; it is not applicable if we need a combination of public and private APIs.
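As an illustration, the sketch below assumes a hypothetical DLB named my-dlb configured for internal access only; the call succeeds only from a host connected to the private network (via the VPN tunnel, the VPC itself or a peered VPC).

```python
import requests

# Hypothetical internal-only endpoint. A CloudHub DLB also exposes an internal
# DNS name (commonly internal-{dlb-name}.lb.anypointdns.net) that resolves to
# private IP addresses inside the VPC.
INTERNAL_URL = "https://internal-my-dlb.lb.anypointdns.net"

try:
    response = requests.get(f"{INTERNAL_URL}/orders/v1/status", timeout=10)
    response.raise_for_status()
    print(response.json())
except requests.exceptions.ConnectionError:
    # From the public Internet the private IPs are unreachable, so the call
    # only works from a host on the VPN tunnel, the VPC or a peered VPC.
    print("Endpoint not reachable - are you connected to the private network?")
```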
In summary, some pros and cons:
Pros:
- High Availability and resilience. No single point of failure, as even DLBs can run over multiple workers.
- Increased security, as all APIs are only accessible via private networks over a VPN tunnel.
- We can apply whitelisting policies and mutual TLS for increased security.
Cons:
- We lose the ability to expose APIs to external users or systems. If this is required, we need yet another configuration, described in the next use case.
In order to see the implementation in more detail, follow this link (Coming soon).
Use case 4: CloudHub VPC – Dedicated Load Balancers – Public and Private APIs
Finally, the last use case is when we need to separate public and private APIs from an infrastructure point of view, so that we can expose the APIs that need to be accessed externally and keep private APIs isolated from external users, while maintaining efficient internal application communication via Dedicated Load Balancers.
This approach is the most common one among secured implementations, as private APIs are isolated at the network infrastructure level, greatly reducing the risk of a security breach.
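To visualise the segregation, here is a minimal sketch (with hypothetical names, domains and paths) of how the mapping rules of the two DLBs could be split, so that only Experience APIs are reachable through the public DLB, while Process and System APIs are only reachable through the internal DNS name of the private DLB.

```python
# A sketch of how the mapping rules could be split across the two DLBs.
# The real mapping rules are configured on each DLB in the platform
# (input path -> target application / application path).
public_dlb = {
    "name": "public-dlb",
    "vanity_domain": "api.mycompany.com",
    "mapping_rules": [
        # Only Experience APIs are mapped here, so only they are reachable externally.
        {"input_path": "/mobile/v1/", "app_name": "exp-mobile-api", "app_path": "/api/"},
        {"input_path": "/web/v1/",    "app_name": "exp-web-api",    "app_path": "/api/"},
    ],
}

private_dlb = {
    "name": "private-dlb",
    # Consumed only through its internal DNS name, from the VPC, peered VPCs or the VPN.
    "internal_host": "internal-private-dlb.lb.anypointdns.net",
    "mapping_rules": [
        {"input_path": "/orders-prc/v1/", "app_name": "prc-orders-api", "app_path": "/api/"},
        {"input_path": "/sap-sys/v1/",    "app_name": "sys-sap-api",    "app_path": "/api/"},
    ],
}

for dlb in (public_dlb, private_dlb):
    print(dlb["name"], "->", [rule["input_path"] for rule in dlb["mapping_rules"]])
```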
Summary of pros and cons:
Pros:
- High Availability and resilience. No single point of failure, as even DLBs can run over multiple workers.
- Ability to split APIs across 2 categories: Public and Private. Public APIs are needed when communicating with external consumers, like mobile devices or other Experience API channels. Private APIs, on the other hand, can only be consumed internally within the VPC or over VPN tunnels across private networks, for example Process or System APIs.
- Security segregation of APIs at a network infrastructure level.
Cons:
- The only drawback I can think of is that, for obvious reasons, this option is relatively harder to implement, as it requires 2 DLBs. I would be interested in hearing your thoughts on other disadvantages.
In order to see the implementation in more detail, follow this link (Coming soon).
And that’s it, congratulations!
We managed to go through various deployment models, from the easiest and quickest to implement to the most sophisticated one, with the strongest access security and level of network isolation.
Depending on your needs, you will implement one of them, several, or a combination of them in a particular configuration.
I hope you found this article useful. If you have any questions or comments, feel free to reach me at https://www.linkedin.com/in/citurria/