It’s Patrick again with the second part of the series on how to set up Workspace ONE Access. In the last episode we covered how to configure the Workspace ONE Access appliance and stopped at a point that is relevant for most production environments: enabling the load balancer. In this tutorial, we will have a look at NSX Advanced Load Balancer (ALB), formerly called AVI.
In this blog series, I will cover the following topics:
– Part 1: Import and Setup Workspace ONE Access 22.09 On-Premises (incl. troubleshooting steps)
– Part 2: Load balancing with NSX Advanced Load Balancer (formerly AVI)
– Part 3: Configure Microsoft Azure for federation
– Part 4: Integration with Horizon On-Premises
– Part 5: Integration with Horizon Cloud on Azure
High Level Design
Before installing NSX ALB in your environment, let’s look back at how it fits together with Workspace ONE Access. In part 1 of the series, we defined the following FQDNs and IPs for our Access appliances.
Now let’s zoom in on the load balancer portion of this picture and look at how I’ve set up the environment in my lab for this use case.
In VMware NSX Advanced Load Balancer, a Service Engine is a logical representation of a load balancing instance that is responsible for distributing traffic to a set of servers. Service Engines are created and managed through the NSX Advanced Load Balancer user interface or API. Each Service Engine is associated with a virtual IP (VIP) address and port, and can be configured to distribute traffic using various load balancing algorithms, such as round-robin, least connections, and weighted least connections. Service Engines can also be configured to perform health checks on the servers they are distributing traffic to, and to perform SSL termination or offloading.
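To make the algorithm choices concrete, here is a minimal Python sketch (illustrative only, not NSX ALB code) of round-robin and least-connections server selection; the pool IPs are hypothetical:

```python
from itertools import cycle

servers = ["192.168.3.101", "192.168.3.102"]  # hypothetical back-end pool

# Round-robin: hand out servers in a fixed rotation.
rr = cycle(servers)
def pick_round_robin():
    return next(rr)

# Least connections: pick the server with the fewest active connections.
active_connections = {"192.168.3.101": 4, "192.168.3.102": 1}
def pick_least_connections():
    return min(active_connections, key=active_connections.get)
```

A weighted variant would simply scale the connection counts by per-server weights before comparing.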
In my case these Service Engines sit in the DMZ and are associated with the VIP of my Workspace ONE Access service 192.168.3.100. The FQDN used for the service is called access.avdlogix.com.
Installing the NSX Advanced Load Balancer
This article really focuses on the configuration steps required for Workspace ONE Access in combination with NSX ALB. As there are so many brilliant articles out there on how to install and import the first NSX ALB controller in your environment, I’ve collected a few great resources that you can review to initially set up the appliance. The following video by our colleague Nick Robbins explains the initial setup and how to generally get started with NSX ALB.
Configuring NSX Advanced Load Balancer for Workspace ONE Access
Before we start to deploy Workspace ONE Access in load-balanced mode, it’s important to understand which settings are relevant for the setup. Mandatory settings are enabling X-Forwarded-For (XFF) headers, setting the load balancer timeout correctly, and enabling sticky sessions. In addition, SSL trust must be configured between the appliances and the load balancer. To that end, we already imported the SSL certificate to our appliance in the previous article; however, the step of changing the FQDN is still missing to reach an operable state.
Let us quickly review which settings are relevant and what they mean in detail:
– X-Forwarded-For Headers (XFF)
The X-Forwarded-For header preserves the original client IP address as the request passes through the load balancer. When network ranges are configured in Workspace ONE Access, this IP allows Access to select the appropriate authentication method for that client.
– Load Balancer Timeout
Usually you want to increase the request timeout for Workspace ONE Access. If the setting is too low, you may receive a 502 error: “The service is currently unavailable.”
– Enable Sticky sessions / Session Persistence
Sticky sessions must be enabled on the load balancer if the deployment includes multiple machines, as in our example. The load balancer then binds a user’s session to the appliance it was established on.
– Don’t block session cookies
It’s wise to not block session cookies on the Load Balancer.
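To illustrate what the XFF header does for the first requirement above, here is a small Python sketch of how a back end behind the load balancer can recover the original client IP (the header name matches the Alternate Name we configure later; the IPs are hypothetical):

```python
def client_ip(headers: dict) -> str:
    """Return the original client IP from X-Forwarded-For.

    The header may contain a comma-separated chain of proxies;
    the left-most entry is the original client.
    """
    xff = headers.get("X-Forwarded-For", "")
    if xff:
        return xff.split(",")[0].strip()
    # Without the header, only the load balancer's address is visible.
    return headers.get("Remote-Addr", "")

# Request that passed through the load balancer VIP at 192.168.3.100:
hdrs = {"X-Forwarded-For": "10.0.0.42, 192.168.3.100",
        "Remote-Addr": "192.168.3.100"}
print(client_ip(hdrs))  # 10.0.0.42
```

Without XFF enabled, Access would only ever see the VIP address and could not match the client against a network range.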
Create a Service Engine Group
If you followed the installation procedure above, a good place to start is configuring a Service Engine Group for the Workspace ONE Access service.
An NSX ALB Service Engine (hereafter also referred to as SE) group is basically a collection of one or more Service Engines that share properties like network access and failover information. One characteristic is that a Service Engine cannot fail over to another one in a different group.
In the NSX ALB portal, navigate to Infrastructure (top menu) > Service Engine Group (left sidebar) and click “Create” to create a new SE group for our Workspace ONE Access environment.
The first thing to do is to provide a name for our Service Engine group, in this case “SEG-WorkspaceONEAccess“.
Another setting is the “High Availability Mode”. Here you have the following options:
- High Availability Mode:
- Elastic HA Active/Active
- Elastic HA N + M
- Compact Placement: When enabled, new virtual services are placed on existing SEs that already host other virtual services. Disabling this option places each new virtual service on its own SE until the maximum number of SEs for the SE group is reached; at that point, a new virtual service is placed on the SE with the fewest virtual services. When this option is set, Vantage will attempt to conservatively create new SEs.
- Virtual Services per Service Engine: Controls the maximum number of virtual services that may be deployed on a single SE. Another SE must be created or used if this maximum is reached. If Vantage reaches the maximum number of SEs, no more virtual services can be deployed within the SE group.
- Scale per Virtual Service – Minimum: The virtual service may be scaled across multiple SEs, which both increases potential capacity and ensures recovery from any failure while minimizing impact. Setting the minimum above 1 ensures that every virtual service starts out scaled across multiple SEs.
- Scale per Virtual Service – Maximum: Sets the maximum number of SEs across which a virtual service may be scaled.
- Service Engine Failure Detection: Sets the maximum amount of time a primary SE can remain silent before the SE is declared dead by the Avi Controller.
- Standard: Primary SE can remain silent (stop sending heartbeats) for a maximum of 9 seconds before being declared dead.
- Aggressive: Primary SE can remain silent (stop sending heartbeats) for a maximum of 1.5 seconds before being declared dead.
- Buffer Service Engines: This option sets the value of M for elastic HA N+M mode. Compact placement should be left in its default state for N+M, which is OFF. Vantage will maintain spare capacity in the SE group to replace any failed SE.
- Health Monitoring on Standby SE: Enables the standby SE in a legacy HA configuration to send health checks to back-end servers.
In our case we select “N+M (buffer)” and set the VS Placement across Service Engines to “Compact“. The service maximums can be defined directly below the HA settings. For my lab environment I keep them pretty low and follow the defaults with 10 virtual services per Service Engine and a maximum of 10 Service Engines in this group. For our scenario I keep the rest fairly standard; however, adjust these settings based on your environment’s needs.
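The same SE group settings can also be expressed as a REST payload against the controller. The sketch below is an assumption based on the Avi/NSX ALB API object model; field names and enum values may vary by version, so treat it as illustrative rather than copy-paste ready:

```python
# Hypothetical payload mirroring the UI choices above (field names assumed
# from the Avi/NSX ALB REST API; verify against your controller version).
se_group = {
    "name": "SEG-WorkspaceONEAccess",
    "ha_mode": "HA_MODE_SHARED",       # Elastic HA N+M (buffer)
    "algo": "PLACEMENT_ALGO_PACKED",   # Compact VS placement
    "max_vs_per_se": 10,               # 10 virtual services per SE
    "max_se": 10,                      # at most 10 SEs in this group
}
# Such a payload would typically be POSTed to /api/serviceenginegroup.
```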
If you’re interested in more options for Service Engine groups, visit:
Import SSL certificate for the service
Importing the SSL certificate for the service is an important step, as your users will otherwise receive certificate errors, and the Workspace ONE Access appliance might not be able to verify the change of the FQDN. To upload your certificate, go to Templates (top menu bar) > Security (left sidebar) > SSL / TLS certificates. Click Create, then Application Certificate, in the upper right corner to upload your certificate.
Create an Application Profile
After successfully creating a group for our Service Engines (note: they are not yet deployed in vCenter), we need to define an Application Profile matching the specifics mentioned above for Workspace ONE Access.
To do so, navigate to Templates (top menu) > Profiles (left sidebar) > Application (left sidebar) and click “Create“.
In the new pop-up, we need to fill out the following settings for the service:
– Name: I chose APP-WorkspaceONEAccess
– Type: HTTP
In HTTP settings I’ve checked the following boxes:
– Connection Multiplex: This allows HTTP requests to be load balanced across servers. Proxied TCP connections to servers may be reused by multiple clients to improve performance.
– X-Forwarded-For: With the Alternate Name set to X-Forwarded-For, the client’s IP address is inserted into the HTTP header. This is required to define network ranges later in Workspace ONE Access and to determine where a user is connecting from.
– WebSockets Proxy: Allows for an upgrade of the connection mode to WebSockets Proxy if the client supports it.
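As with the SE group, these application profile settings can be sketched as a REST payload. Again, the field names below are assumptions based on the Avi/NSX ALB API object model and may differ by version; the sketch only mirrors the checkboxes chosen above:

```python
# Hypothetical payload mirroring the application profile settings above
# (field names assumed from the Avi/NSX ALB REST API; verify against your
# controller version before use).
app_profile = {
    "name": "APP-WorkspaceONEAccess",
    "type": "APPLICATION_PROFILE_TYPE_HTTP",
    "http_profile": {
        "connection_multiplexing_enabled": True,   # Connection Multiplex
        "xff_enabled": True,                       # insert client IP header
        "xff_alternate_name": "X-Forwarded-For",   # Alternate Name setting
        "websockets_enabled": True,                # WebSockets Proxy
    },
}
# Such a payload would typically be POSTed to /api/applicationprofile.
```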