2 min read · By Zach Snell

Navigating Networking Challenges with Amazon Connect and Zscaler

When working with Amazon Connect, a contact center as a service in the AWS ecosystem, networking nuances can arise, particularly when dealing with client networks and security solutions like Zscaler. In this article, we’ll explore a case study involving an auto manufacturing client with 500+ agents and the challenges faced while ensuring call quality and durability.

Note: This article primarily discusses Zscaler 1.0. These concepts likely apply to Zscaler 2.0 as well, but the client has not yet upgraded. If the client upgrades and new findings emerge, a follow-up article will be published.

One recurring source of issues was Zscaler, a security solution that can alter network traffic even when it claims to be transparent.

Salesforce Service Cloud Voice and the Omni-Channel Heartbeat

Another aspect to consider is the integration with Salesforce Omni-Channel. In our case, the Omni-Channel heartbeat mechanism caused intermittent drops in Service Cloud Voice, which is a wrapper over Amazon Connect. Identifying the root cause took weeks of investigation.

The problematic URL was carrying TCP traffic, and Zscaler's packet inspection was altering the packets enough to cause occasional request failures. To pin it down, we had to analyze HAR logs from multiple agents' computers until we finally caught the specific failure. This is exactly the kind of problem that prototyping and testing against real conditions catches early; no amount of architecture diagrams would have surfaced it.
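When combing through HAR files from many agents, a small script beats eyeballing. Below is a minimal sketch of the kind of filter we used: it pulls out requests to a given host suffix that either never completed (status 0, typical of a dropped or reset connection) or returned an error. The inline HAR fragment and its hostnames are illustrative, not real captures.

```python
import json
from urllib.parse import urlparse

def failed_requests(har: dict, host_suffix: str) -> list:
    """Return (url, status) pairs for failed requests to hosts ending in host_suffix."""
    failures = []
    for entry in har.get("log", {}).get("entries", []):
        url = entry["request"]["url"]
        status = entry["response"]["status"]
        host = urlparse(url).hostname or ""
        # Status 0 usually means the request never completed (dropped/reset),
        # which is exactly the signature Zscaler's inspection produced here.
        if host.endswith(host_suffix) and (status == 0 or status >= 400):
            failures.append((url, status))
    return failures

# Hypothetical HAR fragment for illustration only.
sample_har = {
    "log": {"entries": [
        {"request": {"url": "https://d.la1-c1.salesforceliveagent.com/chat/rest/System/Messages"},
         "response": {"status": 0}},
        {"request": {"url": "https://example.my.connect.aws/api"},
         "response": {"status": 200}},
    ]}
}

print(failed_requests(sample_har, "salesforceliveagent.com"))
```

In practice you would load each agent's exported HAR with `json.load` and run the same filter across all of them, looking for the host that keeps showing up.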

The culprit was a URL handling a heartbeat: *.salesforceliveagent.com. By simply permitting this URL through Zscaler without inspection, we resolved the intermittent drops.

Amazon Connect Endpoints

To further improve overall performance, we permitted as many Amazon Connect domains/endpoints as possible through Zscaler without inspection. The application is quite chatty, making non-inspection essentially required. AWS provides a comprehensive networking page with the necessary URLs, which you can find at: https://docs.aws.amazon.com/connect/latest/adminguide/ccp-networking.html

Here are the primary URLs you should allow for each environment:

  • *.telemetry.connect.{region}.amazonaws.com
  • participant.connect.{region}.amazonaws.com
  • *.transport.connect.{region}.amazonaws.com
  • {prod_bucket_name}.s3.us-east-1.amazonaws.com
  • {instance_name}.my.connect.aws/ccp-v2
  • {instance_name}.my.connect.aws/api
  • {instance_name}.my.connect.aws/auth/authorize
  • {instance_name}.awsapps.com/connect/ccp-v2
  • {instance_name}.awsapps.com/connect/api
  • {instance_name}.awsapps.com/connect/auth/authorize
  • *.connect-telecom.{region}.amazonaws.com
  • *.static.connect.aws

These URLs can be trusted because they route traffic exclusively to AWS-controlled services.
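With several environments (and placeholders like {region} and {instance_name}) it is easy to fat-finger an entry, so we found it useful to generate the allow-list rather than type it per environment. The sketch below mirrors the list above; the region, instance, and bucket values passed in are hypothetical examples, and the output format may need adapting to your Zscaler import tooling.

```python
# Templates mirror the Amazon Connect allow-list above; {region},
# {instance_name}, and {prod_bucket_name} are the per-environment values.
TEMPLATES = [
    "*.telemetry.connect.{region}.amazonaws.com",
    "participant.connect.{region}.amazonaws.com",
    "*.transport.connect.{region}.amazonaws.com",
    "{prod_bucket_name}.s3.us-east-1.amazonaws.com",
    "{instance_name}.my.connect.aws/ccp-v2",
    "{instance_name}.my.connect.aws/api",
    "{instance_name}.my.connect.aws/auth/authorize",
    "{instance_name}.awsapps.com/connect/ccp-v2",
    "{instance_name}.awsapps.com/connect/api",
    "{instance_name}.awsapps.com/connect/auth/authorize",
    "*.connect-telecom.{region}.amazonaws.com",
    "*.static.connect.aws",
]

def render_allowlist(region: str, instance_name: str, prod_bucket_name: str) -> list:
    """Expand every template for one environment."""
    values = {
        "region": region,
        "instance_name": instance_name,
        "prod_bucket_name": prod_bucket_name,
    }
    return [t.format(**values) for t in TEMPLATES]

# Hypothetical environment values for illustration.
for url in render_allowlist("us-east-1", "example-instance", "example-bucket"):
    print(url)
```

Generating the list per environment also makes diffs obvious when AWS adds a new endpoint to its networking page.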

UDP and WebRTC Media Traffic

Although Zscaler 1.0 doesn't touch UDP, it's crucial to allow the UDP endpoints for WebRTC media traffic through your firewalls. With Zscaler 2.0, verify that these endpoints remain untouched as well.

The UDP endpoints follow the pattern TurnNlb-*.elb.{region}.amazonaws.com. If needed, you can obtain the specific TurnNlb endpoint(s) from the NLB endpoints documentation.
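When reviewing firewall logs or packet captures, it helps to confirm that a media hostname actually matches the TurnNlb pattern before attributing the traffic to Amazon Connect. A minimal check, using Python's stdlib glob matching; the example hostname is hypothetical:

```python
import fnmatch

# Pattern for Amazon Connect WebRTC media NLBs, per the AWS networking docs.
TURN_PATTERN = "TurnNlb-*.elb.{region}.amazonaws.com"

def is_connect_media_host(hostname: str, region: str) -> bool:
    """True if the hostname matches the TurnNlb media-endpoint pattern for a region."""
    pattern = TURN_PATTERN.format(region=region)
    return fnmatch.fnmatch(hostname, pattern)

# Hypothetical hostname pulled from a firewall log.
print(is_connect_media_host(
    "TurnNlb-d76454ac48d20c1e.elb.us-east-1.amazonaws.com", "us-east-1"))  # True
```

The same pattern string can be fed directly into firewall rules that accept wildcards, keeping the rule and the verification script in sync.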

Lessons Learned

Throughout our journey with the auto manufacturing client, we hit several bumps on the way to the desired call quality and durability. Nearly every issue traced back to networking, either on-site or within third-party networks. It's the kind of real-world trade-off I explore more broadly in The Artisan's Dilemma: the gap between a solution that looks right on paper and one that survives contact with production. Softphones and WebRTC are chatty and latency-sensitive, requiring fast, untouched network paths.

While this is relatively straightforward for on-prem and non-VDI users, it becomes more complex with VDIs. A future article may delve into the challenges of VDIs and split media, including the call-drop issues on VDIs, AppStream, and WorkSpaces that we could not resolve until switching to split media.

Resolving these networking challenges required significant collaboration between internal development, networking, AWS networking, Salesforce, and Microsoft support teams. By sharing this information, we hope to assist others who may encounter similar issues, saving them valuable time and effort in ensuring seamless customer support experiences with Amazon Connect.