Omnissa (formerly VMware Horizon) agent for Linux - connection timeout

One of the reasons a Linux agent may experience connection timeouts.

The other day, a colleague asked me to take a look at a strange problem: the Omnissa (formerly VMware Horizon) client could not connect to a remote Linux server and kept getting timeouts.

The first thing to do was to see what was going on in the logs. I opened /var/log/vmware/viewagent-debug.log on the remote Linux server and saw the following:

2025-03-06T15:12:33.837Z DEBUG <pool-2-thread-1> [ComponentResponse] Message is <?xml version="1.0"?><TERMINALRESPONSE>   <ID>55fc3ead:19533a89:-22732</ID>   <SERVERDN>cn=2af9495d-9f355-4167-9160a-15f0dfe35bcb,ou=servers,dc=vdi,dc=vmware,dc=int</SERVERDN>   <SERVERPOOLDN>cn=vm01,ou=server,dc=vdi,dc=vmware,dc=int</SERVERPOOLDN>   <SERVERDNSNAME>localhost
</SERVERDNSNAME>   <DYNAMICIPADDRESS>172.17.0.1</DYNAMICIPADDRESS>   <FRAMEWORKCHANNELTICKET>XXXXXXXX</FRAMEWORKCHANNELTICKET>   <FRAMEWORKSSLALGORITHM>thisIsAframeworkSSLAlgo</FRAMEWORKSSLALGORITHM>   <FRAMEWORKSSLTHUMBPRINT>thisIsAframeworkThumbprint</FRAMEWORKSSLTHUMBPRINT>   <SESSIONGUID>9b14b9d2-cfc8-42g5-95e3-20b86b79a370</SESSIONGUID>   <TICKET>XXXXXXXX</TICKET>   <PROTOCOL>       <NAME>BLAST</NAME>       <STATUS>ready</STATUS>       <PORT>22443</PORT>       <HOST>172.17.0.1</HOST>       <HOSTNAME>localhost
</HOSTNAME>       <TOKEN>XXXXXXXX</TOKEN>   </PROTOCOL></TERMINALRESPONSE>
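Scrolling through these XML blobs by hand is tedious. As a sketch, a small grep helper can pull out the last HOST address the agent advertised (the function name is mine, not part of any Horizon tooling):

```shell
# Pull the most recent <HOST> value the agent advertised to the
# Connection Server from a Horizon debug log passed as $1.
advertised_host() {
  grep -o '<HOST>[^<]*</HOST>' "$1" | tail -1
}

# Usage against the log opened above:
# advertised_host /var/log/vmware/viewagent-debug.log
```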

This line caught my eye: <HOST>172.17.0.1</HOST>. This IP address wasn't part of our server network at all. How could that be? Let's take a look:

[root@vm01]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/22 brd 10.0.3.255 scope global noprefixroute dynamic ens192
       valid_lft 829sec preferred_lft 829sec
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ca:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Aha, there you are. Docker... Someone had installed Docker on that server.

docker0 is the default bridge network that Docker creates on a Linux host when you install Docker. It is a virtual network interface that acts as a bridge between Docker containers and the host system.

How docker0 Works

  1. Virtual Network Interface: docker0 is a network bridge created using Linux’s built-in bridge driver. It allows communication between containers running on the same host.
  2. IP Address Assignment: When Docker starts, it assigns an IP range (by default 172.17.0.0/16) to docker0, and the bridge itself takes the first usable address, 172.17.0.1. Containers connected to this bridge get an IP from this range.
  3. Container Networking: By default, when a new container is created, Docker assigns it an IP from the docker0 subnet and connects it to the bridge network.
  4. NAT (Network Address Translation): Containers using docker0 can communicate with the internet via NAT, allowing outbound traffic while restricting inbound access unless explicitly configured.
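The points above can be sanity-checked without Docker even being present. Here is a small bash sketch (pure shell arithmetic, names are mine) that confirms the agent's advertised 172.17.0.1 falls inside Docker's default bridge range, while the server's real address does not:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Return success if address $1 falls inside CIDR range $2.
in_cidr() {
  local addr net bits mask
  addr=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( addr & mask )) -eq $(( net & mask )) ]
}

in_cidr 172.17.0.1 172.17.0.0/16 && echo "inside docker0 range"
in_cidr 10.0.0.10  172.17.0.0/16 || echo "outside docker0 range"
```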

When Horizon Agent is installed on a virtual machine with multiple network interface cards (vNICs), you must specify the subnet that Horizon Agent uses. This subnet determines the network address that Horizon Agent provides to the Connection Server instance for client protocol connections. To resolve this issue, I made the following changes:

1. Open the /etc/vmware/viewagent-custom.conf file on the virtual machine where Horizon Agent is installed.

2. Edit the file to specify the desired subnet: uncomment the Subnet option and set it to the main network subnet (10.0.0.10/22 in my case).

3. Restart the Horizon Agent service: systemctl restart viewagent
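For reference, this is roughly what the change looks like in the config file (per the Horizon Agent for Linux documentation the file lives at /etc/vmware/viewagent-custom.conf; the subnet value is specific to this environment):

```shell
# Excerpt from /etc/vmware/viewagent-custom.conf (Horizon Agent for Linux).
# Setting Subnet pins the address the agent advertises to the Connection
# Server, so it stops handing out docker0's 172.17.0.1.
Subnet=10.0.0.10/22
```

After saving, `systemctl restart viewagent` makes the agent pick up the new value and re-register with the correct address.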

Summary

This was a good example of two different virtualization technologies running side by side and interfering with each other in a way that was not immediately apparent.

💡
Would you consider sharing this post with a friend if you enjoyed my work? Your support helps me grow and brings more aspiring engineers into the community.
I'd also love to hear how I can improve! Please leave a comment with the following:
- Topics you're interested in for future editions
- Your favorite part or take away from this one

I'll make sure to read and respond to every single comment!

This work by Leonid Belenkiy is licensed under Creative Commons Attribution 4.0 International