Dynamic round trip time method
Dynamic round trip time (RTT) is a measure of time or delay in the network between the client’s local DNS server and a data resource. To measure dynamic RTT, the Citrix ADC appliance probes the client’s local DNS server and gathers RTT metric information. The appliance then uses this metric to make its load balancing decision. Global server load balancing monitors the real-time status of the network and dynamically directs the client request to the data center with the lowest RTT value.
When a client’s DNS request for a domain reaches the Citrix ADC appliance configured as the authoritative DNS server for that domain, the appliance uses the RTT values to select the IP address of the best-performing site and returns that address in its response to the DNS request.
The Citrix ADC appliance uses different mechanisms, such as ICMP echo request or reply (PING), UDP, and TCP to gather the RTT metrics for connections between the local DNS server and participating sites. The appliance first sends a ping probe to determine the RTT. If the ping probe fails, a DNS UDP probe is used. If that probe also fails, the appliance uses a DNS TCP probe.
These mechanisms are represented on the Citrix ADC appliance as Load Balancing Monitors and are easily identified due to their use of the “ldns” prefix. The three monitors, in their default order, are:
ldns-ping
ldns-dns
ldns-tcp
These monitors are built into the appliance and are set to safe defaults. But they are customizable like any other monitor on the appliance.
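Before customizing these monitors, you might want to review their current settings. For example, at the command prompt (a minimal check; the output fields depend on your release), type:
show lb monitor ldns-ping
show lb monitor ldns-dns
show lb monitor ldns-tcp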
You can change the default order by setting it explicitly as a GSLB parameter. For example, to set the order to be the DNS UDP query followed by the PING and then TCP, type the following command:
set gslb parameter -ldnsprobeOrder DNS PING TCP
Unless the monitors have been customized, the Citrix ADC appliance performs UDP and TCP probing on port 53. However, unlike regular load balancing monitors, the probes do not need to be successful to provide valid RTT information. ICMP port unreachable messages, TCP resets, and DNS error responses, which would usually constitute a failure, are all acceptable for calculating the RTT value.
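To confirm which probe order is currently in effect, you can display the GSLB parameters. For example, at the command prompt (the output format depends on your release), type:
show gslb parameter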
Once the RTT data has been compiled, the appliance uses the proprietary metrics exchange protocol (MEP) to exchange RTT values between participating sites. After calculating RTT metrics, the appliance sorts the RTT values to identify the data center with the best (smallest) RTT metric.
If RTT information is not available (for example, when a client’s local DNS server accesses the site for the first time), the Citrix ADC appliance selects a site by using the round robin method and directs the client to the site.
To configure the dynamic method, you configure the site’s GSLB virtual server for dynamic RTT. You can also set the interval at which local DNS servers are probed to a value other than the default.
Configure a GSLB virtual server for dynamic RTT
To configure a GSLB virtual server for dynamic RTT, you specify the RTT load balancing method.
The Citrix ADC appliance regularly validates the timing information for a given local server. If a change in latency exceeds the configured tolerance factor, the appliance updates its database with the new timing information and sends the new value to other GSLB sites by performing a MEP exchange. The default tolerance factor is 5 milliseconds (ms).
The RTT tolerance factor must be the same throughout the GSLB domain. If you change it for a site, you must configure identical RTT tolerance factors on all Citrix ADC appliances deployed in the GSLB domain.
To configure a GSLB virtual server for dynamic RTT by using the command line interface
At the command prompt, type:
set gslb vserver <name> -lbMethod RTT -tolerance <value>
Example:
set gslb vserver Vserver-GSLB-1 -lbMethod RTT -tolerance 10
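To verify the load balancing method and tolerance value now in effect, and to confirm that the tolerance matches the value configured on the other appliances in the GSLB domain, you can display the virtual server. For example (the output format depends on your release):
show gslb vserver Vserver-GSLB-1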
To configure a GSLB virtual server for dynamic RTT by using the configuration utility
Navigate to Traffic Management > GSLB > Virtual Servers and double-click the virtual server.
Set the probing interval of local DNS servers
The Citrix ADC appliance uses different mechanisms, such as ICMP echo request or reply (PING), TCP, and UDP to obtain RTT metrics for connections between the local DNS server and participating GSLB sites. By default, the appliance uses a ping monitor and probes the local DNS server every 5 seconds. The appliance then waits 2 seconds for the response. If a response is not received in that time, it uses the TCP DNS monitor for probing.
However, you can modify the time interval for probing the local DNS server to accommodate your configuration.
To modify the probing interval by using the command line interface
At the command prompt, type:
set lb monitor <monitorName> <type> -interval <integer> <units> -resptimeout <integer> <units>
Example:
set lb monitor ldns-tcp LDNS-TCP -interval 10 sec -resptimeout 5 sec
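Because the ping probe is sent first by default, you might also want to adjust the ldns-ping monitor in the same way. The following sketch assumes the same interval and response timeout values as in the preceding example:
set lb monitor ldns-ping LDNS-PING -interval 10 sec -resptimeout 5 sec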
To modify the probing interval by using the configuration utility
Navigate to Traffic Management > Load Balancing > Monitors, and double-click the monitor that you want to modify (for example, ldns-ping).