How the domain name system supports GSLB
The domain name system (DNS) is a distributed database that uses a client/server architecture. Name servers are the servers in this architecture, and resolvers are the clients: library routines, installed on an operating system, that create and send queries across the network.
The logical hierarchy of the DNS is shown in the following diagram:
Note:
The servers for the top-level domains, such as .com, .net, .org, and .gov, maintain name server-to-address mappings for the name server delegations within those domains. Each second-level domain is in turn responsible for maintaining name server-to-address mappings for its lower-level organizational domains. At the organization level, the individual host addresses are resolved for www, FTP, and other service-providing hosts.
Delegation
The main purpose of the current DNS topology is to ease the burden of maintaining all address records on one authority. It allows an organization's name space to be delegated to that organization, which can then further delegate its space to subdomains within the organization. For example, under citrix.com you can create subdomains called sales.citrix.com, education.citrix.com, and support.citrix.com. The corresponding departments can maintain their own sets of name servers that are authoritative for their subdomains, along with their own host name-to-address mappings. No single department is responsible for maintaining all of the Citrix address records, and each department can change addresses and modify topologies without imposing more work on the higher-level domain or organization.
Benefits of the hierarchical topology
Some of the benefits of the hierarchical topology include:
- Scalability.
- Caching in name servers at each level: a DNS request can be served by a host that is not authoritative for a particular domain but can still contribute the answer to the query, which cuts down on congestion and response time.
- Redundancy and resiliency to server failure, also a result of caching. If one name server fails, records can still be served from other servers that hold recent cached copies of the same records.
Resolvers
Resolvers are the client component of the DNS system. Programs running on a host that need information from the domain name space use the resolver. The resolver handles:
- Querying a name server.
- Interpreting responses (which might be resource records or an error).
- Returning the information to the programs that requested it.
The resolver is a set of library routines compiled into programs such as telnet, FTP, and ping; resolvers are not separate processes. A resolver puts together a query, sends it, and waits for an answer, resending the query (possibly to a secondary name server) if it is not answered within a certain time. Resolvers of this type are known as stub resolvers. Some resolvers have the added functionality of caching records and honoring the time to live (TTL). In Windows, this functionality is provided by the DNS Client service, viewable through the "services.msc" console.
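Because the stub resolver is just a set of library routines, any program can exercise it. The following is a minimal sketch (Python standard library only; the hostname is illustrative) that hands a lookup to the operating system's resolver:

```python
# Minimal sketch: exercise the OS stub resolver from Python.
# socket.getaddrinfo() calls into the same resolver library routines
# described above; the hostname is illustrative.
import socket

for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo("www.example.com", 80):
    print(family, sockaddr)
```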
Name Servers
Name servers generally store complete information about a particular part of the domain name space, called a zone. A name server is then said to have authority for that zone, and it can be authoritative for multiple zones as well.
The difference between a domain and a zone is subtle. A domain is the full set of entities, including its subdomains, while a zone is only the information within a domain that is not delegated to another name server. An example of a zone is citrix.com, while sales.citrix.com is a separate zone if it is delegated to another name server within the subdomain. In this case, the primary Citrix zone can include citrix.com, it.citrix.com, and support.citrix.com. Because sales.citrix.com is delegated, it is not part of the zone that the citrix.com name server is authoritative over. The following diagram shows the two zones.
To properly delegate a subdomain, you must assign authority for the subdomain to different name servers. In the preceding example, ns1.citrix.com does not contain information about the sales.citrix.com subdomain. Instead, it contains pointers to the name servers (such as ns1.sales.citrix.com) that are authoritative for that subdomain.
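One way to observe a delegation is to ask for the NS records of the parent zone and of the delegated subdomain. The following is a minimal sketch using the third-party dnspython library (an assumption; install it with `pip install dnspython`); the domain names are illustrative:

```python
# Sketch: inspect a delegation by querying NS records.
# Requires the third-party dnspython package (assumed available).
import dns.resolver

# Name servers authoritative for the parent zone.
for rr in dns.resolver.resolve("citrix.com", "NS"):
    print("citrix.com NS:", rr.target)

# If sales.citrix.com is delegated, its NS records differ from the
# parent's and point at the subdomain's own name servers.
for rr in dns.resolver.resolve("sales.citrix.com", "NS"):
    print("sales.citrix.com NS:", rr.target)
```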
Root name servers and query resolution
Root name servers know the IP addresses of all the name servers authoritative for the second-level domains. If a name server does not have information about a given domain in its own data files, it only needs to contact a root server to begin traversing the proper branch of the DNS tree structure and eventually reach the given domain. The traversal involves a series of requests to multiple name servers, each of which identifies the next authoritative name server that must be contacted for further resolution.
The following diagram shows a typical DNS request, assuming that no record for the requested name is cached anywhere along the traversal. The example uses a mock-up of the Citrix domain.
Recursive and non-recursive queries
The preceding example demonstrates the two types of queries that can occur.
- Recursive query: The query between the resolver and the locally configured name server is recursive. The name server receives the query and does not respond to the resolver until the query is fully answered or an error is returned. If the name server receives a referral, it follows the referral until it finally receives the answer (an IP address) to return.
- Non-recursive query: The queries that the locally configured name server makes to the subsequent authoritative domain-level name servers are non-recursive (or iterative). Each request is answered immediately, either with a referral to a lower-level authoritative server or with the answer to the query, if the queried name server holds the answer in its data files or its cache. A sketch of this iterative traversal follows the list.
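To make the distinction concrete, the following sketch performs the traversal itself by sending non-recursive queries, starting at a root server and following referrals, the way a recursive name server does. It uses the third-party dnspython library (an assumption) and omits production concerns such as TCP fallback on truncation, CNAME chasing, and retries against alternate servers:

```python
# Sketch: iterative (non-recursive) resolution with dnspython (assumed).
# We clear the RD (recursion desired) flag and follow referrals ourselves.
import dns.flags
import dns.message
import dns.query
import dns.resolver

def iterate(qname, server="198.41.0.4"):  # a.root-servers.net
    while True:
        query = dns.message.make_query(qname, "A")
        query.flags &= ~dns.flags.RD      # make the query non-recursive
        response = dns.query.udp(query, server, timeout=3)
        if response.answer:               # the final answer: we are done
            return response.answer
        if not response.authority:
            raise RuntimeError("no referral and no answer")
        # A referral: take one of the name servers named in the authority
        # section and repeat the query against it.
        ns_name = str(response.authority[0][0].target)
        server = dns.resolver.resolve(ns_name, "A")[0].address

for rrset in iterate("www.example.com"):
    print(rrset)
```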
Caching
Although the resolution process is involved and might require small requests to several hosts, it is fast. One of the factors that increases the speed of DNS resolution is caching. Each time a name server receives a recursive query, it might have to communicate with other servers to eventually reach the proper authoritative server for the specific request. It stores all of the information that it receives for future reference. When the next client makes a similar request, such as for a different host in the same domain, it already knows which name server is authoritative for that domain and can send a request directly there instead of starting at the root name server.
Caching also applies to negative responses, such as queries for hosts that do not exist. In this case, the server does not need to query the authoritative name server for the requested domain to learn that the host does not exist. To save time, the name server simply checks its cache and responds with the negative record.
Name servers do not cache records indefinitely, or else address changes would never propagate. To avoid such synchronization problems, DNS responses contain a time to live (TTL). This field describes the time interval for which the cache can store a record before it must discard it and check with the authoritative name server for any updated records. When records have not changed, the TTL also allows rapid, dynamic responses from devices performing GSLB.
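The TTL is visible in every response. As a small sketch (dnspython assumed; the hostname is illustrative), you can read the remaining TTL on an answer; repeating the query against a caching resolver shows the value counting down until the record is refreshed:

```python
# Sketch: read the TTL that governs how long a record may be cached.
# Requires dnspython (assumed); the hostname is illustrative.
import dns.resolver

answer = dns.resolver.resolve("www.example.com", "A")
# A caching resolver decrements the TTL as the record ages; at zero it
# must re-fetch the record from an authoritative name server.
print("TTL remaining:", answer.rrset.ttl, "seconds")
```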
Resource record types
Various RFCs provide a comprehensive list of DNS resource record types and their descriptions. The following table lists the common resource record types.
| Resource record type | Description | RFC |
| --- | --- | --- |
| A | A host address | RFC 1035 |
| NS | An authoritative name server | RFC 1035 |
| MD | A mail destination (obsolete; use MX) | RFC 1035 |
| MF | A mail forwarder (obsolete; use MX) | RFC 1035 |
| CNAME | The canonical name for an alias | RFC 1035 |
| SOA | Marks the start of a zone of authority | RFC 1035 |
| WKS | A well-known service description | RFC 1035 |
| PTR | A domain name pointer | RFC 1035 |
| HINFO | Host information | RFC 1035 |
| MINFO | Mailbox or mail list information | RFC 1035 |
| MX | Mail exchange | RFC 1035 |
| TXT | Text strings | RFC 1035 |
| AAAA | IPv6 address | RFC 3596 |
| SRV | Server selection | RFC 2782 |
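As a quick sketch of working with several of these record types (dnspython assumed; example.com is used illustratively):

```python
# Sketch: query a few common resource record types.
# Requires the third-party dnspython package (assumed available).
import dns.resolver

for rdtype in ("A", "AAAA", "MX", "TXT"):
    try:
        for rr in dns.resolver.resolve("example.com", rdtype):
            print(rdtype, rr)
    except dns.resolver.NoAnswer:
        print(rdtype, "no records")
```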
How GSLB supports DNS
GSLB uses algorithms and protocols to decide which IP address to send in response to a DNS query. GSLB sites are geographically distributed, and at each site a DNS authoritative name server runs as a service on the Citrix ADC appliance. All of the name servers at the various sites involved are authoritative for the same domain: each GSLB domain is a subdomain for which a delegation is configured. Therefore, the GSLB name servers are authoritative and can use any of the various load balancing algorithms to decide which IP address to return.
A delegation is created by adding a name server record for the GSLB domain in the parent domain's database files, along with the corresponding address records for the name servers used in the delegation. For example, to use GSLB for www.citrix.com, the following BIND zone file can be used to delegate requests for www.citrix.com to the name servers Netscaler1 and Netscaler2.
;###########################################################################
$TTL 3600       ; default record TTL (required by modern BIND)
@ IN SOA citrix.com. hostmaster.citrix.com. (
    1  ; serial
    3h ; refresh
    1h ; retry
    1w ; expire
    1h ) ; negative caching TTL
    IN NS ns1
    IN NS ns2
    IN MX 10 mail
ns1 IN A 10.10.10.10
ns2 IN A 10.10.10.20
mail IN A 10.20.20.50
; Old configuration, when www was not delegated to a GSLB name server:
; www IN A 10.20.20.50
; Updated configuration:
Netscaler1 IN A xxx.xxx.xxx.xxx
Netscaler2 IN A yyy.yyy.yyy.yyy
www IN NS Netscaler1.citrix.com.
www IN NS Netscaler2.citrix.com.
;
@ IN MX 20 mail2
mail2 IN A 10.50.50.20
;###########################################################################
Understanding BIND is not a requirement for configuring DNS. All compliant DNS server implementations have a method of creating the equivalent delegation. Microsoft DNS servers can be configured for delegation using the instructions at Create a zone delegation.
What makes GSLB on a Citrix ADC appliance different from using the standard DNS service to distribute traffic is that the Citrix ADC GSLB sites exchange data by using a proprietary protocol called the Metric Exchange Protocol (MEP). With MEP, the GSLB sites can maintain information about all the other sites. When a DNS request is received, the GSLB metrics exchanged over MEP are used to determine information such as the following:
- The site with the fewest current connections
- The site closest to the LDNS server that sent the request, based on round-trip times (RTT)
Several load balancing algorithms can be used, but in essence GSLB is DNS with a brain underneath, telling the name server (hosted on the Citrix ADC appliance) which address to send based on the metrics of the participating sites.
Another benefit that GSLB provides is the ability to maintain persistence (or site affinity). The source IP address of an incoming DNS query can be compared against a record of recent decisions to determine whether that address was directed to a particular site in the recent past. If so, the same address is sent in the DNS response to ensure that the client session is maintained.
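The following is an illustrative sketch of that source IP persistence logic, not Citrix ADC code: a table of recent decisions is consulted before the load balancing algorithm, so a returning client receives the same site's address. The timeout value and function names are assumptions made for the example:

```python
# Illustrative sketch (not ADC code): source IP persistence for GSLB.
import time

PERSISTENCE_TIMEOUT = 300   # seconds; illustrative value

persistence = {}            # client source IP -> (site address, timestamp)

def pick_site(client_ip, choose_by_metrics):
    """Return the site address for a DNS query from client_ip."""
    entry = persistence.get(client_ip)
    if entry is not None and time.time() - entry[1] < PERSISTENCE_TIMEOUT:
        site = entry[0]                 # recently directed: reuse the same site
    else:
        site = choose_by_metrics()      # otherwise let the GSLB algorithm decide
    persistence[client_ip] = (site, time.time())
    return site

# Example: a stand-in for the metric-based decision.
print(pick_site("203.0.113.7", lambda: "10.10.10.10"))
print(pick_site("203.0.113.7", lambda: "10.10.10.20"))  # same client -> same site
```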
Another form of persistence is obtained at the site level by using HTTP redirects or HTTP proxying. These forms of persistence occur after the DNS response. For example, if an HTTP request arrives at a site with a cookie directing the request to a different participating site, that site can either respond with a redirect or proxy the request to the appropriate site.
Metric exchange protocol
The Metric Exchange Protocol (MEP) is used to share the data used in GSLB calculations across sites. MEP connections can be unsecured over TCP port 3011, or secured by using SSL over TCP port 3009. Three types of data are exchanged over these connections, each with its own interval and exchange method:
- Site metric exchange: A polling exchange model. For example, if site1 has a configuration for site2 services, then every second site1 asks site2 for the status of those GSLB services, and site2 responds with the state and other load details.
- Network metric exchange: The LDNS RTT information exchange, used by the dynamic proximity load balancing algorithm. A push exchange model: every five seconds, each site pushes its data to the other participating sites.
- Persistency exchange: The SOURCEIP persistency exchange. Also a push exchange model: every five seconds, each site pushes its data to the other participating sites.
By default, site services are monitored over MEP based on polling information only. If you bind monitors to the services, the state is updated at the monitor interval, so you can control the frequency of the updates by setting the monitoring interval accordingly.