VLAN configuration for admin partitions
VLANs can be bound to a partition as a “Dedicated” VLAN or a “Shared” VLAN. Based on your deployment, you can bind a VLAN to a partition to isolate its network traffic from other partitions.
Dedicated VLAN – A VLAN bound to only one partition, with the “Sharing” option disabled. It must be a tagged VLAN. For example, in a client-server deployment, a system administrator creates a dedicated VLAN for each partition on the server side for security reasons.
Shared VLAN – A VLAN bound to (shared across) multiple partitions, with the “Sharing” option enabled. For example, in a client-server deployment, if the system administrator does not have control over the client-side network, a VLAN is created and shared across multiple partitions.
A shared VLAN is created in the default partition and can then be bound to multiple partitions. By default, a shared VLAN is implicitly bound to the default partition, and therefore it cannot be bound to the default partition explicitly.
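As a quick sketch, the two bindings look like this at the CLI (the VLAN IDs, interface, and partition names are illustrative; the detailed steps appear in the sections that follow):
add vlan 100
bind vlan 100 -ifnum 1/1 -tagged
bind partition P1 -vlan 100
add vlan 200 -sharing ENABLED
bind partition P1 -vlan 200
bind partition P2 -vlan 200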
Notes
A NetScaler appliance deployed on any hypervisor platform (ESX, KVM, Xen, or Hyper-V) must comply with both of the following conditions in a partition setup and traffic domain:
- Enable promiscuous mode, MAC changes, MAC spoofing, or forged transmits for shared VLANs bound to partitions.
- Enable the VLAN in the port group properties of the virtual switch, if the traffic flows through a dedicated VLAN.
In a partitioned (multitenant) NetScaler appliance, a system administrator can isolate the traffic flowing to a particular partition or partitions by binding one or more VLANs to each partition. A VLAN can be dedicated to one partition or shared across multiple partitions.
Internal routing between partitions that are hosted on the same NetScaler appliance is not supported.
Dedicated VLANs
To isolate the traffic flowing into a partition, create a VLAN and associate it with the partition. The VLAN is then visible only to the associated partition, and the traffic flowing through the VLAN is classified and processed only in the associated partition.
To implement a dedicated VLAN for a particular partition, do the following.
- Add a VLAN (V1).
- Bind a network interface to the VLAN as a tagged interface.
- Create a partition (P1).
- Bind partition (P1) to the dedicated VLAN (V1).
Configure a dedicated VLAN by using the CLI
-
Create a VLAN
add vlan <id>
Example
add vlan 100
-
Bind a VLAN
bind vlan <id> -ifnum <interface> -tagged
Example
bind vlan 100 -ifnum 1/8 -tagged
-
Create a partition
add ns partition <partition name> [-maxBandwidth <positive_integer>] [-maxConn <positive_integer>] [-maxMemLimit <positive_integer>]
Example
add ns partition P1 -maxBandwidth 200 -maxConn 50 -maxMemLimit 90
Done
-
Bind a partition to a VLAN
bind partition <partition-id> -vlan <id>
Example
bind partition P1 -vlan 100
Configure a dedicated VLAN by using the NetScaler GUI
- Navigate to Configuration > System > Network > VLANs and click Add to create a VLAN.
-
On the Create VLAN page, set the following parameters:
- VLAN ID
- Alias Name
- Maximum Transmission Unit
- Dynamic Routing
- IPv6 Dynamic Routing
- Partitions Sharing
- In the Interface Bindings section, select one or more interfaces and bind them to the VLAN.
- In the IP Bindings section, select one or more IP addresses and bind them to the VLAN.
- Click OK and Done.
Shared VLAN
In a shared VLAN configuration, each partition has a MAC address, and traffic received on the shared VLAN is classified by MAC address. Only a Layer 3 VLAN is recommended because it can restrict the traffic to the subnet. A partition MAC address is applicable and important only for a shared VLAN deployment.
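As a minimal sketch (the VLAN ID and addresses are illustrative), you can make a shared VLAN a Layer 3 VLAN by binding an IP subnet to it in the default partition:
add vlan 200 -sharing ENABLED
add ns ip 198.51.100.1 255.255.255.0 -type SNIP
bind vlan 200 -IPAddress 198.51.100.1 255.255.255.0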
Note
Starting from NetScaler version 12.1 build 51.16, a shared VLAN in a partitioned appliance supports dynamic routing protocols.
The following diagram shows how a VLAN (VLAN 10) is shared across two partitions.
To deploy a shared VLAN configuration, do the following:
- Create a VLAN with the sharing option ‘enabled’, or enable the sharing option on an existing VLAN. By default, the option is ‘disabled’.
- Bind the partition interface to the shared VLAN.
- Create the partitions, each with its own partition MAC address.
- Bind the partitions to the shared VLAN.
Configure a shared VLAN by using the CLI
At the command prompt, type one of the following commands to add VLAN or set the sharing parameter of an existing VLAN:
add vlan <id> [-sharing (ENABLED | DISABLED)]
set vlan <id> [-sharing (ENABLED | DISABLED)]
add vlan 100 -sharing ENABLED
set vlan 100 -sharing ENABLED
Bind a partition to a shared VLAN by using the CLI
At the command prompt, type:
bind partition <partition-id> -vlan <id>
Example
bind partition P1 -vlan 100
To create a partition with its own partition MAC address, type:
add ns partition <partition name> [-maxBandwidth <positive_integer>] [-maxConn <positive_integer>] [-maxMemLimit <positive_integer>] [-partitionMAC <mac_addr>]
Example
add ns partition P1 -maxBandwidth 200 -maxConn 50 -maxMemLimit 90 -partitionMAC 22:33:44:55:66:77
Done
Configure a Partition MAC Address by using the CLI
set ns partition <partition name> [-partitionMAC <mac_addr>]
set ns partition P1 -partitionMAC 22:33:44:55:66:77
Bind partitions to a shared VLAN by using the CLI
bind partition <partition-id> -vlan <id>
bind partition P1 -vlan 100
bind partition P2 -vlan 100
bind partition P3 -vlan 100
bind partition P4 -vlan 100
Configure Shared VLAN by using the NetScaler GUI
-
Navigate to Configuration > System > Network > VLANs and then select a VLAN profile and click Edit to set the partition sharing parameter.
-
On the Create VLAN page, select the Partitions Sharing check box.
-
Click OK and then Done.
Dynamic routing over a shared VLAN across admin partitions
Admin partitions in a NetScaler appliance provide a way to host multiple tenants.
Starting from NetScaler version 12.1 build 51.16, a shared VLAN in a partitioned appliance supports the dynamic routing protocol. Routing can be configured in dedicated or shared VLANs associated with admin partitions.
Dedicated VLAN of an admin partition. In a dedicated VLAN, the data path for the tenant is identified using one or more VLANs. This results in strict configuration and data-path isolation for the tenant. To advertise the health of a VIP address, dynamic routing is enabled in each partition, and the routing adjacency is established per partition.
A shared VLAN across admin partitions. In a shared VLAN, VIP addresses configured in a non-default partition can be advertised through a single adjacency or peering formed in the default partition. A SNIP address in the non-default partition is used as the next hop for all the VIP addresses (configured with the advertiseOnDefaultPartition option) in that non-default partition. The configured SNIP address is marked as a next-hop IP address in the routing advertisements.
Consider an example setup of admin partitions in a NetScaler appliance in which VLAN 100 is shared across the default partition and the non-default partitions AP-3 and AP-5. SNIP address SNIP1 is added in the default partition, SNIP3 is added in AP-3, and SNIP5 is added in AP-5. SNIP1, SNIP3, and SNIP5 are reachable over VLAN 100. VIP address VIP1 is added in the default partition, VIP3 is added in AP-3, and VIP5 is added in AP-5. VIP3 and VIP5 are advertised through the single adjacency or peering formed in the default partition.
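A minimal CLI sketch of this example (the IP addresses are illustrative, VIP3 and VIP5 are assumed to already exist in their partitions, and the exact steps are described in the sections that follow):
switch partition default
set vlan 100 -dynamicRouting enabled
add ns ip 192.0.2.10 255.255.255.0 -type SNIP -dynamicRouting enabled
bind vlan 100 -IPAddress 192.0.2.10 255.255.255.0
switch partition AP-3
add ns ip 192.0.2.30 255.255.255.0 -type SNIP
set ns ip 203.0.113.30 255.255.255.255 -hostRoute enabled -advertiseOnDefaultPartition enabled -hostRtGw 192.0.2.30
switch partition AP-5
add ns ip 192.0.2.50 255.255.255.0 -type SNIP
set ns ip 203.0.113.50 255.255.255.255 -hostRoute enabled -advertiseOnDefaultPartition enabled -hostRtGw 192.0.2.50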
Before you begin
Before configuring dynamic routing over a shared VLAN in a non-default admin partition, make sure that:
-
Dynamic routing is configured on the shared VLAN in the default partition. Configuring dynamic routing on the shared VLAN in the default partition consists of the following steps:
- Enable dynamic routing on the shared VLAN.
- Add a SNIP IP address with dynamic routing enabled. This SNIP IP address is used for dynamic routing with the upstream router.
- Bind the SNIP IP subnet to the shared VLAN.
- One or more dynamic routing protocols are configured on the default partition. For more information, see Configure dynamic routing protocols.
Configuration steps
Configuring dynamic routing over a shared VLAN in a non-default admin partition consists of the following steps:
-
Add a SNIP IP address in the non-default partition. This SNIP IP address must be in the same subnet as the SNIP IP address that is used for dynamic routing in the default partition.
-
Set or enable the following parameters to advertise a VIP address in a non-default partition by using dynamic routing.
- Host route gateway (hostRtGw). Set this parameter to the SNIP address added in the preceding step.
- Advertise on default partition (advertiseOnDefaultPartition). Enable this parameter.
Sample configuration
Consider an example of an admin partition setup in a NetScaler appliance. A non-default admin partition, AP-3, is configured on this appliance. A shared VLAN, VLAN 100, is bound to AP-3. The following sample configuration configures dynamic routing, through VLAN 100, in AP-3.
Steps | Sample configuration |
---|---|
On the default admin partition | - |
Enable dynamic routing on shared VLAN 100. | set vlan 100 -dynamicRouting enabled |
Add SNIP address 192.0.2.10 with dynamic routing enabled. This SNIP address is used for dynamic routing with the upstream router. | add ns ip 192.0.2.10 255.255.255.0 -type SNIP -dynamicRouting enabled |
Bind the subnet of 192.0.2.10 to shared VLAN 100. | bind vlan 100 -IPAddress 192.0.2.10 255.255.255.0 |
On the non-default admin partition AP-3 | - |
Add SNIP address 192.0.2.30. This SNIP address is in the same subnet as the SNIP address 192.0.2.10 on the default partition. | add ns ip 192.0.2.30 255.255.255.0 -type SNIP |
To advertise VIP address 203.0.113.30 using dynamic routing, enable the advertiseOnDefaultPartition parameter and set the hostRtGw parameter to 192.0.2.30. | set ns ip 203.0.113.30 255.255.255.255 -hostRoute enabled -advertiseOnDefaultPartition enabled -hostRtGw 192.0.2.30 |
Dynamic routing of IPv6 over a shared VLAN across admin partitions
The enable ns feature IPv6PT and set L3Param -ipv6DynamicRouting ENABLED commands must be run before an IPv6 address can be dynamically routed over a shared VLAN in an admin partition. The following sample configuration helps you configure dynamic routing of IPv6 over a shared VLAN.
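For convenience, these prerequisite commands, run in the default partition, are:
enable ns feature IPv6PT
set L3Param -ipv6DynamicRouting ENABLED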
Sample configuration
The following sample configuration configures IPv6 dynamic routing, through VLAN 100, in AP-3.
Steps | Sample configuration |
---|---|
On the default admin partition | - |
Enable dynamic routing on shared VLAN 100. | set vlan 100 -dynamicRouting enabled |
Add SNIP address 2001:b:c:d::1/64 with dynamic routing enabled. This SNIP address is used for dynamic routing with the upstream router. | add ns ip6 2001:b:c:d::1/64 -type SNIP -dynamicRouting enabled |
Bind the subnet of 2001:b:c:d::1/64 to shared VLAN 100. | bind vlan 100 -IPAddress 2001:b:c:d::1/64 |
On the non-default admin partition AP-3 | - |
Add SNIP address 2001:b:c:d::2/64. This SNIP address is in the same subnet as the SNIP address 2001:b:c:d::1/64 on the default partition. | add ns ip6 2001:b:c:d::2/64 -type SNIP |
To advertise VIP address 2002::1/128 using dynamic routing, enable the advertiseOnDefaultPartition parameter and set the ip6hostRtGw parameter to 2001:b:c:d::2. | set ns ip6 2002::1/128 -hostRoute enabled -advertiseOnDefaultPartition enabled -ip6hostRtGw 2001:b:c:d::2 |
The VIP address present in the admin partition must be visible in VTYSH of the default partition as a kernel route.
> switch partition default
Done
>vtysh
ns#
ns# sh ipv6 route kernel
IPv6 routing table
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
IA - OSPF inter area, E1 - OSPF external type 1,
E2 - OSPF external type 2, I - IS-IS, B - BGP
Timers: Uptime
K 2002::1/128 via 2001:b:c:d::2, vlan0, 01:24:15
In this output from the default partition, VIP address 2002::1, which is present in the admin partition, appears as a kernel route known through SNIP6 address 2001:b:c:d::2. The route can be advertised to the upstream router by using the “redistribute kernel” option under OSPFv3 or BGP+ in the default partition.
ns# sh run router ipv6 ospf
!
router ipv6 ospf 1
redistribute kernel
!
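A minimal VTYSH sketch (assuming an OSPFv3 process with ID 1 already exists in the default partition) of adding the redistribute kernel option:
ns# configure terminal
ns(config)# router ipv6 ospf 1
ns(config-router)# redistribute kernel
ns(config-router)# end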
Shared VLAN with admin partition on NetScaler SDX appliance
On an SDX appliance, you must generate and configure the partition MAC (PMAC) address by using the Management Service user interface before using admin partitions with shared VLANs. The Management Service enables you to generate partition MAC addresses by:
- Using a base MAC address
- Specifying custom MAC addresses
- Randomly generating MAC addresses
Notes
- Randomly generated MAC addresses are used for deployments other than high availability.
- After generating the partition MAC addresses, you must restart the NetScaler instance before configuring the admin partitions. For more information on generating partition MAC addresses from the SDX appliance, see Generating Partition MAC Addresses to Configure Admin Partition on a NetScaler instance in the SDX Appliance.