Tuesday, July 2, 2013

How to Enable Legacy Client Drive Mapping Format on XenApp 6 and 6.5

Summary
With XenApp 6.0 and XenApp 6.5, Citrix changed the format used to display mapped client drives. In earlier releases, drives were mapped to a physical drive letter. In these releases, redirection similar to Terminal Services is implemented, displaying the drive as a local disk along with the source device it is mapped from. Client drives therefore appear as “Local Disk”, as in the following example:
There are instances that require an administrator to enable legacy client drive mapping in XenApp 6.0 and XenApp 6.5 so that unique drive letters are used to map client drives.
Procedure
To enable legacy client drive mapping on XenApp 6.0 and XenApp 6.5, the following registry key must be set on the server:
Caution! This procedure requires you to edit the registry. Using Registry Editor incorrectly can cause serious problems that might require you to reinstall your operating system. Citrix cannot guarantee that problems resulting from the incorrect use of the Registry Editor can be solved. Use the Registry Editor at your own risk. Back up the registry before you edit it. Create this registry key if it does not exist:
HKEY_LOCAL_MACHINE\Software\Citrix\UncLinks\
  1. Under the key, create a DWORD: UNCEnabled.
  2. Set the value of UNCEnabled to “0”.
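If you prefer to script the change, the following commands are a minimal sketch of the same procedure using reg.exe from an elevated command prompt (reg add creates the UncLinks key if it does not already exist):

rem Set UNCEnabled to 0 to use legacy drive-letter client drive mapping (creates the key if missing)
reg add "HKLM\Software\Citrix\UncLinks" /v UNCEnabled /t REG_DWORD /d 0 /f
rem Confirm the value that was written
reg query "HKLM\Software\Citrix\UncLinks" /v UNCEnabled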

Monday, July 1, 2013

Basic Guide To vDisk Update Management


Document ID: CTX137757   /   Created On: May 31, 2013   /   Updated On: May 31, 2013

Summary
This article describes how to set up a managed vDisk for automatic updates in Provisioning Services 6.x.
Basic Guide To vDisk Update Management
This section provides a procedure for setting up a managed vDisk for automatic updates in Provisioning Services 6.x.
Before getting started, you must review the following steps that occur when a scheduled automatic update is initiated:
  1. A new version of the managed vDisk is auto-created in maintenance (read/write) access mode by Update Services.
  2. Your designated Update Virtual Machine (VM) powers on and attaches to its associated vDisk, incorporating the new vDisk version just created by Update Services. Boot up begins and the vDisk performs the update you have configured in the task (SCCM, WSUS, another scheduled task, and so on).
  3. Upon completion of the scheduled update, the VM shuts down and the newly created version is placed in the Access status that you pre-selected when creating the task (Maintenance, Test, or Production).
Note: The following steps outline the procedure for creating a designated diskless VM, adding the host to the host node, adding a managed vDisk for auto updates, and creating a scheduled task. In this example, we are using a XenServer environment, but you should have no trouble adapting the instructions to your hypervisor environment.
Creating your Designated Update Virtual Machine and Adding a Host Connection to vDisk Update Management
  1. Create a diskless VM on a hypervisor, for example AutoUpdateVM.
    This VM will attach to your production vDisk, so configure this to start from network only. As you will see later, an Active Directory machine account will be created with the same name as the VM you created.
  2. In your Provisioning Server Console, expand the vDisk Update Management node.
  3. Right-click on the Hosts node as shown in the following screen shot and select Add host...
  4. Select the appropriate hypervisor type. Click Next.
  5. Enter an arbitrary Name and optional Description for your host connection. Click Next.
  6. Enter the Hostname or IP Address of your hypervisor and click Next:
  7. Enter the administrator credentials for your hypervisor, then select Verify Connection.
  8. Click OK and, if the connection is successful, click Next.
  9. Click Finish:
Your hypervisor host should now appear in the right pane, as shown in the following screen shot:
Adding a Managed vDisk to the vDisks Node Under vDisk Update Management
  1. Right-click on the vDisk node and select Add vDisk. Click Next on the initial screen.
  2. On the following screen, make your desired selection for Store and Provisioning Services Server (or leave default of All) and highlight the desired vDisk to be added. Click Next:
  3. Choose the desired Host Connection from the list and type in the name of the designated VM you created in Step 1 at the beginning of this tutorial.
    Note: The name is case sensitive, and a machine account with that name must not already exist in Active Directory.
  4. Click Next.
  5. Select the OU that you would like the machine account for your designated VM to be placed in. It will have the same name as the VM itself. Click Next.
  6. After reviewing the settings (as in the following screen shot), click Finish.
Your vDisk node should display similar to the following in the right pane:
Creating an Update Task to be Performed at a Scheduled Time
  1. Right-click on the Tasks node, select Add Task… and click Next on the initial screen:
  2. Give the task a Name and enter an optional Description of the task to be performed:
Note: The vDisk must already be configured to receive its Windows Updates from a WSUS Server before beginning this procedure. If it is not, before allowing this task to run, start your vDisk in Private Mode and make the necessary changes; afterward, shut down the vDisk and place it back in Standard Mode. See http://support.citrix.com/proddocs/topic/provisioning-61/pvs-vdisks-update-vm-create-configure-esd.html for details regarding the steps to set up the vDisk for WSUS, or consult Microsoft. A purely illustrative registry sketch of the client-side WSUS settings follows.
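As an illustration only (WSUS client settings are normally delivered through Group Policy, and the server URL below is a placeholder), the registry values involved when pointing an image directly at a WSUS server look like this:

rem Point the Windows Update client inside the vDisk at an internal WSUS server (placeholder URL)
reg add "HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://wsus.example.com:8530" /f
reg add "HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://wsus.example.com:8530" /f
rem Tell Automatic Updates to use the server specified above
reg add "HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f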
  3. Schedule the task according to your preference (for example, if you always want to run the task manually, select None), and click Next:
  4. In the following screen, click Add to select the managed vDisk(s) this task applies to:
  5. Select the vDisk(s) you want this task to update and click OK.
    Notice the vDisk listed below; it shows the host and the VM associated with it for auto-updates. Remember that each managed vDisk must have a unique Update VM associated with it. Keep this in mind when naming your Update VMs in the early steps of this tutorial, and consider using a name that tells you which vDisk it is for. If you forgot to add a vDisk in the Adding a Managed vDisk to the vDisks Node Under vDisk Update Management section covered previously, you can add it here using the Add Managed vDisk button; you will be returned to this same step when finished. Click OK, then Next on the Update Task Wizard screen:
  6. Select the appropriate ESD task from the list, as displayed in the following screen shot:
    Note: In this tutorial, WSUS is selected.
Note: If selecting None as your option, you must create and install a batch file called update.bat. Place this file in the product folder of your Provisioning Services software under C:\program files\citrix\provisioning services (or Program Files (x86) for 64-bit machines).
Update.bat can contain anything you want. If it is missing, the Provisioning Services Server event log records an error stating that update.bat was not found, an error is returned, and the automatic versioning does not take place. If you are using WSUS or SCCM updates, update.bat is not required. None is generally selected when users have a Windows Scheduled Task configured that they want to run when the update task runs. A minimal example of update.bat follows.
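For illustration only, a minimal update.bat might simply log that it ran and trigger a Windows Update detection cycle; the log path below is arbitrary and the content is entirely up to you:

@echo off
rem Example update.bat - runs inside the Update VM when the scheduled task fires
if not exist C:\UpdateLogs mkdir C:\UpdateLogs
echo %date% %time% vDisk update task started >> C:\UpdateLogs\vdisk-update.log
rem Ask the Windows Update client to check for updates now (example action only)
wuauclt /detectnow
echo %date% %time% vDisk update task finished >> C:\UpdateLogs\vdisk-update.log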
  7. In the following screen shot, there are optional scripts you can specify to run at specific times during the task execution, as described at http://support.citrix.com/proddocs/topic/provisioning-61/pvs-vdisks-wizards-task-create.html.

    For this tutorial, leave the optional scripts blank.
Note: If using the optional scripts, create a new folder named Scripts under C:\program files\citrix\provisioning services (or Program Files (x86) for 64-bit machines) and place the scripts in it.
  8. Select the Access Mode from the given options for the new vDisk version being created. Click Next.
  9. Review the information on the following screen. If all is satisfactory, click Finish.
  10. In the right pane, if you right-click on the Task you just created, you will see the following options.
    Using Properties, you can modify your current settings, including the scheduled time and frequency for the task to run. You can also choose to run the task immediately.

Tuesday, June 25, 2013

XenDesktop 7: Upgrade & migration paths for XenDesktop and XenApp customers


With the release of XenDesktop 7 right around the corner, we are frequently asked which version of XenDesktop we recommend and how to migrate existing environments from previous versions of XenApp and XenDesktop. I want to outline a few of the ways you can easily adopt XenDesktop 7 for your deployments, in addition to highlighting a bit of the direction we are headed to further simplify migrations in the future.
Moving to XenDesktop 7
If you are a XenDesktop customer today, there are three paths to XenDesktop 7 when it becomes available later this month:
1.     New installation. If you are a new Citrix XenDesktop customer, or if you are a XenApp customer wanting to deploy XenDesktop 7 App Edition as a benefit of your Subscription Advantage, a new installation is the recommended approach. A new installation offers a fresh-start approach to deploying XenDesktop 7 in your environment. You’ll also benefit from the simplified installation, easier configuration, and latest features available in XenDesktop 7.
  • For XenApp customers, you can use StoreFront 2.0 to aggregate apps and desktops between your existing XenApp farm and your new XenDesktop 7 App Edition site – offering users a single point of transparent access to all of their applications, whether running on Windows Server 2008, Windows Server 2008 R2 or Windows Server 2012.
  • If you are a XenDesktop 5.x customer who wants to take advantage of new HDX features in XenDesktop 7 (for example, Windows 8 VDI delivery) but you don’t have time to roll out or upgrade your 5.x delivery controllers yet, you can install the version 7 VDA on Windows 7 or Windows 8, and it will register and work with your existing 5.x delivery controllers.
2.     In-place upgrade. XenDesktop 7 supports in-place upgrade of the XenDesktop 5 VDA and Controller. This approach allows existing XenDesktop 5 customers to seamlessly and quickly upgrade to the latest version without disrupting or re-planning their XenDesktop deployment. Customers can run the in-place VDA upgrade to bring their golden image to the latest version. Once upgraded, users will immediately benefit from the new HDX features delivered in XenDesktop 7. Do note that Citrix does not support in-place upgrade from the Excalibur Tech Preview or XenDesktop 4.
3.     Migration tool. For XenDesktop 4 customers, XenDesktop 7 delivers a migration tool to help you migrate from XenDesktop 4 to XenDesktop 7. The migration tool exports your XenDesktop 4 data to an XML file, which can then be imported into your new XenDesktop 7 site. Due to a number of new enhancements in XenDesktop 7, the migration tool will not migrate administrators, licensing, desktop group folders or registry keys.
For XenApp customers, if you are wondering how to migrate from your XenApp farm to XenDesktop 7, read on…
Simplified migration through Merlin
Enterprise software migrations can be agonizing for IT administrators. This agony can be described as a function of end-user unhappiness, cost (capex and opex), and service uptime throughout the migration. For desktop virtualization deployments, many migrations accompany Windows migrations.
At Synergy Anaheim, we showcased the new app orchestration layer as part of the upcoming Merlin release of Avalon. The App Orchestration layer will provide customers with the ability to manage XenApp 6.5 and XenDesktop 7.x delivery sites together from a single interface through a desired-state orchestration engine. This orchestration layer also enables customers to manage deployments spanning multiple locations, further reducing site-specific consoles, providing a global view of your Citrix deployment, and ensuring consistent configuration. You can watch the breakout session on Merlin here.
When we announced the Avalon strategy, one of the most requested features was the ability to support existing deployments with the app orchestration layer. The migration capability allows you to adopt app orchestration for your new delivery sites and then migrate your existing XenApp and XenDesktop deployments at your own pace. We designed this feature around five basic principles for a successful migration:
  1. Don’t modify the production environment. Migration activities take place while users are accessing the production environment for their day-to-day activities. Eliminating any changes to the production environment provides the assurance that users cannot be affected during the migration activity.
  2. Make the migration transparent to users. When you are ready to migrate users to the new delivery site, it’s important that the migration event is completely seamless to the user. Think about a web service: as a user, you don’t care what version of ShareFile you are accessing. All you care about is that the set of apps and desktops you had when you logged off is available the next time you need them.
  3. Enable migration in stages. Many XenApp and XenDesktop deployments serve up hundreds of apps and desktop catalogs to thousands of users across different delivery groups. When approaching migration, you want to migrate different apps and desktops in stages based on their mission criticality. Users are no different; you should be able to migrate different user groups in stages.
  4. Allow the ability to roll back quickly if issues arise. If an issue does arise during the migration of the separate apps, desktops, and user groups, you should be able to automatically roll back to the existing production environment without having to manually reconfigure access paths or restore entitlements.
  5. Provide feedback on migration status. Because you are approaching the migration in stages and continuing to operate your existing production environment alongside the new deployment, it’s important to have a clear dashboard view of what has already been migrated and its status. If changes are made on the production deployment, you can re-migrate the offerings or user groups to the new site.
In the Merlin release, we are delivering on these principles through the new migration feature built into the powerful app orchestration layer. This will allow you to migrate at your own pace by decoupling user groups, app and desktop catalogs, and entitlements from the existing environment through a guided experience. The result is a repeatable, automated approach that aligns with how customers approach migration today, without requiring homemade scripts or manual tasks.
But what if you have apps or desktops on XenApp or XenDesktop that you don’t want to migrate, but just aggregate under StoreFront? App orchestration simplifies this process by taking the manual task of updating the aggregation rules – which would otherwise need to be repeated with every StoreFront cluster – and automating it. Because app orchestration holds the user affinity, tenancy, and location policies, it can update the aggregation rules automatically across your multi-site deployment.
Summary
Migrating to the best-in-class desktop virtualization product has never been easier. Customers looking to migrate to XenDesktop 7 have multiple options based on their use case and existing environment. Then, with the Merlin release, we will take the next step toward simplified, guided migration through app orchestration. Look out for more posts in the future that go into further detail on migration.

Citrix IMA and Zone Data Collector Communication

Summary
The following text is written to assist Citrix customers in understanding how IMA traffic works in reference to Zone Data Collector (ZDC) elections in a Presentation Server 4.0 environment. Detailed information on IMA traffic is used to help you understand the finite communication processes Citrix uses in zone server-to-server communications. The data was obtained from information provided by Citrix Engineering and Citrix Technical Support.
To demonstrate this information, the following information uses a fictional company named Miami Inc.
Case/Customer
A large Citrix customer, Miami Inc, is working to establish an understanding of how zone elections work in their environment, and gain the ability to understand what to look at should troubleshooting IMA Zone Elections in the environment be necessary. The customer needs to understand not only how zones should be set up, but also how communication amongst member servers and data collectors works across multiple zones during normal farm operations.
Miami Inc has approximately 12,000 Citrix users connected at any given time. The users access the farm from multiple global locations, some from the Corporate LAN and others via the Citrix Access Gateway SSL VPN.
Case Study Outline
IMA Communication – Traffic Fundamentals
The following information addresses questions about IMA basics.
How does IMA traffic get sent and processed amongst Citrix servers?
Most customers understand that IMA traffic goes over port 2512, but very few, Miami Inc included, understand how the traffic is attached to machines for processing by IMA.
Due to the stringent security requirements Miami Inc has around data traversal amongst networks, they need to understand how data is transported amongst servers in a zone.
Citrix has what we call a transport “function” that is responsible for getting packets of information from one host to another. The transport component is relatively small and does not actually care about the data it is transporting. It is a small set of functions for setting up bindings to hosts and subsequently sending packets to those hosts.
How does IMA know who the hosts in a farm are to ensure communication requests are from approved sources?
A set of functions that we refer to as the Host Resolver component is responsible for providing information about all of the hosts in the farm. It provides APIs for enumerating hosts, setting/getting a host’s zone, and mapping between some of the various ways used to refer to a remote host. Hosts may be identified by name (a simple UNICODE string), by HOSTID (a unique integer representing a host), or by host binding (HBINDING).
While this is good information, Miami Inc needs it put into greater detail for its internal security review, so below we explain in more detail how the actual connections are made.
Mapping connections for traffic sending between servers (hosts) in the farm
Various parts of the IMA system use different specifiers to refer to remote hosts. These types of specifiers include:
Host Names – Used by user interface components to refer to hosts. A host name is used in conjunction with a port specifier (typically the default IMA port, 2512) in order to create a binding with the Transport component detailed above. Every host has a definitive name that it determines itself when joining the farm.
Host ID – This is an integer used mostly by subsystems to refer to hosts.
As mentioned above, any time a message is sent to a remote host, it needs to have a host binding for that host.
The host resolver maintains two mapping hash tables for quick translations.
The host resolver’s main data structure is the HOST_RECORD, which contains a host’s name, zone name, IMA port, Management Console port, host ID, version, and ranking information. The ranking information is used by the Zone Manager, which is described below, when electing a zone master.
Connection State Information
A binding attempt is always in one of three states:
• Connecting
• Active
• Closing
When an outgoing connection is created, it is first placed in the state CONNECTING. This is a temporary state that quickly is changed to WAIT_BIND_REQUEST as the connection waits for a bind request to come back from the remote host. Once a BIND_REQUEST is received, the original host sends a BIND_RESPONSE packet and moves into the WAIT_BIND_COMMIT state. Once the BIND_COMMIT packet is received from the remote host, the connection is fully initialized and moves into the ACTIVE state.
The case of handling an incoming connection is similar. The connection is first placed into CONNECTING temporarily. A BIND_REQUEST packet is sent to the connecting client, and the local host moves to WAIT_BIND_RESPONSE. Once the BIND_RESPONSE comes back from the other host, the local host sends a BIND_COMMIT and moves into the ACTIVE state.
How many connections to servers in the farm can IMA process/keep at one time?
While there is no finite answer to this, there is a registry setting that limits the Host Resolver to keeping only 512 open connections to hosts. This is very important in large farm design, and it can be manipulated.
The connections to hosts in a zone by a ZDC do not last forever, and can be torn down and re-established. It is important to farm performance that steps are taken in the zone to limit this teardown/setup process from occurring, and bumping up the registry setting alleviates this in zones with more than 512 hosts. The registry setting is:
HKEY_LOCAL_MACHINE\Software\Citrix\IMA\Runtime\MaxHostAddressCacheEntries
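For example, to raise the limit on a data collector in a large zone (the value 1024 is only an illustration, and the value is assumed to be a REG_DWORD like the other Runtime settings discussed below; the IMA service typically needs to be restarted before the change takes effect):

rem Allow the Host Resolver to keep up to 1024 open host connections instead of the default 512
reg add "HKLM\Software\Citrix\IMA\Runtime" /v MaxHostAddressCacheEntries /t REG_DWORD /d 1024 /f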
When Miami Inc designs their global farm, the ZDC setup is of the utmost importance as the number of servers in each zone will grow over time to very high levels. A thorough understanding of this setting and the following information is critical.
Zone Setup and Information
What is the function of a zone?
Zones perform two functions:
    • Collecting data from member servers in the zone
    • Distributing changes in the zone to other servers in the farm
What is a Zone Data Collector (ZDC)?
Each zone in a Presentation Server farm has its own “traffic cop” or ZDC. A ZDC may also at times be referred to as the Zone Manager. The ZDC maintains all load and session information for every server in the zone. ZDCs keep open connections to other farm ZDCs for zone communication needs. Changes to/from member servers of a ZDC’s zone are immediately propagated to the other ZDCs in the farm.
How does the ZDC keep track of all of the hosts in the farm to make sure they are live?
If the ZDC does not receive an update from a member server in its zone within the configured amount of time (default 1 minute), it sends a ping (IMAPing) to the member server in question. This timeframe can be configured in:
HKEY_LOCAL_MACHINE\Software\Citrix\IMA\Runtime\KeepAliveInterval
If the ZDC does not receive an update from a peer ZDC within the configured amount of time, it does not continually ping the “lost” ZDC. It waits a default of 5 minutes, which is configurable in:
HKEY_LOCAL_MACHINE\Software\Citrix\IMA\Runtime\GatewayValidationInterval
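For example, to double both intervals (values are in milliseconds, assuming GatewayValidationInterval uses the same units as KeepAliveInterval; the numbers are illustrations, not recommendations):

rem Wait 2 minutes instead of 1 before pinging a silent member server
reg add "HKLM\Software\Citrix\IMA\Runtime" /v KeepAliveInterval /t REG_DWORD /d 120000 /f
rem Wait 10 minutes instead of 5 before checking a silent peer ZDC
reg add "HKLM\Software\Citrix\IMA\Runtime" /v GatewayValidationInterval /t REG_DWORD /d 600000 /f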
How does the ZDC ensure that the servers it is communicating with are in the farm and authorized to exchange information?
There are several layers of security used in this process, including those that exist in the Transport and Host Resolver functions. One of the most important checks a ZDC performs before allowing a server to communicate within the farm is called a magic number check. Magic numbers are set the first time a server is joined into a farm.
If a server in the farm has a different magic number than the ZDC expects, it can cause the server to believe that it is in its own farm and declare itself a data collector, thus causing two data collectors to exist in a single zone and causing further zone elections.
Is there a setting for when the member servers in a zone update the Data Collector?
All updates a member server has are sent to the ZDC as soon as they are generated. Below is a graphical image of how both inter and intra zone IMA communications occur in an idle farm.

Most IMA traffic is a result of the generation of events. When a client connects, disconnects, logs off, and so on, the member server must update its load, license count, and so on to the data collector in its zone. The data collector in turn must replicate this information to all the other data collectors in the farm.
The client requests the data collector to resolve the published application to the IP address of the least loaded servers in the farm.
The client then connects to the least loaded server returned by the data collector.
The member server then updates its load, licensing, and connected session information to the data collector for its zone.
The data collector then forwards this information to all the other data collectors in the farm.
Important: Notice in the communication diagram there is no communication to the data store. Connections are independent of the data store and can occur when the data store is not available. Connection performance is not affected by a busy data store.
Election Process in Detail
What is meant by a Zone Data Collector election?
Should the ZDC for a zone become unavailable for any reason, another server in the zone can take over this role in its place. The process of taking over this role is known as an election. The setup of how these elections take place is very important in a Presentation Server farm design, especially in large environments like Miami Inc’s. Miami Inc has a globally distributed Citrix environment, where farm communication is heavily reliant on zone setup.
What server is the “boss,” and how is that determined?
Server Administrators must choose the Zone Data Collector strategy carefully during farm design. There are many variables associated with this process that are outside the scope of this document. When an election needs to occur in a zone, the winner of the election is determined using the following criteria:
    • Highest Presentation Server version first (should always be 1)
    • Highest rank (as configured in the Management Console)
    • Highest Host ID number (a Host ID is just a number – every server has a unique ID)
If you want to see the HostID number and its version, you can run the queryhr.exe utility (with no parameters).  You’ll get something that looks like this:
C:\>QueryHR.exe
---- Showing Hosts for "10.8.4.0" ----
Host 1:
-----------------------------
Zone Name: 10.8.4.0
Host Name: FTLDTERRYDU02
Admin Port: 2513
Ima Port: 2512
Host ID: 8022
Master Ranking: 1
Master Version: 1
-----------------------------
--- Show Host Records Completed ---
New Data Collector Election Process
When a communication failure occurs between a member server and the data collector for its zone, or between data collectors, the election process begins in the zone. Here are some examples of how ZDC elections can be triggered, along with a high-level summary of the election process. A detailed description of this process and the associated functions used appears further below in this document.
1. The existing data collector for Zone 1 has an unplanned failure for some reason; for example, a RAID controller fails, causing the server to blue screen. (If the server is shut down gracefully, it triggers the election process before going down.)
2. The servers in the zone recognize the data collector has gone down and start the election process.
3. The member servers in the zone then send all of their information to the new data collector for the zone. This is a function of the number of sessions, disconnected sessions, and applications each server has.
4. In turn, the new data collector replicates this information to all other data collectors in the farm.
Important: The data collector election process is not dependent on the data store.
Note: If the data collector goes down, sessions connected to other servers in the farm are unaffected.
Misconception: “If a data collector goes down, there is a single point of failure.”
Actual: The data collector election process is triggered automatically without administrative intervention. Existing as well as incoming users are not affected by the election process, as a new data collector is elected almost instantaneously. Data collector elections are not dependent on the data store.
Detailed Election Process:
As we know, each server in the zone has a ranking that is assigned to it. This ranking is configurable such that the servers in a zone can be ranked by an administrator in terms of which server is most desired to serve as the zone master. “Ties” between servers with the same administrative ranking are broken by using the HOST IDs assigned to the servers; the higher the host ID, the higher-ranked the host.
The process that occurs when an election situation begins is as follows:
1. When a server comes on-line, or fails to contact the previously-elected zone master, it starts an election by sending an ELECT_MASTER message to each of the hosts in the zone that are ranked higher than it.
2. When a server receives an ELECT_MASTER message, it replies to the sender with an ELECT_MASTER_ACK message. This ACK informs the sender that the receiving host will take over the responsibility of electing a new master. If the receiving host is not already in an election, it will continue the election by sending an ELECT_MASTER message to all of the hosts that are ranked higher than itself.
3. If a server does not receive any ELECT_MASTER_ACK messages from the higher-ranked hosts to which it sent ELECT_MASTER, it will assume that it is the highest ranked host that is alive, and will then send a DECLARE_MASTER message to all other hosts in the zone.
4. When a server that has previously sent an ELECT_MASTER message to the higher-ranked host(s) in the zone receives an ELECT_MASTER_ACK from at least one of those hosts, it enters a wait state, waiting for the receipt of a DECLARE_MASTER from another host. If a configurable timeout expires before this DECLARE_MASTER is received, the host will increase its timeout and begin the election again.
At the conclusion of the election, each host will have received a DECLARE_MASTER message from the new zone master.
Questions
What happens if a server incorrectly believes a new ZDC has won (false winner)?
Once the two ZDCs “fix” themselves through ZDC-to-ZDC communication establishing which is the proper ZDC, a direct communication is sent to the member server(s) notifying them of the correct ZDC for member servers to use.
Supporting data:
    • Any state change on server (logon/logoff, disconnect/reconnect, load change) triggers a dynamic data update.
    • Member server notifies its DC of the change, and in turn….
    • The member server’s DC notifies ALL other DCs of the change.
Communication Events:
    • Member server to zone DC heartbeat check.
    • Key: HKEY_LOCAL_MACHINE\Software\Citrix\IMA\Runtime\KeepAliveInterval
    • Default value: 60000 milliseconds REG_DWORD: 0xEA60
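To confirm what a given server is actually using, a quick check is the following query; if the value is not present, the default of 60000 milliseconds (0xEA60) listed above applies:

rem Show the configured heartbeat interval, if one has been set explicitly
reg query "HKLM\Software\Citrix\IMA\Runtime" /v KeepAliveInterval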
What happens if a server believes it is the new ZDC but the PZDC is still alive and has not resigned?
There are two ZDCs for a finite amount of time; however, our code ensures that the ZDCs communicate with each other and announce the true ZDC to all member servers in the farm once the election process has run its course. Presuming that the original server does not have a lower preference level than the “new” ZDC, it will almost always remain the ZDC and, in turn, broadcast its status to all servers in the farm.