The Impact of AI on EUC: VDI
As AI workloads migrate to the edge, there is an opportunity to leverage virtual desktops (vDesktops) delivered by virtual desktop infrastructure (VDI) and/or remote desktop services (RDS) to scale out that processing in much the same way as physical endpoints.
In many ways, VDI and RDS represent a more dynamic and scalable approach as the ability to provision and deprovision workloads on demand in the data center or cloud is far more agile and flexible than procuring and deploying laptops. This dynamic bursting capability has been a differentiator for VDI/RDS technologies since their inception.
Hardware and hosting
The addition of AI workloads to VDI/RDS environments raises a number of considerations. At the most basic level, any change to the application or service burden executing within a vDesktop will affect the overall resource requirement. If the AI workload increases CPU, RAM, storage, GPU and networking demand, those increases will aggregate into a larger resource load on the hypervisor environment. Depending on the hosting platform, these increased workload demands will be expressed and remediated in different ways:
- For on-prem deployments, the increased demand may necessitate scaling up resources within individual VMs and hosts (e.g., adding more vCPU, RAM, vGPU, or storage capacity), adding hosts to established clusters, and/or adding new clusters to the overall environment. These changes may also be amplified by Layer 8 business decisions such as acceptable per-host VM ratios and per-cluster sizing and redundancy (a rough sizing sketch follows this list).
- For cloud-hosted deployments, the increased workload will drive up compute, storage and network consumption, which may increase the per-VM cost. It could also necessitate changing instance types and/or sizes to maintain an acceptable user experience, which can have a significant impact on your cloud economics. Additionally, the performance requirements may call for a GPU-enabled instance type, whose availability varies between cloud providers and even between regions within a given provider. Some hosting models allow for granular changes to instance type and sizing, while others require different licenses to provide different VM capabilities.
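To make the on-prem aggregation concrete, here is a minimal Python sketch of the sizing math. All of the numbers (host capacity, CPU overcommit ratio, per-host VM policy, pool size) are hypothetical and exist only to illustrate how a per-vDesktop increase compounds at the host and cluster level.

```python
# Minimal sketch (hypothetical values): estimating how a per-vDesktop resource
# increase aggregates into host-level demand for an on-prem VDI environment.

from dataclasses import dataclass
from math import ceil


@dataclass
class DesktopSpec:
    vcpu: int
    ram_gb: int


@dataclass
class HostCapacity:
    cores: int
    ram_gb: int
    max_vms: int  # "Layer 8" policy: acceptable per-host VM ratio


def hosts_required(desktops: int, spec: DesktopSpec, host: HostCapacity,
                   cpu_overcommit: float = 4.0) -> int:
    """Return the number of hosts a pool needs under the given sizing policy."""
    by_cpu = ceil(desktops * spec.vcpu / (host.cores * cpu_overcommit))
    by_ram = ceil(desktops * spec.ram_gb / host.ram_gb)   # no RAM overcommit
    by_policy = ceil(desktops / host.max_vms)
    return max(by_cpu, by_ram, by_policy)


if __name__ == "__main__":
    host = HostCapacity(cores=64, ram_gb=1024, max_vms=120)
    before = DesktopSpec(vcpu=2, ram_gb=8)
    after = DesktopSpec(vcpu=4, ram_gb=24)   # AI-augmented workload profile
    for label, spec in (("before", before), ("after", after)):
        print(label, hosts_required(1000, spec, host), "hosts for 1,000 vDesktops")
```

With these illustrative numbers, moving a 1,000-seat pool from 2 vCPU/8 GB to 4 vCPU/24 GB RAM grows the host requirement from 9 to 24, which is exactly the kind of aggregation effect that drives cluster sizing and redundancy decisions.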
As with all things VDI/RDS, establishing proper candidacy will be critical. Based on resource requirements and the associated economics, it may be best to run some AI workloads on-premises; for other workloads, the cloud may be more appropriate.
Perhaps more importantly, it may be that the AI workloads should not run on a virtual desktop at all, whether due to architecture issues (e.g., apps requiring ARM vs. x86 processors, NPU hardware requirements, etc.) or cost considerations. Conversely, VDI/RDS may be required to run applications that are not compatible with bespoke AI-enabled endpoints for similar architectural reasons.
Management tooling
AI's greatest impact may be on the management tooling used by the various brokering and provisioning solutions. The possibilities are (nearly) endless, though all of them depend on the depth of integration across the stack. Here are some "low-hanging fruit" examples:
- Automated resizing of vDesktop workloads in response to trends in user activity. For example, changing a pool of 2 vCPU/8 GB RAM vDesktops to 4 vCPU/24 GB RAM while factoring in desired guest-to-host ratios and overall host and cluster capacity, without requiring manual IT admin intervention (see the automation sketch after this list).
- Analyzing user application usage and experience metrics to recommend moving users from vDesktops to RDS-published apps and/or desktops, or vice versa; then, once approved by IT, managing all the changes in resource entitlements and creating any additional capacity required to support them (new vDesktops in existing pools, new pools, new RDS hosts, etc.).
- Analyzing the running cost of the workloads in the environment and recommending, or automatically performing, moves from on-prem to the cloud and vice versa based on specified criteria, to gain the greatest efficiency and utilization of all the resources in a hybrid environment.
- Leveraging intelligent metrics to assess user proximity to their commonly consumed resources (e.g., apps, data, shares, etc.) and dynamically moving users to the best location and platform to ensure an optimal experience. For example, automatically moving a user from a cloud vDesktop to an on-prem one, or to a vDesktop hosted in a different region.
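As a rough illustration of the first two examples above, the following Python sketch shows what such a recommendation loop might look like. Every metric name, threshold, pool name and the `auto_remediate` switch is hypothetical, and the actual remediation call is left as a comment because it depends entirely on the brokering and provisioning solution in use.

```python
# Minimal sketch (all thresholds, metric names and pool names are hypothetical):
# a loop that inspects pool-level telemetry and either records a recommendation
# for IT review or, when auto-remediation is enabled, applies it.

from dataclasses import dataclass


@dataclass
class PoolMetrics:
    name: str
    avg_cpu_pct: float      # trailing average guest CPU utilization
    avg_ram_pct: float      # trailing average guest RAM utilization
    avg_latency_ms: float   # user-to-resource round-trip latency


def recommend(pool: PoolMetrics) -> str | None:
    """Map telemetry trends onto a sizing or placement recommendation."""
    if pool.avg_cpu_pct > 85 or pool.avg_ram_pct > 85:
        return f"resize pool '{pool.name}' to the next larger vDesktop profile"
    if pool.avg_cpu_pct < 20 and pool.avg_ram_pct < 30:
        return f"consider moving pool '{pool.name}' users to RDS-published apps"
    if pool.avg_latency_ms > 80:
        return f"relocate pool '{pool.name}' closer to its apps and data"
    return None


def run(pools: list[PoolMetrics], auto_remediate: bool = False) -> None:
    for pool in pools:
        action = recommend(pool)
        if action is None:
            continue
        if auto_remediate:
            print(f"[auto] applying: {action}")   # broker/provisioning API call would go here
        else:
            print(f"[review] recommendation: {action}")


if __name__ == "__main__":
    run([PoolMetrics("finance-ai", 92.0, 88.0, 35.0),
         PoolMetrics("kiosk", 12.0, 25.0, 20.0)])
```

In recommendation-only mode the loop simply surfaces suggestions for IT review; enabling auto-remediation is what moves an environment toward the autonomous model discussed in the next section.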
Autonomous user experience
A common thread through all these examples is the use of AI to facilitate a more intelligent and autonomously maintained user experience. Some institutions may prefer to use AI only as a recommendation engine, leaving the implementation of any changes to the VDI admins. Others may want to perform their own analysis and use automation to do the work. The real potential and power, however, lies in the extent to which the environment can be autonomously monitored and maintained, making dynamic changes to curate the best user experience possible.
More resources on the impact of AI on EUC
To learn more about the impact AI is having on end-user computing (EUC), check out the other articles in this series: