These days, you can't go anywhere without hearing about "AI." It's unavoidable because it's set to reshape our future, and nobody wants to miss out. As a result, nearly every company is advertising how AI enhances their product. But how will AI actually change things? The truth is, nobody's certain. Every organization and individual has their own take, and they don't all agree. I can't predict the future of AI, but by looking at its current state and market trends, we can begin to piece together a picture of what it means in the short term and understand its implications, especially as they pertain to managing physical endpoints and the impact on EUC engineers.

Let's start with the elephant in the room

One of the biggest drawbacks to AI is power. AI tasks are very power hungry. Most companies looking to integrate their own internal AI into their data centers are finding that they are not just upgrading or replacing a few servers; they need to install dedicated racks of new servers, so many that HVAC systems must be upgraded to compensate. The major cloud providers can already see the writing on the wall: imagine if most major enterprises determine there isn't a justifiable ROI to upgrade their own data centers and instead move their AI workloads into the cloud. What is that going to do to the folks running the Azure, Google, and Amazon data centers?

In response, there is an attempt to push the compute pendulum back the other way. Following the path of projects like SETI@home, there is an effort to move this processing to the endpoint in a distributed processing model. OEMs like Apple, Google, and Samsung are all releasing AI-enabled mobile devices. Apple has announced that starting next month, all current Apple devices will be getting Apple Intelligence. And Microsoft has recently started hyping up the "Copilot+" certified PC. Almost all of the newly released Microsoft hardware is designed to the "Copilot+" certification, and quite a few other vendors already have devices available that meet this specification as well (Acer, ASUS, Dell, HP, Lenovo, and Samsung at the time of this article's release).

Who cares about having AI on the endpoints, and why?

There are lots of benefits to AI on end-user devices, some more obvious than others, and nearly every role can find something to gain from local AI.

  • End users will find that having AI-enabled devices will improve their experience. Some examples here include improving video and audio quality during calls, automating repetitive tasks, generating summaries from meetings, drafting email responses, etc.
  • Security experts will appreciate how AI-driven security solutions can have exponentially faster response times when threats are detected (calling this a benefit may be a gray area, given the threat landscape in today's world—I think this basically becomes table stakes for a corporate PC in the near future).
  • EUC admins will find that device lifespans may be extended, as AI-controlled power management on mobile devices can stretch both daily battery runtime and the battery's overall service life.

What makes a computer "AI enabled"?

This is a good question, and the answer varies depending on who you ask (and what they are trying to sell you), but there are a few common themes. Having both a Neural Processing Unit (NPU) and a discrete Graphics Processing Unit (GPU) is generally accepted as a requirement for calling a PC an AI-enabled device. NPUs are optimized for sustained, low-power AI workloads, with the system delegating each task to the NPU, GPU, or CPU depending on the workload being performed. AI-optimized software is also a requirement. However, there are no clear-cut definitions of when a piece of software is considered truly "optimized for AI."
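The delegation idea can be sketched in plain Python: given the compute units a device reports and a rough workload profile, pick the most appropriate one. The workload categories and preference orders below are illustrative assumptions for the sake of the sketch, not any vendor's actual scheduler.

```python
def pick_compute_unit(workload: str, available: set) -> str:
    """Choose a compute unit for an AI task (illustrative heuristic only).

    Sustained, low-power inference (background video effects, live
    transcription) favors the NPU; large burst workloads favor a
    discrete GPU; the CPU is the universal fallback.
    """
    preferences = {
        "sustained_inference": ["npu", "gpu", "cpu"],
        "burst_training": ["gpu", "npu", "cpu"],
        "general": ["cpu"],
    }
    for unit in preferences.get(workload, ["cpu"]):
        if unit in available:
            return unit
    return "cpu"  # every PC still has a CPU


# For example, a Copilot+-class machine reporting all three units would
# route background inference to the NPU, keeping the CPU and GPU free.
```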

How are we going to get there?

For years, we, as EUC professionals, have been in a constant cycle of upgrading our standard end-user device specifications to keep them performing well enough that our users remain productive. The cycle goes something like this: 1) We provide more powerful devices to our users. 2) The software our users have requested now takes advantage of the more powerful devices (think Adobe Creative Suite, AutoCAD, Microsoft Office 365, etc.). 3) Security adds more agents to the devices, and those agents use more of the available resources to mitigate the ever-increasing and diverse threat landscape. 4) The net result is such an impact on the PC's performance that it negatively affects our users' ability to perform their jobs. This finally sends us back to step 1 of the cycle, where the EUC teams have to buy more powerful PCs simply to keep them functional throughout the typical 3–5 year lifespan of a corporate device. This is the situation we have come to expect over the last decade or two, and while it may not be easy, it is predictable.

Now, with the rapid introduction of AI into everything, we are adding a new factor into the equation, and its impact is unknown. Enter the "AI-enabled device." Traditionally, we have simply bought devices with faster processors, more cores, added RAM, etc. This has worked so far, but to take advantage of AI, we need to enable it on the end-user device itself. To accomplish this, we appear to have two paths in front of us.

The first path is to provide yet another bump to the CPU and RAM, add an NPU designed for AI workloads, and offload much of the heavier processing to discrete graphics cards. The challenge with this option is money, and a lot of it, with some of these systems coming in at $20k or more. For reference, most of our enterprise customers find a typical end-user Windows device costs between $900 and $1,700 (Macs are usually between $1,500 and $3,000, but they will not be as drastically affected as Windows PCs, which make up the majority of company-supplied end-user devices). You can be sure that companies don't have this kind of price swing accounted for in their budget for end-user devices, and certainly not for ALL of them.

The second path: switch over to devices that are optimized for AI workloads and intended to be low-power. The main challenge here is the architecture. These low-power devices run on ARM-based processors, not the standard x86-based processors that Windows end-user devices have been running on for over three decades. Microsoft has documented its standard for a "Copilot+ PC"; it (currently) requires Qualcomm Snapdragon ARM-based chips in order to be certified.

How is this going to affect our end users?

This is a major architecture change that brings with it a whole host of software incompatibilities to account for. Some of the apps our users need either will not run on an ARM-based processor or are not yet optimized for it, meaning they will likely run poorly. We need to start the transition as soon as possible, but there will be holdouts. This is going to be very reminiscent of the change many of us went through when Windows Vista was first released and we had to start looking at a 64-bit OS instead of a 32-bit OS, with all of the software incompatibilities that came with it. The question is, how many of your business-critical apps fall into each category?


So, what does this mean for our EUC engineers?

There are a lot of implications to consider here. First of all, no matter how fast AI wants to go, the reality is that a transition like this takes time. There will be a lot of discovery and planning involved, and the implementation will occur over a number of years, not months. We need to make sure we don't start the transition until the underlying requirements are met. A few top-of-mind considerations:

  • Do the business-critical apps function well (or at all) on a device with an ARM-based processor?
  • Do the management tools and security tools required to keep the devices up to date and properly secured work on this new architecture?
  • Do we need additional tools to ensure that these new, lower-powered PCs continue to meet the needs and expectations of our end users over time?
  • What workarounds can we implement during the transition? For instance, we can leverage VDI for unsupported apps or refactor apps to be browser-based rather than Win32 applications.
  • How do we organize the transition so that the users who will get the biggest benefit from the new PCs are the first to receive the new devices?
  • Is the user base prepared for the changes, and more importantly, have they been educated in how to take advantage of the new capabilities?
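One concrete way to start answering the first question above is to inventory your Win32 executables and read the machine-type field in each PE header, which records the architecture a binary was compiled for. The sketch below uses only the Python standard library and the machine-type values from the published PE/COFF format; treat it as a starting point for discovery tooling, not a complete compatibility audit (an x64 binary may still run under emulation on an ARM64 PC, just not natively).

```python
import struct

# COFF machine-type values per the Microsoft PE format specification.
MACHINE_TYPES = {
    0x014C: "x86",
    0x8664: "x64",
    0xAA64: "ARM64",
}


def pe_architecture(data: bytes) -> str:
    """Return the target architecture of a Windows PE image from its raw bytes."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ header)")
    # Offset 0x3C in the DOS header holds the offset of the PE signature.
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("invalid PE signature")
    # The 2-byte machine field immediately follows the 4-byte signature.
    machine = struct.unpack_from("<H", data, pe_offset + 4)[0]
    return MACHINE_TYPES.get(machine, f"unknown (0x{machine:04X})")
```

In practice you would walk Program Files with `pathlib`, call this on each `.exe`, and tally how many business-critical apps are already ARM64-native versus x86/x64-only.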

In short, our EUC engineers are not going to be displaced by AI any time soon; rather, the work required to implement AI on enterprise endpoints shows they are needed now more than ever.


More resources on the Impact of AI on EUC

If you would like to learn more about the impact that AI is having in end-user computing (EUC), check out these other articles in this series:
