May 28, 2020
Software-Defined Data Center (SDDC) - Expert Insights
Get an expert discussion on the state of Software-Defined Data Center (SDDC).
Chapters
- 00:06 - Meet the experts
- 00:39 - Defining SDDC
- 03:01 - SDDC and Software Abstraction
- 06:14 - SDDC vs Cloud
- 07:57 - SDDC and Vendor Lock-In
- 11:13 - Business Drivers for SDDC
- 16:33 - People, Process, and Tools
- 19:55 - How WWT Can Help
- 23:30 - Most Exciting Things about SDDC in 2020
Speakers
- Chris Weis - Practice Manager
- James Harless - Technical Solutions Architect
- Jeff Mercier - Technical Solutions Architect
- John Tejada - Technical Solutions Architect
- Sean Hicks - Technical Solutions Architect
- Corey Wanless - Technical Solutions Architect
View the full transcript below.
Chris Weis:
My name's Chris Weis. I'm a practice manager for World Wide Technology.
John Tejada:
I am John Tejada. I am on the global engineering team as one of the technical solutions architects.
James Harless:
I'm James Harless. I'm a technical solutions architect on Chris' team, global engineering team, and I cover software-defined data center.
Jeff Mercier:
Jeff Mercier, also a technical solutions architect on the global engineering team on Chris Weis' team.
Sean Hicks:
Sean Hicks. I'm also on the software-defined infrastructure team.
Corey Wanless:
Hey guys, Corey Wanless, principal solutions architect, focused on infrastructure automation.
James Harless:
Software-defined data center is really the abstraction of the services layer from the infrastructure layer in the data center. So we're trying to abstract out the hardware for compute, networking, and storage, and present that up in an abstracted fashion to be consumed by the services above it. And we're trying to accomplish a few different things. We're either moving towards a private cloud model, where we want people to be able to consume those resources on demand; or we're trying to enable next-generation applications, so that people can interact with the infrastructure without being dependent on specific hardware interfaces; or we're just trying to optimize our data center around cost, and get to a model where we can say, "Hey, look, we're just going to buy units of compute, we're going to build them in our data centers, and then we're going to abstract them and present them up as a software layer."
James Harless:
There are really three goals that I think we're trying to accomplish with SDDC.
John Tejada:
I definitely agree with what James said. When I think of software-defined data center, I think of a local, on-prem data center. And the benefit to me of it being software-defined is the ability to automate around that: to have companies' infrastructure teams be more agile with their environments, so that they're able to quickly deliver for their customer base, whether that's the employees of the organization or some type of ISV.
Corey Wanless:
With the infrastructure that we're deploying, how can I consume it from an API perspective? Are all the feature sets that I want to program within that device, or set of devices, actually available from an API? There are still OEM solutions out there today that don't always have full API capability, at least publicly available, which makes it really hard to actually automate the solution. So I would even extend what James said. I agree with what he says, but if we look at it from an infrastructure automation perspective, sometimes I don't really care that everything is software-abstracted. All I care about is: can I actually consume that infrastructure? Can I actually program it remotely?
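The API-driven consumption Corey describes might look something like the following sketch. The endpoint, URL, and field names here are invented for illustration; real OEM APIs (Cisco NX-API, Arista eAPI, and so on) each define their own schemas.

```python
import json

# Hypothetical sketch: configuring a switch port by POSTing JSON to a
# device's REST API instead of typing into a CLI session. The API_BASE
# URL and payload fields below are assumptions, not a real OEM schema.

API_BASE = "https://switch.example.com/api/v1"  # assumed endpoint

def build_port_config(port: str, vlan: int, description: str) -> dict:
    """Return the JSON body an automation tool would POST to the device."""
    return {
        "interface": port,
        "access_vlan": vlan,
        "description": description,
        "admin_state": "up",
    }

payload = build_port_config("Ethernet1/1", 110, "app-tier uplink")
request_line = f"POST {API_BASE}/interfaces"
body = json.dumps(payload)
print(request_line)
print(body)
```

The point is that the whole change is expressed as data a program can generate, diff, and replay, which is exactly what a device without a public API makes impossible.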
James Harless:
I think that is a good, active debate that's going on. Sean and I have had this debate, [inaudible 00:03:09] but it's a good, active debate about: does software-defined mean software-abstracted, or can it be hardware that has an API interface?
So my definition is, it strictly has to be software-abstracted. A piece of hardware with an API is, by definition, a hardware-defined data center. So if your data center is defined by physical pieces of hardware, even if they have APIs that you can interact with, that's an HDDC: a data center that is defined by the hardware that goes inside of it. And even though it may functionally work very similarly to a software-defined data center, and it may solve some of those use cases, it's actually a different model in my opinion. Sean, I know you disagree.
Sean Hicks:
Yeah, this is a point of contention. James and I probably had a spirited two-hour debate one day about this. My premise is basically that "software-defined data center" is not a defined term; it's more marchitecture than anything else. And so when we think about what software-defined means, it essentially means that the object being managed has had its management abstracted away from it, that it is interacted with programmatically through APIs, and that it is managed in a policy-based model rather than by managing individual resources by hand.
Does it matter whether or not it's still hardware? I think when people really think about software-defined data center, they key in on the fact that it starts with the word software. They really want it to be fully software-driven.
James Harless:
So it depends on whether you value what you get when something is software-defined. That abstraction of the hardware underneath is important to me, and it's important to be able to break these into units that I can build up. And if I'm constricted to a particular vendor, or not even a particular vendor but a particular solution set within a vendor, then it's too narrow, and it breaks too many of the software-defined data center constructs that we're trying to accomplish.
James Harless:
If you think of large web-scale providers (Azure, AWS, and so forth), their infrastructure definitely wouldn't work properly if they were wedded to a particular piece of hardware from a particular OEM.
Sean Hicks:
Yeah. I want to believe in a world where all services that are required from IT are delivered as software. But to James's point, I don't think that world truly exists. The people who are going to come closest to it are the hyperscalers. The on-demand nature of their model, plus the speed at which they roll out new services, sort of dictates that everything has to be delivered as a service. But as for the idea that they're not calling to some kind of underlying hardware, we know that's not true.
Again, this goes back to SDDC being a marchitecture term. Is it a data center? Because if it is, and it's delivered by software, then technically it could exist anywhere. A co-location facility could be my data center; a cloud environment could be my data center. I don't even have to operate any owned infrastructure if I don't want to. I don't have to have a location. I don't have to have a privately owned facility. It could all be operating on a cloud provider, and you don't hear people use the words data center to refer to that, but that's essentially what you're doing.
James Harless:
Yeah. And I would argue that part of the reason the public cloud providers are so good at what they do is because, fundamentally, they operate under the tenets of software-defined data center. Again, I think software-defined data center is this microservices layer that serves as the foundation: your infrastructure as a service, your functions as a service, your platform as a service all ride on top of there. And the software-defined data center is really just that layer abstracted from the hardware, and the hyperscalers are a lot better at doing that than the rest of us. If anybody has an SDDC today, it's AWS, Azure, and Google. Those are the companies that are doing it the best. And I think that fundamentally how they function, and why they're so efficient, is because of that layer they have in there, and how they've abstracted their resources from the consumers. I think that's really what a software-defined data center is: consumers on one side and resources on the other.
Corey Wanless:
I think it comes back to: what's the overall goal of the SDDC that you're trying to solve for? Is it abstracting the hardware layer and not having to worry about it anymore? Well, depending on how you go about that software-defined data center model, you might just lock yourself into a specific software model. So being careful about how you go about the software-defined data center is very important, to make sure that you don't solve one problem with another problem. And if we look at how the hyperscalers are doing it, the way you would consume an SDDC, if you will, from an Amazon perspective is by utilizing their APIs. And so if you go directly against those APIs with something like CloudFormation or Azure Resource Manager, you're going to be locked into utilizing that service.
Now, there are tools, solutions, and methods to avoid that problem, but you have to be intentional about how you solve it.
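One intentional pattern for containing that lock-in, sketched below, is to program against a thin internal interface so that any CloudFormation- or ARM-specific calls live in a single adapter. The class and method names here are illustrative, not from any real library.

```python
# Minimal adapter sketch: callers provision "stacks" through an
# internal interface; each provider's native API is confined to one
# class. The deploy bodies are stubs standing in for real SDK calls.

from abc import ABC, abstractmethod

class StackProvider(ABC):
    @abstractmethod
    def deploy(self, name: str, template: dict) -> str: ...

class AwsCloudFormation(StackProvider):
    def deploy(self, name, template):
        # Real code would call the CloudFormation CreateStack API here.
        return f"aws:{name}"

class AzureResourceManager(StackProvider):
    def deploy(self, name, template):
        # Real code would call the ARM deployments API here.
        return f"azure:{name}"

def provision(provider: StackProvider, name: str, template: dict) -> str:
    # Application code never touches a vendor SDK directly.
    return provider.deploy(name, template)

print(provision(AwsCloudFormation(), "web-tier", {}))
print(provision(AzureResourceManager(), "web-tier", {}))
```

Swapping providers then means writing one new adapter, not rewriting every caller; tools like Terraform apply the same idea at a larger scale.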
Sean Hicks:
A lot of the times when people talk about software-defined, they talk about it in terms of avoiding vendor lock-in. But what they're really talking about is avoiding vendor lock-in at a different level, right? Because they may well find themselves with vendor lock-in depending on what their software-defined data center strategy is.
James Harless:
Yeah. I agree. A software-defined data center is not an approach I would take if I'm trying to avoid vendor lock-in. In fact, I would pick a vendor and say, "We're going to partner with vendor XYZ, and that is going to be our software-defined data center story, and everything else is going to circle around that story." Trying to go in and say, "Well, we're trying not to get locked in, so we're going to pick one piece from each vendor": that is a bad, bad approach. That's not going to work out well. And even if you think of AWS and how they've done it: they hired an army of developers. Unless you literally have thousands of developers on staff, ready to roll, and have spent a few years ramping up, you're not going to do that either.
You've got to look at the various approaches available and then kind of pick your poison. If software-defined data center is the right approach for you, it does mean vendor lock-in. You should be picking a partner that you're going to play with.
Sean Hicks:
Because there are a lot of companies out there that have armies of software developers, as James just said, and they think that they can go down this path of building their own AWS, building their own Google Cloud, because they have an army of software developers. And they might be able to. But if you're Netflix, or Twitter, or Instagram, or Vrbo, is that what your software developers are supposed to be doing? Are they supposed to be writing their own network operating system elements? Are they supposed to be writing their own hypervisors? I think the answer is no. And a lot of people would agree with me that, at some level, you just have to buy into commercial off-the-shelf solutions, because your primary business is not offering a public cloud service. So you don't need to go rewrite something that's already out there.
In my opinion (and I think James can probably expound on this better, but I do think he shares some of his views with me), what you're looking for is uniformity. Because at the end of the day, you need to be able to deliver your services faster. Delivering services faster means some level of automation, and automation hates uniqueness. It does not play well when you have a patchwork of solutions that were built based on projects rather than on a strategy. It does not play well when you have configuration creep across those environments.
Automation likes uniformity. And in my opinion, the move towards software-defined data center (and yes, we've already said it's a marchitecture term, [inaudible 00:12:04]) is really about achieving some layer of uniformity across your environment.
James Harless:
I think that's a very big part of it. Probably the biggest is that customers want, as Corey pointed out, those APIs. But if you pick a software-defined layer in the middle there, you have a uniform API across all your data centers and all your environments, assuming that is possible to accomplish. So you pick your partner, and now you have a uniform microservices layer, as it were, to program to.
And I think even at other levels, people are looking at this model because this is how you grow at scale. So if you're a large enterprise, and you're looking at the cost of your data center operations, that thing becomes very difficult as you get beyond a certain scale.
So that becomes a problem as you go forward. Especially if you've put workloads in the public cloud and experienced a lot of agility and speed from that, and then decided to repatriate those workloads due to cost or data governance or some other reason, you're finding that, "Hey, my legacy data center doesn't work anything like my experience in the public cloud. I need something faster and more agile. I don't want as many people installing updates on ESXi hosts; I want to modernize this."
So that's a big part of it as well: looking at that model, how do I get more agile? And there is a long-term cost play to software-defined data center. It's a huge investment upfront; there are a lot of upfront costs. But if you have the right scope and scale, it certainly can save you money in the long run, if you're thinking about how these costs amplify out over time.
John Tejada:
Software-defined data center in itself doesn't make you agile, right? It could even produce more siloed organizations. And so that's where Corey and I, I think, come in: we take advantage of the software-defined layer and make it agile, and give that single pane of glass for our customers to build on top of. It's great to have a software-defined data center, but if you don't have a plan in place to make it agile and to simplify its complexity, it's just going to be more heartache, more technical debt.
And so I think that's where the cloud management platform in itself shines, because it brings all of that together for the infrastructure team, the developers, and the customers. And then behind the cloud management platform there are other integration points, with a lot of technologies that Corey manages, like Ansible and ServiceNow. All of that we need to take into consideration, not just software-defined. Software-defined is just something, like I think James said, that we play on top of.
And so you've also got to look at it from a people-and-process perspective, because now that we're implementing this whole new data center, skill sets need to be identified and people's roles need to change. You'll probably need to bring in some skill sets to help that project along, so that after we're done with an engagement, they're able to successfully continue to move it forward.
Corey Wanless:
I'll piggyback off of that. So I think overall, what the business is demanding is speed, and speed to market, which really then drives methodologies like DevOps, right?
And we look at the three ways: flow, feedback, and experimentation. And overall, if we have a hardware-defined data center, like James was talking about earlier, it's really hard to do that last piece, experimentation. "Hey, what does this look like if I just change this configuration? And how can I do that in a safe environment that I can quickly and reliably promote from a development environment to QA, to staging, to production?" Creating that overall flow, and then creating feedback within that whole setup, is key to streamlining deployments and making speed to market a lot faster for the businesses our customers support.
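The dev-to-production flow Corey describes can be sketched as a tiny promotion loop: a change advances one environment at a time and only after its checks pass. The environment names and the validation rule here are illustrative stand-ins, not a real pipeline.

```python
# Sketch of dev -> QA -> staging -> production promotion: a config
# change is promoted to the next environment only after it validates
# in the current one. The MTU check stands in for real tests
# (lint, integration tests, canary checks).

ENVIRONMENTS = ["dev", "qa", "staging", "production"]

def validate(config: dict, env: str) -> bool:
    # Illustrative check: jumbo-frame configs must keep MTU >= 1500.
    return config.get("mtu", 0) >= 1500

def promote(config: dict) -> list:
    """Return the list of environments the change reached."""
    promoted = []
    for env in ENVIRONMENTS:
        if not validate(config, env):
            break  # stop the flow; fix the change and re-run from dev
        promoted.append(env)
    return promoted

print(promote({"mtu": 9000}))  # reaches all four environments
print(promote({"mtu": 1400}))  # rejected in dev, never promoted
```

A hardware-defined data center makes exactly this hard: without an API-driven, reproducible environment, there is no cheap "dev" copy to experiment in.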
Sean Hicks:
People, process, tools, right? I think we're very quick to sell the tool, right? Our vendors, ourselves. We're very quick to say, "Yeah, you need the [vRealize 00:16:46], or you need Morpheus, and it'll do all these things." But these guys know, John and Corey know, that the people and the process part is super important, right? I think, Corey, you were alluding to this earlier, and John, I think you were as well. Silos. Silos are our problem. And they're also a problem for software-defined data center, by the way. It may be the underlying foundation that allows this great business value that John and Corey can bring to the organization, their speed. But these traditional silos have existed where people have had different disciplines, they don't work well with each other, and their processes have evolved separately from each other. And on top of that, things may actually be getting worse as we extend our environments into multiple clouds, because now what we see is customers building out basically parallel IT. You have an AWS team, you could have an Azure team, you could have a Google Cloud team, you have an on-prem team. At the end of the day, what we have to realize, what we have to convince customers of, and what is really helpful to John and Corey, whether it's cloud management or DevSecOps, is breaking down these silo walls and getting people to work together.
It's helpful at the software-defined data center layer, and it's helpful at the automation layers above it. Without it, neither can be successful.
John Tejada:
Right. And as we talk to our customers, we kind of lay it out, like, "Hey, what are you looking to do with your cloud management platform?" Because a lot of times they're just thinking, "Oh, I just need it to build VMs." And as we start walking them through all of its capabilities, they're like, "Oh, well, we don't need to worry about that, because this other team does the application deployment." And so that opens the door to really start breaking down those silos within the organization and get them communicating with each other. And so I love some of our workshops, where we get these people in the same room to help them understand, "Hey, where are your silos?", so that we can work together and build this architecture for them so that it's all in unison, almost self-driven.
Corey Wanless:
Yeah. To add to that, I just got off a call before moving over to this one, and at the end of the day, the problem the customer was trying to solve was not a technology problem. It was a people-and-process problem.
In their use case, they're trying to upgrade their network infrastructure across their 600-plus sites, and they're spending hours and days just choreographing a single site for an upgrade: who's going to be there for testing, when can they actually do it, who approves it? So we're looking at a people-and-process problem, and fixing that first, before you even dive into the automation problem, which is only going to save 30 minutes of their time.
From an overall process perspective, if the company understands they have a process problem: go on our platform. We have a value stream workshop that we can help out with, anywhere from a half-day to a multi-day workshop, all focused on pulling out what those actual problems are within your process. A lot of times organizations have an idea of what their problem is, but they haven't quantified it yet, and that workshop is all about helping you quantify it, so you can get the justification you need for your given project, whatever it may be. So that's one. And John and I will point to all of our automation labs, whether it's the CMPs of the world, or Ansible, or other infrastructure-as-code technologies: go onto our platform and launch those labs. They're free for you to consume after you register an account.
James Harless:
Jeff, I would say, from a bigger, broader strategy perspective, we have workshops on software-defined data center and on VMware Cloud Foundation, and we're putting one together around Nutanix. So if a customer is looking at evaluating SDDC as a principal approach, we can definitely help them evaluate whether it's right for them.
We don't say, "Do your cost analysis," in there, but if we got a lot of feedback on that, I'm sure we could work on doing some cost justification. But there are a lot of qualifiers for whether a customer is an appropriate candidate for software-defined data center. And I think we've talked a lot about private cloud operations, which is certainly, probably, the premier use case.
John Tejada:
We hear the stories: a customer tried OEM A, they weren't successful with it, so they're shelving it, and now they're trying OEM B, and they just keep going through this process. And so the benefit of going with WWT is that we hear it all. We see it all. And so we've built our implementation methodologies so that customers are successful with whichever product they choose to go with. I think that's a huge benefit that WWT has over any of our OEM partners. Because we see it all. We hear the stories, the good, the bad, and the ugly, and we're able to help navigate our customers through all of that so that they are successful at the end of the day.
James Harless:
I do think it's important for customers to consider, when they're approaching something like software-defined data center (well, maybe not so much software-defined data center, but private cloud), that this is a big, transformative effort. It's capex-intensive. This isn't something I would dip my toe into. If you're going to be buying these big chunks of tools that cost millions of dollars, you should count on lots and lots of transformative efforts inside the organization to get the value out of them. I think John would agree with me: we see a lot of customers buy tools, and then that stuff ends up installed in a corner of the data center and not really used that much. We call it shelfware. And it's really because they didn't invest in the rest of the process. I think the ratio is probably somewhere between five-to-one and ten-to-one. So for every dollar you're spending on software and hardware, you're probably spending $10 on transformative efforts and consultative work.
So your $10 million worth of hardware and software is a hundred-million-dollar project over probably a 10-year span.
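James's ratio works out as simple arithmetic; the sketch below just restates his illustrative figures (a $10M tool spend at the ten-to-one end of his five-to-ten estimate, over ten years), not real project data.

```python
# Back-of-the-envelope version of the 10:1 ratio from the discussion:
# for every dollar of hardware and software, roughly ten dollars of
# transformation and consulting over the life of the program.
# All figures are illustrative.

hw_sw_spend = 10_000_000   # $10M in hardware and software
transform_ratio = 10       # upper end of the 5x-10x estimate
program_years = 10

total_program_cost = hw_sw_spend * transform_ratio
annual_cost = total_program_cost / program_years

print(f"Total program cost: ${total_program_cost:,}")
print(f"Average per year:   ${annual_cost:,.0f}")
```

At the five-to-one end of the estimate, the same $10M spend implies a $50M program instead.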
John Tejada:
So for me, it's the number of cloud management platforms that we're able to showcase to our customers. It's a really exciting time right now. We brought [Dane 00:23:47] on to help with this effort, so we're very serious about it. And we're just excited to be able to help customers with whichever cloud management platform they're looking into.
Chris Weis:
Awesome.
Jeff Mercier:
Yeah, so for me, the thing I'm most excited about is that there seems to be a pretty big trend where you can take your public cloud and bring it on premises, or take your on-premises private cloud and start pushing it up into the public cloud with the same management platform. To have those extend seamlessly is something I'm pretty excited to see play out.
Cory Wanless:
Yeah, for me, from a pure technology perspective, it's Kubernetes and its adoption in a lot of our large enterprise accounts. This is going to be amazing this year. The things you can do with Kubernetes or a container-based platform to help drive value back to the business are huge. I'll just put it that way.
Sean Hicks:
So I'm going to combine the previous two, because Jeff essentially described what I do. I resolve platforms. And then Corey took it the next step, which is the ubiquitous nature of the Docker runtime engine, or Docker-compatible runtime engines, and Kubernetes as a container orchestrator.
What's been missing is not so much the ability to manage all these different clouds from a single place. We don't know if we'll ever get there. What has been missing is the ability to have workload and data portability between all these different environments so that wherever your data center happens to be, if it's a carved out space in AWS, or it's something you actually own, or it's something you're leasing from somebody else, the idea that I can move data and applications seamlessly across any of those environments, regardless of whether or not they look like each other, is interesting.
The approach Jeff spoke of is the hybrid cloud platform approach: forcefully making these environments look like each other so that you can achieve that portability. And what Corey just spoke about was containers. Gosh, containers get me out of bed in the morning, because there's a lot of potential there to realize our dreams without even having to force different environments to look like each other.
James Harless:
Yeah, and I would agree with Sean and Corey. I think for me, it's going to be all about Kubernetes going forward. And I think what Kubernetes is going to do to software-defined data center is really neutralize any differences between our public cloud environments and our private cloud environments.
There's a lot of work being done right now by a couple of major ISVs (Red Hat, VMware, and others) that are working together to develop a marketplace around Kubernetes. When that happens, we're going to see the ability to run Kubernetes services on whatever SDDC you have. So for example, if you're a VMware customer or a Red Hat customer, and you have your platform extended across two or three or more public clouds, as well as your private data centers, the services from those clouds, like RDS in AWS or Azure SQL, are going to be present in all of those locations. So that's going to be a very powerful time, when we see a convergence of services from all these major players because of how Kubernetes unifies those software-defined data centers.