Gemini Mundi: Creating Digital Twins at City Scale
Artificial intelligence (AI), quantum computing and the Metaverse are now part of the human lexicon, leading some scientists and philosophers to ponder whether simulation theory has become a more viable hypothesis to explain reality.
"Simulation theory" is the idea that everything around us is a simulation generated by some higher power or intelligent operator — a modern take on Plato's famous Allegory of the Cave dialogue from "The Republic," published in 375 BC.
Setting aside its theoretical value, simulation theory can help technologists consider the practical implications of blending multiple simulations. Specifically, it gives us a framework for exploring what it means to clone physical objects into digital ones.
In fact, our ability to construct these digital worlds has come to fruition and is maturing rapidly. This article explores the growing practice of building these twin worlds (aka Gemini mundi) at scale, forging new bonds between them and examining their unforeseen interactions.
The dawn of digital twins
The concept of a digital twin as a "living model" was conceived at NASA in the 1960s for the Apollo program. The explosion of Apollo 13's oxygen tank in 1970 led NASA to use multiple high-fidelity ground simulators, mirrored counterparts of the spacecraft, to evaluate the failure and rehearse recovery procedures. (The term "digital twin" likely wasn't coined until the early 2000s.)
Technology advancements in the following years — including improvements in GPUs, scene graphs, ray tracing, edge computing, 5G, quantum computing, AI/ML, autonomous systems, 3D modeling software and IoT — are now being integrated to help the digital realm realistically mimic the physical. Today, digital twins are increasingly common across industries, including manufacturing, engineering, healthcare, transportation, energy and smart cities, as they've been found helpful in driving efficiency, quality, innovation and sustainability gains.
Definitions to know:
- Digital twin: A virtual model or representation that serves as the real-time digital counterpart of a physical object, process or system, and that can simulate, monitor and optimize that entity's behavior and performance. Digital twins are connected to real-time data sources from their physical counterparts; they can run various simulations and analyses to generate valuable insights. They can also exist before, during and even after the lifecycle of the physical entity they represent. The digital twin paradigm spans two ideal states: an individual digital twin that consumes and creates data bidirectionally while preserving its functional condition, and that same twin interacting as part of a larger, ubiquitous digital twin.
- Digital metamorphosis: Refers to transforming a physical entity into a digital twin — or vice versa, as the resulting versions are considered interchangeable and complementary rather than distinct.
- Digital verse: Refers to the utopian concept of an immersive and interactive digital environment where multiple digital twins are interconnected within a single system to form a comprehensive and dynamic digital model through the virtual representation of physical entities. This is the idea of the Metaverse.
Through our applied AI/ML research and development, WWT aims to expand on the current definition and understanding of digital twins. As the fidelity of digital twins improves, new linkages are added and lifelike avatars are generated for the people within them, we believe digital twins will eventually transform into a true Metaverse: the navigable convergence of virtual and real worlds.
Building digital twins at city-scale
Constructing digital twins at scale has recently become a more cost-effective, data-realistic and computationally feasible possibility for Industry 4.0 enterprises, government agencies and other entities that operate at massive scale.
For example, high-fidelity digital twins are now being developed to support a range of valuable smart city use cases, including:
- Intelligent disaster preparedness and response
- Traffic planning and modeling
- Road construction and traffic mitigation
- City design, cellular placement, lighting, smart cameras, etc.
- New facility construction, including the integration of Autodesk Revit models (time-lapse)
Due to similar scale and complexity, large enterprises, higher-education institutions and federal campuses are likewise starting to benefit from digital twins. Examples include:
- Using twins to accelerate education initiatives and R&D work
- Using twins to meet Industry 4.0-specific requirements
- Turning a campus into a virtual training range that supports:
  - Complex multi-team and concurrent training exercises
  - Large-scale and single-agent training exercises
  - Multiple asset configurations that leverage buildings, IoT, sensors, vehicles, autonomous systems and more (e.g., large-scale vehicles, weapons, C2 systems, etc., for military-adjacent use cases)
Building a digital twin at scale follows four general phases: Model design, layer design, living model creation, and final integration and automation.
Phase 1: Model design
Building digital twins at scale requires a strategic planning phase that answers some key questions about model design. Before you start building, you should agree on:
- Model size: The first step requires dialing in the scope and scale of your digital twin. You will want to define how big (e.g., the entire universe) and how small (e.g., a subatomic quark) your digital twin needs to be to achieve your goals, both now and in the future.
- Data availability (and acceptable uncertainty): The data that feeds your digital twins is the lifeblood of its modeling and simulation capabilities. If a dataset is not readily available, it must be created synthetically. You should ask the following questions:
  - What proportion of real vs. synthetic data is available to build my digital twin? How much of the missing real data will need to be generated synthetically?
  - Can the physical entity be cloned to leverage custom synthetic data generation, for example with variational autoencoders (VAEs) or generative adversarial networks (GANs)? (A minimal sketch follows this list.)
Your team should also agree on how closely your simulation platform must adhere to real-world physics, and on the point at which your digital twin becomes useless because it no longer behaves like its physical counterpart.
- Content pipelines: These enable collaboration and data aggregation between digital twins and Metaverses. Tools to be familiar with include Autodesk, Dassault Systèmes, NVIDIA SDKs, Blender, Maya, etc.
- Model fidelity: Each layer of a digital twin contains hundreds to billions of data points, images, variables, physics and mathematical equations. The fidelity of each layer should be tuned to a specific set of use cases, with the understanding that increased fidelity between layers means increased model accuracy (and a reduction in documented uncertainty). The tradeoff is that as fidelity increases, so do the input data pipelines and computational complexity, driving up the cost of building your digital twin. Each layer's fidelity will eventually reach a maximum cost/benefit point, beyond which any additional accuracy or uncertainty gains will not be worth the marginal increases in complexity and cost.
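To make the synthetic data question concrete, here is a minimal sketch of generating synthetic sensor readings with a small variational autoencoder in PyTorch. The SensorVAE class, the eight-feature input and the random stand-in for real readings are illustrative assumptions rather than part of any specific digital twin platform; a GAN could be swapped in using the same pattern.

```python
# Minimal sketch: synthetic sensor readings from a small VAE (PyTorch).
# SensorVAE, the 8-feature input and the stand-in training data are illustrative
# assumptions, not part of any specific digital twin toolchain.
import torch
import torch.nn as nn

class SensorVAE(nn.Module):
    def __init__(self, n_features: int = 8, latent_dim: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = nn.functional.mse_loss(x_hat, x)                        # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL divergence term
    return recon + kld

# Train on whatever real (normalized) readings exist, then sample the latent
# space to fill data gaps with synthetic readings.
model = SensorVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
real_readings = torch.randn(512, 8)          # stand-in for real, normalized sensor data
for _ in range(200):
    x_hat, mu, logvar = model(real_readings)
    loss = vae_loss(real_readings, x_hat, mu, logvar)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

with torch.no_grad():
    synthetic = model.decoder(torch.randn(1000, 4))   # 1,000 synthetic readings
```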
Phase 2: Layer design and adhesion
Once your design for a city-sized digital twin is mapped out — its size and initial fidelity agreed upon by relevant stakeholders, the acceptable levels of data availability well documented, and content pipelines flowing with labeled and cleansed data — your next focus should be on establishing the relationships between each layer of your digital twin model.
For a smart city, the complex linked chain of individual digital twins might comprise four layers:
- Layer 1: Airspace
- Layer 2: Surface
- Layer 3: Terrain
- Layer 4: Subsurface
The layers of your digital twin must be aligned and synchronized with one another and with their data sources to ensure your model accurately reflects the physical system it clones.
We recommend thinking of the relationship between layers in terms of adhesion, a term that refers to the degree of compatibility and integration between the different individual layers of a digital twin. This effort adds real value to the model and influences how it is used.
For example, the image below shows adhesion efforts during the build of a campus-scale digital twin for a historically Black college or university (HBCU) in Virginia. Using advances in drone technology, photogrammetry and LIDAR, the layer design phase required two days' worth of FAA-approved flights, capturing billions of point-cloud data points plus thousands of high-definition images to construct each layer.
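As a thought exercise, the sketch below shows one way the layers and their adhesion links might be represented as data. The layer names mirror the list above; the class names, fields and sync interval are hypothetical, not a prescribed schema.

```python
# Illustrative sketch: representing digital twin layers and their "adhesion"
# (degree of compatibility and integration) links. Class names, fields and the
# sync interval are hypothetical assumptions.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    data_sources: list[str] = field(default_factory=list)
    fidelity: float = 0.5        # 0.0 = coarse, 1.0 = maximum affordable detail

@dataclass
class Adhesion:
    upper: Layer
    lower: Layer
    shared_entities: list[str] = field(default_factory=list)  # state both layers must agree on
    sync_interval_s: int = 60    # how often the two layers reconcile shared state

airspace   = Layer("airspace",   ["radar", "low-level weather sensors"])
surface    = Layer("surface",    ["photogrammetry", "LIDAR", "IoT meters"], fidelity=0.8)
terrain    = Layer("terrain",    ["LIDAR point clouds"])
subsurface = Layer("subsurface", ["utility GIS records"])

city_twin = [
    Adhesion(airspace, surface,    shared_entities=["rooftops", "drone corridors"]),
    Adhesion(surface,  terrain,    shared_entities=["building footprints", "roads"]),
    Adhesion(terrain,  subsurface, shared_entities=["water and sewer lines"]),
]
```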
Phase 3: Building a living model
The next phase requires turning the layers and their adhesions into an operational, continuously updating, living digital twin.
For a digital city, here are two examples of how you might approach the surface and airspace layers of your living model.
Surface layer: Building out the surface layer might include developing water, fiber, telecom and sewer lines; modeling connections between buildings, including pipe diameters, flow direction and any metered water data used for fault detection and diagnostics (FDD); and adding features such as telecom taps, electrical meters and other IoT sensors to monitor and manage.
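To illustrate the FDD piece, here is a minimal sketch that flags anomalous metered water flow with a simple z-score rule. The threshold, units and example readings are assumptions for demonstration; a production twin would likely use more robust statistical or ML-based detectors.

```python
# Minimal fault detection and diagnostics (FDD) sketch for metered water flow in
# the surface layer. The threshold, units and sample readings are illustrative.
from statistics import mean, stdev

def flag_flow_anomalies(readings: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of readings that deviate strongly from the batch's behavior."""
    if len(readings) < 10:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - mu) / sigma > z_threshold]

# Example: a burst pipe appears as a sudden spike in flow (liters per minute).
hourly_flow = [42.0, 41.5, 43.2, 42.8, 40.9, 41.7, 42.3, 43.0, 41.1, 120.4]
print(flag_flow_anomalies(hourly_flow))   # -> [9]
```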
Airspace layer: The next layer of your living model to tackle is the airspace. Before proceeding much further, your digital twin stakeholders will want to ask:
- Does our digital twin require an airspace layer?
- At what altitudes does the airspace layer need to exist (e.g., from ground level to the troposphere, stratosphere, mesosphere, thermosphere, exosphere, the edge of outer space, LEO, MEO, GEO, deep space, etc.)? You'll want to set some parameters.
- How will data be collected on the airspace layer to maintain the value of the digital twin (in relation to the physical airspace layer above your city)?
For instance, our HBCU client used its airspace layer to research and develop corridors for radar detection for traffic flow management, accident mitigation and low-level weather modeling.
The airspace layer can be incredibly complicated and may require new IoT sensors for low-level weather modeling. This can include wind and wind vorticity modeling, temperature and humidity modeling, etc.
The image to the right shows a standard set of wind models around buildings. This type of modeling in a city-scaled twin allows for the simulation of drones acting and reacting to this weather 100 feet off the ground, which could create safety and security concerns for the public below as well as event operators.
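As a rough illustration of that kind of low-level wind modeling, the sketch below computes vertical vorticity over a small 2D wind field with NumPy. The grid spacing and the analytic wind field are stand-in assumptions, not measured data from any city.

```python
# Illustrative sketch: vertical vorticity (dv/dx - du/dy) of a 2D wind field
# sampled on a uniform grid roughly 100 feet above ground. The grid spacing and
# toy wind field below are assumptions for demonstration only.
import numpy as np

dx = dy = 10.0                                    # grid spacing in meters
y, x = np.meshgrid(np.arange(0, 500, dy), np.arange(0, 500, dx), indexing="ij")

# Toy wind field: a gentle rotational gust on top of a steady westerly flow.
u = 5.0 - 0.002 * (y - 250.0)                     # east-west component (m/s)
v = 0.002 * (x - 250.0)                           # north-south component (m/s)

zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)  # vertical vorticity (1/s)
print(f"max |vorticity| = {np.abs(zeta).max():.4f} 1/s")        # ~0.0040 for this toy field
```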
Subsurface and terrain layers: The subsurface and terrain layers can be equally complex. Make sure you consider the adhesion and relationships between all applicable layers in your design.
Phase 4: Integration and automation
Finally, the integration phase of building your city-scale digital twin takes place in both the physical and digital worlds. Once the digital twin is fully in production, the last step is to automate the digital world's ability to monitor the physical world and apply the changes it detects in near-real time.
To help visualize this phase, let's consider our smart city. Before introducing a digital twin, whenever the city determined a set of operations needed to be configured, modified or activated/deactivated, a designated individual would undertake that manual task in the real world.
Now that the city has a fully functioning digital twin, it can simply add a new automation layer between the digital twin and the physical city. When the city's digital twin operator makes those changes within the virtual model, those same changes take effect in the physical city environment. Such automation can help reduce update times, increase productivity and establish better governance standards between the digital twin and the physical environment.
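Conceptually, that automation layer can be as simple as a queue of twin-side changes drained toward the physical assets' command interfaces. The sketch below is a minimal illustration; the asset IDs, property names and the publish_command stand-in are assumptions, and a real deployment would use MQTT, REST or a SCADA gateway with proper authorization and auditing.

```python
# Hedged sketch of the automation layer between the digital twin and the physical
# city: operator changes made in the virtual model are queued and pushed to the
# physical assets. Asset IDs, property names and publish_command are illustrative.
import json
import queue
import time
from dataclasses import dataclass, asdict

@dataclass
class TwinChange:
    asset_id: str      # e.g., "streetlight-4721"
    prop: str          # e.g., "state"
    value: str         # e.g., "off"
    timestamp: float

change_queue: queue.Queue = queue.Queue()

def operator_update(asset_id: str, prop: str, value: str) -> None:
    """Record an edit made by an operator inside the digital twin."""
    change_queue.put(TwinChange(asset_id, prop, value, time.time()))

def publish_command(payload: str) -> None:
    """Stand-in for the real transport (MQTT, REST, SCADA gateway, etc.)."""
    print("-> physical world:", payload)

def sync_once() -> None:
    """Drain pending twin changes and push each to its physical counterpart."""
    while not change_queue.empty():
        publish_command(json.dumps(asdict(change_queue.get())))

operator_update("streetlight-4721", "state", "off")   # change made in the twin
sync_once()                                            # propagated to the city
```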
Digital twin and Metaverse infrastructure
Dell PowerEdge Servers for accelerated workloads
Designing an infrastructure that can deliver success with digital twins and other demanding workloads requires a modern approach to architecture, one that leverages dense acceleration at scale for improved performance. Improved performance isn't only about implementing a comprehensive solution and infrastructure strategy; it also requires innovation in the building blocks of your infrastructure in ways that unlock other benefits, such as better costs, security and thermal/power design.
A number of innovations within the Dell PowerEdge Server family enable drastic performance improvements. From architectures specifically designed to support acceleration at scale to thermally optimized designs, today's workloads demand higher-quality components and subsystems (i.e., high-performance architecture) to drive workload operations flawlessly. Dell PowerEdge Servers can provide:
- Accelerated insights: Innovative compute drives performance across your AI and data delivery lifecycle for AI/ML, HPC, modeling and simulation operations, all at the speed of your business.
- Trusted, secure AI: Dell PowerEdge Servers are designed with a security-first approach, for secure interactions and the ability to anticipate potential threats, reduce risks and save time.
- Simplified operations: Intelligent systems that work both together and independently, enabling rapid digital transformation and productivity, and helping organizations progress toward full and accelerated infrastructure automation for simplified AI operations.
Dell PowerEdge Server highlights for accelerated workloads include:
- XE9680: No-compromise accelerated AI
- XE9640: Dense acceleration
- XE8640: Purpose-built performance
- R760xa: Purpose-built scale-up server for GPU applications
NVIDIA-Certified Dell Systems
NVIDIA-Certified Dell Systems bring together optimized configurations of NVIDIA GPUs and NVIDIA networking in servers and hyper-converged infrastructure from Dell Technologies. These systems are validated for performance, manageability, security and scalability, and are backed by enterprise-grade support from NVIDIA and Dell Technologies.
- Infrastructure that drives a diverse range of accelerated workloads for the enterprise
- Excellent performance
- Reduced time to deployment
- Secure, no-compromise operations and workflows
- Designed for single- to multi-node configurations, optimal scale-out and clusters
NVIDIA Omniverse
NVIDIA Omniverse is an easily extensible open platform built for virtual collaboration and real-time, physically accurate simulation. Creators, designers, researchers and engineers can connect major design tools, assets and projects to collaborate and iterate in a shared virtual space. Developers and software providers can also easily build and sell extensions, apps, connectors and microservices on Omniverse's modular platform to expand its functionality.
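Because Omniverse is built on OpenUSD, the layered city model described earlier maps naturally onto USD composition. The sketch below uses the open-source pxr Python bindings (available, for example, via the usd-core package) to author one USD file per city layer and compose them as sublayers of a single stage; the file names and prim paths are hypothetical.

```python
# Minimal sketch: composing city-twin layers with OpenUSD, the scene description
# format Omniverse is built on. Requires the pxr bindings (e.g., pip install usd-core).
# File names and prim paths are hypothetical.
from pxr import Usd, UsdGeom

layer_names = ["airspace", "surface", "terrain", "subsurface"]

# Author one USD layer file per city layer.
for name in layer_names:
    stage = Usd.Stage.CreateNew(f"{name}.usda")
    UsdGeom.Xform.Define(stage, f"/City/{name.capitalize()}")
    stage.GetRootLayer().Save()

# Compose them into a single city stage via sublayers, so any USD-aware tool
# connected through Omniverse sees one unified scene.
city = Usd.Stage.CreateNew("city.usda")
root = city.GetRootLayer()
root.subLayerPaths = [f"{name}.usda" for name in layer_names]
root.Save()
```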
How WWT can help
Creating digital twins at scale is a complex and multifaceted endeavor that requires many tools and technologies to bring to fruition. From NVIDIA's Omniverse to the latest VAEs and GANs, each infrastructure component plays a vital role in developing effective digital twins for use in the public sector.
That's where a trusted partner like WWT comes in. Not only do we possess the expertise to educate our clients and partners about digital twins and other advanced AI/ML-powered technology, but we are fully equipped to help our clients build, test and implement digital twins in our new composable AI lab environment.