
Why Digital Twin?

May 14, 2020 (updated July 12, 2023)

What does it take to make better decisions? Some would answer, “Experience.” That is certainly true in some cases, but for deterministic problems, many people would say it takes more information, or better information. What we really need is the ability to predict future outcomes, but that’s not possible. Or is it?

Simulation is a scientific method for predicting behavior and outcomes. Engineers have been using simulation tools for years to anticipate how products will behave. Through design, they capture or envision the characteristics and actions of a product and translate them into data and logic so that the product can live in digital space. By feeding this modeled version of the product into simulations, behavioral predictions become possible in a virtual sandbox.

Building Higher Fidelity Models

The effectiveness of this digital incarnation at making predictions is determined by the detail and accuracy of the model and by the quality of the simulated environment. Higher-fidelity simulations depend on more accurate descriptions of a scenario, yet the real world often differs from simulated conditions. Simulations are also usually performed on the idealized product, not the one that has been damaged, repaired, or otherwise well-used. Computational simulation at this level of detail can be expensive, but affordable, realistic simulation of real-world products is on the horizon. We cannot perform realistic simulation without a high-fidelity model of the actual product, though. This is why digital twins are important: they are high-fidelity models of actual physical products, not of the idealized virtual product.

[Image: Digital Twin – Simulation of Car Manufacturing]

Let’s talk about what makes a high-fidelity model of an actual physical product. First, every aspect of the virtual product must be modeled: mechanical, electrical, and software at a minimum. When the product’s software runs, will the device respond as expected? If we cannot predict the performance of the idealized product (the virtual prototype), how are we going to predict the behavior of the actual product? Multi-domain models like this (those incorporating multiple disciplines such as mechanical and electrical) are critical because not only must the virtual product model be complete, its idealized simulation must also be accurate. Recognize that multi-discipline simulation may require more specialized knowledge and calculations in domains such as thermal, magnetic, radiation, specialized materials, fluids, weather, and more.
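As a toy illustration of coupling domains, here is a minimal Python co-simulation sketch: a bang-bang "firmware" controller (the software domain) driving a first-order thermal model (the physical domain). The constants, function names, and model are invented for illustration, not taken from any real product or simulation tool.

```python
def thermal_step(temp_c, heater_on, dt_s=1.0, ambient_c=20.0):
    """Toy first-order thermal model (the physical domain)."""
    heat_in = 5.0 if heater_on else 0.0   # heater contribution, degC/s
    loss = 0.1 * (temp_c - ambient_c)     # Newtonian cooling, degC/s
    return temp_c + (heat_in - loss) * dt_s

def firmware(temp_c, heater_on, setpoint_c=60.0, band_c=2.0):
    """Toy bang-bang controller (the software domain)."""
    if temp_c < setpoint_c - band_c:
        return True                       # too cold: heater on
    if temp_c > setpoint_c + band_c:
        return False                      # too hot: heater off
    return heater_on                      # inside band: hold state

# Co-simulation loop: software and physics advance together each step.
temp_c, heater_on = 20.0, False
for _ in range(300):
    heater_on = firmware(temp_c, heater_on)
    temp_c = thermal_step(temp_c, heater_on)
# temp_c now oscillates near the 60 degC setpoint
```

Even in this toy, the interesting behavior (overshoot and oscillation around the setpoint) only appears when software and physics are simulated together, which is the point of multi-domain modeling.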

Tracking of the Physical Product

We still don’t have the kind of crystal ball that we want though. To truly have a high-fidelity model, we need to know the details of each specific physical product. Perhaps we had a shortage of parts at one point and our product was made with an acceptable substitute. But that substitute part might have an impact on how our product behaves, so we need to know which physical products contain the substitute part. Or maybe that wasn’t what made this physical instance different; perhaps the product is made in a different factory in a different part of the globe and thus a different MBOM (Manufacturing BOM) results in the (theoretically) same product. Now we can see why instance-specific configuration management (carefully tracking as-built and as-maintained records) is important. It lets us know the exact makeup of any particular physical product, helping us track differences which are imperceptible to our customers, but which may be impactful to our product performance predictions.
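As a sketch of what instance-specific records might look like, here is a minimal Python illustration of comparing an as-built record against the as-designed BOM to surface substitutions. The part numbers, positions, and class names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    part_number: str
    revision: str

# As-designed BOM: assembly position -> intended part (hypothetical data)
DESIGN_BOM = {
    "PSU-1": Part("CAP-100", "B"),
    "CTRL-1": Part("MCU-200", "A"),
}

@dataclass
class AsBuiltRecord:
    serial_number: str
    factory: str
    installed: dict  # position -> Part actually installed in this unit

    def substitutions(self):
        """Positions where this unit deviates from the design BOM."""
        return {
            pos: (DESIGN_BOM.get(pos), part)
            for pos, part in self.installed.items()
            if part != DESIGN_BOM.get(pos)
        }

unit = AsBuiltRecord(
    serial_number="SN-0042",
    factory="Plant-B",
    installed={
        "PSU-1": Part("CAP-101", "A"),   # approved substitute capacitor
        "CTRL-1": Part("MCU-200", "A"),  # matches the design BOM
    },
)
deviations = unit.substitutions()  # only the PSU-1 position deviates
```

A real configuration-management system tracks far more (effectivity dates, change orders, as-maintained history), but the core idea is the same: each serial number carries its own record, queryable against the design.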

It might also be important to know that some of our substitute parts were made from a different material or received a different coating. Seemingly small details like the length of time it took for a part to dry, or the temperature at which something was baked, might be significant to how our product behaves. Sensors on the shop floor can help track all of this. This is why the Industrial Internet-of-Things (IIoT) is so important to digital twins.
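A minimal sketch of capturing such shop-floor measurements against a specific serial number follows; the field names (`oven_temp_c`, `cure_minutes`, `coating_thickness_um`) are illustrative, and a real manufacturing-execution system would be far richer.

```python
# Hypothetical process log keyed by serial number; field names are
# illustrative assumptions, not drawn from any real MES or IIoT platform.
process_log = {}

def record_process(serial_number, **measurements):
    """Attach shop-floor measurements to a specific unit's record."""
    process_log.setdefault(serial_number, {}).update(measurements)

record_process("SN-0042", oven_temp_c=182.5, cure_minutes=42.0)
record_process("SN-0042", coating_thickness_um=11.8)
```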

Managing Complexity of Data

But if sensors on the shop floor can help us characterize the components that go into our products, wouldn’t sensors on the products themselves also be helpful (these sensors are often described as part of the Internet-of-Things, or IoT)? Of course they would. Knowing that our product had been dropped, or was run too hot by the operator, or experienced other conditions, is invaluable to developing a high-fidelity model of that instance of the product. Assuming this information is only valuable to us as the product maker overlooks an obvious benefit to the user of the product: the device can tell its user when it is about to break or when it needs maintenance. Both of these points highlight why IoT data is so valuable.
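A minimal sketch of screening one unit's telemetry for maintenance-relevant events is shown below. The thresholds, field names, and event labels are assumptions for illustration; a production system would use calibrated limits and more sophisticated detection.

```python
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    timestamp: float      # seconds since power-on
    temperature_c: float
    shock_g: float

TEMP_LIMIT_C = 85.0   # assumed operating limit, for illustration only
SHOCK_LIMIT_G = 50.0  # assumed drop threshold, for illustration only

def flag_events(samples):
    """Return maintenance-relevant events found in a unit's telemetry."""
    events = []
    for s in samples:
        if s.temperature_c > TEMP_LIMIT_C:
            events.append((s.timestamp, "over-temperature"))
        if s.shock_g > SHOCK_LIMIT_G:
            events.append((s.timestamp, "possible drop"))
    return events

history = [
    TelemetrySample(0.0, temperature_c=72.0, shock_g=0.8),
    TelemetrySample(60.0, temperature_c=91.5, shock_g=0.6),    # ran too hot
    TelemetrySample(120.0, temperature_c=74.0, shock_g=63.0),  # dropped?
]
events = flag_events(history)
```

The same event list serves both audiences the paragraph mentions: it refines the maker's model of this specific unit, and it can prompt the user to seek maintenance before a failure.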

By collecting and managing the right data, it seems that a product crystal ball is well within our grasp. But that assumes we can collect and own all of the data about all of the components that go into our product, and that is rarely the case. Most product makers have a supply chain, and their products are made up of products sold by other people. So our realistic simulation depends on the models and simulations of our suppliers. Essentially, our product is a system of other systems: some we can control, some we can obtain models for, and some we can only reverse engineer. But modeling our product/system will depend on modeling its subsystems and components well, and on knowing how accurate those lower-level models are. Managing this level of complexity and model ambiguity highlights why model-based systems engineering is important: it can help us analyze the sensitivity of our predictions to inaccurate data and immature models.
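One simple way to probe that sensitivity is Monte Carlo sampling over the uncertain sub-model parameters. The life model, parameter names, and distributions below are invented purely for illustration; the pattern (sample the uncertain inputs, observe the spread in the prediction) is the point.

```python
import random

def predicted_life_hours(motor_eff, bearing_wear_rate):
    """Invented product-life model, purely for illustration."""
    return 10_000 * motor_eff / bearing_wear_rate

def sensitivity(n=10_000, seed=1):
    """Monte Carlo spread of the prediction under sub-model uncertainty."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        eff = rng.gauss(0.92, 0.01)   # supplier-provided model: well known
        wear = rng.gauss(1.00, 0.15)  # reverse-engineered model: uncertain
        results.append(predicted_life_hours(eff, wear))
    mean = sum(results) / n
    std = (sum((r - mean) ** 2 for r in results) / n) ** 0.5
    return mean, std

mean_life, life_std = sensitivity()
```

Here the wide spread comes almost entirely from the reverse-engineered wear model, which tells us exactly where better supplier data would most improve the prediction.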

Let’s assume that in this future digital Nirvana, we can predict the specific behavior of individual products that we sell in the marketplace, allowing us to make nearly perfect decisions. But our decisions are only as good as the information on which they are based. There are lots of ways in which this could go wrong. Reductions in the fidelity of our models will reduce the quality of our predictions. Inaccuracies in our models, like using the wrong revision of a part, could result in bad decisions. So not only do we need to collect and control a lot of detailed information, but we need to manage it and curate it carefully lest we use it to reach the wrong conclusions. This is about curating more than the specs and drawings that are traditionally managed by PDM systems and PLM processes. It requires a new discipline; let’s call it Digital CM (Configuration Management) for now. What this means is that using PLM to manage our product data is the start of a journey, not the finish line.


There are added benefits to building these high-fidelity models. First, monitoring fielded products and proposing maintenance or service is a potentially new or enhanced revenue stream from end users. Second, just as we need good models from our suppliers, our supply chain customers may pay for our high-fidelity models when our product is used in their ecosystems. Our digital twins may represent multiple new revenue streams.

In this article, I have tried to identify the practical value of many new buzzwords, concepts, and technologies being discussed in the engineering world today. Depending on what you design and/or manufacture, not all the concepts I’ve discussed will apply. For the pieces that do apply, the technology may not be fully ready for you to use either. But if you want to be ready to make better decisions tomorrow, you need to start collecting better information today to lay that foundation. I’m ready to get started collecting and organizing this important data. What about you?

Thank you to my colleague Rob Kubiak for his contributions to this article.
