While recently reviewing Razorleaf’s 15+ years in business and several hundred PDM/PLM implementations, I came to recognize a pattern. Of all of those implementations, just two were for brand-new startups: a company in the Pacific Northwest that planned to develop and manufacture virtual reality gear for military simulation, and an airship manufacturer in Las Vegas that planned to build blimp-type vehicles for communications, much like what Project Loon is doing today.
My conclusion from this review of our client base? Nearly everyone implementing a new PLM system has been in business for some time and therefore has product-related intellectual property that will need to be tracked and stored in the new PLM system. Now, that last statement might seem obvious, but what continues to surprise me is that getting that data into the new system is so often an afterthought for companies.
The Most Overlooked Process: Migration of Legacy Data
It is easy for you and me to sit here and play Monday-morning quarterback and say, “well, of course you need to plan for and import your legacy data into your new system”…
We both know that your company’s product, process, and design history is vitally important. Unless you are one of those fancy startups, much of your product development process is about maintaining what you already have or improving it, and doing either of those things requires access to the original product data. It seems obvious! But, for whatever reason, migrating legacy data is rarely given the attention it deserves in an overall PLM implementation project. Instead, data loading gets treated as a postscript to the project.
They Said What??
In fact, you might not believe this, but we have actually had clients say things like:
“Well, there’s two days left in our 150-day PLM implementation project… can’t you just use that time to get the data loaded?”
I asked myself, why do some clients give so little attention to one of the most important parts of a PLM system deployment? How can it be that they have spent 10, 20, 30 years or more building this data and then expect it to take only a day or two to move it all into a new system?
Underestimating the Complexity of Legacy Data
Well, I can’t read minds, but here is my best guess. I know our clients are smart. I know our clients are capable of estimating the effort it takes to do various activities. I know our clients know something about project management. The only thing that could explain this situation, then, is that most people underestimate the complexity of their legacy product data!
You see, to most project teams, it’s just a CAD file or it’s just metadata from the ERP system. They wonder, “How can it be that hard to load some CAD files into a vault, or records into a database?” To be fair, that’s probably because they have never tried to do those things. Or maybe it’s because they really believe that their data is perfect and standardized.
I can tell you from experience that just isn’t the case. This realization led me to an idea. It isn’t a real offer, but let me share it with you: what if Razorleaf offered to migrate legacy product data for free, and clients only had to pay when there were problems?
FREE MIGRATIONS HERE…Peanuts, Popcorn…Get Your Free Migration Here!
Let that sink in. What if you could migrate your legacy product data for free?
Of course, there would be one tiny little catch: your data has to be fully standardized. You wouldn’t expect us to know your data unless it met some industry standard, right? So naturally, you’d have it in a standard format, like ISO 10303 AP239 (PLCS) or something similar, with metadata conforming to EIA-649-B or some comparable configuration management standard. And if there were any problems in the data, you wouldn’t mind paying us to fix those as we found them, right?
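To make that “fully standardized” requirement a little more concrete, here is a minimal sketch, in Python, of the kind of completeness check a truly free migration would assume passes on day one. The field names and the manifest path are made up for illustration; this isn’t tied to PLCS, EIA-649-B, or any real migration tool.

```python
# Hypothetical pre-check: does every record in a metadata export carry the
# identifying attributes a migration tool would need? The field names and
# manifest path below are assumptions for illustration only.
import csv
import sys

REQUIRED_FIELDS = ["part_number", "revision", "description", "file_path", "cad_system"]

def audit_manifest(manifest_path: str) -> int:
    """Return the number of records missing one or more required fields."""
    exceptions = 0
    with open(manifest_path, newline="", encoding="utf-8") as f:
        for line_no, row in enumerate(csv.DictReader(f), start=2):
            missing = [name for name in REQUIRED_FIELDS if not (row.get(name) or "").strip()]
            if missing:
                exceptions += 1
                print(f"row {line_no}: missing {', '.join(missing)}")
    return exceptions

if __name__ == "__main__":
    count = audit_manifest(sys.argv[1] if len(sys.argv) > 1 else "legacy_manifest.csv")
    print(f"{count} records would be billable exceptions before the load even starts")
```

In my experience, very few legacy data sets would get through even a trivial check like this without producing a long list of exceptions.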
Reality Sinks In
So, if I did offer a free migration to you, knowing there is one simple requirement, would you take it? Would you be worried? Let me take you through a reasonable example, and you can see why it just might make sense for me to offer a free migration. If your data really were fully standardized and free of problems, I probably could load it quickly. Our off-the-shelf tools would work after connecting them to your new PLM system, and I’d point them at your data and let them run until they finished.
An Example Scenario
Let’s take a deeper look at an example. This example is a composite of several migrations and legacy data sets I’ve seen over the past several years. It isn’t the worst situation I’ve ever seen, but it represents the kinds of situations I come across. The customer has a couple of legacy CAD systems, an old PDF drawing vault, some metadata in ERP, and several network drives containing Word and Excel files. No problem.
- In our hypothetical world, we’ve agreed to migrate their legacy product data for free because they have agreed to pay $10 for every exception we find. Things start off a little rocky because the customer’s data doesn’t follow any particular standard or any one convention, but they provide a spreadsheet that details the names, locations, part numbers, and revisions of all of the files they want loaded. So things seem to be back on track, but let’s look at what goes wrong. We get through the first 10,000 or so Word documents with no problem, and then the migration tool stops because the files are not where the spreadsheet says they are. A network drive got moved to a new server, so we have to repair each of the next 300 entries manually. $3,000. This happens one more time before we get to the end of the Word files. Another $2,000. (A pre-flight audit like the sketch after this list would have flagged these up front.)
- Then we load the AutoCAD data. This goes really smoothly because it looks like we’re loading the latest released version of everything. Another 15,000 records loaded in just a couple of hours. Perfect.
- Next comes the PDF data. Another smooth process: 25,000 more records loaded in no time flat. One small problem: a big chunk of these PDFs are duplicates of the AutoCAD drawings we loaded previously. So we need to go back and remove 7,500 PDFs, match those PDFs with the right AutoCAD files, and put each PDF into the same record as its AutoCAD file. At $10 per PDF, this is a $75,000 exception.
- On to the tricky step with the SolidWorks data. Roughly 100,000 parts, assemblies, and drawings are in the data-load spreadsheet. The files are all separated into revision-specific folders, so things look pretty good there, too. One minor problem: people have been using local copies of SolidWorks Toolbox. So as we load the assemblies, we find 5% of the files are missing (someone leaves the company, their computer gets wiped clean, and the references to their local Toolbox files are now broken) and another 10% are duplicates (things like “bearing.sldprt” that appear in everybody’s standard parts library). That’s 15,000 unique problems that each have to be run to ground at $10 a pop. This one is a $150,000 exception.
- Everything else is in good shape, and the customer starts reviewing the data we just loaded into their new PDM system. That’s when they realize one significant flaw in their data-load spreadsheet. Where old AutoCAD drawings got converted into SolidWorks files during an Engineering Change, they forgot to indicate that the two files belong to the same revision family. This problem impacts 10,000 AutoCAD drawings and 10,000 SolidWorks drawings, but the customer decides not to do anything about it so they can save $200,000 in exceptions.
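For a sense of why these exceptions pile up so fast, here is a minimal sketch of the kind of pre-flight audit that would surface the first three problems above before a single file is loaded. It is Python run against a hypothetical CSV export of the customer’s load spreadsheet; the column names, the file-type rules, and the “same part number and revision” matching logic are assumptions I’ve made for illustration, not how any particular migration tool actually works.

```python
# Hypothetical pre-flight audit of a data-load manifest (a CSV export of the
# customer's spreadsheet). Column names and matching rules are assumptions
# made for this illustration.
import csv
from collections import defaultdict
from pathlib import Path

EXCEPTION_COST = 10  # dollars per exception, per the hypothetical deal above
CAD_EXTENSIONS = (".dwg", ".sldprt", ".sldasm", ".slddrw")

def audit(manifest_path: str) -> dict:
    """Scan a load manifest and count the exceptions a migration would hit."""
    with open(manifest_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # 1. Files that are not where the spreadsheet says they are
    #    (the moved-network-drive problem).
    missing_files = [r for r in rows if not Path(r["file_path"]).exists()]

    # 2. The same file name listed in many places
    #    (everybody's local copy of "bearing.sldprt").
    by_name = defaultdict(list)
    for r in rows:
        by_name[Path(r["file_path"]).name.lower()].append(r)
    duplicate_names = {name: rs for name, rs in by_name.items() if len(rs) > 1}

    # 3. PDFs whose part number and revision already exist as a CAD record
    #    (candidates to attach to that record instead of loading separately).
    cad_keys = {(r["part_number"], r["revision"]) for r in rows
                if r["file_path"].lower().endswith(CAD_EXTENSIONS)}
    overlapping_pdfs = [r for r in rows
                        if r["file_path"].lower().endswith(".pdf")
                        and (r["part_number"], r["revision"]) in cad_keys]

    exception_count = (len(missing_files)
                       + sum(len(rs) - 1 for rs in duplicate_names.values())
                       + len(overlapping_pdfs))
    return {
        "missing_files": len(missing_files),
        "duplicate_file_names": len(duplicate_names),
        "pdfs_overlapping_cad": len(overlapping_pdfs),
        "estimated_exception_cost": exception_count * EXCEPTION_COST,
    }

if __name__ == "__main__":
    for key, value in audit("legacy_manifest.csv").items():
        print(f"{key}: {value}")
```

Even a rough audit like this turns “can’t you just load it over the weekend?” into a concrete exception count, and a dollar figure, before anyone touches the new system.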
To summarize, the customer found five types of mistakes in their legacy data, elected to resolve four of those types, and paid $230,000 to handle those exceptions. That’s how it would have gone if we offered free migrations with $10 exceptions. In reality, this probably would have been roughly a $100,000 project with multiple rounds of testing, so that the customer could find and resolve their data problems themselves. But it does make me think there’s something to this idea of offering free migrations.
See What We Mean?
The fact of the matter is, data loads are not expensive and time-consuming because there are no data-load experts or no good tools to help with them. They are expensive and time-consuming because the data is complex, sometimes inaccurate, varied, and often even undiscovered. You see, if the data were 100% known, 100% consistent, and 100% standard, we’d be happy to load it for free. It would probably take us an hour to set up and kick off, and the goodwill we would receive would make it worth our effort. But it never happens like this. Never.
I hope this fictitious offer of a free data load gets my point across: data loads are expensive and time-consuming because the data is complex. Seeing error-free, standardized data is like spotting Bigfoot or the Loch Ness Monster. That being said, your legacy data is vital (if not absolutely essential) to the future of your company. You need access to this data in your new business system. So, if the data is complex, if the data is necessary, if the data might need some cleaning (or A LOT of cleaning), wouldn’t it make sense to give your data-load task a little more attention than simply, “Hey, you think you can get this loaded over the weekend for me?”