By now most of us have heard about the Takata air bag nightmare. (And, if you are anything like me, you might be wondering: are you driving around with a Takata air bag?)
You might also be wondering how the Takata air bag situation was able to turn into such a nightmare. In fact, it was a defect identified in a single sub-component used in frontal airbags that led to what the NHTSA calls “the largest and most complex safety recall in U.S. history.”
What led to Takata’s undoing was that, as a global enterprise with suppliers and distributors all over the globe, it did not know exactly which airbags contained the faulty sub-component. Takata’s public relations nightmare spurred one of its competitors, a leading manufacturer of automotive safety equipment with more than 80 facilities worldwide, to thoroughly integrate its entire supply chain, which spans more than 10 different ERP systems. Further complicating the situation, the firm manufactures materials that are consumed in sub-assemblies and added to other assemblies, all in different facilities. Third-party vendors consume some of these elements as well.
To help understand the scope and breadth of the firm’s initiative, consider this safety-related scenario. Imagine a specific chemical lot or shipment of wire spools was found to be out of specification and that this defect could lead to a failure of the finished assembly. Basic materials such as these have the potential to end up in hundreds of thousands of finished goods. Add to this the fact that the company is both a supplier and consumer of sub-assemblies required to manufacture the finished goods (such as side airbags), meaning the suspect material would have traveled across several global manufacturing sites and may have already been shipped to customers.
To address this scenario, the firm would need to quickly identify every sub-assembly, super-assembly, and finished good in which the suspect material was used. It would then need to locate the affected components across the globe so they could be removed from the supply chain until their quality could be verified. Today this is a near-impossible process, as parts and materials are tracked through emails and spreadsheets, an error-prone approach that can take weeks to reconcile, if it can be reconciled at all. Integrating the data scattered across ERP systems would have to become much easier.
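At its core, this “where-used” lookup is a graph traversal over bill-of-materials records: start from the suspect lot and walk upward through everything that consumed it. A minimal sketch in JavaScript, using a toy consumed-in table with hypothetical part identifiers:

```javascript
// Toy bill-of-materials: maps a component or lot ID to the assemblies
// that consume it. All identifiers here are hypothetical examples.
const consumedIn = {
  'WIRE-LOT-0042': ['INITIATOR-7'],            // the suspect wire lot
  'INITIATOR-7':   ['INFLATOR-A', 'INFLATOR-B'],
  'INFLATOR-A':    ['SIDE-AIRBAG-X'],
  'INFLATOR-B':    ['SIDE-AIRBAG-X', 'CURTAIN-AIRBAG-Y'],
};

// Walk the graph upward from a suspect lot to collect every
// sub-assembly, super-assembly, and finished good it ended up in.
function whereUsed(partId, seen = new Set()) {
  for (const parent of consumedIn[partId] || []) {
    if (!seen.has(parent)) {
      seen.add(parent);
      whereUsed(parent, seen);
    }
  }
  return [...seen];
}
```

Calling `whereUsed('WIRE-LOT-0042')` returns all five affected items, including both finished airbags. The hard part in the real world is not the traversal itself but assembling one consistent consumed-in table from many ERP systems.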
Like most manufacturers, each of the firm’s facilities captures significant amounts of information about the production and assembly of every component at each step in the manufacturing process. The data is as varied as the specific lot numbers of chemicals used in an airbag initiator, the electrical current used when welding two metal plates together, or the list of specific sub-assemblies contained in a finished product and the containers they were packed in. All of the information required to track down the suspect components exists somewhere in the supply chain, spread across a multitude of ERP systems.
Our automotive manufacturer realized there was little or no means of rapidly sharing information between the various facilities, as the various sites often store component identifiers according to their own unique cataloging standards. The producer of a component might catalog it as ABC-123 while consumers of that component (assemblers) might enter it into their catalogs as R79-P9777-Q. To successfully track a component along the supply chain required the ability to map the identifiers across each facility.
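Conceptually, that mapping amounts to a cross-reference table from each facility’s local catalog number to a single canonical identifier. A sketch of the idea, with hypothetical facility names and IDs:

```javascript
// Hypothetical cross-reference: each facility's local catalog number,
// keyed by facility, mapped to one canonical component identifier.
const crossRef = {
  'plant-de:ABC-123':     'GLOBAL-000017',
  'plant-us:R79-P9777-Q': 'GLOBAL-000017',
};

// Resolve a facility-local part number to its canonical ID
// (or null if no mapping has been recorded yet).
function canonicalId(facility, localId) {
  return crossRef[`${facility}:${localId}`] || null;
}
```

With this in place, the producer’s `ABC-123` and the assembler’s `R79-P9777-Q` both resolve to `GLOBAL-000017`, so a trace can follow the component from one facility’s records to the next.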
This problem was exacerbated by the fact that the facilities practice “just-in-time manufacturing,” storing no more than three days’ production requirements at any given facility, so parts and materials are constantly moving. If assembling a complete picture of a component’s history takes more than three days, that picture may be obsolete by the time it is finished.
Given the large number of independent applications managing this information across the company, generating a report would often take a week or more, and usually involved exporting CSV files from their relational database applications and emailing spreadsheets from one facility to the next.
To solve this problem, the company could have implemented a typical data warehouse solution built on their existing relational databases, but the level of effort required to coordinate across the entire company, define the standard data models, and implement the required ETL features led them to look for alternative solutions. Further, with regulatory agencies requiring data be kept for 20 years, long-term storage was an issue too.
MarkLogic’s support for multi-model databases was immediately attractive, enabling the customer to quickly begin extracting data from their relational database applications and loading the content directly into MarkLogic. Within weeks the initial MarkLogic application was ingesting data from five sites, loading more than 100,000 documents per day. When loading data into a multi-model database, we identify entities: recognizable concepts such as persons, places, things, or events that have relevance to the business and the data stored in the database. We literally load the data as is, without shredding it into multiple tables for subsequent reassembly when needed.
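For example, a production-lot entity can live in one self-contained JSON document that keeps the lot number, facility, consumed materials, and packing containers together, rather than shredded across half a dozen tables. A hypothetical sketch (all field names and values are invented for illustration; with the MarkLogic Node.js Client API such an object would be written with `db.documents.write`):

```javascript
// One production-lot entity, stored as a single document exactly as
// captured at the facility. All identifiers here are hypothetical.
const lotDocument = {
  uri: '/lots/plant-de/LOT-2016-0042.json',
  content: {
    entityType: 'productionLot',
    lotNumber: 'LOT-2016-0042',
    facility: 'plant-de',
    producedOn: '2016-03-14',
    consumes: [
      { part: 'ABC-123', lot: 'WIRE-LOT-0042' },   // the wire spools
      { part: 'CHEM-55', lot: 'CHEM-LOT-0199' },   // an initiator chemical
    ],
    packedIn: ['CONTAINER-881'],
  },
};
```

Because the document mirrors the shape of the source data, a trace query can read the lot’s full context in one fetch instead of joining it back together.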
Because much of the captured data is considered trade secrets, it was critical that the application support secure access. While MarkLogic has robust features for managing user authentication and authorization, the team decided the best approach was to leverage MarkLogic’s LDAP support and integrate the application with the customer’s Active Directory. This allowed the customer to manage user access using its existing IT staff and defined processes.
MarkLogic’s universal index provided immediate full-text search access to the documents, followed rapidly by a customized search configuration supporting more specific searches on part numbers, lot numbers, facilities, and document types, among other fields.
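The universal index behaves, roughly speaking, like an inverted index built at load time: every word in every document is indexed on ingest, so a search is a lookup rather than a scan. A toy sketch of that idea (not MarkLogic’s actual implementation), with hypothetical document URIs:

```javascript
// Toy corpus: document URI -> text content (hypothetical examples).
const docs = {
  '/lots/a.json': 'lot WIRE-LOT-0042 facility plant-de status suspect',
  '/lots/b.json': 'lot CHEM-LOT-0199 facility plant-us status released',
};

// Build an inverted index at "load" time: word -> set of document URIs.
const index = new Map();
for (const [uri, text] of Object.entries(docs)) {
  for (const word of text.toLowerCase().split(/\s+/)) {
    if (!index.has(word)) index.set(word, new Set());
    index.get(word).add(uri);
  }
}

// Search is then a constant-time lookup, not a scan of every document.
const search = (term) => [...(index.get(term.toLowerCase()) || [])];
```

Here `search('WIRE-LOT-0042')` finds the one lot document containing that term, while `search('facility')` matches both. MarkLogic layers range indexes and search configuration on top of the same principle to support fielded queries on part numbers, lots, and facilities.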
In addition to enabling search, the customer’s staff and MarkLogic consultants collaborated on a three-tier application using AngularJS and Node.js that supports search, navigable part diagrams, report generation, and data exports in alternate formats. The report-generation feature answers the question “Where is this component?” within seconds rather than weeks.
In the next phase the firm wants to explore using semantic triples to establish important relationships that are difficult, if not impossible, to model in relational systems. For instance, using the Resource Description Framework (RDF) to relate parts, suppliers, locations, test equipment, tests, and so on through real-world relationships such as “containedIn,” “assembledFrom,” “suppliedBy,” “testedOn,” and “assembledAt” enables a much more flexible data model than an RDBMS allows. RDF also lets you ask interesting questions that leverage MarkLogic’s multi-model capabilities, searching documents while doing semantic querying and inferencing with SPARQL.
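To make that concrete, a few hypothetical triples using those relationship names, and a SPARQL 1.1 query sketch that uses a property path to follow “containedIn” transitively from a suspect lot up to every assembly that contains it (the namespace and identifiers are invented for illustration):

```sparql
# Hypothetical triples using the relationships named above:
#   :WIRE-LOT-0042  :containedIn  :INITIATOR-7 .
#   :INITIATOR-7    :assembledAt  :plant-de .
#   :INITIATOR-7    :containedIn  :SIDE-AIRBAG-X .

# Find every assembly that directly or transitively contains the
# suspect lot, and (where recorded) the site it was assembled at.
PREFIX : <http://example.com/supplychain#>
SELECT ?assembly ?site
WHERE {
  :WIRE-LOT-0042 :containedIn+ ?assembly .
  OPTIONAL { ?assembly :assembledAt ?site }
}
```

The `+` property path does in one declarative query what the relational version would need recursive joins or application code to express.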
The Takata nightmare has terrified the industry. A simple material defect spread across the globe like a virus, and Takata couldn’t track it. Learning from that, the automotive firm has built a system so that if there is ever a red flag at any given facility, staff will be able to find the complete history of anything, wherever it is in the supply chain.
View all posts from Scott Parnell on the Progress blog.