Such systems are especially relevant to process industries such as cement, glass or chemicals, which operate more or less continuously. The data generated by these continuous-flow processes are excellent candidates for training AI systems.
Some advanced planning and scheduling (APS) systems, such as Siemens Opcenter APS, are built around interlinked entities (objects, events, situations and similar abstract concepts) that capture the relationships between assets. This approach, known as a knowledge graph, uses process data to train a decision-making system to perform tasks like order prioritization, due date negotiation and order processing.
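To make the idea more concrete, here is a minimal sketch in Python (using the networkx library) of how assets, orders and events might be linked as typed nodes and relationships in a knowledge graph. The entity names, attributes and relation labels are illustrative assumptions, not the schema used by Opcenter APS or any other product.

```python
# Minimal knowledge-graph sketch: plant assets, orders and events as nodes,
# typed edges as the relationships between them. All names are illustrative.
import networkx as nx

kg = nx.MultiDiGraph()

# Nodes carry a type label plus arbitrary attributes.
kg.add_node("mixer_01", kind="asset", line="line_2")
kg.add_node("order_4711", kind="order", due="2024-05-17", priority=2)
kg.add_node("maintenance_evt_9", kind="event", start="2024-05-15T06:00")

# Edges are typed relationships between entities.
kg.add_edge("order_4711", "mixer_01", relation="scheduled_on")
kg.add_edge("maintenance_evt_9", "mixer_01", relation="blocks")

# A scheduler can traverse the graph, e.g. to find orders whose assigned
# asset is blocked by an upcoming event and may need re-prioritizing.
for order, asset, data in kg.edges(data=True):
    if data["relation"] != "scheduled_on":
        continue
    blocked = any(
        d["relation"] == "blocks" for _, _, d in kg.in_edges(asset, data=True)
    )
    if blocked:
        print(f"{order} on {asset} may need a new due date or priority")
```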
But AI systems can also operate without vast amounts of data. In the chemical industry, for example, we might identify events that affect parameters such as productivity, availability and quality, train the machine intelligence to look for anomalies, and then scrutinize the results for business relevance.
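As a rough illustration of what such anomaly screening could look like, the following Python sketch runs scikit-learn's IsolationForest over a table of KPI snapshots. The file name, column names and contamination rate are assumptions made for the example, not details of any particular product or plant.

```python
# Sketch: flag anomalous KPI snapshots (productivity, availability, quality)
# so a human can scrutinize them for business relevance. Illustrative only.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hourly KPI snapshots from a historian export (hypothetical file and columns).
kpis = pd.read_csv("line2_kpis.csv", parse_dates=["timestamp"])
features = kpis[["productivity", "availability", "quality"]]

model = IsolationForest(contamination=0.01, random_state=0)
kpis["anomaly"] = model.fit_predict(features)  # -1 = anomalous, 1 = normal

# Surface candidates for review rather than acting on them automatically.
candidates = kpis[kpis["anomaly"] == -1].sort_values("timestamp")
print(candidates[["timestamp", "productivity", "availability", "quality"]])
```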
In fact, a system that does just that has been introduced by Siemens. Dubbed the “AI Anomaly Assistant,” the AI-based app trains its machine-learning algorithms on industrial process data; the results then go through several feedback loops to determine and evaluate the impact of specific types of anomalies.
The app, which can be installed as a cloud app or within the user’s own infrastructure, also allows anomaly detection to be combined with other services, such as predictive asset management. A dashboard helps users classify and evaluate results.
It doesn’t take a sentient machine to imagine how all of these things can help business leaders make more informed decisions. Indeed, deep learning systems are already being constructed and trained on industrial data sets across a wide variety of industries, leading to capabilities such as computer vision, natural language processing and various other types of advanced data analyses.
An industry knowledge graph, also known as a vertical knowledge graph, is built largely from unstructured or semi-structured data and tends to have a limited scope in terms of the types of entities it attempts to extract from that data.
In process industries, the data extraction process is typically much like the product itself: a continuous flow, handled via a pipeline. It’s not all that different from the way a search engine continuously crawls the web and links the entities it discovers into an ever-evolving knowledge graph. In both cases, we have a large amount of semi-structured data containing many identifiable, repetitive structures.
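A minimal sketch of such a pipeline might look like the following, assuming line-delimited JSON records from a historian export; the record format, field names and relation labels are hypothetical.

```python
# Sketch of a continuous extraction pipeline: semi-structured records stream
# in, entities are extracted as triples and linked into an ever-growing graph.
import json
from typing import Iterator


def record_stream(path: str) -> Iterator[dict]:
    """Yield semi-structured records (one JSON object per line)."""
    with open(path) as fh:
        for line in fh:
            yield json.loads(line)


def extract_triples(record: dict) -> list[tuple[str, str, str]]:
    """Map a record onto (subject, relation, object) triples."""
    triples = []
    if "batch_id" in record and "asset" in record:
        triples.append((record["batch_id"], "produced_on", record["asset"]))
    if "batch_id" in record and "operator" in record:
        triples.append((record["batch_id"], "supervised_by", record["operator"]))
    return triples


# The graph evolves as new records flow in, much like a web crawl.
graph: set[tuple[str, str, str]] = set()
for rec in record_stream("historian_export.jsonl"):
    graph.update(extract_triples(rec))
```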
Typically, this is accomplished by defining a schema and applying metadata that defines specific types of relationships between nodes. Via these semantic connections, we can begin to query large datasets or build an ontology from them. In such cases, a small amount of annotated data can allow the machine to learn certain rules and then apply those rules to extract the same type of data from the whole dataset.
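The bootstrapping step can be illustrated with a deliberately simple Python sketch: a handful of annotated examples are generalized into pattern rules, which are then applied across an unlabeled corpus. The annotations, entity types and texts are invented for illustration and stand in for a far more capable rule learner.

```python
# Sketch: learn simple extraction rules from a small annotation set, then
# apply them to the whole dataset. All examples and labels are illustrative.
import re

# A handful of hand-annotated examples: (text, labeled span, entity type).
annotations = [
    ("Batch B-2041 finished on kiln K3", "B-2041", "batch"),
    ("Rework batch B-2077 scheduled",    "B-2077", "batch"),
    ("Kiln K7 reported high NOx",        "K7",     "asset"),
]

# "Learn" one pattern per entity type by generalizing the digits in each span.
patterns: dict[str, set[str]] = {}
for _, span, label in annotations:
    generalized = re.sub(r"\d+", r"\\d+", re.escape(span))
    patterns.setdefault(label, set()).add(generalized)

rules = {label: re.compile("|".join(sorted(p))) for label, p in patterns.items()}

# Apply the learned rules to unlabeled text from the full dataset.
corpus = ["Batch B-3310 delayed, kiln K12 in maintenance", "B-3315 passed QA"]
for text in corpus:
    for label, rule in rules.items():
        for match in rule.finditer(text):
            print(label, "->", match.group(), "in:", text)
```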
Ultimately, AI is capable of far more than this, but business applications that use deep learning for tasks like predictive asset management and order processing are already a big leap forward in industrial efficiency.
Learn more about the Siemens AI Anomaly Assistant App and Opcenter APS