Off-the-shelf AI tools for adding chat functionality to packaging and processing equipment

A variety of web-based tools are cropping up that allow OEMs to offer AI-powered 24/7 chat support for CPG customers, relying on the OEM’s own documentation as source material.

Based on the OEM’s documentation, the AI can provide plain-language answers to questions. This tool from Scanmatics even cites and links to actual page numbers in the source documentation.

In past columns I’ve made the case for packaging and processing OEMs to incorporate “ChatGPT”-style functionality into their machines. It can be an important bridge across the skills gaps their CPG customers are experiencing. My earlier columns have focused on embedding small language models (SLMs) directly on a PC in the machine, such as the HMI.

The argument for a direct embed is to allow access to real-time data while side-stepping the need for a persistent connection to the cloud, which is a non-starter for CPGs. 

In this column we’ll explore the flip side of that approach: using off-the-shelf tools that run in the cloud while keeping them off the machine altogether. There are a few reasons why an OEM might prefer this to baking AI directly into the controls architecture.

1. Off-the-shelf AI tools are already built, slashing the development and engineering time OEMs would otherwise spend learning about and incorporating AI models directly into their equipment. No learning Python, no downloading models and mastering their intricacies.

2. It leaves the sacred environment of the machine’s controls architecture intact and secure. No worrying about isolating the AI from the rest of the architecture.

3. Although small language models that can run on local machines do exist, they still require computing resources, and most require a PC with a GPU, which some industrial PCs or HMIs have and others lack. Even on PCs with a GPU, there are legitimate questions about how well a locally running SLM would perform, and about the impact of diverting computing resources away from traditional HMI functionality or whatever else is running on that PC.

There are several off-the-shelf, cloud-based tools worth checking out. All of them can run on any PC, tablet, or phone, and are totally independent of the machine. These tools operate with what is known as Retrieval-Augmented Generation (RAG), a method in which generative AI is directed to pull information exclusively from specific, user-provided documents rather than relying on its pre-trained data or web knowledge. This approach greatly reduces the risk of hallucinations, because the AI generates answers based only on the trusted sources it’s given.
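
To make the RAG idea concrete, here is a minimal Python sketch of the retrieval step. Everything in it is illustrative: the manual excerpts, the fault code, and the use of TF-IDF similarity are assumptions for the example, and commercial tools handle chunking, retrieval, and the final call to a language model behind the scenes.

# Minimal sketch of RAG retrieval (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical chunks pulled from an OEM manual.
chunks = [
    "Section 4.2: To clear a jam on the infeed conveyor, press the E-stop and open guard door 3.",
    "Section 7.1: Recommended lubrication interval for the sealing jaws is 500 operating hours.",
    "Section 2.5: The HMI displays fault code F-17 when the film splice sensor is blocked.",
]

question = "What does fault code F-17 mean?"

# Score each chunk against the question in the same TF-IDF space.
vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(chunks)
scores = cosine_similarity(vectorizer.transform([question]), chunk_vectors)[0]
best_chunk = chunks[scores.argmax()]

# The retrieved chunk is injected into the prompt, which is what keeps the
# model's answer grounded in the OEM's own documentation.
prompt = (
    "Answer the question using ONLY the documentation excerpt below. "
    "If the excerpt does not contain the answer, say so.\n\n"
    f"Documentation: {best_chunk}\n\nQuestion: {question}"
)
print(prompt)

Run against the sample chunks above, the retrieval step surfaces the Section 2.5 excerpt, and the prompt instructs the model to answer only from it. That constraint, applied to every question, is what keeps hallucinations in check.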

Wonderchat

First I’ll talk about Wonderchat, an application we are beta-testing here at PMMI. It powers the AI chat for PMMI ProSource as well as the PACK EXPO show website, and we’re on the verge of rolling it out for the PMMI website.

You can feed Wonderchat a variety of documents in formats including plain text, PDF, Microsoft Word, and PowerPoint. That makes it well suited to written material like manuals, though presumably it wouldn’t be able to make sense of pictorial information such as wiring diagrams. (You can also upload numerical data in CSV format, and, interestingly, even an audio or video file, though those are capped at 10 minutes in length.)
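
Before uploading anything, it can help to inventory which of your documentation files actually match those formats. Below is a small, hypothetical Python sketch that walks a documentation folder and lists the candidates; the folder name and the extension list are assumptions for illustration, not anything specific to Wonderchat.

from pathlib import Path

# Extensions corresponding to the upload formats described above (assumed mapping).
SUPPORTED = {".txt", ".pdf", ".docx", ".pptx", ".csv", ".mp3", ".mp4"}

def collect_uploadable_docs(folder: str) -> list[Path]:
    """Return files under the folder whose extensions match the supported formats."""
    return [
        path
        for path in Path(folder).rglob("*")
        if path.is_file() and path.suffix.lower() in SUPPORTED
    ]

if __name__ == "__main__":
    # "machine_docs" is a placeholder folder of manuals, spec sheets, and the like.
    for doc in collect_uploadable_docs("machine_docs"):
        print(doc)

A pass like this also surfaces the files that will need conversion first, such as scanned drawings or legacy formats the tool can’t ingest.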