As companies increasingly automate factories, warehouses, and other facilities, they recognize the need for AI technologies that can guide decisions in a timely manner, such as machine vision that detects defects. In practice, though, companies have found that cloud-based AI adds too much delay, forcing them to move decision-making to the edge of the network. "Manufacturers and OEMs often tell me they need data to go back and forth in very small fractions of a second for industrial use cases, including networking," said Rita Al-Wahaibi, an AI engineer at Intel. "You can't violate the laws of physics, and that makes it impossible to send the request to the cloud." Cutting decision time and reducing data-transmission costs are the main reasons companies are deploying AI at the edge.
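Al-Wahaibi's laws-of-physics point can be made concrete with back-of-the-envelope arithmetic. The sketch below compares the best-case round-trip time to a distant cloud region against an on-site edge server; the distances and the fiber propagation speed are illustrative assumptions, not figures from the panel:

```python
# Back-of-the-envelope comparison of cloud vs. edge round-trip latency.
# All distances below are illustrative assumptions.

SPEED_OF_LIGHT_FIBER_KM_S = 200_000  # light travels at roughly 2/3 c in optical fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case propagation delay for a request/response over fiber, in ms."""
    return 2 * distance_km / SPEED_OF_LIGHT_FIBER_KM_S * 1000

cloud_ms = min_round_trip_ms(1500)  # assumed distance to a cloud region
edge_ms = min_round_trip_ms(0.1)    # assumed distance to an on-site edge server

print(f"cloud: {cloud_ms:.2f} ms minimum, before any queuing or processing")
print(f"edge:  {edge_ms:.4f} ms minimum")
```

Even before adding network hops, queuing, and inference time, the cloud round trip alone can consume the "very small fractions of a second" an industrial control loop has to work with.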
Edge AI processing also pays off when a company has a large amount of data to digest, such as multiple camera feeds scanning for defects or watching for worker-safety hazards around machines that could harm employees. Moving data sets that large to the cloud is expensive, and many companies lack the network bandwidth to do so in the first place. A third reason to use edge AI is to keep data in-country to satisfy privacy or intellectual-property requirements.
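The bandwidth argument is easy to quantify. Assuming, purely for illustration, a handful of uncompressed 1080p inspection cameras (the camera count, resolution, and frame rate here are hypothetical, not from the webinar), the aggregate data rate quickly exceeds what a typical factory uplink can carry:

```python
# Rough estimate of the uplink needed to ship raw camera feeds to the cloud.
# Camera count, resolution, and frame rate are illustrative assumptions.

cameras = 8
width, height = 1920, 1080  # 1080p
bytes_per_pixel = 3         # 24-bit RGB, uncompressed
fps = 30

bytes_per_second = cameras * width * height * bytes_per_pixel * fps
gbits_per_second = bytes_per_second * 8 / 1e9

print(f"~{gbits_per_second:.1f} Gbit/s of raw video")
```

Compression helps, but processing the frames at the edge and sending only results (or flagged frames) upstream sidesteps the problem entirely.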
The panel also dove into a question about building the right team for utilizing edge and AI, and who needs to be involved. When thinking about the network piece, Nokia’s Tuuli Ahava suggested that companies begin by having folks from the IT department team up with the OT group. “Some years back, we were discussing the convergence of IT and OT, but now it’s really happening,” says Ahava. “Naturally we also need people to develop the algorithms and some data scientists, but I would start with the simple answer, ‘Hey, let’s put IT and OT around the table’ to get them together.” Additional stakeholders that should be included in edge AI discussions are subject matter experts for the process being automated (including the factory floor workers who would monitor it), systems integrators, and application creators, alongside any data science teams. Everyone on the panel agreed that data science needs to be brought to the level of every employee, rather than relying on a few data scientists to explain or operate everything.
The panel also spoke about building edge applications from modular building blocks, using microservices. Companies that have used microservices in their cloud environments can apply that experience – changing small parts of an application instead of completely rewriting it – at the edge. “Monolithic applications are a bit like dinosaurs,” says Intel’s Michael Huether. “They work for a while, but at some point you reach the end, and you’d like to avoid that, because your manufacturing line should never stop. The journey goes on, and you want to make a small modification without rebuilding everything.” Huether added that microservices also give companies flexibility in where they place applications: directly on an industrial PC sitting beside the data-generating equipment, or connected over a low-latency, high-bandwidth network such as 5G to small microservices in the cloud. "If you can consolidate it and reuse it all in one machine, that's a plus, but you have to be flexible," Huether said. The group also discussed other topics during the webinar, including: the importance of choosing the right data for AI processing and how to clean it before processing; the security and privacy-regulation concerns that every AI project under development must keep at its center; use cases where edge AI may not be needed, such as non-time-critical business intelligence, or where a combination of edge and cloud makes sense; and a reminder to work with partners and suppliers to make sure they stay up to date.
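Huether's modular pattern can be sketched in a few lines. The example below is a minimal, hypothetical edge microservice (the service name, endpoint, and threshold are illustrations, not any real product's API): the model logic lives behind a small, stable interface, so swapping it out is the "small modification" that does not require rebuilding everything.

```python
# Minimal sketch of an edge microservice with a swappable model component.
# Endpoint, payload shape, and threshold are hypothetical illustrations.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def score_defect(measurements):
    """Stand-in model: flag a part whose measurements drift past a threshold.
    Replacing this function (or redeploying the container that serves it) is
    the modular change that leaves the rest of the application untouched."""
    return max(measurements) > 0.8

class InspectionHandler(BaseHTTPRequestHandler):
    """Answers POSTs like {"measurements": [0.1, 0.9]} with {"defect": true}."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        measurements = json.loads(body)["measurements"]
        result = {"defect": score_defect(measurements)}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

def serve(port=8080):
    # Run on the industrial PC beside the line, or behind a 5G link;
    # the caller does not care where the service is placed.
    HTTPServer(("0.0.0.0", port), InspectionHandler).serve_forever()
```

Because the service only exposes an HTTP contract, it can move between an on-premises industrial PC and a cloud host without its callers changing, which is exactly the placement flexibility Huether describes.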