
How to Successfully Implement AI Tools in Electronics Manufacturing
Use Cases & Domain Expertise

February 2, 2023
Interview with EMSNow at IPC APEX 2023
In this interview, our Mitch DeCaire and iTAC Software AG's Martin Strempel discuss the implications of AI for the electronics manufacturing industry. This thoughtful discussion covers the libraries of algorithms now available to PCBA manufacturers, why domain knowledge about the factory floor is so critical, and the benefits of the Cogiscan-iTAC partnership.
Key Highlights
These AI tools are now available thanks to the Googles, the Amazons, and the Microsofts of the world. There are libraries of algorithms available to us in the cloud, and the AI community is constantly adding new algorithms and improving existing ones… they're being used in all sorts of industries, and our industry is no exception! I feel a bit like a kid in a candy store these days, because it's a whole new set of tools that we can use – and we can combine the available AI technologies with our domain expertise in SMT to provide real, pragmatic use cases that people can apply on the factory floor right away. It's not a thing of the future anymore; it's available right now.

I think all the big corporations have done a really good job of making these tools available to everybody, and the job has become: how do you translate that, with your domain knowledge, back down to the actual production floor? From that perspective, what we're really seeing is that AI is offering nothing fundamentally different from our previous set of lean tools – but now we have AI to help us along that lean journey to the next step.
There are certain steps you have to take in order to train an algorithm efficiently and run it in a cost-effective, high-speed way. One of the things we see a lot is customers doing the first step – collecting data from the shop floor and putting it into some sort of data lake in the cloud – without planning in advance what specific problems they want to solve… and the result is that you end up with a bunch of data that may not be structured in a way the algorithm can efficiently learn from.

Instead of a data lake, they accidentally make a data swamp, where they're dumping data in without forethought. So our approach, thanks to Martin's team, is to assist the customer on their journey – step one is to actually have the use cases in mind and plan in advance what problems you want to solve, because that helps you plan how to structure the data in a flat format and send it into the cloud in a way that's conducive to machine learning.
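The "flat format" idea above can be sketched in a few lines. This is a hypothetical illustration, not any specific Cogiscan or iTAC API: it shows nested machine-event records being flattened into fixed-schema rows, the shape that ML pipelines can learn from efficiently. All field names here are assumptions made for the example.

```python
# Hypothetical sketch: flattening nested shop-floor event records into
# flat, fixed-schema rows before sending them to the cloud.
# Field names are illustrative, not from any real system.

def flatten_event(event: dict) -> dict:
    """Turn one nested machine event into a single flat row."""
    return {
        "timestamp": event["timestamp"],
        "line_id": event["line"]["id"],
        "machine_id": event["machine"]["id"],
        "board_serial": event["board"]["serial"],
        # Promote each measurement to its own column so an algorithm
        # can consume it directly as a feature.
        **{f"meas_{k}": v for k, v in event.get("measurements", {}).items()},
    }

raw = {
    "timestamp": "2023-02-02T10:15:00Z",
    "line": {"id": "SMT-1"},
    "machine": {"id": "AOI-03"},
    "board": {"serial": "PCB-0042"},
    "measurements": {"cycle_time_s": 31.2, "defect_count": 1},
}

row = flatten_event(raw)
print(row["meas_cycle_time_s"])  # 31.2
```

Planning the use case first tells you which measurements belong in the schema at all, which is exactly the forethought the speakers recommend.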

Those successful Lighthouse projects are normally building from the bottom up – they're understanding the process, and taking the process know-how, what they're doing in manufacturing, and then translating that into AI tools that they can use.
It's very much about how we adjust the data and build it up in a proper format, so that at the end of the day we can immediately implement those algorithms back into the stream once we have them available. If you don't consider that in the beginning, you almost have to start over again… once you've figured out what the problem is – or the AI's figured out what the problem is – the question becomes: how do I actually tie that into my production facility in a way that meets all of my latency requirements and keeps my takt times running…

A lot of our customers are attempting to run their algorithms in the cloud. We train our algorithms in the cloud and build the model, but once we have the model, we download it to the edge and run it there – that way you can do more real-time processing of the algorithm to keep up with the line, and you're not creating a bottleneck by always having to go up to the cloud with everything!

We find that building trust in the algorithm is very important. We see that our customers are generally on a journey here, so they don't want a black box running somewhere where no one has any idea what's going on. They want feedback on what's going on in the algorithm. The AOI use case is a perfect example of that – we found that traction is so much better when we have an actual display where people can see what the results are.

Article: A Proven AI Algorithm to Reduce AOI False Calls and Manual Verification by Up to 60%

What they're trying to reduce is the false calls, and in that case the AI is showing where the false calls are and why it believes they are false calls. You keep the operator in the loop until the trust is there… and then you can hit the automate button! That has a much higher success rate than just having something running in the background. We can start off by building a dashboard to show what the AI model is suggesting, so you still have operator or customer control of things. That's how you can prove to the customer that yes, this thing really works!

Basically, everybody who's running an SMT line has nightmares with false calls right now. Everyone who has an AOI machine – that's all they ever talk about, these bloody false calls! We have this $200,000 AOI machine, but the operator still has to sit there telling it which calls are false. What we're finding is we can reduce that by about 60%, so that's 60% less operator time wasted filtering out these false calls.

By showing them what the AI is detecting as false, after a while the customer realizes: yes, this thing is always right, and it's catching all the obvious false calls! Then you can, as you said, hit the automate button – I call it the Easy Button – where now, instead of just showing it to the operator, we can actually send the message through our Co-NECT Platform bi-directionally to the machine and automate that. Those use cases are important for convincing clients and, to use your term, developing trust, so that they know this is a solution that has actually been validated and works… and has worked elsewhere.
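The dashboard-then-automate workflow described above can be sketched as a simple triage rule. This is an illustrative assumption, not the actual product logic: each AOI call carries a model confidence that it is a false call; in dashboard-only mode everything still goes to the operator, and once the "automate button" is pressed, only high-confidence false calls are auto-cleared back to the machine. The threshold and field names are invented for the example.

```python
# Hypothetical sketch of operator-in-the-loop false-call triage.
AUTO_CLEAR_THRESHOLD = 0.95  # only act when the model is very confident

def triage(calls: list, automate: bool) -> dict:
    """Route each AOI call either to auto-clear or to the operator queue."""
    cleared, to_operator = [], []
    for call in calls:
        if automate and call["p_false_call"] >= AUTO_CLEAR_THRESHOLD:
            cleared.append(call["id"])      # result sent back to the machine
        else:
            to_operator.append(call["id"])  # shown on the dashboard for review
    return {"auto_cleared": cleared, "operator_queue": to_operator}

calls = [
    {"id": "C1", "p_false_call": 0.99},
    {"id": "C2", "p_false_call": 0.97},
    {"id": "C3", "p_false_call": 0.40},
]

# Trust-building phase: the operator reviews everything, with model scores shown.
print(triage(calls, automate=False)["operator_queue"])  # ['C1', 'C2', 'C3']
# After the automate button: confident false calls are cleared automatically.
print(triage(calls, automate=True)["auto_cleared"])     # ['C1', 'C2']
```

Keeping the same scoring path in both modes is what lets the dashboard phase serve as validation for the automated phase.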
We have real use cases – a library designed for SMT manufacturing, so they're very specific use cases like that AOI one. We have cycle-time anomaly detection; we have correlations between, say, screen printer data and SPI data. Looking back over the years, we've talked to you in the past about our Factory Intelligence dashboard, for example, and all of our Factory Intelligence customers keep saying they want the data to be actionable. They don't want to just have something on a screen; they want to be able to predict something even before it happens.

And by comparing, say, data from an AOI machine to what's happening in the screen printer – for a human being to correlate that data and see that, okay, when this changes in the screen printer, I start having more defects at SPI – it's pretty hard to do that, especially in real time… The algorithm can take care of this, so it's actually enabling you to predict problems before you even know they're going to happen. It's really enabling us to prevent issues that you'd otherwise have to wait on – waiting for something to go wrong before reacting – we're going from reactive to prescriptive analytics with this model.
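The cross-machine correlation the speakers describe can be sketched with nothing more than a Pearson coefficient. This is a minimal illustration under invented data, not the actual iTAC algorithm: given a drifting screen printer parameter (here, a hypothetical pressure reading) and the SPI defect rate on the same boards, even a basic correlation measure surfaces the relationship a human would struggle to track in real time.

```python
import math

# Hypothetical sketch: correlating a screen printer parameter with the
# SPI defect rate. The readings below are invented for illustration.

def pearson(xs: list, ys: list) -> float:
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pressure = [5.0, 5.1, 5.3, 5.6, 5.9, 6.2]           # printer readings drifting up
defect_rate = [0.01, 0.01, 0.02, 0.03, 0.05, 0.06]  # SPI defects per board rising

r = pearson(pressure, defect_rate)
print(round(r, 2))  # 0.99 – strongly correlated
```

In a real deployment this would run continuously over streaming windows of data, flagging the drift before the defect rate crosses an alarm threshold – the "prescriptive" step the speakers contrast with reacting after the fact.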

… if we talk about why so many cases have failed, it's that it's so difficult to get to the point where you can actually plug something into the production line, know that it's going to work, and get a return on your efforts. We try to accelerate that process for our customers by, number one, offering them a platform that will support them throughout their long-term digitization strategy, while at the same time offering low-hanging fruit that you can plug in now – off you go, and you've got immediate management buy-in!

… it's tools like this, and it's AI, that allow us to move toward that ultimate goal of the smarter factory. As Martin said, it's a journey – it's not like a light switch – but these are pragmatic use cases that we can apply today to get you closer to that lights-out factory in the future.