Scientists and engineers are continually creating new materials with special properties that can be used for 3D printing, but developing them can be a challenging and expensive task.
To find the parameters that consistently produce a new material's best print quality, an expert operator often must run manual trial-and-error experiments, sometimes making thousands of prints. These parameters include the printing speed and how much material the printer deposits.
MIT researchers have now used artificial intelligence to streamline this process. They developed a machine-learning system that uses computer vision to watch the printing process and correct material-handling errors in real time.
They used simulations to teach a neural network how to adjust printing parameters to minimize error, then applied that controller to a real 3D printer.
The approach avoids printing tens of thousands or even millions of real objects to train the neural network. It could also make it easier for engineers to incorporate novel materials into their prints, enabling them to create products with special electrical or chemical properties, and help technicians adjust the printing process on the fly if unexpected changes occur in the material or the environment.
Because of the extensive trial and error involved, choosing the optimal parameters for a digital manufacturing process can be one of the most expensive steps in the workflow. And once a technician finds a combination that works well, those parameters are only optimal for that one particular situation. The operator has no information about how the material will behave in other settings, on different hardware, or if a fresh batch has different properties.
Using an ML system brings its own difficulties. The researchers first had to measure, in real time, what was actually happening on the printer.
To do this, they developed a machine-vision setup with two cameras aimed at the nozzle of the 3D printer. The system illuminates the material as it is deposited and estimates its thickness from how much light passes through.
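As an illustration of how such a transmission measurement might be turned into a thickness estimate, the sketch below assumes a simple Beer-Lambert attenuation model, in which transmitted light decays exponentially with material thickness. The function name and the attenuation coefficient are hypothetical, not taken from the researchers' setup:

```python
import numpy as np

def estimate_thickness(transmitted, incident, attenuation=2.0):
    """Estimate deposited-material thickness from backlight transmission.

    Assumes a Beer-Lambert-style model (a simplification): the fraction
    of light passing through the material decays exponentially with its
    thickness. `attenuation` is a hypothetical per-millimeter coefficient
    that would need to be calibrated for each material.
    """
    transmitted = np.asarray(transmitted, dtype=float)
    incident = np.asarray(incident, dtype=float)
    # Clip to avoid taking the log of zero where a pixel is fully blocked.
    ratio = np.clip(transmitted / incident, 1e-6, 1.0)
    return -np.log(ratio) / attenuation
```

In practice the measurement would run per pixel over the camera image, giving a thickness map of the freshly deposited material.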
Training a neural-network controller to understand this manufacturing process is a data-intensive task that would require making millions of prints.
Their controller was trained using reinforcement learning, which teaches a model through trial and error, rewarding it when it performs well. The model had to select printing parameters that would produce a particular object in a simulated environment. Given the expected result, the model was rewarded when its chosen parameters minimized the difference between its print and that target.
An "error" in this context means that the model either deposited too much material, filling in spaces that should have remained empty, or too little, leaving spaces that needed to be filled in.
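A minimal sketch of such a reward signal, assuming the print and its target are represented as 2D thickness maps (the representation and the error metric are illustrative, not the researchers' exact formulation):

```python
import numpy as np

def deposition_reward(printed, target):
    """Reward signal for a printing controller (illustrative only).

    `printed` and `target` are 2D arrays of material thickness. The reward
    is the negative mean absolute error, so it is highest (zero) when the
    print exactly matches the target. Over-extrusion (too much material)
    and under-extrusion (gaps left unfilled) are penalized equally.
    """
    printed = np.asarray(printed, dtype=float)
    target = np.asarray(target, dtype=float)
    return -np.mean(np.abs(printed - target))
```

During training, the reinforcement-learning loop would call a function like this after each simulated print to score the parameters the model chose.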
The real world, however, is messier than a simulation. In practice, conditions typically drift because of small fluctuations and noise in the printing process. So the researchers simulated that noise during training, which produced more accurate results.
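One common way to approximate such process noise in simulation is to randomly perturb the commanded behavior during training. The sketch below is a generic domain-randomization example, not the researchers' exact noise model; the noise scales are made up for illustration:

```python
import numpy as np

def noisy_deposition(commanded, rng, drift_scale=0.02, jitter_scale=0.05):
    """Perturb the commanded material flow to mimic real-process noise.

    A slowly varying drift term (sampled once per episode) models
    batch-to-batch material variation, while per-step jitter models
    high-frequency fluctuations. Both scales are illustrative.
    """
    commanded = np.asarray(commanded, dtype=float)
    drift = rng.normal(0.0, drift_scale)                      # one offset per episode
    jitter = rng.normal(0.0, jitter_scale, commanded.shape)   # per-step noise
    return commanded * (1.0 + drift) + jitter
```

Training against many such randomized episodes encourages the controller to learn corrections that transfer to a physical printer, where the true noise is never known exactly.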
When the controller was put through its paces, it printed objects more accurately than any other control method the team evaluated. It worked particularly well when printing infill, the material that fills an object's interior. While some other controllers deposited so much material that the printed object bulged upward, the researchers' controller adjusted the printing path so the object stayed level.
The control policy can even learn how materials spread after being deposited and adjust its parameters accordingly.
Now that they have demonstrated the effectiveness of this technique for 3D printing, the researchers intend to develop controllers for other manufacturing processes. They would also like to see how the approach can be adapted to scenarios with several layers of material, or several materials printed at once. In addition, their approach assumed each material has a fixed viscosity ("syrupiness"), but a future version could use AI to sense and account for viscosity in real time.