Monitoring method of injection molding process based on screw position and pressure curve of injection molding machine
1 Dimensionality reduction and monitoring model of injection molding machine curve
The dimensionality reduction and monitoring model for injection molding machine curves is shown in Figure 1. First, the original screw pressure and position curves are acquired from the injection molding machine. After preprocessing, four methods are used to reduce the dimensionality of the data, which is then input into a neural network to perform plastic raw material monitoring, mold temperature monitoring and molded product quality prediction.
Figure 1 Monitoring model of injection molding process
1.1 Data collection and preprocessing
The main control variables of the injection molding process include melt temperature, injection position, injection speed and injection pressure. In theory, therefore, the melt temperature curve and the screw position, speed and pressure curves should all be collected. However, the temperature sensor of the injection molding machine is installed on the outer wall of the barrel, and the low thermal conductivity of the plastic melt causes a large lag in the measured temperature data, so the temperature curve is not a real-time monitoring curve. At the same time, the temperature, pressure and volume of the plastic melt must satisfy the PVT equation of state, so the information in the temperature curve is indirectly reflected by the screw position (volume) and pressure curves. In addition, the screw speed is the first derivative of its position. The screw position and pressure curves are therefore selected as the monitoring curves of the injection molding machine.
To prevent differences in the value ranges of the monitored variables from influencing the subsequent analysis, the data need to be normalized during preprocessing. Data normalization is an important step before modeling and includes sample scale normalization, per-sample mean subtraction and feature standardization. Per-sample mean subtraction is mainly used for stationary data sets, in which the statistical properties of each data dimension are the same; since the screw position and pressure differ at different times during the injection molding process, this method is not applicable here. Feature standardization equalizes the mean and variance of each data dimension. Commonly used standardization methods include Z standardization, maximum-minimum standardization and Log function standardization. Given the characteristics of the data, Z standardization is adopted: this deviation-based standardization, grounded in statistical theory, makes the processed data conform to a standard normal distribution, that is, mean 0 and standard deviation 1. The processing steps are as follows.
(1) Centering, that is, removing the mean value to eliminate the influence of self-variation and absolute magnitude:

x1 = x − μ,  μ = (1/N)∑(i=1,…,N) xi

In formula: x——original screw position or pressure; x1——data after centering; N——number of sampling points of the curve; μ——mean of the sampled values.
(2) Dimensionless processing:

x11 = x1/δ,  δ = √((1/N)∑(i=1,…,N)(xi − μ)²)

In formula: x11——data after dimensionless processing; δ——root mean square error of the sampled values.
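These two steps amount to standard Z standardization of each curve. A minimal sketch in Python, with a randomly generated placeholder curve standing in for a measured one:

```python
import numpy as np

def z_standardize(curve: np.ndarray) -> np.ndarray:
    """Center the curve (remove mean mu), then divide by the root
    mean square error delta, giving mean 0 and standard deviation 1."""
    mu = curve.mean()                  # step (1): centering
    x1 = curve - mu
    delta = np.sqrt(np.mean(x1 ** 2))  # step (2): RMS of centered samples
    return x1 / delta

# hypothetical curve of 930 samples (e.g. pressure readings)
curve = np.random.default_rng(0).normal(50.0, 5.0, size=930)
x11 = z_standardize(curve)
print(x11.mean(), x11.std())           # approximately 0 and 1
```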
1.2 Data dimensionality reduction method
The position and pressure of the injection molding machine screw change with time within each molding cycle. After expansion over time and batches, the data dimension is high and the cost of direct computation is large. Dimensionality reduction reproduces the original data with lower-dimensional data by identifying the important factors in the high-dimensional data or discovering hidden relationships between variables, while retaining as much of the original information as possible. Compared with the original data, the reduced data not only has fewer dimensions but is also more likely to reveal hidden relationships that the original data cannot. One linear and three nonlinear dimensionality reduction methods are analyzed below.
(1) Principal component analysis method. Principal component analysis (PCA) is a multivariate statistical method that projects a high-dimensional data space onto a low-dimensional principal component space through a linear transformation and selects a small number of important variables. It removes redundant information from the original data and is an effective data compression and information extraction method. PCA operates on a two-dimensional data matrix X (N × M), where N is the number of data samples and M is the data dimension, and yields the score vectors, loading vectors and eigenvalues, namely

X = TP^T

In formula: T——score vectors; P——loading vectors.
The score vectors and loading vectors are orthogonal and of unit length. The inner product of score vector tj corresponds to the eigenvalue λj of the covariance matrix X^TX, and the loading vector pj is the eigenvector belonging to λj. The variance information of X is therefore expressed by the eigenvalues λ, and the eigenvalues beyond the first few account for only a small proportion. Provided that the main information of the data is not lost, the first A principal components can be selected to construct an approximation X̂ that represents the original data X, that is,

X̂ = T_A P_A^T

In formula: T_A——score matrix consisting of the first A columns of T; P_A——loading matrix consisting of the first A columns of P.
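As an illustrative sketch (not the authors' implementation), the selection of the first A principal components can be written as follows, assuming the preprocessed curves are stacked into a matrix X of shape N × M and computing the decomposition via SVD rather than an explicit covariance matrix:

```python
import numpy as np

def pca_reduce(X: np.ndarray, A: int):
    """Return score vectors T_A, loading vectors P_A, and the rank-A
    reconstruction of X from the first A principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean                                  # center each dimension
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:A].T                                   # loading vectors P_A (M x A)
    T = Xc @ P                                     # score vectors T_A (N x A)
    X_hat = T @ P.T + mean                         # approximation of X
    return T, P, X_hat

# e.g. reduce 100 hypothetical curves of 930 points to 20 components
X = np.random.default_rng(0).normal(size=(100, 930))
T, P, X_hat = pca_reduce(X, A=20)
```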
(2) Statistical analysis method. The statistical analysis (SP) method is an information processing, compression and extraction method that uses statistical quantities to reduce the dimension of the data and reproduce it. The statistical variables include the first-order statistic (mean μ), second-order statistic (variance σ²), third-order statistic (skewness γ) and fourth-order statistic (kurtosis k), defined as follows:

μ = (1/N)∑xi;  σ² = (1/N)∑(xi − μ)²;  γ = (1/N)∑((xi − μ)/σ)³;  k = (1/N)∑((xi − μ)/σ)⁴
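A minimal sketch of extracting these four statistics from one curve, using scipy's standard definitions (kurtosis with fisher=False so that a normal curve gives k ≈ 3); this is illustrative, not the authors' code:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def curve_statistics(curve: np.ndarray) -> np.ndarray:
    """Four statistical features of one sampled curve."""
    return np.array([
        curve.mean(),                   # first-order: mean
        curve.var(),                    # second-order: variance
        skew(curve),                    # third-order: skewness
        kurtosis(curve, fisher=False),  # fourth-order: kurtosis
    ])

curve = np.random.default_rng(0).normal(size=930)  # placeholder curve
print(curve_statistics(curve))
```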
(3) Laplace mapping method. The Laplace mapping (Laplacian eigenmap, LE) algorithm seeks low-dimensional data that preserve the local properties of the manifold data. The low-dimensional reproduction is realized through the distances between neighboring points: the distances between each data point and its K nearest neighbors are minimized by means of weights, so that the closer two points are, the greater their impact on the cost function. Using sparse spectral theory, the cost function is formulated as an eigenvalue problem, and the algorithm consists of the following steps.
1) Construct an adjacency graph G, using either the ε-neighborhood method or the K nearest neighbor method; here the K nearest neighbor method is used.
2) Define the nearest-neighbor weight matrix W, using either the heat kernel method or the simple connection method. Here the heat kernel method is adopted: if xi and xj are adjacent, then

Wij = exp(−‖xi − xj‖²/t)

In formula: t——width of the heat kernel, whose value is chosen to match the neighborhood size K.
3) Construct the Laplacian matrix L = D − W and minimize the feature mapping error, which is equivalent to computing the smallest eigenvectors of the generalized eigenvalue problem

Lu = λDu

In formula: D——diagonal matrix with Dii = ∑j Wij.
The eigenvectors u1, u2, …, ud corresponding to the d smallest nonzero eigenvalues of L (the trivial zero eigenvalue is discarded) constitute the low-dimensional embedding T = [u1, u2, …, ud]^T. This method converts dimensionality reduction and feature extraction into the solution of matrix eigenvalues and eigenvectors; the process is simple and requires no iteration, which reduces the amount and time of computation.
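These steps might be sketched as follows, under the definitions above (K nearest neighbors, heat kernel weights, generalized eigenproblem); the parameter names are illustrative:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, d=2, K=10, t=1.0):
    # 1) adjacency graph G by the K nearest neighbor method (symmetrized)
    dist = kneighbors_graph(X, n_neighbors=K, mode="distance").toarray()
    dist = np.maximum(dist, dist.T)
    # 2) heat kernel weights W_ij = exp(-||xi - xj||^2 / t) on graph edges
    W = np.where(dist > 0, np.exp(-dist ** 2 / t), 0.0)
    # 3) L = D - W; solve the generalized eigenproblem L u = lambda D u
    D = np.diag(W.sum(axis=1))
    _, vecs = eigh(D - W, D)          # eigenvalues in ascending order
    # discard the trivial zero eigenvalue, keep the next d eigenvectors
    return vecs[:, 1:d + 1]

# e.g. embed 100 placeholder curves into 2 dimensions
Y = laplacian_eigenmaps(np.random.default_rng(0).random((100, 30)), d=2)
```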
(4) Diffusion coefficient map method. The diffusion coefficient map (diffusion map, DM) method, like the Laplace mapping method, is a nonlinear dimensionality reduction method that works by finding the hidden low-dimensional spatial structure of the data. Unlike the sparse spectral analysis of a proximity graph used by the Laplace mapping method, the diffusion coefficient map method performs a full spectral analysis based on the diffusion distance while preserving local properties. Its feature dimension reduction steps are as follows.
1) Normalize the data with the following formula so that it falls in [0, 1]:

x′ = (x − xmin)/(xmax − xmin)
2) Compute the data graph, using a Gaussian kernel function for the connection weights:

Wij = exp(−‖xi − xj‖²/(2σ²))

In formula: σ——Gaussian kernel width (variance).
3) Normalize each row of the matrix W by its sum to obtain P, the Markov matrix of forward transition probabilities, which describes the probability of one data point in the data set transferring to another:

Pij = Wij/∑k Wik
After t forward steps the matrix P(t) = P^t is obtained, and the diffusion distance can be defined from its row vectors:

D(t)(xi, xj)² = ∑k (P(t)ik − P(t)jk)²/ψ(xk)

In formula: D(t)(xi, xj)——diffusion distance; ψ(xk)——density term that attributes more weight to the high-density parts of the graph.
4) Use spectral theory to obtain the low-dimensional reproduction data Y that preserves the diffusion distances. Since the graph is fully connected, the largest eigenvalue of P(t) is trivial (λ1 = 1) and is discarded. The reproduction data Y consists of the next d principal eigenvectors scaled by their eigenvalues, that is,

Y = [λ2v2, λ3v3, …, λd+1vd+1]
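A compact sketch of steps 2) to 4), assuming the curves have already been min-max normalized; sigma and t (the kernel width and the number of diffusion steps) are illustrative parameters:

```python
import numpy as np

def diffusion_map(X, d=2, sigma=1.0, t=1):
    # 2) Gaussian kernel weights W_ij = exp(-||xi - xj||^2 / (2 sigma^2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    # 3) row-normalize W into the Markov matrix P of forward transitions
    P = W / W.sum(axis=1, keepdims=True)
    # 4) spectral decomposition of P; the largest eigenvalue (= 1) is trivial
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # keep the next d eigenvectors, scaled by lambda^t (t diffusion steps)
    return vecs[:, 1:d + 1] * (vals[1:d + 1] ** t)

# e.g. embed 100 normalized placeholder curves into 2 dimensions
Y = diffusion_map(np.random.default_rng(0).random((100, 30)), d=2, sigma=0.5)
```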
1.3 Fault monitoring and quality prediction model
Features extracted by the different dimensionality reduction methods can be used for fault monitoring and quality prediction with models such as neural networks and support vector machines. Neural network theory is mature; support vector machines address the slow convergence and local minima of neural networks and have advantages in classification problems. Here, the neural network is adopted as the modeling method.
The main purpose of fault monitoring is to diagnose faults in the raw material and the mold temperature, using a classifier for monitoring. The more layers and neurons a neural network has, the higher its training accuracy, but its generalization ability decreases. To achieve good results for this application, the input vector is the dimensionality-reduced data of about 20 dimensions and the output has 3 categories. According to Kolmogorov's theorem, a three-layer neural network structure suffices. Experiments show that the number of hidden neurons stabilizes at about 10, and the classification layer uses a Softmax classifier. The fault monitoring model is shown in Figure 2.
Figure 2 Neural network model for fault monitoring
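As an illustrative sketch (not the authors' implementation), a three-layer network of this shape can be set up with scikit-learn's MLPClassifier, which applies a softmax output for multiclass problems; the reduced features and fault labels below are random placeholders, not the experimental data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_reduced = rng.normal(size=(500, 20))   # ~20-dimensional reduced features
labels = rng.integers(0, 3, size=500)    # 3 categories (e.g. mold temperatures)

# one hidden layer of 10 neurons; multiclass output uses softmax
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_reduced, labels)
print(clf.predict_proba(X_reduced[:3]))  # class probabilities per sample
```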
The quality prediction model ultimately outputs a specific number rather than a category; that is, where fault detection produces a discrete output, quality prediction produces a continuous one. The number of layers is the same as designed above, and the hidden units are consistent with those of the fault monitoring model. Unlike the Softmax classifier at the end of the fault monitoring network, quality prediction ends with a fitting (regression) layer, as sketched below.
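The regression counterpart, again as an illustrative sketch with placeholder data: the hidden layer is unchanged and the softmax output is replaced by a linear fitting output:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_reduced = rng.normal(size=(500, 20))                     # placeholder features
quality = X_reduced @ rng.normal(size=20) + rng.normal(0, 0.1, size=500)

# same hidden layer as the classifier; output unit is linear (identity)
reg = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
reg.fit(X_reduced, quality)
print(reg.predict(X_reduced[:3]))                          # continuous outputs
```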
2 Experimental Design
The injection molding process can be divided into four stages: plasticization, injection, pressure holding and cooling, which together determine the molding quality of the final product. Since the result of plasticization is reflected in the process control curves of the injection and pressure holding stages, the plasticization stage can be omitted. The distribution and variation of mold temperature affect the flow resistance of the melt, so changes in the cooling stage are also reflected in the control curves of the injection and pressure holding stages. For these two reasons, the test samples the screw displacement and pressure curves of the injection and pressure holding stages. The test uses a 900 kN servo hydraulic injection molding machine; screw displacement is measured by a grating ruler, the pressure is the hydraulic system pressure, and the sampling period is 3 ms. The plastic is polypropylene PPH-T03, and the test product is a box-shaped part of 68 mm × 60 mm × 41 mm with an average wall thickness of 2 mm; the molding quality of the product is used as the evaluation index.
In the actual injection process, even if the process parameters remain unchanged, the molding quality of the product fluctuates with environmental and working conditions. The working condition variables are varied deliberately, and the influence of non-controlled variables on the quality of the molded products is reduced by repeated tests. The test conditions are shown in Table 1.
Table 1 Design of mold temperature and raw material factors
| No. | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Mold cooling temperature/℃ | 40 | 60 | 80 | 40 | 60 | 80 |
| Recycled material | No | No | No | Yes | Yes | Yes |
| Repeat times | 80 | 99 | 81 | 82 | 82 | 80 |
The fault monitoring model is evaluated by the mean square error MSE and the percent error E%. The smaller the MSE, the better the classification; 0 means no error. E% is the proportion of misclassified samples: 0 means classification is completely correct and 100 means every sample is misclassified. The quality prediction model is evaluated by the MSE and the regression coefficient R. R represents the correlation between the measured output values and the target values; an R value of 1 indicates perfect correlation and 0 indicates a random relationship.
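These three measures are standard; a minimal sketch in Python (the function names are illustrative):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error; 0 means a perfect match."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def percent_error(labels_true, labels_pred):
    """E%: percentage of misclassified samples (0 = all correct)."""
    t, p = np.asarray(labels_true), np.asarray(labels_pred)
    return 100.0 * np.mean(t != p)

def regression_R(y_true, y_pred):
    """Correlation between measured outputs and targets (1 = perfect)."""
    return np.corrcoef(y_true, y_pred)[0, 1]

print(percent_error([1, 0, 1, 1], [1, 0, 0, 1]))        # 25.0: 1 of 4 wrong
print(regression_R([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))   # close to 1
```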
3 Results and Discussion
3.1 Fault monitoring results and discussion
Based on the control curves, it is judged whether the raw material and mold temperature of a product belong to the normal category, thereby achieving fault monitoring. In actual production, although raw material is controlled directly at the feeding point, raw and recycled materials are not strictly separated and are easily mixed, so the type of raw material must be judged from the production process curves to ensure normal production. In the test, raw material is labeled 1 and recycled material 0 for classification. The test results are shown in Table 2.
Table 2 Statistical table of classification results of raw materials and recycled materials

| Dimensionality reduction method | Train MSE (×10⁻³) | Train E% | Verify MSE (×10⁻³) | Verify E% | Test MSE (×10⁻³) | Test E% | Number of neurons |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Raw data | 3.86 | 0.284 | 14.93 | 1.345 | 25.13 | 3.947 | 20 |
| PCA | 12.07 | 1.136 | 6.98 | 0 | 13.15 | 1.132 | 10 |
| SP | 7.53 | 1.136 | 2.46 | 0 | 13.55 | 1.316 | 5 |
| LE | 16.61 | 2.273 | 13.78 | 1.316 | 5.67 | 0 | 10 |
| DM | 0.27 | 0 | 0.12 | 0 | 0.37 | 0 | 10 |
As Table 2 shows, compared with the training mean square error and percent error of the original data, the dimensionality-reduced data gives a clear improvement on the validation and test sets. The LE and DM methods achieve zero percent error on the test set, and the percent error of DM is 0 for training, validation and testing, i.e. classification is 100% correct. In terms of network size, the statistical analysis model uses only 5 hidden neurons while the original data model uses 20; the corresponding input dimensions are 18 for statistical analysis and 930 for the original data. The forward pass for the original data thus requires 930 × 20 multiplications (input values) plus 20 activation evaluations, far more than the 18 × 5 + 5 = 95 of the statistical analysis model, so the raw data greatly increases the amount of computation. Feature extraction can therefore simplify the monitoring model.
Mold temperatures of 40, 60 and 80 ℃ are recorded as categories 1, 2 and 3, respectively, and the classification results for the different dimensionality reduction methods are shown in Table 3. Compared with the SP method, the original data gives a smaller mean square error MSE and percent error E% on the training set, which may be because the SP method only retains the statistical characteristics of the original data and does not directly extract information useful for classification. The PCA method is a global linear feature extraction method; compared with the original data, the PCA model has better generalization ability, that is, with approximately equal percent errors on the training and validation sets it shows a smaller error on the test set. LE and the DM diffusion coefficient map are both nonlinear dimensionality reduction methods, and the data obtained from them can build better models.
Table 3 Statistics of different mold temperature classification results
| Dimensionality reduction method | Train MSE (×10⁻³) | Train E% | Verify MSE (×10⁻³) | Verify E% | Test MSE (×10⁻³) | Test E% | Number of neurons |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Raw data | 0.11 | 0 | 51.64 | 8.108 | 117.60 | 21.62 | 10 |
| PCA | 4.27 | 0 | 18.67 | 2.703 | 29.04 | 5.405 | 10 |
| SP | 48.63 | 7.645 | 44.04 | 5.405 | 22.77 | 2.703 | 10 |
| LE | 67.50 | 10.588 | 145.90 | 32.43 | 71.19 | 10.81 | 10 |
| DM | 21.44 | 1.765 | 37.15 | 2.703 | 26.83 | 0 | 10 |
3.2 Quality prediction results and discussion
The molding quality distribution of the test product is shown in Figure 3. The light-colored line is the quality curve, the horizontal lines are the average quality of each process state, the vertical lines separate different materials or mold temperatures, and the dark curve is the average quality of all samples. Figure 3 shows that, owing to the process differences (mold temperature) and material states (raw and recycled material), the overall quality of the molded products fluctuates strongly; the distribution is regular, but it is difficult to relate it intuitively to the screw position and pressure curves. Taking the mean square error as the criterion, on the training set the original data gives the smallest error, followed by DM, SP and LE, and finally PCA; this shows that the model trained on the original data matches the training data best, which is also consistent with the regression coefficients. On the validation and test sets the situation is reversed: the regression coefficient of the model trained on the original data is only about 0.8 on new data, whereas the models trained on reduced data generally reach about 0.9 and also have smaller mean square errors, indicating that the data obtained by feature extraction are more representative. When the data from the SP and DM methods are combined as a mixed input, Table 4 shows that an even better fit is obtained.
Figure 3 Product quality under different working conditions
Table 4 Statistical table of product quality prediction results under different working conditions
| Dimensionality reduction method | Train MSE (×10⁻³) | Train R | Verify MSE (×10⁻³) | Verify R | Test MSE (×10⁻³) | Test R |
| --- | --- | --- | --- | --- | --- | --- |
| Raw data | 0.054 | 0.996 | 3.940 | 0.769 | 3.832 | 0.805 |
| PCA | 1.155 | 0.918 | 2.799 | 0.833 | 2.431 | 0.828 |
| SP | 0.841 | 0.941 | 1.027 | 0.933 | 1.429 | 0.912 |
| LE | 0.996 | 0.930 | 2.139 | 0.860 | 2.286 | 0.851 |
| DM | 0.663 | 0.663 | 2.204 | 0.862 | 1.809 | 0.914 |
| SP+DM | 0.653 | 0.653 | 1.195 | 0.925 | 1.232 | 0.917 |
Figures 4 and 5 compare the regression results of the neural network quality prediction for the original curve data and for the mixed data reduced by the SP and DM methods, where the line Y = T represents a perfect fit: the closer the points are to this line, the better the fit. Figures 4 and 5 show that the regression coefficient of the original data is nearly 0.99 on the training set but only about 0.77 and 0.81 on the validation and test sets. There are two possible reasons for this: ① the data are over-fitted, which can be improved by adjusting the regularization parameters or the number of neurons; ② the data are not representative, that is, the model has not found the inherent relationships in the data. Adjusting the number of neurons experimentally did not improve the results, indicating that the latter is the cause. After dimensionality reduction by the SP and DM methods, the predicted product quality shows a high regression coefficient (>0.9) not only on the training set but also on the validation and test sets. This indicates that dimensionality reduction by the SP and DM methods extracts the high-dimensional, nonlinear and strongly coupled relationships inherent in the original data while discarding a large number of redundant parameters; the extracted features not only make the quality prediction model run faster but also improve its accuracy.
Figure 4 Raw data
Figure 5 SP and DM methods for dimensionality reduction data