
Available online at www.sciencedirect.com

ScienceDirect
Procedia CIRP 12 (2013) 252-257

8th CIRP Conference on Intelligent Computation in Manufacturing Engineering

Image data processing via neural networks for tool wear prediction
D.M. D'Addona*, R. Teti
Fraunhofer Joint Laboratory of Excellence on Advanced Production Technology (Fh J_LEAPT), Dept. of Materials and Production Engineering, University of Naples Federico II, Piazzale Tecchio 80, 80125 Naples, Italy
* Corresponding author. Tel.: +39 0812399231; fax: +39 0817682362. E-mail address: daddona@unina.it.

Abstract

In manufacturing systems, one of the most important issues is to estimate the remaining cutting tool life under given cutting conditions as accurately as possible. In fact, machining efficiency is strongly influenced by the kind of tool selected for each cutting process, and one of the most complex problems in tool selection is precisely this estimation of tool life. As the quality of the cutting tool is directly related to the quality of the product, the level of tool wear should be kept under control during machining operations. In order to monitor the tool wear development during machining processes, the interface chosen between the working procedure and the computer was a digital image of the cutting tool detected by an optical sensor. Images, however, are not homogeneous. Images with standard size and pixel density were therefore produced by elaborating the tool image files obtained during machining tests. This paper focuses on a procedure for the processing of cutting tool images detected during such tests. A methodology to design and optimize artificial neural networks for automatic tool wear recognition using standard images of the cutting tool is proposed.
© 2013 The Authors. Published by Elsevier B.V.
Selection and peer-review under responsibility of Professor Roberto Teti.
Keywords: Cutting tool wear; image processing; neural networks

1. Introduction

The technology of metal cutting is in permanent evolution and is a field of much interest for computer applications. Cutting tool wear detection and monitoring is a fundamental aspect of the evolution of production techniques. As the quality of the cutting tool is directly related to the quality of the product, the level of tool wear should be kept under control during machining operations. The recognition of the general condition of a cutting tool plays a major role in the optimisation of machining processes, since accurate prediction of the exact moment for tool change results in many cases in an effective economy: a longer cutting tool life can be achieved, tolerances can be kept under control, and rejection of pieces due to deterioration of the tool condition can be prevented [1]. Generally, two procedures are followed for cutting tool replacement, depending on the value of the workpiece. In a large production, where the single workpiece value is low compared to the cost of tool change and the tool value, complete tool degradation is reached; the tool is then changed and the workpiece is rejected. On the other hand, in a production where the cost of the workpiece is high, or where the workpiece is in its finishing stage, the tool is replaced before it produces unacceptable results, according to a preventive approach. In order to control the tool wear development, the interface chosen between the real working procedure and the computer was a digital image of the cutting tool detected by an optical sensor. In this paper, cutting tool images with standard size and pixel density were produced by processing image files detected during machining tests. These standard images of the cutting tool were utilized for intelligent processing of image data aimed at automatic and real-time tool wear evaluation. Neural networks (NN) were designed to produce a mapping from image data input to tool wear characterization in terms of crater wear width.

2212-8271 © 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of Professor Roberto Teti. doi:10.1016/j.procir.2013.09.044


2. Machining tests

The work material used during the quasi-orthogonal cutting tests was AISI 1045 steel and the cutting tool was a P3 tungsten carbide insert. The cutting parameter values are shown in Table 1. Using different parameter combinations, 14 cutting conditions were selected, namely "adh", "aeh", "aei", "afh", "afi", "bdi", "beh", "bfh", "bfi", "ldi", "leh", "lei", "lfh", "lfi". Every machining test was carried out with a new cutting tool until tool failure. More details of the machining tests can be found in [2].

3. Cutting tool image detection

The machining tests were interrupted after each minute of cutting and two types of images of the cutting tool were detected by a video camera: the flank wear (type "l", Fig. 1a) and the crater wear (type "c", Fig. 1b). The number of cutting tool images for each cutting test ranges from 6 to 122, depending on the test duration. A total of 717 digital tool images were detected and stored as image files. In order to allow for the measurement of the cutting tool wear, each image includes the worn zone of the tool and a reference copper wire with a known thickness of 0.25 mm. Images, however, are not homogeneous: contrast, definition, size and position of the cutting tool vary from one image to another. The standardization procedure for the cutting tool images is reported in [3].
Fig. 2. NN training steps.

Fig. 1. (a) Raw flank wear image. File: bfh190l.tif. (b) Raw crater wear image. File: bfh190c.tif.

4. Neural Network analysis

In order to predict the residual tool life before the wear limit, in terms of crater wear width, backpropagation neural networks (BP NN) were utilized to produce a mapping from input vectors to output values (Fig. 2) [4-7]. The first step in training a BP NN is to choose its parameters and main characteristics, such as the number of layers and of input and output neurons, the transfer function, and the input and output arrays or matrices.
Table 1. Cutting parameters for quasi-orthogonal tests.

  Cutting speed (m/min):  a = 200   b = 150   l = 250
  Feed rate (mm/rev):     d = 0.06  e = 0.12  f = 0.19
  Depth of cut (mm):      h = 1.0   i = 1.5
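The three-letter condition codes (e.g. "aeh") combine one letter from each row of Table 1. As an illustration only, the decoding can be sketched as:

```python
# Decode the three-letter cutting-condition codes of Table 1
# (e.g. "aeh") into (cutting speed, feed rate, depth of cut).
CUTTING_SPEED = {"a": 200, "b": 150, "l": 250}     # m/min
FEED_RATE     = {"d": 0.06, "e": 0.12, "f": 0.19}  # mm/rev
DEPTH_OF_CUT  = {"h": 1.0, "i": 1.5}               # mm

def decode_condition(code):
    """Return (speed, feed, depth) for a condition code such as 'aeh'."""
    s, f, d = code
    return CUTTING_SPEED[s], FEED_RATE[f], DEPTH_OF_CUT[d]

print(decode_condition("aeh"))  # (200, 0.12, 1.0)
```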

The BP NN were created using the newcf MATLAB function:

Net = newcf(p, t, [S1 S2 ... S(N-1)], {TF1 TF2 ... TFN}, BTF, BLF, PF, IPF, OPF, DDF);

where:
newcf: creates a cascade-forward backpropagation network;
p: transposed input matrix or array: R x Q1 matrix of Q1 sample R-element input vectors;
t: transposed target matrix or array: SN x Q2 matrix of Q2 sample SN-element target vectors;
Si: size of the ith layer, for N-1 layers;
TFi: transfer function of the ith layer;
BTF: backpropagation network training function;
BLF: backpropagation weight/bias learning function;
PF: performance function;
IPF: row cell array of input processing functions;
OPF: row cell array of output processing functions;
DDF: data division function.
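For readers without MATLAB, the essence of such a network can be sketched in plain NumPy. This is a minimal one-hidden-layer illustration, not the paper's implementation: the layer sizes and learning rate are arbitrary, the momentum term is omitted for brevity, and only the sigmoid transfer function and the [-1, +1] weight initialization follow the text.

```python
import numpy as np

def sigmoid(x):
    # Logistic transfer function f(x) = 1 / (1 + e^(-x)).
    return 1.0 / (1.0 + np.exp(-x))

class TinyBPNet:
    """Minimal one-hidden-layer backpropagation network (illustrative)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Weights and thresholds randomly initialized in [-1, +1].
        self.W1 = rng.uniform(-1, 1, (n_hidden, n_in))
        self.b1 = rng.uniform(-1, 1, n_hidden)
        self.W2 = rng.uniform(-1, 1, (n_out, n_hidden))
        self.b2 = rng.uniform(-1, 1, n_out)

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)
        self.y = sigmoid(self.W2 @ self.h + self.b2)
        return self.y

    def train_step(self, x, t, lr=0.3):
        # One gradient-descent update on the squared error.
        y = self.forward(x)
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (self.W2.T @ delta_out) * self.h * (1 - self.h)
        self.W2 -= lr * np.outer(delta_out, self.h)
        self.b2 -= lr * delta_out
        self.W1 -= lr * np.outer(delta_hid, x)
        self.b1 -= lr * delta_hid
        return float(np.sum((t - y) ** 2))

net = TinyBPNet(n_in=4, n_hidden=3, n_out=1)
x, t = np.ones(4), np.array([0.2])
errs = [net.train_step(x, t) for _ in range(200)]
```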

254

D.M. DAddona and R. Teti / Procedia CIRP 12 (2013) 252 257

To train the BP NN, the following commands and parameter values were utilized:

>> net.trainParam.epochs = 100;
>> net.trainParam.lr = 0.3;
>> net.trainParam.mc = 0.6;

For the training phase:

>> net = train(net, p, t);

To simulate the BP NN, the following command was used:

>> Y = sim(net, ps'); e = (Y - ts)

where e [mm] is the error between the simulated output Y [mm] and the measured crater wear ts [mm].

4.1. NN training using non-standardized cutting tool images

The cutting tool wear images were grouped by the main cutting parameters: cutting speed, feed rate, depth of cut and time of cutting (e.g. the AEH series with cutting speed a = 200 m/min, feed rate e = 0.12 mm/rev, depth of cut h = 1 mm, Fig. 3). A set of whole matrices composed of all grey scale values from 0 (black) to 255 (white), together with the cutting parameters, was used for NN training and testing. The input matrix was 170 x 173 (pixel resolution plus cutting parameters for every line). The BP NN utilized to produce the mapping from input vectors to output values had the following structure: the hidden layer had 3 nodes and the output layer 1 node for tool wear prediction. The number of hidden nodes was chosen according to a "cascade learning" procedure [8, 9]: hidden units are added one at a time until an acceptable training speed is achieved. Weights and thresholds were randomly initialized between -1 and +1. The learning coefficients were: learning rate between input and hidden layer = 0.3; learning rate between hidden and output layer = 0.15; momentum m = 0.4. The learning rule was the Normal Cumulative Delta Rule and the transfer function applied to the nodes was the sigmoid function f(x) = 1/(1 + e^(-x)). The number of learning steps for a complete training was between 10,000 and 30,000, according to the time to convergence. The epoch size, i.e. the number of training presentations between weight updates, was 8. BP NN training was carried out with the "leave-k-out" method, which is particularly useful when dealing with small training sets. One pattern vector (k = 1) was held back in turn for the recall phase, and the other pattern vectors were used for learning.
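The leave-k-out protocol with k = 1 can be sketched as follows; the nearest-neighbour predictor below is a deliberately simple stand-in for the trained BP NN and is purely illustrative, as are the toy data:

```python
import numpy as np

def leave_one_out_errors(X, t, fit_predict):
    """Hold back each pattern vector in turn (k = 1), train on the
    rest, and collect the prediction error on the held-out sample."""
    errors = []
    for i in range(len(X)):
        train_idx = [j for j in range(len(X)) if j != i]
        y_pred = fit_predict(X[train_idx], t[train_idx], X[i])
        errors.append(abs(y_pred - t[i]))
    return errors

def nearest_neighbour(X_train, t_train, x_test):
    # Stand-in model: predict the target of the closest training vector.
    d = np.linalg.norm(X_train - x_test, axis=1)
    return t_train[int(np.argmin(d))]

X = np.array([[0.0], [1.0], [2.0], [3.0]])
t = np.array([0.0, 0.1, 0.2, 0.3])
errs = leave_one_out_errors(X, t, nearest_neighbour)
```

Each of the four samples is predicted from the remaining three, so every pattern vector serves once as the recall set.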

Fig. 3. AEH060: a = 200 m/min, e = 0.12 mm/rev, h = 1 mm, 6th min.

Fig. 4. AEH060 (a = 200 m/min, e = 0.12 mm/rev, h = 1 mm, 6th min) after the first processing step.

4.2. Optimization of NN training phase

To reduce the NN training time and obtain good NN output results, the best approach was to reduce each input and output matrix to a single line vector. Image size, brightness and contrast were changed to standardize each image as much as possible, using simple, commonly used software: Microsoft Office Picture Manager (Fig. 4). Figure 5 shows the typical trend of the grey scale distribution in the AEH image series after this first standardization step. The grey trend varies with too much discontinuity and has too many peaks. The cutting tool wear images therefore needed a stronger standardization, hence the next step of the work: image elaboration. A vector was created using the machining parameters and some characteristics typical of each image, such as mean grey level, median grey level, mode grey level, standard deviation of the grey levels, skewness, kurtosis and number of pixels with different grey levels [3]. Vectors of this kind were used as the matrix of input parameters (p), while the vertical output vector (t) was composed of the real widths of the craters. The testing image was also given in vector form (ps), composed of the same parameters as the input vectors. The measured output value (ts) of each tested image was then compared to the simulated value obtained (Y); ts is a vector because it holds the measured output for each line.
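The per-image statistics listed above can be computed with NumPy alone, as sketched below. The moment-based skewness and (non-excess) kurtosis estimators are an assumption, since the paper does not state which estimators were used:

```python
import numpy as np

def image_features(img):
    """Summary statistics of an 8-bit grey-level image matrix."""
    g = img.astype(float).ravel()
    mean = g.mean()
    median = float(np.median(g))
    # Mode: most frequent grey level (0-255).
    mode = int(np.bincount(img.ravel().astype(int), minlength=256).argmax())
    std = g.std()
    # Moment-based skewness and (non-excess) kurtosis (assumed estimators).
    z = (g - mean) / std
    skew = float((z ** 3).mean())
    kurt = float((z ** 4).mean())
    return {"mean": mean, "median": median, "mode": mode,
            "std": std, "skewness": skew, "kurtosis": kurt}

img = np.array([[0, 0, 255], [0, 128, 255]], dtype=np.uint8)
f = image_features(img)
```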



Fig. 7. AEI080, grey scale image.


Fig. 5. Grey level histogram of AEH060 cutting tool image.

Cutting tool image standardization process

To obtain standard grey scale images, an image standardization process was developed using fresh tool images as the reference for grey colour. The images were processed using the Adobe Photoshop CS5 software package. The flow diagram of the image elaboration process is reported in Fig. 6. The first processing step was to align each image with the fresh tool image; the images were then overlapped and cropped to the area of interest using the commands Image: Rotate: Arbitrary and the Crop Tool. After converting them to grey scale mode, the last step was the adjustment of colour to obtain images with the same Brightness/Contrast level, the same Gaussian Blur [Filter: Blur: Gaussian Blur] and as well-distributed a grey value histogram as possible.

Fig. 8. LEH060, black & white image.

Switching the images from RGB mode to grey scale, values from 0 to 255 were obtained. Each image was resized to 100 x 100 pixels (Fig. 7). After this step, some images were further processed in MATLAB to obtain standard black & white images (Fig. 8).

NN training and testing using standardized cutting tool images

The BP NN were trained and tested using two different input data sets (Fig. 9): (1) the grey scale standardized cutting tool images and (2) the black & white cutting tool images.

1. NN grey scale images training set. For the grey scale standardized cutting tool images data set, the NN input values were: the values extracted from each cutting tool image matrix (grey mean value, grey median level, grey mode level, skewness, kurtosis); the number of pixels with a grey level below, equal to or above 20; the cutting parameters; and the current cutting time. All input values were normalized to 1. The best NN performance was obtained using the standardized cutting tool AEI images and the following input characteristics: time; cutting parameters; grey mean value; grey median value; grey mode level; standard deviation; skewness; kurtosis.
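These conversion steps (RGB to grey scale, resizing, thresholding to black & white) can be sketched in NumPy. The luminance weights and the mid-range threshold of 128 are conventional choices assumed here, since the paper does not specify them, and the block-mean resize is a crude stand-in for the actual resizing tool:

```python
import numpy as np

def rgb_to_grey(rgb):
    # Conventional ITU-R BT.601 luminance weights (assumed).
    return np.round(0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
                    + 0.114 * rgb[..., 2]).astype(np.uint8)

def resize_block_mean(img, out_h=100, out_w=100):
    """Crude resize by averaging equal blocks (sizes must divide evenly)."""
    h, w = img.shape
    return img.reshape(out_h, h // out_h, out_w, w // out_w).mean(axis=(1, 3))

def to_black_white(grey, threshold=128):
    # 1 = white, 0 = black, matching the MATLAB 0/1 matrices in the paper.
    return (grey >= threshold).astype(np.uint8)

rgb = np.zeros((200, 200, 3), dtype=np.uint8)
rgb[:100, :, :] = 255                       # top half white
bw = to_black_white(resize_block_mean(rgb_to_grey(rgb)))
```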

Fig. 6. Image elaboration process.


In Figure 10, the typical trend of the grey scale distribution of the AEI cutting tool image series, after the standardization process, is presented. A regular pattern can be noted, with fewer peaks and discontinuities between successive images. In Figure 11, the experimental tool crater wear (blue) and the NN output (red) are plotted vs. the number of cutting tool images in the input data set. The mean error is 0.049 mm, with a highest error of 0.101 mm and a lowest error of 0.013 mm.


Fig. 11. Experimental AEI cutting tool crater wear (blue line) and NN output (red line) vs. number of input cutting tool images.

2. NN black & white images training set. From each black & white image, MATLAB returns a matrix of 0 and 1 values. The first NN input array was composed of the number of pixels and the maximum white width, plus time and cutting parameters; this procedure was rejected because it amounted to giving the real output in a hidden form. Another approach was to add to the NN input array the mean value, the median value, the standard deviation, the maximum width and maximum length of the white area, and the distance of each pixel from the mean and median values. The best results were obtained using as NN input an array composed of the percentages of white and black pixels, the current cutting time and the cutting parameters. Nevertheless, none of the above NN input arrays gave acceptable results for automatic tool wear recognition.

5. Conclusion
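The percentage-of-pixels features can be sketched as follows, assuming the 0/1 matrix convention above (0 = black, 1 = white); the tiny example matrix is illustrative:

```python
import numpy as np

def bw_percentages(bw):
    """Percentages of white and black pixels in a 0/1 image matrix."""
    white = float(bw.mean()) * 100.0
    return white, 100.0 - white

bw = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0]])
white_pct, black_pct = bw_percentages(bw)  # 37.5, 62.5
```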
Fig. 9. NN training phase using standardized cutting tool images.


Fig. 10. Grey level histogram of the AEI cutting tool images data set.

In manufacturing systems, one of the most important issues is to estimate the remaining cutting tool life under given cutting conditions as accurately as possible. In order to monitor the tool wear development during machining processes, the interface chosen between the working procedure and the computer was a digital image of the cutting tool detected by an optical sensor (video camera). Images, however, are not homogeneous: contrast, definition, size and position of the tool vary from one detected image to another, and they depend on the particular conditions under which the images were captured by the optical sensor. In this paper, a methodology for the processing of cutting tool images detected during turning tests is proposed. Images with standard size and pixel density were produced by elaborating the tool image files obtained during cutting tests. Artificial neural networks, in particular backpropagation neural networks (BP NN), were designed and tested to estimate tool wear characterisation on the basis of the standard images of the cutting tool.


Further research efforts will concern the training and testing phases of the designed BP NN for automatic tool wear recognition using the obtained standard images of the cutting tool.

Acknowledgements

The activity for the preparation of this work has received funding support from the European Community's Seventh Framework Programme, NMP-FoF 285489 IFaCOM.

References
[1] Byrne, G., Dornfeld, D., Inasaki, I., Ketteler, G., Koenig, W., Teti, R., 1995. Tool Condition Monitoring (TCM) - The Status of Research and Industrial Application, Annals of the CIRP, 44/2, p. 541.
[2] Teti, R., 1993. Sensor Signal Frequency Analysis for Metal Cutting Process Monitoring, 1st AITEM Conf., Ancona, Italy, 22-24 Sept., p. 55.
[3] D'Addona, D., Teti, R., 2010. Monitoring of Cutting Tool Wear through Images Analysis, 6th IPROMS Virtual Conference, 15-26 Nov.
[4] Fahlman, S.E., Lebiere, C., 1990. An Empirical Study of Learning Speed in Back Propagation Networks, CMU Technical Report CMU-CS-88-162.
[5] Teshima, T., Shibasaka, T., Takuma, M., Yamamoto, A., 1993. Estimation of Cutting Tool Life by Processing Tool Image Data with Neural Network, Annals of the CIRP, 42(1), p. 59.
[6] D'Addona, D., Segreto, T., Simeone, A., Teti, R., 2011. ANN Tool Wear Modelling in the Machining of Nickel Superalloy Industrial Products, CIRP Journal of Manufacturing Science and Technology, 4(1), p. 33.
[7] Leone, C., D'Addona, D.M., Teti, R., 2011. Tool Wear Modelling through Regression Analysis and Intelligent Methods for Nickel Base Alloy Machining, CIRP Journal of Manufacturing Science and Technology, 4(3), p. 327.
[8] Hertz, J., Krogh, A., Palmer, R.G., 1991. Introduction to the Theory of Neural Computation, Addison-Wesley, New York, NY.
[9] Masters, T., 1993. Practical Neural Network Recipes in C++, Academic Press, San Diego.
