Nanoscale friction on thin films

22.03.2021

In nature, entities, whether living or non-living, tend to behave differently at different scales, exhibiting different properties. This plays a major role in the way we frame our perspectives and opinions about them. One of the simplest examples is watching an airplane soaring in the sky and judging it to be tiny; only when it touches the ground do we realize the vastness of this magnificent machine.

Similarly, nanoscale friction on thin films is believed to be highly unpredictable and to behave quite differently from friction between macroscale surfaces and objects. This is not only because of the behavior of the films involved, but also because of the lack of reference data and experimental results from which conclusions could be drawn. A group of researchers therefore tried to crack this code with the help of AI, conducting friction experiments on different materials and using various machine learning algorithms to model the effects.

Introduction

Materials such as aluminum oxide, titanium dioxide, molybdenum disulfide and aluminum are used in the experiments. In the experimental setup, the normal loads acting on these samples vary from FN = 10 to 150 nN, the sliding velocities from 5 to 500 nm/s and the temperatures from 20 to 80 °C. A wide range of algorithms of varying complexity is considered in order to attain accurate results. The experimental results are obtained using lateral force microscopy, and the influence of tip wear and adhesion is also accounted for by calibrating the bending and transversal stiffness of the Bruker triangular SNL-10 probes used. It is important to note that these measurements are carried out in a hermetic enclosure with the relative humidity constantly monitored.
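As a rough illustration of how such measurements could be organized for the machine learning step, the sketch below builds a small tabular dataset in Python; the column names and values are assumptions made here for illustration, not the authors' actual data format.

```python
import pandas as pd

# Hypothetical layout of the friction measurements (illustrative values only):
# each row corresponds to one lateral-force-microscopy measurement.
data = pd.DataFrame({
    "material":        ["Al2O3", "TiO2", "MoS2", "Al"],   # thin-film sample
    "normal_load_nN":  [10.0, 50.0, 100.0, 150.0],        # F_N in the 10-150 nN range
    "sliding_vel_nms": [5.0, 50.0, 250.0, 500.0],         # 5-500 nm/s
    "temperature_C":   [20.0, 40.0, 60.0, 80.0],          # 20-80 °C
    "friction_nN":     [2.1, 5.4, 3.8, 7.9],              # measured lateral (friction) force
})

# One-hot encode the material class so it can enter a regression model.
features = pd.get_dummies(data.drop(columns="friction_nN"), columns=["material"])
target = data["friction_nN"]
print(features.head())
```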

To test an algorithm, the whole dataset is divided into three parts: training data, validation data and test data. The training data acts as the input for learning, the validation data helps optimize the algorithm, and the test data is used to scrutinize its accuracy. The mean absolute error (MAE) and root mean square error (RMSE) help quantify the prediction errors; generally speaking, the RMSE is greater than or equal to the MAE. Additionally, the coefficient of determination (R2) represents the fraction of the variance of the dependent variable explained by the independent variables; the closer its value is to 1, the better the fit. It is also important to keep in mind that unsupervised ML methods rely solely on inputs, whereas supervised ML algorithms are trained on both inputs and their corresponding outputs.
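As a minimal sketch, assuming NumPy arrays of measured and predicted friction values (the numbers below are illustrative only), these three metrics can be computed as follows:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root mean square error; always >= MAE for the same residuals."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination; the closer to 1, the better the fit."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([2.1, 5.4, 3.8, 7.9])   # measured friction (illustrative)
y_pred = np.array([2.4, 5.0, 4.1, 7.2])   # model prediction (illustrative)
print(mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```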


Results and Discussion

Three general techniques, namely multi-layer perceptron (MLP), random forest (RF) and support vector regression (SVR), are used for training the models; a minimal sketch of how they can be set up is shown after the list below.

  • MLP is a deep artificial neural network with a self-learning capability: it iterates over the data, propagating results back and forth and minimizing the error.
  • RF is one of the most powerful ensemble algorithms. It uses statistical methods to combine several decision trees into one model, where each tree is trained independently on a different subset of the data and the average of the trees is taken as the prediction.
  • In SVR, a learning algorithm retrieves the coefficients and parameters, and can be coupled to optimization algorithms that limit the number of support vectors.
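The following is a minimal scikit-learn sketch of how these three regressors could be trained and compared on such data; the hyperparameters and the random stand-in data are assumptions made for illustration, not the settings used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# X: load, velocity, temperature (+ material dummies); y: measured friction.
# Random data stands in here for the experimental measurements.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 7))
y = rng.uniform(size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
    "RF":  RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": SVR(kernel="rbf", C=10.0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.3f}, R2={r2_score(y_test, pred):.3f}")
```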

The performance of the models is then compared, with the value of R2 given a considerable amount of weight: if R2 is above 0.7, the prediction is considered good. RMSE is taken as the baseline metric in this comparison because of its simplicity of computation.

From the obtained results it is concluded that the RF algorithm is considerably accurate, giving good predictions given the highly non-linear variability of nanoscale friction, but that MLP is the most accurate. It is also observed that the predictions improve when the dataset is as large as possible and covers a broader range of values.

To assess the trustworthiness of these results, a few graphs are drawn to compare the attained values. Observing the graph below, one can see that the MLP prediction lies considerably far from the true value even though its fit to the data is good, exposing the unreliability of drawing conclusions from the error metrics alone. Ironically, the values for SVR and RF are close to the true values but with a poor fit.

Predictive performances of the considered ML models on the test dataset, with respective uncertainty levels shown in three shades of grey. Credit: [1]

For the Al and MoS2 samples, on the other hand, both the closeness to the true values and the fit appear fine for all of the trained models. From this we can conclude that analyzing only one or two metrics does not give a confident assessment of a model's reliability.

Predictive performances of the considered ML models on the test dataset for (a) Al and (b) MoS2, with respective uncertainty levels shown in three shades of grey. Credit: [1]

Though these results are considered satisfactory, there are a few advanced learning algorithms that are better able to deal with multi-dimensional experimental data: AI-based evolutionary algorithms (EAs). In this research, a few such EAs are tested on the experimental data. For the AI models based on genetic programming, a Pareto-frontier methodology is used, in which a set of solutions is taken to be quasi-optimal.
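The Pareto-frontier idea behind this quasi-optimal set can be sketched in a few lines of Python: among candidate models characterized by a complexity and an error, keep only those that are not outperformed on both counts by another model. The candidate tuples below are made up for illustration.

```python
# Each candidate model is (complexity, error); both should be minimized.
candidates = [(3, 0.9), (5, 0.6), (7, 0.55), (9, 0.2), (12, 0.21), (15, 0.19)]

def pareto_front(models):
    """Keep the quasi-optimal models: those not dominated by another model
    that is at least as simple and at least as accurate (and better in one)."""
    front = []
    for c, e in models:
        dominated = any((c2 <= c and e2 <= e) and (c2 < c or e2 < e)
                        for c2, e2 in models)
        if not dominated:
            front.append((c, e))
    return sorted(front)

print(pareto_front(candidates))
```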

During the research it is observed that the Koza-style genetic programming (KSGP) predictions are quite poor, while grammatical evolution genetic programming (GEGP) generates the simplest models, but also with poor results.

Fits of predicted values of the model vs. experimental data for the training and test datasets. Credit: [1]

We can observe that the multi-gene genetic programming (MGGP) model gives the best performance on the training data, so this model is chosen for further analysis with the test dataset. After a few tweaks to make the model less complex, and after defining a Pareto frontier, it yields a mathematical expression with seven variables, three of which are the influencing process parameters (force, temperature and sliding speed) while the rest are material-class dummy variables. This results in a simpler and more user-friendly predictive model, which on further analysis turns out to be a regression model. The results are expected to lie as close as possible to the R2 = 1 line, which is indeed the case for the enhanced model. Still, to gain full confidence in its reliability, the developed predictive model is statistically tested by analyzing the residual plots.
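To illustrate what such a seven-variable regression expression looks like in use, here is a hypothetical sketch in Python; the functional form, coefficients and material offsets are placeholders invented for this example, not the expression fitted in the paper.

```python
# Placeholder coefficients and material-class dummy contributions; these are
# NOT the values fitted in the paper, just stand-ins to show the structure.
COEFFS = {"load": 0.05, "temperature": 0.01, "velocity": 0.002}
MATERIAL_OFFSET = {"Al2O3": 0.0, "TiO2": 0.3, "MoS2": -0.5, "Al": 0.1}

def predict_friction(F_N, T, v, material):
    """Hypothetical regression for nanoscale friction [nN] from the normal
    load F_N [nN], temperature T [°C], sliding velocity v [nm/s] and a
    material-class dummy variable."""
    return (COEFFS["load"] * F_N
            + COEFFS["temperature"] * T
            + COEFFS["velocity"] * v
            + MATERIAL_OFFSET[material])

# Example usage and a simple residual check against a (made-up) measurement.
predicted = predict_friction(F_N=100.0, T=25.0, v=50.0, material="MoS2")
measured = 4.9
print(f"predicted = {predicted:.2f} nN, residual = {measured - predicted:.2f} nN")
```

Analyzing such residuals over the whole test set is what the statistical testing of the final model amounts to.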

In conclusion I would like to quote the researchers themselves:
‘The resulting correlation functions, linking the considered process variables to the value of nanometric friction, provide a very thorough insight into the studied phenomena made of complex interactions, as well as a very valuable, novel and unprecedented contribution in the field of nanotribology. This constitutes the preconditions and provides means for an in-depth understanding and for practical improvements in the field of nanotribology, and a novel insight into this fundamental force of nature.’

Further information: Artificial intelligence-based predictive model of nanoscale friction using experimental data, Marko Perčić, Saša Zelenika & Igor Mezić, https://link.springer.com/article/10.1007/s40544-021-0493-5

References:

[1] Artificial intelligence-based predictive model of nanoscale friction using experimental data, Marko Perčić, Saša Zelenika & Igor Mezić, https://link.springer.com/article/10.1007/s40544-021-0493-5
