

With the ESIS blog we want to promote and intensify relevant scientific discussions on recent publications in Engineering Fracture Mechanics. The blog is hosted on IMechanica:


The editors, Professors Karl-Heinz Schwalbe and Tony Ingraffea, support this initiative. ESIS hopes that this blog will achieve the following objectives:

• To initiate scientific discussions on relevant topics by highlighting recent publications and leaving comments, suggestions, questions etc. related to them;

• To suggest re-reading, re-examination and comparison of results from the past that may have been overlooked by the authors or have fallen into general oblivion;

• To promote and give reference to groups with similar or related scientific goals, to encourage collaboration, and to bridge gaps between different disciplines;

• To focus attention on new ideas that risk drowning in the noise of today's extensive scientific production.

Per Ståhle

Latest posts from ESIS Blog on IMechanica

Discussion of fracture paper #21 - Only 6% of experimentalists want to disclose raw-data

Experimental data availability is a cornerstone for reproducibility in experimental fracture mechanics. This is how the recently published technical note begins: "Long term availability of raw experimental data in experimental fracture mechanics" by Patrick Diehl, Ilyass Tabiai, Felix W. Baumann, Daniel Therriault and Martin Levesque, Engineering Fracture Mechanics, 197 (2018) 21–26. It is five pages that really deserve to be read and discussed. A theory may be interesting but of little value until it has been proven by experiments. All the proof of a theory is in the experiment. What is the point if there is no raw data for a quality check? The authors cite another survey that found that 70% of around 1500 researchers failed to reproduce other scientists' experiments. Surprisingly, the same study finds that most scientists are nevertheless confident that peer-reviewed published experiments are reproducible. A few years back many research councils around the world demanded open access to all publications emanating from research financed by them. Open access is fine, but it is much more important to allow examination of the data that is used. Publishers could make a difference by providing space for data from their authors. Those who do not want to disclose their data should be asked for an explanation. The pragmatic result of the survey is that only 6% will provide data, and you have to ask for it. That is a really disappointing result. The remainder splits into outdated addresses, 22%; no reply, 58%; and 14% who replied but were not willing to share their data. The result would probably still be deeply depressing, but possibly a bit better, if I as a researcher only had a single experiment and a few authors to track down. It means more work than an email, but on the other hand I do not have the 187 publications that Diehl et al. had. Through friends, former co-authors and some work, I think the chances are good.
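As a sanity check, the reported fractions can be turned into approximate paper counts. The 187 papers and the four response fractions come from the post as quoted above; the rounding to whole papers is my own:

```python
# Survey outcome fractions as quoted in the technical note
n_papers = 187
outcomes = {
    "outdated address": 0.22,   # request could not be delivered
    "no reply": 0.58,           # received but never answered
    "declined": 0.14,           # replied, unwilling to share
    "willing to share": 0.06,   # replied and offered the raw data
}
# The fractions should account for the whole sample
assert abs(sum(outcomes.values()) - 1.0) < 1e-9
# Approximate number of papers per outcome (my rounding, not the note's table)
counts = {k: round(f * n_papers) for k, f in outcomes.items()}
```

So of 187 tracked papers, only about 11 came with an offer to share the underlying raw data.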
The authors present some clever ideas of what could work better than email addresses, which are temporary for many researchers. The authors of the technical note do not know what hindered the 58% who received the request and did not reply. What could be the reason for not replying to a message in which a colleague asks about your willingness to share the raw experimental data of a published paper? If I present myself to a scientist as a colleague who plans to study his data, rather than his behaviour, the chances that he answers should increase. I certainly hope so, and at least not the reverse, but who knows; life never ceases to surprise. It would be interesting to know what happens. If anyone would like to have a go, I am sure that the authors of the paper are willing to share the list of papers that they used. Again, could there be any good reason for not sharing your raw data with your fellow creatures? What is your opinion? Anyone, the authors perhaps? Per Ståhle



Discussion of fracture paper #20 - Add stronger singularities to improve numerical accuracy

It is common practice to obtain stress intensity factors in elastic materials by using Williams series expansions truncated at the r^(-1/2) stress term. I ask myself: what if the evaluation of both experimental and numerical data could be improved by including lower order terms (stronger singularities)? The standard truncation is used in a readworthy paper, "Evaluation of stress intensity factors under multiaxial and compressive conditions using low order displacement or stress field fitting", R. Andersson, F. Larsson and E. Kabo, Engineering Fracture Mechanics, 189 (2018) 204–220, where the authors propose a promising methodology for the evaluation of stress intensity factors from the asymptotic stress or displacement fields surrounding the crack tip. The focus is on cracks appearing beneath the contact between train wheel and rail, and on the difficulties caused by compression that allows only mode II and III fracture. The proposed methodology is surely applicable to a much larger collection of cases of fracture under high hydrostatic pressure, such as commonplace crushing or, on a different length scale, continental transform faults driven by tectonic motion. In the paper they obtain excellent results and I cannot complain about the obtained accuracy. The basis of the analysis is XFEM finite element calculations, the results of which are least-squares fitted to a series of power functions r^(n/2). The series is truncated at n = -1 for stresses and n = 0 for displacements. Lower order terms are excluded. We know that the complete series converges within an annular region between the largest circle that is entirely in the elastic body and the smallest circle that encircles the non-linear region at the crack tip. In the annular ring the complete series is required for convergence with arbitrary accuracy. Outside the annular ring the series diverges, and on its boundaries anything can happen.
Single-term autonomy is established if the stress terms for n < -1 are insignificant on the outer boundary and those for n > -1 are insignificant on the inner boundary. Then only the square root singular term connects the outer boundary to the inner boundary and the crack tip region. Closer to the inner boundary the terms with n ≤ -1 give the most important contributions, and at the outer boundary those with n ≥ -1 are the most important. I admit that in purely elastic cases the non-linear region at the crack tip is practically a point and all terms n < -1 become insignificant, but here comes my point: both in the evaluation of experiments and of numerics, the accuracy is often not very good close to the crack tip, which often forces investigators to exclude data that seem less accurate. This was done in the reviewed paper, where the results from the elements closest to the crack tip were excluded. This may be the right thing to do, but what if n = -2, an r^(-1) singularity, is included? After all, the numerical inaccuracies at the crack tip, or the inaccurate measurements or non-linear behaviour in experiments, fade away at larger distances from the crack tip. In the series expansion of the stresses in the elastic environment they do appear as finite stress terms for n ≤ -1. It would be interesting to hear if there are any thoughts regarding this. The authors of the paper, or anyone who wishes to express an opinion, are encouraged to do so. Per Ståhle
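The fitting procedure under discussion, least-squares fitting of near-tip fields to a truncated series of r^(n/2) terms, can be sketched in a few lines. The synthetic field, the chosen exponents and all magnitudes below are my own illustration, not the data or the implementation of Andersson et al.:

```python
import math

def fit_power_series(rs, sigmas, exponents):
    """Least-squares fit sigma(r) ~ sum_j c_j * r**e_j via normal equations."""
    m = len(exponents)
    ata = [[0.0] * m for _ in range(m)]   # A^T A
    aty = [0.0] * m                        # A^T y
    for r, s in zip(rs, sigmas):
        row = [r ** e for e in exponents]
        for i in range(m):
            aty[i] += row[i] * s
            for j in range(m):
                ata[i][j] += row[i] * row[j]
    # Solve the m-by-m system by Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda k: abs(ata[k][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for k in range(col + 1, m):
            f = ata[k][col] / ata[col][col]
            for j in range(col, m):
                ata[k][j] -= f * ata[col][j]
            aty[k] -= f * aty[col]
    coef = [0.0] * m
    for i in range(m - 1, -1, -1):
        coef[i] = (aty[i] - sum(ata[i][j] * coef[j]
                                for j in range(i + 1, m))) / ata[i][i]
    return coef

# Synthetic stress ahead of the tip: sigma = K/sqrt(2*pi*r) + T + b*sqrt(r)
K_true, T_true, b_true = 2.0e6, 5.0e7, 1.0e8   # illustrative magnitudes only
rs = [1e-4 + k * (1e-2 - 1e-4) / 199 for k in range(200)]
sigmas = [K_true / math.sqrt(2 * math.pi * r) + T_true + b_true * math.sqrt(r)
          for r in rs]

# Truncated Williams-type basis r^(n/2) with n = -1, 0, 1
c = fit_power_series(rs, sigmas, [-0.5, 0.0, 0.5])
K_fit = c[0] * math.sqrt(2 * math.pi)  # coefficient of r^(-1/2) is K/sqrt(2*pi)
```

Including a stronger r^(-1) term, as speculated above, amounts to adding the exponent -1.0 to the basis list; whether that stabilises or destabilises the fit of noisy near-tip data is exactly the open question.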



Discussion of fracture paper #19 - Fracture mechanical properties of graphene

Extreme thermal and electrical conductivity, blocks out almost all gases, stiff as diamond and stronger than anything else. The list of extreme properties seems never ending. The paper "Growth speed of single edge pre-crack in graphene sheet under tension", Jun Hua et al., Engineering Fracture Mechanics 182 (2017) 337–355, deals with the fracture mechanical properties of graphene. A sheet of armchair graphene can be stretched up to 15 per cent, which is much for a crystalline material but not so much compared with many polymers. The ultimate load, on the other hand, becomes huge, almost 100 GPa or more. Under the circumstances it is problematic, to say the least, that the fracture toughness is that of a ceramic, only a few MPa m^(1/2). Obviously cracks must be avoided if the high ultimate strength is to be useful. Already a few microns deep scratches will bring the strength down to a few hundred MPa. The research group, consisting of Jun Hua, Qinlong Liu, Yan Hou, Xiaxia Wu and Yuhui Zhang from the Dept. of Engineering Mechanics, School of Science, Xi'an University of Architecture and Technology, Xi'an, China, has studied fast crack growth in a single atomic layer graphene sheet with a pre-crack. They are able to use molecular dynamics simulations to study the kinetics of a quasi-static process. They pair the results with continuum mechanical relations to find crack growth rates. A result that provides confidence is that the fracture toughness obtained from molecular primitives agrees well with what is obtained in experiments. The highlighted results are that the crack growth rate increases with increasing loading rate and decreasing crack length. The tendencies are expected and should be obtainable also by continuum mechanical simulations, which however would not be first principle and would require a fracture criterion. Another major loss would be the possibility to directly observe the details of the fracture process.
According to the simulation results the crack runs nicely between two rows of atoms without branching or much disturbance of the ordered lattice. The fracture process itself would not be too exciting were it not for some occasional minor disorder that is trapped along the crack surfaces. The event does not seem to occur periodically, but around one in ten atoms suffers from what the authors call abnormal failure. Remaining at the crack surface are dislocated atoms with increased bond orders. All dislocated atoms are located at the crack surface. The distorted regions surrounding solitary dislocated carbon atoms are small. A motivated question would be whether the dissipated energy is of the same order of magnitude as the energy required to break the bonds that connect the upper and lower half planes before fracture. Can this be made larger by forcing the crack to grow not along a symmetry plane, as in the present study? Without knowing much about the technical possibilities, I assume that if two graphene sheets were connected to each other, rotated so that the symmetry planes do not coincide, the crack would be forced to select a less comfortable path in at least one of the sheets. Everyone with comments or questions is cordially invited to raise their voice. Per Ståhle
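The remark above, that micron-deep scratches destroy the GPa-level strength, follows from the ordinary Griffith/Irwin relation. The numbers below are my own round figures for illustration, with geometry factors of order one ignored:

```python
import math

# Flaw-limited strength: sigma_f ~ K_Ic / sqrt(pi * a)
K_Ic = 2.0e6    # Pa*m^(1/2); "a few MPa m^(1/2)", as quoted for graphene
a = 4.0e-6      # m; a scratch a few microns deep (assumed)
sigma_f = K_Ic / math.sqrt(math.pi * a)   # ~ 5.6e8 Pa
```

A few hundred MPa, far below the roughly 100 GPa ideal strength, which is the point of the discussion.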



Discussion of fracture paper #18 - A crack tip energy release rate caused by T-stress

A T-stress is generally not expected to contribute to the stress intensity factor, because its contribution to the free energy is the same before and after crack growth. Nothing lost, nothing gained. Some time ago I came across a situation where a T-stress violates this statement. The scene is the atomic level. As the crack produces new crack surfaces, the elastic stiffness in the few atomic layers closest to the crack plane is modified. This changes the elastic energy, which could provide, contribute to, or at least modify the energy release rate. Whether the energy is sufficient depends on the magnitude of the T-stress, the change of the elastic modulus, and how many atomic layers are involved. If I should make an estimate, it would be that the contribution to the stress intensity factor is the T-stress times the fractional change of the elastic modulus times the square root of the thickness of the affected layer. Assuming that the T-stress is a couple of GPa, that the fractional change of the elastic modulus is 10% and that the affected layer is around ten atomic layers thick, one ends up with around 100 kPa m^(1/2). Fairly small, and the stress and its change are taken at their upper limits, but still it is there. The only crystalline material I could find with a toughness at the same level is ice. Other materials are affected but require some additional remote load. Interestingly enough, I came across a paper describing a different mechanism leading to a T-stress contribution to the energy release rate. The paper is: Zi-Cheng Jiang, Guo-Jin Tang, Xian-Fang Li, "Effect of initial T-stress on stress intensity factor for a crack in a thin pre-stressed layer", Engineering Fracture Mechanics, pp. 19-27. This is a really read worthy paper. The reasons for the coupling between the T-stress and the stress intensity factor are made clear by their analysis. The authors have an admirable taste for simple but accurate solutions.
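The order-of-magnitude estimate above is easy to reproduce. The atomic layer spacing is my assumption (about 0.3 nm); with it, the estimate lands in the tens of kPa m^(1/2), the same small order as discussed in the post:

```python
import math

# Estimate: contribution ~ T * (fractional modulus change) * sqrt(layer thickness)
T = 2.0e9               # Pa; "a couple of GPa"
dE_over_E = 0.10        # 10 % change of the elastic modulus
h = 10 * 3.0e-10        # m; ten atomic layers, assuming ~0.3 nm spacing
K_contrib = T * dE_over_E * math.sqrt(h)   # ~ 1e4 Pa*m^(1/2)
```

About 10 kPa m^(1/2) with these inputs; the exact prefactor depends on the assumptions, but the conclusion that the contribution is far below typical fracture toughnesses, except perhaps for ice, is unchanged.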
The paper describes a crack in a layer with residual stress, which gives a T-stress in the crack tip vicinity. As the crack advances, increasingly more material ends up behind the crack tip rather than in front of it. The elastic energy density caused by the T-stress is larger in front of the tip than behind it. The energy is released along the way and can only disappear at the singular crack tip, not anywhere else in the elastic material. The mechanism behind the energy release is the assumed buckling in the direction perpendicular to the crack plane. An Euler-Bernoulli beam theory is used to calculate the contribution to the energy release rate. Having read the paper, I realise that in a thin sheet that buckles out of its own plane in the presence of a crack and a compressive T-stress, there will be energy released that should contribute to crack growth. The buckling will give a more seriously distorted stress state around the crack tip, but nevertheless. In this case the buckling area would be proportional to the square of the crack length instead of the crack length times the height of the layer, as in the Jiang et al. paper. The consequence is that the contribution to the stress intensity factor should scale with the T-stress times the square root of the crack length. Suddenly I feel that it would be very interesting to hear if anyone, maybe the authors themselves, knows of other mechanisms that could lead to this kind of surprising addition to the energy release rate caused by T-stresses. It would be great if we could add more to the picture. Anyone with information is cordially invited to contribute. Per Ståhle



Discussion of fracture paper #17 - What is the second most important quantity at fracture?

No doubt the energy release rate comes first. What comes next is proposed in a recently published study that describes a method based on a new constraint parameter, Ap. The paper is: "Fracture assessment based on unified constraint parameter for pressurized pipes with circumferential surface cracks", M.Y. Mu, G.Z. Wang, F.Z. Xuan, S.T. Tu, Engineering Fracture Mechanics 175 (2017) 201–218. The parameter Ap is compared with established parameters like T, Q etc. The application is to pipes with edge cracks. I would guess that it should also apply to other large structures with low crack tip constraint. As everyone knows, linear fracture mechanics works safely only at small scales of yielding. Despite this, the approach of predicting fracture by studying the energy loss at crack growth, using the stress intensity factor K_I and its critical limit, the fracture toughness, has been an engineering success story. K_I captures the energy release rate at crack growth. This is a well-founded concept that works for technical applications that meet the necessary requirements. The problem is that many, or possibly most, technical applications hardly do. The autonomy concept in combination with J-integral calculations, which give a measure of the potential energy release rate of a stationary crack, widens the range of applications. However, it is an irony that the J-integral predicts the initiation of crack growth, an event that is very difficult to observe, while global instability, which is the major concern and surely easy to detect, lacks a basic single parameter theory. For a working concept, geometry and load case must be classified with a second parameter in addition to K_I or J. The most important quantity is no doubt the energy release rate, but what is the second most important? Several successful parameters have been proposed. Most of them describe some type of crack tip constraint, such as the T-stress, Q, the stress triaxiality factor h, etc.
A recent suggestion that, as it seems to me, has great potential is a measure of the volume exposed to high effective stress, Ap. It was earlier proposed by the present group of G.Z. Wang and co-authors. Ap is defined as the relative size of the region in which the effective stress exceeds a certain level. As pointed out by the authors, defects in large engineering structures such as pressure pipes and vessels are often subjected to a significantly lower level of crack tip constraint than what is obtained in laboratory test specimens. The load and geometry belong to an autonomy class, to speak the language of K.B. Broberg in his book "Cracks and Fracture". The lack of a suitable classifying parameter is covered by Ap. The supporting idea is that K_I or J describe the same series of events leading to fracture both in the lab and in the application, provided the situations meet the same class requirements, i.e. in this case have the same Ap. The geometry and external loads are of course not the same; a simpler and usually smaller geometry is the very idea of the lab test. The study goes a step further and proposes a one-parameter criterion that combines K_I or J with Ap by correlation with data. The method is reinforced by several experiments which show that it remains conservative, while still avoiding overly conservative predictions. The latter of course makes it possible to avoid unnecessary disposal, replacement or repair of components. The authors' conclusions are based on experience of a particular type of application. I like the use of the parameter. I guess more needs to be done to extensively map the autonomy classes that are covered by the method. I am sure the story does not end here. A few questions could be sent along, like: "Is it possible to describe or give a name to the second most important quantity after the energy release rate?" The paper mentions that statistical size effects and loss of constraint could affect Ap.
Would it be possible to do experiments that separate the statistical effect from the loss of constraint? Is it required, or even interesting? It would be interesting to hear from the authors, or anyone else who would like to discuss or comment on the paper, the proposed method, the parameter or anything related. Per Ståhle
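To make the definition concrete: Ap, as described above, is the relative size of the region where the effective stress exceeds a chosen level. A minimal numerical sketch of that definition could look as follows; the function names, the von Mises measure and the threshold convention are my assumptions, not the authors' implementation:

```python
def effective_stress(s11, s22, s33, s12, s23, s13):
    """Von Mises effective stress from the Cauchy stress components."""
    return (0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 + (s33 - s11) ** 2)
            + 3.0 * (s12 ** 2 + s23 ** 2 + s13 ** 2)) ** 0.5

def A_p(stress_samples, sigma_y, level=1.0):
    """Fraction of sampled points where sigma_eff exceeds level * sigma_y."""
    hot = sum(1 for s in stress_samples
              if effective_stress(*s) > level * sigma_y)
    return hot / len(stress_samples)

# Toy field: uniaxial stress rising from 0.5 to 1.5 * sigma_y over 100 points
sigma_y = 300.0e6   # Pa, assumed yield stress
samples = [(sigma_y * (0.5 + k / 99.0), 0, 0, 0, 0, 0) for k in range(100)]
ap = A_p(samples, sigma_y)   # about half the sampled points exceed sigma_y
```

In a real evaluation the samples would be integration-point stresses from a crack tip finite element solution, weighted by element volume rather than counted per point.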


