Easy way to measure sharpness points.

I do not see a reason to do this additional preparation.

I know, that is the problem. I would suggest some basic reading on hypothesis testing and correlation studies, but you are a mathematician, so I am at a loss as to why this isn't obvious to you directly, as it is really basic math. I'll attempt to explain it once more.

You are trying to say that the difference between two points is correlated to the blunting pattern (specifically, say, 150, 130, 170) and then arguing that the 130 was "sharper" and that this is some kind of cyclic behavior. However, if you just did a simple Monte Carlo simulation with a 10-point spread you would get this pattern EVERY TIME, with even larger differences.

At a basic level, you need the difference to be much larger than the random noise; of course, if you don't know the size of the random noise then you can't make that distinction. This is why determining the noise is as fundamental as the measurement itself, and this is made clear as soon as you do any physical science.
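As a minimal sketch of the point (numbers are made up, not actual test data): with three independent readings around a constant true value, the middle reading is the lowest about a third of the time, purely by chance.

```python
import random

random.seed(1)

# Hypothetical: true sharpness is constant at 150, with +/- 10 points of
# uniform random measurement noise.  How often do three consecutive
# readings show a "dip then rise" (e.g. 150, 130, 170) by chance alone?
TRIALS = 100_000
dips = 0
for _ in range(TRIALS):
    a, b, c = (150 + random.uniform(-10, 10) for _ in range(3))
    if b < a and b < c:          # middle reading is the lowest
        dips += 1

print(f"dip-then-rise pattern in {dips / TRIALS:.0%} of trials")
# With three i.i.d. draws every ordering is equally likely, so the middle
# value is the minimum in 1/3 of trials -- a "pattern" with no cause.
```

Any apparent cycle in three readings therefore carries no information unless the differences are large compared to the noise.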

I can imagine that, yes, that sinusoidal pattern is just noise, but I can also imagine ....

That is why you should do as I have suggested, which I have been doing for years. This way you don't have to imagine anything; you can show numerically whether your hypothesis is supported or not. I would also not suggest 30 measurements or similar, but rather 3-5 measurements at most and then 3-5 runs, so that you have a much more stable result which prevents systematic deviations from being attributed to steels.

-Cliff
 
I did some extensive testing. To see if the results are unstable and purely random I did 3 test sessions of 39, 41, and 41 measurements each. To make it less random I resharpened the Buck 110 420HC again on the Diamond Ezy Lap 1200, to have the edge as even as possible.

Sharpness-04.jpg


As you may see, with the number of tests going up the results are getting more and more stable. In the combined test card for all 121 measurements it is pretty clear that the distribution is not rectangular but rather Gaussian, so these are not purely random numbers but group around a certain point.

It is pretty clear that the median (red dot) is a perfect and stable indicator of what this distribution is centering around. I also monitored median stability, and even in the worst case it hit a stable position after the 11th measurement and stayed stable from that point. So I think that 21 measurements is enough to get it stabilized for sure.
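The stability check described above can be sketched like this (the readings here are invented for illustration, not the actual session data): track the running median after each new measurement and watch it settle.

```python
import statistics

# Hypothetical sharpness readings; invented numbers, not the real data.
readings = [150, 130, 170, 145, 155, 140, 160, 150, 148, 152,
            149, 151, 150, 147, 153, 150, 146, 154, 150, 149, 151]

# Running median after each new measurement: it should stop moving
# once enough measurements have accumulated.
for n in range(1, len(readings) + 1):
    m = statistics.median(readings[:n])
    print(f"after {n:2d} measurements: median = {m:.1f}")
```

If the running median stops changing well before the last measurement, additional measurements mostly confirm the same central value.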

So for me, again, it looks like this system has a stable indicator which stays with the system until you influence it (by sharpening or dulling). After that this parameter changes value and again stays stable, more and more so with an increasing number of measurements.

It does not look random: it is higher on a duller knife and lower on a sharp knife. It is stable if the knife was not touched. It is simple to perform and can be done by everyone.

For me this is a good testing method! And the more I use it, the more I am convinced that it is actually working, in terms of giving a measurement of the edge condition at a certain point on the edge.

Thanks, Vassili.
 
In the combined test card for all 121 measurements it is pretty clear that the distribution is not rectangular but rather Gaussian, so these are not purely random numbers but group around a certain point.

Gaussian is a random distribution; in this case the distribution is discrete to a point because the results are rounded to the measurement limit. Thus you would expect the means to be distributed according to that distribution. If you do this you will note that the deviations you note above are just random. As I noted earlier, just perform a simple Monte Carlo simulation, generate 100 data sets, and watch the patterns that are produced.

-Cliff
 
Gaussian is a random distribution; in this case the distribution is discrete to a point because the results are rounded to the measurement limit. Thus you would expect the means to be distributed according to that distribution. If you do this you will note that the deviations you note above are just random. As I noted earlier, just perform a simple Monte Carlo simulation, generate 100 data sets, and watch the patterns that are produced.

-Cliff

OK, a flat random distribution will be rectangular, not Gaussian, especially with large numbers (otherwise the random generator needs to be replaced). I do not need to run this generation; I know this, as this is what Monte Carlo integration is based on. It must be rectangular, especially when you summarize all measurements. And here we clearly do not have that. I do not really understand what you are saying: what is wrong with the numbers I got?

Even if I have a rectangular distribution which falls in a certain interval, this interval itself will be a good indicator, and if it is rectangular the median will perfectly identify it; it is just a matter of approximation. And here we have different intervals for different sharpness, even if we assume that the distribution is rectangular.

In this particular case we may say that the test is useless if we have a flat, purely random distribution (we may call it Monte Carlo, which sounds better) from 0 to 200 (or some other interval), but it is not a flat 0-to-200 distribution here. All the bullets cluster around a point, and this is enough to say that your aim is off and needs to be shifted a bit right or a bit down. This is the same case.

It may also be useless if different tests under all the same conditions show different results. This is also not the case.

I agree that a single measurement is useless, because of everything you mentioned; no doubt about this, and yes, the results are actually screaming about this, just as you cannot tune your aim based on a single shot. But 21 measurements give a pretty solid result. Also, the picture of the distribution itself is pretty informative: for example, I first detected a chipped edge on my Higonokami after the fine Sharpmaker rods by seeing a two-peak distribution; after this I looked at the edge and found chips.
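The two-peak signal described above can be illustrated with a crude text histogram (all numbers here are hypothetical, invented to show the idea, not the Higonokami data): intact sections of the edge read near one value, chipped spots near another.

```python
from collections import Counter

# Hypothetical illustration of a two-peak distribution: intact sections
# of the edge read ~140, chipped spots read ~190.  Invented numbers.
readings = [138, 142, 140, 145, 137, 141, 188, 192, 143, 139,
            190, 186, 144, 140, 191, 138, 142, 189, 141, 140, 187]

# Crude histogram with 25-point bins; two well-separated clusters
# suggest the edge is not uniform (e.g. chipping).
bins = Counter((r // 25) * 25 for r in readings)
for lo in sorted(bins):
    print(f"{lo:3d}-{lo + 24:3d}: {'#' * bins[lo]}")
```

A single median would hide this; the shape of the distribution is what reveals the chipped spots.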

Again, it looks like this system has a stable indicator which stays with the system until you influence it (by sharpening or dulling). After that the parameter changes value and again stays stable, more and more so with an increasing number of measurements.

Thanks, Vassili.
 
OK, a flat random distribution will be rectangular, not Gaussian, especially with large numbers

Gaussian is just random with a particular probability distribution; there are of course many. A flat distribution is just the special case where the probability density is constant everywhere. By Monte Carlo estimation I mean take your data set, the means and standard errors, and then, using the known probability distribution, calculate 100 pseudo-data sets.

Now look at the "patterns" produced. You will see the data has many steps, rises, falls, etc., but they are all meaningless, as this is inherent in the fact that you generated them randomly. I actually do this as part of the analysis I do on the data I collect when I compare the edge retention of two different knives. I described this process a while back, and how you use this to bound the performance estimates.
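The procedure described above can be sketched as follows (mean, spread, and sample size are hypothetical placeholders, not measured values): draw pseudo-data sets from a Gaussian with the observed parameters and inspect the spurious structure that appears.

```python
import random
import statistics

random.seed(0)

# Sketch of the pseudo-data-set idea: take an observed mean and spread,
# then generate many synthetic data sets from that Gaussian and look at
# the "patterns" (rises, falls, steps) that appear by chance alone.
# All numbers are hypothetical.
mean, sigma, n_per_set = 150.0, 10.0, 21

for i in range(5):                      # 5 of the 100 sets, for brevity
    data = [random.gauss(mean, sigma) for _ in range(n_per_set)]
    print(f"set {i}: median = {statistics.median(data):6.1f}, "
          f"min = {min(data):6.1f}, max = {max(data):6.1f}")
```

Any "trend" visible in one of these sets is random by construction, which is the baseline a real pattern must exceed.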

I agree that a single measurement is useless

It depends on the properties of the population. If the standard deviation of the population is known then you can make inferences. You need multiple measurements if you want to predict the standard deviation. My point isn't about the number of measurements; it is simply that for a meaningful measure of central tendency you need both the mean (median, mode, etc.) and some measure of standard error. Both of these are just as fundamentally necessary.

I'll give you an example: let's say I make five cuts through 3/8" hemp (different ropes) with two knives and the results give means of 25 lbs and 30 lbs. Now can you tell if one of those knives cuts better than the other? No, you cannot. Even if you did 100 cuts with both knives you still can't make that comparison from the means alone; 1000, 100000, it doesn't matter. In order to make a meaningful comparison you need the standard errors; once you have these you can say whether the difference between them is significant by a simple mathematical formulation, and say something like "There is less than a 3% chance that this correlation is just random."
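A minimal sketch of that comparison (the 25 lb and 30 lb means are from the text; the standard errors are invented for illustration): combine the two standard errors and ask how likely a difference this large is under pure noise.

```python
import math
from statistics import NormalDist

# Two knives: mean cutting force and standard error of that mean.
# Means are the hypothetical 25 lb / 30 lb from the text; the standard
# errors are assumed values, chosen only to make the sketch concrete.
mean_a, se_a = 25.0, 1.5
mean_b, se_b = 30.0, 1.5

# Difference of means with combined standard error (independent samples).
diff = mean_b - mean_a
se_diff = math.sqrt(se_a**2 + se_b**2)
z = diff / se_diff

# Two-sided probability that a difference this large is just random noise.
p = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"difference = {diff} lbs, z = {z:.2f}, p = {p:.3f}")
```

With the assumed errors the difference is significant at roughly the few-percent level; with larger standard errors the identical means would be indistinguishable, which is exactly the point being made.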

As an example:

http://www.cutleryscience.com/images/ss_rd_hist.jpg

This was data collected in 2001; note the median is 1.00 (4), which means there is no significant difference between the knives. All physical data would be expected to be Gaussian (which will be flat and discrete at the level of rounding). My point is that you also need to estimate the standard error, which for medians is usually done by scaling the IQR. As a coarse estimate you can just use one half of the IQR, since the distribution is symmetric, and simply overestimate and be conservative. With the standard error and the median you can then make concrete statements about the meaning of the measurements.
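The IQR-scaled error bar mentioned above can be sketched like this (the readings are invented; "half the IQR" is the deliberately conservative shortcut from the text, not a formal estimator):

```python
import statistics

# Hypothetical sharpness readings, invented for illustration.
data = [150, 130, 170, 145, 155, 140, 160, 150, 148, 152, 149]

data_sorted = sorted(data)
median = statistics.median(data_sorted)
q1, _, q3 = statistics.quantiles(data_sorted, n=4)   # quartiles
iqr = q3 - q1
err = iqr / 2          # coarse, deliberately conservative error bar

print(f"median = {median} +/- {err:.1f}  (IQR = {iqr:.1f})")
```

Reporting the median together with an error bar like this is what turns a bare number into a comparable measurement.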

The data you showed above fits a specific model which I developed a while back and implemented specifically. I have the code available, which is all freeware-based (awk / gnuplot) and publicly available.

-Cliff
 
Gaussian is just random with a particular probability distribution; there are of course many. A flat distribution is just the special case where the probability density is constant everywhere.

Yes, and in cases where a precise measurement is impossible it is used to find a descriptive parameter of the system.

By Monte Carlo estimation I mean take your data set, the means and standard errors, and then, using the known probability distribution, calculate 100 pseudo-data sets.

Monte Carlo simulation, as I found out by googling around, is when you simulate a real random distribution by approximating it with a generated pseudo-random distribution when measurement is costly. For example, if we have a flat or Gaussian distribution for, let's say, edge sharpness, then by emulating it we may run a computer simulation to see how it would look if there were a chip on the blade, combining emulated distributions to see possible patterns without running all these tests manually.

Now I do not understand how this can say anything about this particular test I proposed. It actually shows a Gaussian distribution, as I understand it. Now we may approximate this distribution and come up with a Monte Carlo simulation to see how it will most likely behave in this or that condition of the blade: dulled, chipped, etc...

Now look at the "patterns" produced. You will see the data has many steps, rises, falls, etc., but they are all meaningless, as this is inherent in the fact that you generated them randomly.

This is not how I understand it: I can approximate any distribution in many different ways and then get similar results with the exact same pattern. But that does not prove that the method used to get the initial results is wrong! It only proves that the approximation is correct and so can be used for Monte Carlo simulation.

Different results from running a Monte Carlo simulation with a distribution based on wrong assumptions will just show that the assumption is wrong. The same results will show that the parameters for the emulation are right!

But finally, even a simulation of edge condition by the Monte Carlo method needs to be backed up later by real testing.

I actually do this as part of the analysis I do on the data I collect when I compare the edge retention of two different knives. I described this process a while back, and how you use this to bound the performance estimates.

It depends on the properties of the population. If the standard deviation of the population is known then you can make inferences. You need multiple measurements if you want to predict the standard deviation. My point isn't about the number of measurements; it is simply that for a meaningful measure of central tendency you need both the mean (median, mode, etc.) and some measure of standard error. Both of these are just as fundamentally necessary.

Sure, and I provided the entire distribution, which is even more informative.

However, I should note that the deviation may be influenced by the random nature of the test itself and by the condition of the edge. So it is important to recognize where there is noise and where there is some meaningful information. For this I think some benchmark should be maintained: a freshly sharpened edge will probably be even, or at least most even, so deviations from those measurements may be compared later to see whether the edge has lost its evenness...

Again, I think my results are pretty promising. They could be used to create an emulation for further Monte Carlo simulation; however, that requires some programming which I do not intend to do now.

But the method I described does work, and it is simple!

Thanks, Vassili.
 
I am sure glad that I don't have to understand this stuff to own and enjoy a few old knives. If I did, I would have to take up collecting something else!
 
Yes, and in cases where a precise measurement is impossible it is used to find a descriptive parameter of the system.

No, its use has nothing to do with precision. It is ALWAYS necessary unless the direct measurement uncertainty is known.

Now I do not understand how this can say anything about this particular test I proposed.

It is used to check to see if the patterns you see are real or random. It is also used to simulate bounds for parameters which are difficult to bound directly.

Different results from running a Monte Carlo simulation with a distribution based on wrong assumptions will just show that the assumption is wrong. The same results will show that the parameters for the emulation are right!

There is too much wrong with this paragraph to comment.

However, I should note that the deviation may be influenced by the random nature of the test itself and by the condition of the edge. So it is important to recognize where there is noise and where there is some meaningful information.

That is exactly my point, and this has to be calculated.

-Cliff
 