From the article:
...After tempering, if the steel is heated to some temperature below the tempering temperature, the hardness will likely not be affected unless it is held there for very long periods of time...
How long? Tempering of a blade is performed over the course of hours. The peer-reviewed articles I cited found that grinding (at high wheel and feed speeds) heats only the first 10 microns of the ground specimen, to about 120 °C for roughly
4 milliseconds. That is too short and too cold to temper most blade steels. Oh well.
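To put "too short and too cold" in rough quantitative terms, one can compare tempering exposures with the Hollomon-Jaffe tempering parameter, P = T(K) x (C + log10 t(hours)), using the common C = 20 for steels. This is a sketch, not a claim about any particular blade steel; the 2 h / 200 °C baseline temper is my own illustrative assumption, while the 4 ms / 120 °C excursion is the figure from the cited grinding studies:

```python
# Rough comparison via the Hollomon-Jaffe tempering parameter:
#   P = T_K * (C + log10(t_hours)), with the common C = 20 for steels.
# Higher P means more tempering effect. The 2 h / 200 C temper below is
# an illustrative assumption; 4 ms at 120 C is the cited grinding excursion.
from math import log10

def hollomon_jaffe(temp_c: float, time_hours: float, c: float = 20.0) -> float:
    """Tempering parameter: higher P = more tempering effect."""
    return (temp_c + 273.15) * (c + log10(time_hours))

normal_temper = hollomon_jaffe(200.0, 2.0)           # typical blade temper (assumed)
grinding_flash = hollomon_jaffe(120.0, 4e-3 / 3600)  # 4 ms at 120 C (cited)

print(f"2 h at 200 C:  P = {normal_temper:.0f}")
print(f"4 ms at 120 C: P = {grinding_flash:.0f}")
```

The grinding excursion comes out several thousand parameter units below even a modest temper, which is why a millisecond-scale, 120 °C event cannot plausibly re-temper the edge.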
Microhardness measurements of that tap edge showed the effect of overheating...
This is the most suspect portion of the article (and please note, Larrin is just reporting the work of others here). I will quote ToddS since he typed it most succinctly: "These type of measurements must be made at least 3 indent diameters from the edge... Also, this type of polishing typically rounds off the specimen near the potting compound interface- so the surface may not be normal to the indenter at those points, which again influences the result."
The indentation diagonal for Vickers micro-hardness testing must be ~20 microns to be considered at all reliable, and the processing (polishing/grinding!) of the specimens for testing typically rounds the last 10-20 microns.
Here is a link regarding microhardness testing:
https://www.hardnesstesters.com/learningzone/articles/common-problems-in-microhardness-testing
One of the biggest problems with Vickers hardness testing is that it relies on optical assessment of the indent by the technician. The article above details how this can be a major source of error, and you can run the calculations to see how far a reading can deviate from reality if the technician's measurement is off. But even setting that aside: look at the images in the article presenting the micro-hardness readings of ground blade edges (supplied to Larrin by a fellow named Roman Landes; the data has not been rigorously analyzed). The readings were not performed in duplicate, much less triplicate, and in the sample showing a drop of 5 Rc, one measurement sits at ~30 microns from the edge and the other at ~80 microns,
and those are the only two readings that deviate from the rest of the edge. Landes does not specify the amount of force used during sharpening, shows only one steel (8660), and he is far from an unbiased source.
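Here is what "run the calculations" looks like concretely. Vickers hardness is HV = 1.8544 F/d² (F in kgf, d = mean indent diagonal in mm), so the reading goes as the inverse square of an optically measured length. The 100 gf load and 20 µm diagonal below are illustrative values of my choosing, picked to match the ~20 µm diagonal discussed above:

```python
# How optical reading error propagates in Vickers microhardness:
#   HV = 1.8544 * F / d^2   (F in kgf, d = mean diagonal in mm).
# Illustrative values: a 100 gf (0.1 kgf) load giving a ~20 um diagonal.
def vickers_hv(force_kgf: float, diag_mm: float) -> float:
    return 1.8544 * force_kgf / diag_mm ** 2

F = 0.1             # kgf (100 gf load)
d_true = 0.020      # mm (true 20 um diagonal)
d_misread = 0.019   # mm (technician reads just 1 um short)

hv_true = vickers_hv(F, d_true)
hv_misread = vickers_hv(F, d_misread)
error_pct = 100 * (hv_misread - hv_true) / hv_true
print(f"true HV {hv_true:.0f}, misread HV {hv_misread:.0f} ({error_pct:+.1f}%)")

# The usual spacing rule: stay at least ~3 diagonals from any free edge.
min_edge_distance_um = 3 * 20   # ~60 um for a 20 um indent
print(f"minimum edge distance: ~{min_edge_distance_um} um")
```

A 1 micron misreading of a 20 micron diagonal shifts the result by roughly 10%, and the 3-diagonal spacing rule puts the nearest valid indent at ~60 microns from the edge, which is exactly why a reading at ~30 microns should never have been reported.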
So what is presented are microhardness readings from specimens sharpened in a way lacking specific (i.e. fundamental) detail, which, if processed properly, were machine polished/ground to remove any defects on the surface; you can even see the rounding of the edge in the images. The lowest hardness reading was taken
within the region deformed by specimen processing, and the second reading appears to ALSO fall too close to the edge to give a proper reading. Those two readings would be discarded by any scientist of integrity.
If the argument is being made that all of the dis-temper occurs within the first 100 microns of a blade's apex, then measuring it in terms of hardness is
not technically feasible.
On the other hand, Latrobe's 2007 article on over-heating your tools (referenced before Landes' material) deals with industrial production of cutting tools. The micro-hardness readings of the tap have no scale to reference, but the images stress the difference in shading between the regions affected by grinding; Landes did not do this for his images, and I have not been able to confirm the reliability of the 'shading' method. But note that these tools are all produced by machine grinding: the article is about how poor manufacturing techniques produce a poor-quality tool, whereas proper techniques do not. The author notes:
A spinning grinding wheel can generate very high temperatures at the surface of the steel, which is being ground. It is imperative that the wheel be clean and dressed, and that the travel speed, feed rate, depth of grind, and fluid application be carefully controlled to prevent the generation of high temperatures and damage to the steel.
Yes, all of that is known. One
can affect the temper through processes that generate heat. But which sharpening processes do or do not generate that degree of heat in knife edges is never established, and machine grinding is how tools like the one in the Latrobe article are produced. If grinding always and everywhere damaged the temper, every such tool being manufactured would be affected; wouldn't that be important to note?
Next, Larrin presents an amateur project by some guys with a sharpness tester, a WorkSharp grinder, a stone, and some bamboo skewers. They test "edge retention" by examining sharpness (on their tester) at baseline, then after 30, 60, and 90 cuts across bamboo skewers. We will ignore the uniformity of the skewers as a reliable medium for now and look at their method:
1) The single most important factor in edge performance is
geometry; this is well known and well established. In this project, the authors
do not assess any difference in geometry between the edges prior to testing or throughout the experiment. Why not? We already noted that SEM is a well-known and reliable means of doing this, but they don't bother looking at the edges produced by their different sharpening techniques. At all. Hmm... Do you happen to know if a WorkSharp will produce a different edge geometry than stone sharpening, based on their method details? That is rhetorical; of course it does. The convexity produced by the WorkSharp is not addressed in the article, nor the level of refinement used on the stone-ground blade. Was the stone-ground blade stropped? Probably not. No details provided.
2) The authors assert that water-stones keep the blade cool, but again the published studies on the subject indicate that the use of liquid does not actually affect temperature so much as enhance lubrication and removal of swarf - the authors seem to be relying on internet truths for their premises.
3) The authors admit to being amateurs at sharpening. Did they happen to determine the level of burr formation after sharpening, prior to testing? They mention that the WorkSharp edge starts sharper but think it degrades much more rapidly... OR they made a burr and lost it. The image they present of the sharpness-tester results on the stone-ground blade demonstrates that the authors lack any type of quality control: the edge produced by the stone is far from uniform. They start with a sharp knife off the grinder and a dull knife off the stone, then try to compare the degree of change in sharpness, as if starting from completely different baselines is at all valid. Imagine a CATRA test where you start with one blade at maximum sharpness and another at 50% sharpness: the amount of sharpness lost by the sharp blade will far exceed that of the already partially dull blade
every time, due to how edge geometry changes as the edge degrades.
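The baseline problem can be made concrete with a toy model. This is purely my own illustration, not data from any cited test: assume both edges are identical steel degrading at the identical exponential rate, differing only in starting sharpness. The "sharpness lost" numbers still diverge wildly:

```python
# Toy illustration (my own assumption, not from any cited test): model
# sharpness decay as exponential in cut count. Two blades with the SAME
# steel and SAME decay rate, but different starting sharpness, lose very
# different absolute amounts, so comparing "sharpness lost" across
# unequal baselines says little about the edges themselves.
from math import exp

def sharpness(s0: float, cuts: int, k: float = 0.02) -> float:
    """Exponential decay with identical rate k for both blades."""
    return s0 * exp(-k * cuts)

sharp_start, dull_start = 100.0, 50.0   # arbitrary sharpness units
for cuts in (30, 60, 90):
    loss_sharp = sharp_start - sharpness(sharp_start, cuts)
    loss_dull = dull_start - sharpness(dull_start, cuts)
    print(f"{cuts} cuts: sharp blade lost {loss_sharp:.1f}, dull blade lost {loss_dull:.1f}")
```

In this model the sharper blade always loses exactly twice as much absolute sharpness, despite the two edges being physically identical, which is the flaw in comparing degradation from mismatched baselines.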
The project by these guys is basically a nice set-up for improvement, but the results are
worthless. There is no science here.
So Larrin's article, while a nice summary of some of the "work" that has been done on the subject, basically lacks any evidence indicating that power-sharpening impacts edge performance or heat-treatment in any way whatsoever. *shrug*