Super Steels vs Regular Steels

Status
Not open for further replies.
Okay, I get that.

I doubt Busse would send out a sample of their proprietary heat-treated steel for metallurgical analysis and testing, independent of their knife design.

Nor would such a test change real world results already reported by members here. I like science, but there is only so much you can learn in a vacuum.
I agree, but the real value in scientific testing is the elimination of variables (as much as possible anyway). Then you get more reliable results that are repeatable.
Ultimately though, each person needs to use their knives and determine what works best for them and their knife needs and typical usage, but the scientific testing should provide a good indication of the steels that will work best for their individual preferences and usage.
 

Sort of.

A view from a systems engineering point of view...

Scientific materials testing only makes sense if one has done the pre-work necessary to produce a narrowly constrained definition of a particular behavior of the material and (this is even harder) the pre-work necessary to know that this narrowly defined material behavior is directly correlated with some performance outcome that you want to achieve. This last bit is the really hard part, as (from a systems engineering perspective) you need to account for all of the ways the user is going to perform the task (the skier is a part of the ski/snow/skier system) and the conditions under which the task is going to be performed (the snow is a part of the ski/snow/skier system).

In knife-cutting discussions, we almost never have sufficient definitional rigor to adequately describe the type of material to be cut, the techniques used to make the cuts, and the conditions under which the cuts are being made. The elimination-of-variables approach of materials science makes sense if and only if the narrowly defined testing criteria map directly onto the modalities that we know will be seen in actual usage. Elimination of variables in actual usage is very hard.

Another way to say this is that performance attributes like "toughness" and "edge retention" have different meanings in a materials engineering context than in a systems/performance engineering context. In the former, toughness can be equated with a test like, say, the Charpy test, edge retention with the CATRA test, hardness with the Rockwell test, and so on. But in actual usage, toughness needs to be understood in terms of the sorts of impacts the knife receives, the materials encountered, and the sorts of deformations typically seen in that particular use case. Different use cases might generate different deformations or fractures and, more to the point, one use case may be more closely modeled by the Charpy test than another. The CATRA test establishes edge retention for a certain kind of cut in a certain kind of media and is more predictive for actual use cases in which the cutting type and cut medium are similar.
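To put that in concrete terms, here's a toy sketch (every steel name, score, and weight below is made up, not real test data): lab tests give each steel a vector of materials-level scores, while a use case acts like a weighting over those scores, so the "best" steel can flip depending on which use case you weight for.

```python
# Toy illustration -- all numbers are hypothetical, not real test data.
# Lab-style tests give each steel a vector of normalized materials scores;
# a use case is modeled as a weighting over those scores.

steels = {
    "SteelA": {"toughness": 0.9, "wear": 0.4, "hardness": 0.6},
    "SteelB": {"toughness": 0.5, "wear": 0.9, "hardness": 0.8},
}

use_cases = {
    # a chopper is modeled better by impact toughness (Charpy-like),
    # a slicer by abrasive wear resistance (CATRA-like)
    "chopping": {"toughness": 0.7, "wear": 0.1, "hardness": 0.2},
    "slicing":  {"toughness": 0.1, "wear": 0.7, "hardness": 0.2},
}

def score(steel, weights):
    # Weighted sum of the lab scores for one use case.
    return sum(w * steel[attr] for attr, w in weights.items())

for case, weights in use_cases.items():
    best = max(steels, key=lambda name: score(steels[name], weights))
    print(case, best)  # the "winner" flips with the use case
```

The point of the sketch is only that a single lab metric is one coordinate of performance; which steel "wins" depends on the weighting the use case imposes.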

I believe Larrin understands the distinction I'm making, which is why he correctly has said that he tests steel, not knives. Knife testing is like car testing, bike testing, rifle testing, and plane testing. It's in the realm of systems engineering, and the user and medium are all a part of that. Simple version... if you want to know what the best chopper steel (and knife design) is, look at the knives that consistently win chopping contests. That will tell you more than lab results and, more correctly, will direct what you can and can't learn in standardized lab tests.
 
I agree as well, but true scientific testing will be near impossible to do with knives beyond steel composition. The cost involved would be outrageous and not enough people would give a crap about such science. We aren't talking large epidemiological issues here.

One actually has to use a knife in a given steel from a specific company to know if it works for their tasks. Yes, testing can be a guide but I've never seen a knife edge retention or toughness test that I can equate to science.

Use these tests as guidance, buy what you like, and use it for what you do. I think that is the only way to find the right steel.
 
Simple version... if you want to know what the best chopper steel (and knife design) is, look at the knives that consistently win chopping contests. That will tell you more than lab results and more correctly, will direct what you can and can't learn in standardized labs tests.
To be fair, don't competition chopping knives have a short working life? I've heard that they get replaced frequently as they're not made for long-term use.
 
So you believe that Larrin's testing of steels doesn't make sense?
We have many knife testers out there giving such varied results that people have a hard time making any sense of it all. I think that's one of the reasons Larrin wants to do as much unbiased scientifically based testing as possible.
And what if we look at chopping competitions and we tend to see certain people that consistently win?
Perhaps they have great technique coupled with very good edge geometry for the typical contest challenges and the actual steel is not as important of a factor.
 
So you believe that Larrin's testing of steels doesn't make sense?
We have many knife testers out there giving such varied results that people have a hard time making any sense of it all. I think that's one of the reasons Larrin wants to do as much unbiased scientifically based testing as possible.
And what if we look at chopping competitions and we tend to see certain people that consistently win?
Perhaps they have great technique coupled with very good edge geometry for the typical contest challenges and the actual steel is not as important of a factor.

Quoted the first part for truth.

Highlighted the second part based on my own personal experience with my boys when testing knives of the same steel from different makers against one another. We have tested steels, edge grind, TECHNIQUE (nail driver over here that swung a hammer - still do - for about 25 years before nail guns), and anything else we could think of for simple fun. I think there is a lot of weight to your comment, more than some would think. Swinging a large knife or machete is nothing compared to using a 28 oz framing hammer all day or a 3 lb hand sledge.

You learn the geometry of a proper swing, how to sense the sweet spot based on the arc of your own personal swing, etc. I won every time (until they changed to Chinese-made nails) at the fair/rodeo when they had nail driving contests, even with their crappy hammers. Technique is everything.

To be sure: 1) no scientific testing took place, ever 2) I am not claiming my results are anything more than anecdotal 3) we did our own testing for fun 4) your mileage WILL vary.
 
Nice example.
You also learned about Chinese nails of the time!
 
I don't review knives and I don't predict which knife is best for any individual. But we can figure out which steel, heat treatment, edge geometry, sharpening, etc. variables affect different tasks. The better our knowledge becomes the more variables we can understand and the more variables we will add.
 
To be fair, don't competition chopping knives have a short working life? I've heard that they get replaced frequently as they're not made for long-term use.

Sure. But regardless,

I think Larrin's testing makes perfect sense. But as a systems engineer, I understand the difference between materials engineering and systems engineering. Materials science is a critical factor in good systems engineering and you can't make progress without it. And it's refreshing to see his results cut through some of the hype-driven nonsense.

But use case definition is very hard to do rigorously. The fact that different choppers and different skiers and different drivers and different shooters are better or worse than others, and the fact that this makes performance engineering incredibly hard, doesn't mean we can just sweep that aside. Note, I'm not suggesting that Larrin is sweeping it aside either, so please don't read my post as being critical of his work. I'm merely amplifying it, putting it in context, and trying to help explain why terms like "edge retention" or "toughness" are so hard to pin down when doing performance testing.

To answer your question about chopping competitions and the fact that some people are better, the way that is typically done is to test a variety of designs with a variety of users and evaluate the results. Sometimes the results can be evaluated quantitatively. Other situations require qualitative evaluation.

Larrin is right to say that he tests steel, not knives. This is an important distinction.
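A minimal version of that variety-of-designs, variety-of-users evaluation can be sketched like this (users, knives, and numbers are all invented for illustration): have every user run every design, then average each design's results across users, so user-to-user skill differences largely cancel out of the design comparison.

```python
from collections import defaultdict

# Hypothetical crossed test: every user chops with every knife, so averaging
# a knife's results over all users separates the knife effect from user skill.
# (user, knife) -> chops needed to get through a test medium; lower is better.
results = {
    ("alice", "KnifeX"): 10, ("alice", "KnifeY"): 14,
    ("bob",   "KnifeX"): 18, ("bob",   "KnifeY"): 22,
}

per_knife = defaultdict(list)
for (user, knife), chops in results.items():
    per_knife[knife].append(chops)

knife_means = {knife: sum(v) / len(v) for knife, v in per_knife.items()}
print(knife_means)
# KnifeX averages 14.0 chops and KnifeY 18.0: X wins for both users,
# even though bob needs more chops than alice with either knife.
```

The crossed design is the key choice: if each user only tested one knife, the user effect and the knife effect would be hopelessly confounded.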
 
I agree with this, but I guess I just feel like it's over thinking it a little. Of course I am far from a systems engineer!
I also agree that testing steel vs. knives is an important distinction, but also agree that it is a necessary one to advance understanding as it pertains to knives.
After all, he is the only person I know that is determined to test different steels in ways that specifically relate to knives instead of industrial applications.
And kudos for that, Larrin! :thumbsup:
 
I might only be a lowly mechanical engineer, but I don't think anybody here was confusing the difference between systems and material property tests. We know he isn't making judgment calls on knives, only on steel properties as evaluated through his specific tests. We're just wishing to see INFI tested by Larrin in the same manner in which he has tested other steels. That's all. That's enough to be useful in this hobby which is otherwise barren of structured and informed research.

I think you're moving way too fast along the "there's too many variables and inter-dependencies to make a conclusive statement" line of thinking. You absolutely can draw some generalities from the work he is doing. You just need to exercise a little discretion in how applicable they are.
 
It's definitely overthinking it. Nobody here is assuming that Larrin's articles are a macro/system view rather than a micro/materials view.

What he's doing is nearly unique in this hobby, which is to attempt to define steel properties like a researcher, rather than as a layman or bro scientist. There's no need to come in and knock it all down, saying that we need to consider the knife as a whole system and that it all depends on use cases, because that's obviously not what his articles are about.
 
That’s what I’m getting at! Some people make it seem like the end-all-be-all steel. But Nathan used 3V that held up as well as INFI, but will hold a working edge twice as long.

I would like to point out that INFI is tougher than 3V. It also has better edge stability than conventionally heat-treated 3V, which translates into better edge retention in most uses for most users. It took a team of people working on the problem over time to get some of the low-temperature tweaks to 3V to equal the edge stability and gross edge durability of INFI. We called the final tweak Delta 3V, and for most users it is as durable as INFI and has better wear resistance, but when pressed to the limit INFI has higher gross toughness. Of course, S7 probably has higher gross toughness than INFI, but for most people maxing out one property at the expense of others isn't ideal. INFI is a rare steel that combines very good real-world edge retention with very high levels of durability. Delta 3V is an alternative high-toughness steel with similar levels of edge stability/durability. Where one has a little higher total toughness, the other has a little higher edge retention, but I would argue they're both "super steels" in a similar area of the steel spectrum. I would also argue that being a particle metallurgy steel does not inherently make a steel better; it's a necessary evil to maintain the toughness of the steel when you have a significant carbide volume, but it can come at the expense of imperfections in the compaction, which may be one reason why nearly identical PM alloys like 4V and V4E are not exactly the same.

As far as the nail cut test is concerned, a toughness demonstration intended to prove the low-temp tweak didn't ruin the toughness of the 3V turned out to be a pretty useful tool when evaluating heat treat tweaks against other known standards. The edge geometry is tightly controlled to 18 DPS, and the nail and cut technique is controlled well enough to provide repeatable, meaningful measures of edge durability in extreme use. Fail mode and magnitude don't give a numerical outcome but are purely comparative in nature. While it's not as scientific as other, better-controlled tests such as impact testing and abrasion testing, it can begin to measure edge durability in ways other tests can't.
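A comparative, non-numeric outcome like that can still be recorded consistently. As a toy sketch (the fail-mode categories and their ordering here are my own illustrative assumption, not the actual test protocol), one can put observed fail modes on an ordinal scale and compare blades by rank rather than by a measured number:

```python
# Illustrative only -- this ordinal scale is an assumption for the sketch,
# not the actual nail-test protocol.
FAIL_MODES = ["no damage", "rolled edge", "small chip", "large chip", "gross fracture"]

def less_severe(mode_a, mode_b):
    """Return which blade ('A' or 'B') showed the less severe fail mode, or None on a tie."""
    ia, ib = FAIL_MODES.index(mode_a), FAIL_MODES.index(mode_b)
    if ia == ib:
        return None
    return "A" if ia < ib else "B"

print(less_severe("rolled edge", "small chip"))  # blade A: a roll is less severe than a chip
```

An ordinal record like this supports "X held up better than Y" comparisons without pretending to a precision the test doesn't have.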
 
I should clarify that when I said I don't test knives, I didn't mean that I don't do any tests on knives; I mean that I don't compare a Spyderco Delica to a Chris Reeve Sebenza and then call it a comparison between VG-10 and S30V.
 

That's a fair point, but really you have to test the knives themselves, because heat treats are so important.

For example, Nathan makes a great knife, but his testing experience recounted above doesn't match my testing experience. He says D3V is better than conventional 3V. I'd say yes and no. It depends. I've tested Nathan's knives hard, and they are excellent, roughly in the ballpark with INFI and Vanadis 4 Extra on edge stability, and with D3V being in the middle of INFI and V4E for wear resistance. I've tested "conventional" 3V by a maker here on the forum that was extremely soft. He had sold me a Bowie in W2 that couldn't chop clear Doug fir without damage. (Others can heat treat W2 so it can chop nails in half without significant damage.) So he offered a 3V replacement. It was tough steel. No damage on Doug fir, but it bent easily and couldn't hold an edge. It melted around a piece of soft baling wire.

I tested another "conventional" 3V chopper from another maker on the forum and it chipped. It was much worse than D3V from Nathan. Bluntcut did a custom reheat treat on that blade and it turned into a powerhouse performer, exceeding D3V in performance.

I have a heavy chopper in A8 (mod) -- like INFI -- that went through an elaborate, custom heat treat; and it outperforms INFI in every category.

So you can test steel blanks at any given hardness, geometry and heat treat and get important information; but that doesn't mean its tested performance will be anything close to actual knives heat treated by various makers.
 
What someone needs to do is take some of their high-end production knives, chop them up, and send them to Larrin.

I have an M390 Hinderer that I hate. Maybe something can be arranged. Haha
 
I think this is the first time someone has assumed I forgot about the importance of heat treatment.

I did not assume that, and I'm not criticizing you. I fully realize that you understand the importance of heat treat.

But the point remains that testing steel blanks does not necessarily translate -- and usually doesn't -- into the performance of actual knives as produced by the maker. What really matters to those of us who use knives is the performance of the knives we're buying.
 