Saturday, December 10, 2016

Why Resharper is Evil

There was a post once upon a time called "Why Resharper is Evil", and a person named Hadi commented on that post that Resharper is highly configurable and thus what it does is not 'evil'. I would like to respond to that comment here.

The things that Resharper does are evil, and for one very simple reason. Companies are rolling out Resharper as a part of their development environment and increasingly *requiring* its use. However, it is inevitably set up by the IT division, which installs the defaults, because they're IT - they don't know any better.

New developers are not born with good coding habits or practices. They learn them, just like everything else. However, if on their first job they get a default Resharper installed and are required to use it (or even if they are not required), they will accept it as the 'correct' way to write code. They'll learn their coding standards and style from the tool. Thus, they will learn bad habits.

However, most of the people who know that from experience are retired, in administrative positions, or otherwise indisposed - that is, not actively coding. I love coding. I've passed on more promotion opportunities than you can count because I love coding. So I have the experience AND I still do the work.

THESE ARGUMENTS HAVE BEEN DONE. They've been hashed out before, and thrown out because they don't work. The fact that kids are militantly aggressive in their belief that they already know everything is not germane. Every kid thinks they know everything until they're about 30 and they finally grow up. I was no different, although I figured out what an idiot I was when I was about 25 - I've always been a bit ahead of the curve. Once they figure out that they've been an idiot, they spend the next ten years trying to catch up.

Using var for everything? BAD. Why? Because any context you lose in a program is bad. It will come back and bite you sooner or later. Throwing it out on purpose is just dumb. Throwing it out because a program that is riding on popular opinion tells you to is worse. Twenty percent of your time is spent coding. The other eighty percent is spent debugging. I don't care how many characters of typing you save, you aren't saving anything - you WILL spend more time later on as a consequence. Sure, you can mouse over the function declaration, and if it is properly documented, IntelliSense will tell you the return type - IF it's properly documented. Of course, if you could see the return type right in front of your eyes, you wouldn't need to spend those five seconds looking it up, would you? And trust me, you WILL spend more time looking it up than you could ever conceivably have spent typing it, no matter how long and baroque the type definition is.
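To make the point concrete, here is a minimal sketch in C#. The OrderService class and its GetOpenOrders method are hypothetical, invented purely for illustration:

```csharp
using System.Collections.Generic;

class Order { public decimal Total; }

class OrderService
{
    public IReadOnlyList<Order> GetOpenOrders() => new List<Order>();
}

class Example
{
    static void Run(OrderService service)
    {
        // Reading this a year from now, what is 'results'? A list? An array?
        // Some lazy IEnumerable that hits the database the moment you touch it?
        // You have to hover, or go hunting for the declaration, to find out.
        var results = service.GetOpenOrders();

        // Reading this, you already know. No mouse-over, no five seconds lost.
        IReadOnlyList<Order> orders = service.GetOpenOrders();
    }
}
```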

I've seen the over-use of 'var' justified using the DRY principle. However, that principle categorically states that "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." You will note that there is no place in that definition where it mentions 'not repeating the same sequence of keystrokes if you can avoid it.' More to the point, if you've got three different variable declarations in the same function, all 'var', all assigned to DIFFERENT TYPES, then you have explicitly VIOLATED the DRY principle because you have deliberately thrown away half of the 'knowledge' - the type of the variable. Your code is now declaratively ambiguous.
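A sketch of that ambiguity, with hypothetical helper methods standing in for whatever your codebase actually calls:

```csharp
using System;
using System.Text;

class DryExample
{
    static Guid LoadIdentifier() => Guid.NewGuid();
    static decimal ComputeTotal() => 0m;
    static StringBuilder FormatLabel(decimal total) => new StringBuilder(total.ToString());

    static void Run()
    {
        // Three declarations that read identically, yet each binds a different
        // type. The type - half the 'knowledge' in each declaration - now lives
        // only in the method signatures, nowhere the reader can actually see it.
        var id = LoadIdentifier();      // Guid
        var total = ComputeTotal();     // decimal
        var label = FormatLabel(total); // StringBuilder, not string
    }
}
```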

The same goes for naming conventions. I know that Hungarian notation is currently 'out of style'. However, the problem has never been Hungarian notation. The problem is that a whole legion of styles of it have popped out of the ether that were determined and designed by people who NEVER READ THE BOOK. They just looked at what Hungarian notation appeared to be doing and adapted something Hungarian-like that was more to their personal coding predilections, and then called it Hungarian.

Hungarian notation itself, however, is perfectly valid and highly useful. A naming convention that lets a property MyValue and a local variable or parameter myValue - two different, and perhaps unequal, variants of the same data - be confused by simply 'blowing a shift' (not completely depressing the shift key)? That's EVIL. Because the code will compile fine, run fine, and it's possible that the bug won't manifest for years. When the bug finally does surface, it will be a bugger to track down precisely because NOTHING LOOKS WRONG. With a properly designed Hungarian notation, if that happens it ALWAYS LOOKS EXPLICITLY WRONG. So much so that if you use Hungarian notation and you deliberately use a field instead of an accessor where both exist, you generally comment the reason why - precisely so that somebody who comes after you, who WILL instantly notice that it looks wrong, doesn't 'fix' it and accidentally break what you did on purpose.
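Here is a minimal sketch of the 'blown shift' bug in C#. The Account class and its members are hypothetical:

```csharp
using System;

class Account
{
    private decimal balance;

    // The property and the field are deliberately NOT equivalent:
    // the property rounds, the raw field does not.
    public decimal Balance
    {
        get => Math.Round(balance, 2);
        set => balance = value;
    }

    public string Describe()
    {
        // Intended: Balance (the property). Typed: balance (the field), because
        // the shift key didn't fully register. It compiles fine, runs fine, and
        // nothing here looks wrong - which is exactly why it can hide for years.
        return $"Current balance: {balance}";
    }
}
```

Under a Hungarian-style convention where, say, the backing field were named m_decBalance, that same slip would read `{m_decBalance}` where the code should say `{Balance}` - which looks explicitly wrong at a glance, and would carry a comment if it were ever done on purpose.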

So throwing out Hungarian Notation as 'evil' based on some poor implementations is exactly the same as holding up a bunch of badly made pizzas as 'proof' that all pizza is lousy food.

The same extends to many other things. That a thing 'could' be readonly in a specific context by no means even suggests that it should be. That a condition might 'appear' redundant does not of necessity mean that it is. The fact that a code block 'could' be collapsed into a more concise form does not mean it should be - in fact, that is frequently the exact opposite of the truth. The fact that you 'could' in-line a variable declaration with its initial assignment is true - and acting on it is almost always a bad thing.
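As one example of the 'could be collapsed' case, here is the kind of collapse such tooling typically offers, sketched with hypothetical names:

```csharp
using System.Collections.Generic;

class OrderFilter
{
    static List<decimal> LargeTotals(IEnumerable<decimal> totals)
    {
        // The long form: easy to put a breakpoint on the condition, easy to
        // inspect each value as it goes by, easy to add a log line tomorrow.
        List<decimal> results = new List<decimal>();
        foreach (decimal total in totals)
        {
            if (total > 1000m)
                results.Add(total);
        }
        return results;

        // The collapsed one-liner computes the same list, but stepping through
        // it in the debugger is a very different (and worse) experience:
        // return totals.Where(t => t > 1000m).ToList();  // (needs using System.Linq)
    }
}
```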

Even things that are clearly redundant are not necessarily evil. If you put in a test for a condition that should never occur, that is called *DEFENSIVE CODING*. It *should* never occur. But after five or ten other people have been wading through your code doing other stuff, maybe eventually it will become possible, or even easy. But even if that happens, because you were wise enough to put the redundant check in there in the first place, the problem that would otherwise exist gets nipped in the bud because an exception comes flying out that says, in unkind terms, "HEY!!! THIS ISN'T EVER SUPPOSED TO HAPPEN! WHAT THE HELL IS GOING ON HERE!?"
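A minimal sketch of that kind of defensive check, with hypothetical names:

```csharp
using System;

class Invoice { /* details omitted */ }

class InvoiceProcessor
{
    public void Process(Invoice invoice)
    {
        // "This should never happen" - today. After five or ten other people
        // have reworked the call sites, maybe it can. When it does, this
        // 'redundant' check turns a silent corruption into a loud, immediate,
        // unmistakable failure at the exact point things went wrong.
        if (invoice == null)
            throw new InvalidOperationException(
                "Process() was handed a null invoice. This is never supposed to happen.");

        // ... the real work goes here ...
    }
}
```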

You may have heard of 'assertions'. These are checks that people put in their code to verify that conditions are being met as they should be. Normally, assertions compile out of release builds, but I know that debug builds sometimes make it out there, because I've seen messages of the form "Assertion Failed: " in the wild. Yep, that happened. And that was an assertion. So it was NEVER SUPPOSED TO FAIL! That's why you can't trap assertions. They're not supposed to be trapped. They're supposed to be a figurative slap in the developer's face. "Wake up, for crying out loud! You're screwing this up, here. Fix it before somebody sees what an idiot you are!" That's why redundancy isn't a bad thing. Redundancy would be bad if humans were perfect. We're not, so it's not.
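In C# the same idea looks like this, using System.Diagnostics.Debug.Assert (the Scheduler class and its invariant are hypothetical):

```csharp
using System.Diagnostics;

class Scheduler
{
    public void Reschedule(int retryCount)
    {
        // Not an error path, and not meant to be caught: in a debug build a
        // failed assertion stops everything with "Assertion Failed: ..." - the
        // slap in the developer's face described above. In a release build the
        // call is compiled away entirely.
        Debug.Assert(retryCount >= 0, "retryCount must never be negative here.");

        // ... the real work goes here ...
    }
}
```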

The compiler is going to spit out the same executable no matter how you write the code. The code you write is not for the compiler's use. It is for YOUR use. And it is not for your use NOW. It is for your (and everybody else's) use LATER. The fact is that pretty much every line of code that will ever be written will be debugged many times more often than it is written or re-written, and thus, if you have to err, it is best to err to the benefit of the person doing the debugging - not the editor or yourself. Thus, in the long run, verbose is almost invariably the Best Thing. That, however, is something you don't learn without lots and lots of experience. Which, of course, will be harder to get now, because now everybody's got Resharper tut-tutting them even if they *try* to do the Right Thing.

It seems human nature that every new generation must repeat all the same stupid mistakes for themselves. I accept that. What Resharper has done is codify and automate that stupidity for the ages - and make it easy to not bother learning. When it finally comes back on everybody, they can conveniently deny their own complicity by declaring, "Resharper told me to!" That is the very canonical definition of EVIL.

All my life I've gone by a rule - if I can look at something I wrote even six months ago and *not* ask myself, "What was I thinking?" then I've stopped learning. What I see now are young programmers devoted to Resharper who crank out the exact same thing year after year and remain convinced that it's done right. They've stopped learning. Resharper didn't help them. It didn't make them better. It switched their brains off.

If it were up to me, I would forbid the use of Resharper by any programmer who hadn't been coding for at least ten years *without* it. By then, they'll at least have a clue about how to configure the thing, and an idea of what to ignore and when.

Is Resharper Evil? It's the one thing responsible for more and worse code than anything that has come out of computer science since before the invention of the Atanasoff-Berry Computer. So that would be "all time" as computer science time is reckoned.
