In Bluebyte’s blog, Joe Duffy has written an extremely interesting essay about premature optimization, where he concludes:
What I do advocate is thoughtful and intentional performance tradeoffs being made as every line of code is written. Always understand the order of magnitude that matters, why it matters, and where it matters. And measure regularly! I am a big believer in statistics, so if a programmer sitting in his or her office writing code thinks just a little bit more about the performance implications of every line of code that is written, he or she will save an entire team that time and then some down the road.
It’s a very good read overall, and very well reasoned, but I disagree with the approach entirely. I prefer simplicity first: make it work and make it simple, and then, only where necessary, make it fast.
This approach fails to recognise that even in low-level programming, the scarcest resource is, in the vast majority of cases, neither CPU time nor memory, but programmer time. That is what we need to optimize for: carefully considering the performance characteristics of every line of code might optimize CPU time, but it wastes programmer time on a task that is totally irrelevant for 90% of the code. On the other hand, I do agree with one point: “Given the choice between two ways of writing a line of code, both with similar readability, writability, and maintainability properties, and yet interestingly different performance profiles”. No reason to make something slower for the sake of it, everything else being equal, right? Right, but that situation is very rare.
The example with LINQ is emblematic: I will choose the LINQ version any day, because it’s simpler to write, simpler to read, and clearer. If the profiler says it doesn’t meet the performance requirements, then and only then will I kill every single cycle I can in that code. In my experience this approach has resulted in faster and simpler low-level code.
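Duffy’s original LINQ snippet isn’t reproduced here, but the tradeoff is the same in any language with a declarative query style. As a sketch of the contrast, here is an analogous pair in Java streams (my own illustrative example, not Duffy’s code): a declarative pipeline to start with, and the hand-rolled loop I would only switch to if the profiler demanded it.

```java
import java.util.Arrays;

public class SumSquares {
    // The version to write first: declarative, reads like the problem statement.
    static int sumSquaresOfEvensDeclarative(int[] xs) {
        return Arrays.stream(xs)
                     .filter(x -> x % 2 == 0) // keep even numbers
                     .map(x -> x * x)         // square each one
                     .sum();
    }

    // The version to write only after profiling: a plain loop with no
    // stream machinery, semantically identical to the pipeline above.
    static int sumSquaresOfEvensLoop(int[] xs) {
        int sum = 0;
        for (int x : xs) {
            if (x % 2 == 0) {
                sum += x * x;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6};
        // 2*2 + 4*4 + 6*6 = 56 for both versions
        System.out.println(sumSquaresOfEvensDeclarative(data));
        System.out.println(sumSquaresOfEvensLoop(data));
    }
}
```

Both functions compute the same result; the first is what I would ship by default, and the second is the kind of rewrite I would reach for only once measurement shows the declarative form is a real bottleneck.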