On CIL and coding

Is it bad that whenever I code C# I can see exactly the IL that will be produced? Should I start developing with ilasm? Hmm…


10 Replies to “On CIL and coding”

  1. It’s not bad. Maybe a bit unusual, but not bad.

    I would enjoy seeing a small app written directly in IL. Might make a good intro-to-IL article.

    Can you send me your email address?
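    For the curious, a minimal stand-alone program written directly in ILAsm might look something like this (a sketch only — the assembly and method names are my own choice, and real ilasm output carries more metadata):

    ```
    .assembly extern mscorlib {}
    .assembly Hello {}

    .method static void Main() cil managed
    {
        .entrypoint
        .maxstack 1
        // Push the string and call Console.WriteLine(string)
        ldstr "Hello from raw IL"
        call void [mscorlib]System.Console::WriteLine(string)
        ret
    }
    ```

    Assembling it with ilasm and running the resulting executable should print the greeting.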

  2. Nah, but if you start seeing jit’ed code, especially heavily optimized jit’ed code, you might want to take a small break.

    It’s bad if you actually think your C# code is easier to read when you indent it according to what the stack depth will be at that point of execution.

  3. It only becomes bad when you begin avoiding certain constructs because they produce CIL opcodes that you don’t “like” — for instance, preferring an infinite while loop with a break over a for loop.

  4. @Bill:

    I do prefer > over >= (ditto for < over <=) because of the IL sequences. 🙂

    (a > b): a; b; cgt;
    (a >= b): a; b; clt; ldc.i4.0; ceq;

    For floating-point expressions there isn’t much of a choice, but for integers (a > 0) produces shorter IL than (a >= 1).
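    To illustrate the point above, here is a rough sketch of the IL a C# compiler emits for the two comparisons (locals and surrounding code simplified; exact output varies by compiler version and optimization settings):

    ```
    // bool r1 = a > b;   -- a single comparison opcode
    ldloc.0      // load a
    ldloc.1      // load b
    cgt          // 1 if a > b, else 0

    // bool r2 = a >= b;  -- computed as !(a < b)
    ldloc.0      // load a
    ldloc.1      // load b
    clt          // 1 if a < b, else 0
    ldc.i4.0
    ceq          // negate the previous result
    ```

    There is no cge/cle opcode for this pattern, which is why >= and <= go through the extra negation.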

  5. @Chris:

    I’m surprised the compiler doesn’t take care of that, at least for comparisons with constants.

  6. Even when you’re coding anonymous methods/lambdas and iterators? That would be an impressive skill.

  7. @Gabriel

    The general idea in the CLI is that the individual language compilers shouldn’t optimize much, because the JIT compiler does the optimizations.

  8. @RichB: When coding anonymous delegates, yes I can. I have coded iterators (I assume you are talking about yield return) but not enough to know the CIL generated there. As for lambdas, I haven’t used that feature at all.

Comments are closed.