Java is less magical than C#

I have been programming in C# for several years now, and recently made the switch to Java (at least for now). I noticed that Java, as a language, is “less magical” than C#.

By that I mean that in C#, things are usually done for you, behind the scenes, magically, while Java is much more explicit in the toolset it provides. Take thread-local storage, for example. The concept is identical in both languages – there is often a need for a copy of a member variable that’s unique to the current thread, so it can be used without any locks or fear of concurrency problems.

The implementation in C# is based on attributes. You basically take a static field, annotate it with [ThreadStatic], and that’s it:

[ThreadStatic]
private static ThreadUnsafeClass foo = null;

private ThreadUnsafeClass Foo
{
    get
    {
        if (foo == null)
            foo = new ThreadUnsafeClass(...);
        // no other thread will have access to this copy of foo
        // note - foo is still static, so it will be shared between instances of this class.
        return foo;
    }
}

How does it work? Magic. Sure, one can find the implementation by digging deep enough, but the first time I encountered it I simply had to try it to make sure it actually worked, because it seemed too mysterious.

Let’s take a look at Java’s equivalent, ThreadLocal. This is how it works (amusingly enough, from a documentation bug report):

public class SerialNum {
    // The next serial number to be assigned
    private static int nextSerialNum = 0;

    private static ThreadLocal<Integer> serialNum = new ThreadLocal<Integer>() {
        protected synchronized Integer initialValue() {
            return new Integer(nextSerialNum++);
        }
    };

    public static int get() {
        return serialNum.get();
    }
}
No magic is involved here – get() fetches the value from a map stored on the calling Thread object (source code here – but the real beauty is that it’s available from inside your IDE without any special effort to install it).
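To see the per-thread behavior described above in action, here is a minimal, self-contained sketch. The class and method names are mine, and it uses the Java 8 ThreadLocal.withInitial helper (newer than the API shown above) purely for brevity:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadLocalDemo {
    // Each thread lazily gets its own StringBuilder - no locks needed.
    private static final ThreadLocal<StringBuilder> buffer =
            ThreadLocal.withInitial(StringBuilder::new);

    static Map<String, String> run() throws InterruptedException {
        final Map<String, String> results = new ConcurrentHashMap<>();
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            buffer.get().append(name);                  // mutate "our" copy
            results.put(name, buffer.get().toString()); // record what this thread saw
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return results; // each thread saw only its own appends
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Both threads append to "the same" static field, yet neither ever sees the other’s writes – exactly the per-thread map lookup described above.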

Let’s look at another example – closures.

In C#, you can write this useful piece of code:

var list = new List<int>();
// find an element larger than 10
list.Find(x => x > 10);

You can also make this mistake:

var printers = new List<Action>();
foreach (var item in list)
  printers.Add(() => Console.WriteLine(item));
Parallel.ForEach(printers, p => p());

An innocent reader might think this prints all the items in list, but it actually prints the last item list.Count times. This is how closures work: the item referred to in the closure is not a new copy of item, it’s the same variable that the loop keeps modifying. A workaround is to introduce a temporary variable, like this:

foreach (var item in list)
{
  int tempItem = item;
  printers.Add(() => Console.WriteLine(tempItem));
}

And in Java? Instead of closures, one uses anonymous classes. In fact, this is how closures are implemented under the hood in C#. Here is the same example, in Java:

for (Integer item : list) {
  final int tempItem = item;
  printers.add(new Action() {
    public void doAction() {
      // can't reference item here because it's not final -
      // that would have been a compilation error:
      // System.out.println(item);
      System.out.println(tempItem);
    }
  });
}

Notice that it’s impossible to make the same mistake and capture the loop variable instead of a copy of it, because Java requires captured variables to be final. So… less powerful than C#, perhaps, but more predictable. As a side note, ReSharper catches the ill-advised capturing of loop variables in C# and warns about it.
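For completeness, here is a runnable version of the anonymous-class pattern above, as a sketch. Action is a hypothetical one-method interface (the post doesn’t define it), and the printers write to a list instead of the console so the result is easy to inspect:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ClosureDemo {
    // A stand-in for the Action interface used in the post (assumed shape).
    interface Action { void doAction(); }

    static List<String> run() {
        List<Integer> list = Arrays.asList(1, 2, 3);
        List<Action> printers = new ArrayList<Action>();
        final List<String> output = new ArrayList<String>();
        for (Integer item : list) {
            final int tempItem = item; // each iteration captures its own final copy
            printers.add(new Action() {
                public void doAction() {
                    output.add("item " + tempItem);
                }
            });
        }
        for (Action p : printers) {
            p.doAction();
        }
        return output; // every element appears once - no "last item" surprise
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Running it yields one entry per element, demonstrating that each anonymous class holds its own copy rather than a reference to a shared loop variable.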

I myself rather prefer the magic of C#, because it saves a lot of trouble. Lambdas, properties, auto-typed variables… all these are so convenient it’s addictive. But I have to give Java some credit, as the explicit way of doing things sometimes teaches you things you just wouldn’t have learned cruising along in C# land.


  1. Ofer Egozi:

    Excellent post. It’s no accident that it’s not the other way around; Microsoft made this a habit long ago. Before .NET came along, there was MFC versus straight Win32 API calls. We used the Win32 API, and whenever we interviewed someone who had only used MFC, you could sense how little understanding they had of how the magic works under the hood – knowledge which can be extremely useful in many cases. It’s the same thread that goes back to whether programmers should learn about operating system internals and CPU logic.

  2. Tomer Gabel:

    Both are valid languages, and both can teach you a lot about better software design. Most (though admittedly not all) of what you call magic in C# is well-documented and well-understood, and a good programmer will drill down and learn how things work regardless of whether it’s C#, Java, Haskell or C99 on an embedded Linux platform.

    In my own opinion each has advantages, and while I prefer C# for writing mass amounts of concise, elegant code in a short time, Java is definitely preferable when it comes to robust software design. For example, despite their reputation I’ve come to rely on checked exceptions as a powerful mechanism for enforcing reliability on the code level. Other subtle differences, such as having to explicitly declare captured variables as final, can reduce a lot of hard-to-spot bugs. The Java collections and executors framework is also much, much more powerful.

    On the other hand C# has a much better type system, and new language versions continually add features to aid robustness and increase productivity. I’m looking forward to preconditions and postconditions, variance rules for type parameters and other goodies. It is an interesting race for sure…

  3. Playing around with PLINQ and IO-bound tasks - .NET Code Geeks:

    [...] threads, but it will only allocate two if it so chooses. This is yet another example of C# being more magical than Java – compared to Java's rich ExecutorService, PLINQ offers less fine grained control. However, further [...]
