Final == Good

Here’s an interesting article that is totally, completely 180 degrees wrong. I’ve said this before, and I’ll say it again, and I’ll keep saying it: final should be the default. Java’s mistake was not that it allowed classes to be final. It was making final a keyword you had to explicitly request rather than making finality the default and adding a subclassable keyword to change the default for those few classes that genuinely need to be nonfinal. The lack of finality has created a huge, brittle, dangerously breakable infrastructure in the world of Java class libraries.

The latest shot from the final-haters is a false claim that finality somehow prevents unit testing. To paraphrase Henry S. Thompson, I hate to use a direct negative, but no! There is nothing in finality that prevents unit testing, and I don’t know why people claim it does. I’ve had zero trouble testing final classes in my own work. I suppose it makes writing mock classes a little trickier, but I’m not sure that’s a bad thing. I much prefer to test the real classes and the real interactions that show what really happens rather than what the mock designer thinks will happen. Bugs aren’t always where you expect them to be. And even if finality did somehow interfere with unit testing, breaking the API to support the tests is a clear case of the tail wagging the dog. The tests exist to serve the code, not the other way around.
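
For instance, here’s a minimal sketch of what I mean (the Money class and its test are hypothetical): a final class is constructed and exercised in a JUnit test exactly like any other class.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // A final value class; the final modifier changes nothing about
    // how the class is instantiated or exercised in a test.
    final class Money {
        private final long cents;
        Money(long cents) { this.cents = cents; }
        Money plus(Money other) { return new Money(cents + other.cents); }
        long inCents() { return cents; }
    }

    public class MoneyTest {
        @Test
        public void additionAccumulates() {
            Money total = new Money(150).plus(new Money(250));
            assertEquals(400, total.inCents());
        }
    }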

By way of contrast, although I’m careful to unit test subclassing for my nonfinal classes, I rarely encounter other projects and libraries where that’s done. I’d venture to say that classes that allow their methods to be overridden rarely test that scenario in any way at all. Ditto for testing protected methods.

I will back up a little bit. It’s really only overriding methods that bothers me. I don’t have any particular objection to adding methods to a subclass. Probably the default should be to make all methods final unless they’re explicitly tagged as overridable. If that were done, you’d rarely need final on classes. However, methods should be allowed to be overridden only after much careful thought, planning, and testing.
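
As a sketch of what that default would look like in practice (the class is hypothetical), here’s a class whose public algorithm is locked down and which exposes exactly one deliberately designed override point:

    // The overall algorithm is final; one hook is deliberately
    // documented and exposed for subclasses to override.
    public class ReportGenerator {

        public final String generate(String data) {
            return header() + format(data) + footer();
        }

        private String header() { return "=== Report ===\n"; }
        private String footer() { return "\n=== End ===\n"; }

        // The single designed-for-inheritance extension point.
        // Subclasses may change the formatting, and nothing else.
        protected String format(String data) {
            return data;
        }
    }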

One final point: final is the safe, conservative choice. Should you mark a class or method final, and later discover a need to subclass/override it, you can remove the finality without breaking anyone’s code. You cannot go the other way. Once you’ve published a class that’s non-final, you have to consider the possibility that someone, somewhere is subclassing it. Marking it final now risks breaking people’s running code and working systems.
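
To make the asymmetry concrete (the classes are hypothetical): suppose version 1.0 of a library ships a non-final class and some client subclasses it. Adding final in 2.0 breaks that client; taking final off would have broken no one.

    // Library, version 1.0: nobody thought about inheritance.
    public class Parser {
        public String parse(String input) { return input.trim(); }
    }

    // Client code, compiled against 1.0:
    class LoggingParser extends Parser {
        @Override
        public String parse(String input) {
            System.err.println("parsing: " + input);
            return super.parse(input);
        }
    }

    // Library, version 2.0 declares: public final class Parser { ... }
    // Now LoggingParser no longer compiles, and already-compiled copies
    // fail at link time. Going from final to non-final instead would
    // have required no changes anywhere.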

One of the principles of extreme programming is to make the simplest change that could possibly work. Final is simpler than non-final. Final commits you to less. If you need a class to be non-final, fine. But please don’t make classes non-final by default.

20 Responses to “Final == Good”

  1. Martin Boel Says:

    final commits the API designer to less, but puts limits on how far the user of the API can extend it. If you feel that usage of your code in unanticipated ways is a problem, you like final; if you don’t, you don’t. In Open Source projects the final keyword is not a big problem, because the user of the API can remove it if needed.

  2. verisimilidude Says:

    As a professional writer I am sure you spew words at the speed of a firehose, but really “risks breakign” and “careful hought”. And I failed the proof-reading test at the college paper!

    You left out of the discussion the question of “subclassing” data by shadowing the name of a non-private data member in the parent. And the whole question of whether to subclass means to refine/redefine function or to extend function.

    Many of the GoF patterns use subclassing only as a way to subvert the type system. As in “I am only allowed to use a class of type X here, so I will make this functional class look like an X”. When you start using Python, which mandates subclassing only as a way to extend functionality, you start seeing things differently. For implementing something like the State pattern in Python you use “duck-type” inheritance [if it quacks like a duck…]. As long as the proper function is defined you can pass in a class of any ‘type’ and get a successful execution; it doesn’t have to be descended from a “State” parent type. I see no inherent reason why this couldn’t be checked by a compile process to allow type safety – with appropriate syntax of course. Using inheritance just muddies things.
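
    In Java terms, the compile-checked version of that duck typing is an interface rather than a common parent class. A hypothetical sketch:

        // The "quack" becomes an interface; any class that implements
        // handle() qualifies, with no shared State superclass required.
        interface State {
            State handle(String event);
        }

        final class Idle implements State {
            public State handle(String event) {
                return "start".equals(event) ? new Running() : this;
            }
        }

        final class Running implements State {
            public State handle(String event) {
                return "stop".equals(event) ? new Idle() : this;
            }
        }

        class Machine {
            private State state = new Idle();
            void fire(String event) { state = state.handle(event); }
        }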

  3. Anon Says:

    Dude, final shouldn’t even be allowed. Subclasses should be able to override anything; that’s the whole damned point, to extend the class in ways the original author couldn’t anticipate. Unless you can see the future, you haven’t a clue what my needs are, so making “anything” final puts me in a straitjacket and makes your code less usable. Final sucks, you couldn’t be more wrong, and it has nothing to do with unit testing, it has to do with OO. All methods should be public virtual, all instance variables should be protected. Oh and “The lack of finality has created a huge, brittle, dangerously breakable infrastructure in the world of Java class libraries.”, B.S., Java’s library sucks because it’s poorly designed, not because of a lack of finality.

  4. Isaac Gouy Says:

    “Probably the default should be to make all methods final unless they’re explicitly tagged as overridable.”

    Maybe you should start using C#

  5. Rich Says:

    that’s the whole damned point, to extend the class in ways the original author couldn’t anticipate. Unless you can see the future, you haven’t a clue what my needs are

    Believe it or not, part of being a good engineer is anticipating how your product will be used and protecting against unintended consequences. No, I can’t anticipate every contortion you will put the class through. But I should have considered the potential problems that could arise in certain cases. In other words, it’s probably not necessary to lock the whole class under protective covers — just the sharp edges and hot surfaces that may cause pain when touched. ‘Final’ serves this purpose well.

    ‘Final as a default’ makes good sense, but it’s only an initial setting. Ideally, a class author spends some time evaluating what really needs to remain final for the release version of the class. This is a far more reliable approach than starting with non-final defaults.

  6. martin Says:

    I like final. I like to make contracts and let the compiler check them. It eases refactoring.
    But too many contracts can really hurt. For me, contracts in Java are well balanced.
    I am happy I do not need to program in C++ or VB.

  7. Ed Davies Says:

    I’ve been taking this point a little further; in code I’ve written recently my convention is that all classes are either final or abstract.

    I look at abstract classes as being like interfaces in that they exist primarily to provide a specification. The difference is that abstract classes can carry along some implementation which can be useful to make the specification richer (i.e., including behaviour, not just API syntax) and also to allow sharing of common implementation.

    The point of requiring classes to be either abstract or final is to avoid cases where two objects meet where both are of class A but one is also of class B, derived from A. The reason for avoiding this is that otherwise the semantics of equals and hashCode get quite muddled. See, for example, the Java Q&A article in the May 2002 Dr Dobb’s Journal, which ties itself in knots on the point.

    At the very least, I would suggest that it is vastly easier and safer to make all classes which implement equals be final.

    As ERH says, it’s much better to take final off a class than to put it on. Rather than just delete the word “final”, though, perhaps it would be better to split the class into an abstract base class and the actual implementation class – even if the implementation is a fairly hollow shell that just passes a few calls (the constructors, mostly) up to the base class. The ABC should contain a local implementation of equals under a different name (fooEquals) which should be called by equals in the implementation class after it’s checked that the object being compared with is of the right class, i.e., the implementation class.
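
    A rough sketch of that split (class names hypothetical):

        abstract class AbstractPoint {
            final int x, y;
            AbstractPoint(int x, int y) { this.x = x; this.y = y; }

            // The local implementation of equals under a different name:
            // it compares state only and does no class checking itself.
            protected final boolean pointEquals(AbstractPoint other) {
                return x == other.x && y == other.y;
            }
        }

        final class Point extends AbstractPoint {
            Point(int x, int y) { super(x, y); }

            // equals checks for the implementation class, then delegates
            // the field comparison to the abstract base class.
            @Override
            public boolean equals(Object o) {
                return o instanceof Point && pointEquals((Point) o);
            }

            @Override
            public int hashCode() { return 31 * x + y; }
        }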

  8. Josh Allen Says:

    I think experience has proven that the original idea of reuse by inheritance was a lie. In practice inheritance is useful when either a) one development group codes the entire inheritance hierarchy, or b) the code is planned and designed for inheritance.

    This has been a disappointment to the OO community, and the final-haters, as you call them, haven’t understood this yet.

  9. S. David Pullara Says:

    I don’t understand why this has apparently become such a contentious topic. Personally, I use final. A lot. All classes are final unless they are specifically designed to be subclassed. All parameters and instance/class/local variables are final unless they need to be otherwise.

    And yet, I’ve never had any problem testing any of my classes, and I do test all my production code. In fact, I believe it has made my code far safer and more stable. Nor have I ever had any problem reusing my classes, maintaining the project, or extending its functionality. It’s been my observation that some people look to language solutions when really they should reconsider their design.

    Designing for inheritance is not as easy as some seem to think. Besides, if you can’t subclass, use composition.
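
    For example, a hypothetical sketch of extending a final class by wrapping it:

        // Greeter is final, so it cannot be subclassed...
        final class Greeter {
            String greet(String name) { return "Hello, " + name; }
        }

        // ...but composition provides the extended behavior anyway,
        // without depending on any of Greeter's internals.
        class ShoutingGreeter {
            private final Greeter delegate = new Greeter();

            String greet(String name) {
                return delegate.greet(name).toUpperCase() + "!";
            }
        }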

  10. Brian Slesinsky Says:

    There seems to be an assumption here that you can’t talk to the library developer, or if you can, they won’t listen or respond in a reasonable amount of time. That’s true of many library developers, including Sun, but not of everyone. If the library developer values communication and responds quickly, they don’t need to anticipate every possible customer need. When someone wants a new way of subclassing, the developer can write the unit test and release a new version that supports subclassing in a way that’s officially supported and won’t break in the next release. (Or alternately, explain a better way to do it.)

    It seems to me that the real issue is with library developers who aren’t responsive to their customers’ needs or are too slow in responding to them. Maybe that’s somewhat idealistic, and learning to listen to your customers isn’t easy, but I still think it’s a better approach. And with open source you can always patch the source if upstream isn’t responding fast enough.

  11. Isaac Gouy Says:

    Whether final classes are good, bad or indifferent – it seems like JMockit provides a way to create mocks for final classes.

    https://jmockit.dev.java.net/

  12. batch4j Says:

    You can change any field, private or not, using the reflection API; that is, even if something is marked final or private you can bypass the protection.

    See the code

    public static void hook() throws NoSuchFieldException, IllegalAccessException
        {
        // Loader and the static field loader come from the post linked below
        ClassLoader myLoader = new Loader(null);                // Create a loader with no parent
        ClassLoader cl2 = ClassLoader.getSystemClassLoader();   // Get the system class loader
        loader = cl2;                                           // Remember the current loader

        // Walk up the delegation chain until we reach the root loader (the
        // one whose parent is null); that is where our class loader hooks in
        for (ClassLoader aux = cl2.getParent(); aux != null; aux = aux.getParent())
            {
                cl2 = aux;
            }

        // cl2 now holds the root loader. Hack it by grabbing its private
        // "parent" field and substituting our own loader for the parent.

        Class clase2 = cl2.getClass();

        // Walk up the class hierarchy to java.lang.ClassLoader, where
        // the "parent" field is actually declared
        for (; !("java.lang.ClassLoader".equals(clase2.getName())); clase2 = clase2.getSuperclass())
            {
                // nothing to do in the body; we only need the declaring class
            }

        java.lang.reflect.Field parent = clase2.getDeclaredField("parent");
        parent.setAccessible(true);
        parent.set(cl2, myLoader);
        }
        

    See the following post.

    http://weblogs.javahispano.org/page/batch4j?entry=proteccion_de_bytescodes_ii

  13. The Cafes » Eliminating Final Says:

    […] All the hoohaw over finality, its goodness or badness, and whether or not it should be the default, suggests it’s worth exploring the background. Why do I feel so strongly that final should be the default (at least for methods) and what changes could be made to modify this belief? […]

  14. Michael Feathers Says:

    Elliotte said: “The latest shot from the final-haters is a false claim that finality somehow prevents unit testing. To paraphrase Henry S. Thompson, I hate to use a direct negative, but no! There is nothing in finality that prevents unit testing, and I don’t know why people claim it does. I’ve had zero trouble testing final classes in my own work. I suppose it makes writing mock classes a little trickier, but I’m not sure that’s a bad thing. I much prefer to test the real classes and the real interactions that show what really happens rather than what the mock designer thinks will happen. Bugs aren’t always where you expect them to be. And even if finality did somehow interfere with unit testing, breaking the API to support the tests is a clear case of the tail wagging the dog. The tests exist to serve the code, not the other way around.”

    And the code exists to serve us. The code can’t serve anyone if it’s wrong or if it’s hard to have confidence in your modifications. I visit team after team that has trouble testing anything in isolation because of final, sealed, and non-virtual functions in other languages (the C++ folks have it particularly bad, and there are no tools in sight for them). Frankly, development in those code bases is a pain in the ass, but you really only notice this after you’ve worked in a code base developed using TDD and you can run thousands of unit tests in minutes.

  15. James Abley Says:

    I wonder if Elliotte has read Michael Feathers’ book Working Effectively with Legacy Code? Having read it, it seems to me that Elliotte and Michael are approaching this from different poles. Elliotte has the perspective of a developer who thinks in terms of exported APIs and doesn’t want people misusing them. Michael is talking about (I think) making changes to code which wasn’t developed using TDD or lacks test coverage. The techniques that Michael talks about definitely aren’t as effective or easy to apply if classes are final.

  16. DougHolton Says:

    “One final point: final is the safe, conservative choice. Should you mark a class or method final, and later discover a need to subclass/override it, you can remove the finality without breaking anyone’s code. You cannot go the other way.”

    Except that it’s the exact opposite case for the users of your library. And who is more important? Time after time I see in C# people having to change methods from the default final to virtual because they didn’t consider some use case or functionality that the users wanted and could have had if it were not for a method being final, or more often having to work around it because the author of the library (including Microsoft) takes forever to change it or will not change it.

    Virtual should be the default, as it is in Java. Instead we are stuck in .NET with Microsoft’s decisions due to lack of forethought in the design of a library (such as no common “number” interface or type, making generics useless for numerics) or the lack of any hooks for overriding behavior.

  17. DougHolton Says:

    I agree though with your followup post that Design by Contract is a good middle ground in a way. Unfortunately it will be years, if ever, before Java or C# get that feature. Instead you can of course use Eiffel, or use a language that lets you implement DbC functionality yourself, such as Ruby, Python, or Boo.

  18. The Cafes » RatJava Says:

    […] Inheritance is a powerful tool, but it takes extra work. Most estimates are that it takes about three times as much work to design a class for inheritance as one that’s not extensible. It requires a lot more thought and a lot more documentation as to exactly what subclasses can and cannot change. Given that most programmers don’t ever think about inheritance when designing classes, extensibility shouldn’t be the default. […]

  19. OOP is a tool at Fiat Developmentum Says:

    […] one of the ‘communities,’ someone had a sarcastic response to the Final == Good article: To boil the article down: OOP is hard. Let’s make it as non-object-oriented as […]

  20. Kamran Says:

    “final” is for:
    1- status-quo lovers.
    2- narcissistic and psychic programmers who can see their code is made to last forever.
    3- security freaks

    final = bad = bad = bad