Monthly Archives: February 2013

Dependency Injection Part V: When to use (or not)

Previously we looked at Dependency Injection in detail, examining the advantages and disadvantages and exploring ways to overcome some of the disadvantages. I’d like to wrap up by exploring when to use it and when not to use it.

In general it’s a good idea to use DI when the advantages outweigh the disadvantages, or when the disadvantages can be mitigated. A careful review of the advantages and disadvantages we’ve already discussed should point you in the right direction. There are no hard and fast rules that say “use it here” and “don’t use it there”; we really need to weigh the pros and cons and see how they apply to what we’re trying to build. The decision is an architectural one and generally needs to be made early in the design of an application.

As a rule of thumb, however, using a DI container can be a good choice if you are starting a project and want a standard technique to compose subsystems and orchestrate large-scale construction. This points to another advantage that I didn’t mention before: many of the large-scale structures in an application are singletons, and DI containers facilitate a solution to the Singleton anti-pattern. Rather than controlling singleton instantiation yourself (which is easy to get wrong; common solutions include double-checked locking or enum singletons), the container controls the instantiation of a singleton.
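For contrast, here is a rough sketch of the hand-rolled approaches mentioned above, with invented class names; a container such as Guice or Spring would instead let you declare the binding as a singleton and manage the instance for you:

```java
import java.util.HashMap;
import java.util.Map;

// Enum singleton: the JVM guarantees exactly one lazily created instance.
enum ConfigRegistry {
    INSTANCE;
    private final Map<String, String> settings = new HashMap<>();
    public void put(String key, String value) { settings.put(key, value); }
    public String get(String key) { return settings.get(key); }
}

// Double-checked locking: the classic alternative, and easy to get wrong
// (forgetting 'volatile' breaks it under the Java memory model).
class Cache {
    private static volatile Cache instance;
    private Cache() { }
    static Cache getInstance() {
        if (instance == null) {
            synchronized (Cache.class) {
                if (instance == null) {
                    instance = new Cache();
                }
            }
        }
        return instance;
    }
}

public class SingletonDemo {
    public static void main(String[] args) {
        ConfigRegistry.INSTANCE.put("env", "prod");
        System.out.println(ConfigRegistry.INSTANCE.get("env"));         // prod
        System.out.println(Cache.getInstance() == Cache.getInstance()); // true
    }
}
```

With a container, neither of these idioms appears in your code; the class is just bound in singleton scope and the container hands out the one instance.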

As another rule of thumb, there are times when DI might not be a good choice. If you are working with a large legacy system that does not use DI at all, it could be prohibitively expensive to bolt on; DI is high-level and is best introduced at design time. Plain objects that really are implementation details, such as Lists and Maps, should probably not be injected. And objects created late in a scope’s lifecycle should not be injected (or the scope of the object should be rethought). Finally, if your project is a library or a small application, the overhead of using a framework (indeed, any framework, not just a DI framework) tips the scale in favor of not using one. For software in general it’s good to keep things as simple as possible for as long as possible.
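For the late-created-object case, one common compromise is to inject a factory instead of the object itself. A minimal sketch (all names invented), using a plain Supplier as the factory:

```java
import java.util.function.Supplier;

// The report only exists once the user asks for it, so we inject a
// factory (a Supplier here) rather than the report itself.
class Report {
    private final String title;
    Report(String title) { this.title = title; }
    String title() { return title; }
}

class ReportScreen {
    private final Supplier<Report> reportFactory;

    ReportScreen(Supplier<Report> reportFactory) {
        this.reportFactory = reportFactory;
    }

    String render() {
        Report report = reportFactory.get(); // created late, on demand
        return "Rendering: " + report.title();
    }
}

public class LateCreationDemo {
    public static void main(String[] args) {
        ReportScreen screen = new ReportScreen(() -> new Report("Q1 Sales"));
        System.out.println(screen.render()); // Rendering: Q1 Sales
    }
}
```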



Filed under Software Engineering

Getting Started with Selenium

Selenium is a neat little tool for integration testing with browser automation. If you’re not familiar with it already, check it out. The Selenium IDE can record and play back actions in your browser, with the caveat that it’s normal to have to go back and manually tweak some of the steps it recorded. Here are some notes from when I first started out with it, and some issues I worked through. Hopefully reading this will save you some headaches.

The native file format for test cases in Selenium IDE is HTML, but you can convert Selenium IDE files into unit tests in the language of your choice. The big advantage of this is running automated tests as a part of your build process. For Java, navigate to File -> Export Test Case As -> Java / JUnit 4 / WebDriver. Selenium unit tests need to run on a machine that has graphics capabilities so it can open a browser, but it is possible to configure a headless machine to run Selenium. Some gotchas related to unit testing with Selenium test cases:

  • Tests tend to be brittle. If you start building a complete test suite in code, look into the Page Object pattern; it will make your tests more resilient.
  • If a unit test fails an assertion, the rest of that test will not run, which can hurt if each test does setup and teardown for itself and subsequent tests expect a “clean” state before they start. It’s best not to depend on teardown steps in your tests.
  • If you’re testing with WebDriver and Firefox, then Firefox must be shut down before running Selenium tests, because Selenium currently can’t attach to a running instance and a new instance of Firefox can’t be started if one is already running.
  • Selenium tests must run in sequence on each machine; they can’t run in parallel because of the aforementioned Firefox issue. Parallelized Selenium tests can be achieved with Selenium Grid.

One thing that you generally have to tweak in any test generated by Selenium IDE is the timing of the steps. When you record, there are natural pauses in your activity, but the test case runs each step one after the other as quickly as possible. I had a test case that hit an update button, verified the response, and logged out at the end. However, the JavaScript that would have received the update response stopped running on logout, and verification subsequently failed. You need careful use of andWait / waitFor / pause commands to make sure your app functions correctly under test. Also, if you use a pause or a clickAndWait as the last command of a case (say, to log out at the end of a test case), the case will not actually pause or wait, because the time is added to the beginning of the next step of a test case, not the end of the current command. You can get around this by putting an echo as the last command of the case.
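In Selenium IDE’s native HTML format, that workaround looks something like the following fragment (the commands are real Selenese, but the locators and flow are invented for illustration):

```html
<!-- Wait for the update response before logging out, and end with an
     echo so the final wait actually happens before the case finishes. -->
<tr><td>clickAndWait</td>          <td>id=update</td>        <td></td></tr>
<tr><td>waitForElementPresent</td> <td>id=update-result</td> <td></td></tr>
<tr><td>clickAndWait</td>          <td>link=Log out</td>     <td></td></tr>
<tr><td>echo</td>                  <td>done</td>             <td></td></tr>
```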

Sometimes the selectors that Selenium IDE generates are either too brittle or don’t work at all. One thing to try is to select the element you want to click in Firebug, view its XPath, and use that in the Selenium step. Another (preferred) option is to be more intelligent about how you select the element, say by selecting it by text content or by CSS class if you know it will be unique on the page.


Filed under Software Engineering

Dependency Injection Part IV: Responses to Arguments Against Dependency Injection

In the previous post, we looked at arguments against using dependency injection frameworks. I posted the arguments because I thought they were all valid points, so my response won’t just be “no, it’s not a problem”; rather, I want to show how these problems can be mitigated so that DI can be made worth your while.

About complexity: yes, example code online using DI is usually more complex than the straight-up counterparts. But examples are just examples; when we demonstrate code online, the code has to be small enough to fit on a page (or small enough for a newcomer to wrap their head around easily). With a small example, or even a small project, the overhead of DI can be a significant part of the overall code, leading to the first impression that it’s very heavy or very imposing on your code. In reality, for large real-world projects, DI accounts for a very small amount of the overall code, so the overhead is relatively small. In other words, DI is too complex for small examples but shines on large object graphs with varying scopes. We just don’t usually see examples like that when we discuss DI online. So the problem of overhead and complexity is mitigated by saving DI for larger projects.

As for breaking encapsulation: yes, you can break encapsulation if you inject concrete classes that are your implementation details. I would respond that if we inject a concrete class then we are indeed advertising our implementation, but if we inject an interface we are advertising much less (provided we follow the Interface Segregation Principle). At that point we are advertising not so much the implementation as the dependency on a subsystem. To “do it right” we should inject interfaces for each element of the object tree and use a DI container: the top-level class depends on a minimal interface, the implementation of that class itself depends only on minimal interfaces, and so on. Additionally, encapsulation can be managed with the visibility controls provided by the language.
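To make the distinction concrete (all names invented), here is a class that advertises only a narrow interface, not the concrete store behind it:

```java
import java.util.ArrayList;
import java.util.List;

// Narrow interface: callers learn only that OrderService needs somewhere
// to persist orders, not which database or driver is behind it.
interface OrderStore {
    void save(String orderId);
}

class InMemoryOrderStore implements OrderStore {
    private final List<String> orders = new ArrayList<>();
    public void save(String orderId) { orders.add(orderId); }
    int count() { return orders.size(); }
}

class OrderService {
    private final OrderStore store;

    // Injecting the interface, rather than a concrete JDBC-backed class,
    // keeps the implementation detail out of the advertised API.
    OrderService(OrderStore store) { this.store = store; }

    void placeOrder(String orderId) { store.save(orderId); }
}

public class InterfaceInjectionDemo {
    public static void main(String[] args) {
        InMemoryOrderStore store = new InMemoryOrderStore();
        new OrderService(store).placeOrder("A-100");
        System.out.println(store.count()); // 1
    }
}
```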

Sometimes members should be final, and we might think we can’t inject final attributes. In fact we can; we just cannot use setter injection. Instead, leave the member unassigned at its declaration and use constructor injection.
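A minimal sketch (names invented) of a final member populated by constructor injection:

```java
interface Clock {
    long now();
}

class Stamper {
    private final Clock clock; // final: setter injection is impossible here

    Stamper(Clock clock) { this.clock = clock; } // constructor injection

    String stamp(String message) {
        return message + " @" + clock.now();
    }
}

public class FinalInjectionDemo {
    public static void main(String[] args) {
        Stamper stamper = new Stamper(() -> 42L); // a fixed test clock
        System.out.println(stamper.stamp("hello")); // hello @42
    }
}
```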

It’s been said that even though DI eases changing configuration at runtime, we’d rarely change configuration anyway. But I’d like to point out that we change configuration every time we use a mock or stub object in a test. If you do not use unit tests, then the original argument holds water, but then we need to have a different conversation about why unit testing is helpful in the first place. If you see the benefits of unit tests, then you will see the benefits of DI for making your code more testable.
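To make that concrete (names invented): every unit test that hands a stub to the class under test is, in effect, a one-line configuration change.

```java
interface MailSender {
    void send(String to, String body);
}

class WelcomeFlow {
    private final MailSender mail;
    WelcomeFlow(MailSender mail) { this.mail = mail; }
    void welcome(String user) { mail.send(user, "Welcome, " + user + "!"); }
}

public class StubConfigDemo {
    public static void main(String[] args) {
        // Production configuration would pass a real SMTP-backed sender;
        // the test configuration swaps in a recording stub instead.
        StringBuilder sent = new StringBuilder();
        MailSender stub = (to, body) -> sent.append(to).append(": ").append(body);
        new WelcomeFlow(stub).welcome("ada");
        System.out.println(sent); // ada: Welcome, ada!
    }
}
```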

Using a DI container ties us to a framework: each container offers its own annotations that threaten to invade our code and make it very difficult to ever disentangle from that framework. However, if we code only against the javax.inject annotations (JSR-330), the annotations in our code are just standard Java EE. We can arrange things so that the only framework-specific code is the wiring, and if that always lives in one place it should not be a big deal to swap one DI container implementation for another.

Finally, some DI containers use XML for their configuration, and XML is not quite in fashion the way it used to be. Fortunately, newer DI containers (newer versions of Spring and Guice, for sure) allow us to write our configuration in code rather than XML. This gets around the problem of using strings to identify and wire dependencies, and lets us leverage our IDE for things like refactoring and renaming classes, navigating with “find usages”, and so on.
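Framework APIs aside, the spirit of code-based configuration is just ordinary construction gathered in one place. A framework-free sketch of the idea (names invented):

```java
// All wiring lives in one "composition root"; renaming a class here is an
// ordinary IDE refactoring, not a string hunt through XML files.
interface Greeter {
    String greet(String name);
}

class PoliteGreeter implements Greeter {
    public String greet(String name) { return "Good day, " + name; }
}

class App {
    private final Greeter greeter;
    App(Greeter greeter) { this.greeter = greeter; }
    String run() { return greeter.greet("world"); }
}

public class CompositionRoot {
    static App wire() {
        return new App(new PoliteGreeter()); // the only place types are bound
    }

    public static void main(String[] args) {
        System.out.println(wire().run()); // Good day, world
    }
}
```

A Spring @Configuration class or a Guice module plays the same role, with the container handling scopes and lifecycle on top.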

Hopefully this shows you how to work around some shortcomings of DI so that you feel better about adding it to your software engineering arsenal.


Filed under Software Engineering

Dependency Injection Part III: Arguments Against Dependency Injection

When it comes to DI (as with many things in software) people tend to have one of two very strong opinions: love it or hate it.

Full disclosure: I love it, but I’ve done my best to examine the topic from both sides. Since I am in the “love it” camp, when I started researching Dependency Injection I knew that confirmation bias would probably lead me to arguments for it and little else. I was surprised at how easy it was to find heated arguments and lengthy conversations going back and forth about the merits and drawbacks; there is a lot of good discussion online. So I feel safe in saying the advantages are not as obvious and clear-cut as the pro-DI camp would like to believe.

I thought some good arguments against DI were:

  • Introduces complexity
  • A dependency is an implementation detail; specifying it externally breaks encapsulation
  • Some members should be final but you can’t inject final attributes.
  • Rarely change dependency configuration anyway
  • Ties you to a framework
  • Uses XML to configure your application

Let’s look at each of these in turn.

It’s true that DI brings complexity; there are plenty of things that are more complex with DI than without. You need to teach the concept to new developers. You need to bring yet another framework into your code and learn how to use it. Some parts of your application may require more code as constructor argument lists grow and/or the number of setter methods increases. And when the framework is wiring things behind the scenes, it may be harder to see what was instantiated, when, and why.

There is definitely the possibility of breaking encapsulation. The members of a class constitute its implementation, and listing those members in a constructor’s parameter list advertises through the API what the implementation is, and by extension how the class works. If you use an empty constructor instead, the user of the class does not know (indeed cannot know) what is going on underneath, which in general is what we want.
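For example (invented class names), compare what each constructor tells the caller:

```java
// Stand-ins for concrete implementation details.
class MySqlOrderDao { }
class SmtpMailer { }

// The constructor signature leaks implementation: callers now know (and
// can come to depend on) the fact that we use MySQL and SMTP.
class ExposedCheckout {
    ExposedCheckout(MySqlOrderDao dao, SmtpMailer mailer) { }
}

// The empty constructor hides everything, which is what encapsulation
// traditionally asks for.
class HiddenCheckout {
    private final MySqlOrderDao dao = new MySqlOrderDao();
    private final SmtpMailer mailer = new SmtpMailer();
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        new ExposedCheckout(new MySqlOrderDao(), new SmtpMailer());
        new HiddenCheckout();
        System.out.println("both constructed");
    }
}
```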

When designing a class, sometimes it is appropriate to declare some members as final, but DI puts some constraints on this part of your class design. Once a final attribute is assigned it cannot be reassigned, and it cannot be assigned after the constructor returns, so you cannot use setter injection for final members. If DI seems to force you to give up final members just so you can use setter injection, you will not be happy with that.

DI advertises that it allows you to configure your object graph at runtime rather than compile time. However, in many real-world applications we rarely need to change the dependency configuration. There are some cases (plugins, for example) where implementations need to be swapped at runtime, but those are not that common. If you do need to change a dependency configuration, generally you just change it and recompile your code.

The definition of DI that I’m using for this blog series is using the design pattern along with a DI container to structure an entire application. With that definition, yes, there is another framework you need to bring into your application and learn how to use. Besides increasing the overhead of learning your application, using more libraries and frameworks increases your chances of dependency hell.

Finally, some DI containers use XML files to wire applications together. Describing your application in XML has some disadvantages, such as being brittle in the face of refactoring tools and moving what would otherwise be code away from the rest of your code. Generally, things that change together should stay together.
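For illustration, here is a Spring-style XML wiring fragment (bean names and classes invented). Note that the class names are plain strings, which is exactly what refactoring tools tend to miss:

```xml
<beans>
  <!-- Renaming com.example.SmtpMailer in code silently breaks this line. -->
  <bean id="mailer" class="com.example.SmtpMailer"/>
  <bean id="checkout" class="com.example.CheckoutService">
    <constructor-arg ref="mailer"/>
  </bean>
</beans>
```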

In conclusion: these are just some arguments that I found compelling; other people might find other arguments compelling, or not find these compelling at all. In the next part of this series, we will respond to these arguments and look at how to determine when to use DI and when not to use it.


Filed under Software Engineering