Monthly Archives: August 2013

Should An Embedded Jetty Server Be A Spring Bean

When we look for examples of containerless Java applications with an embedded Jetty server, it’s easy to find samples where the Server object is instantiated and run directly. As I started writing my own containerless web app with Spring, the question was raised: Should an embedded Server be managed by Spring, or should it exist only in public-static-void-main() … outside the context of the Spring portion of the application?

There is an advantage to keeping the server out of Spring. When the Spring context and the server are both instantiated in main(), it is very easy to find the startup code and follow what’s going on. However, if the server is instantiated by Spring and started as part of its lifecycle, the very basic architectural questions of “where is the server instantiated” and “how is it started” are hidden behind the Spring framework.

Compare this

// easy to navigate to
Server server = new MainServer();
server.start();

With this

// what is the main class here?
Server server = applicationContext.getBean(Server.class);
server.start();

To be fair, this is a more general issue with Inversion of Control containers, and it is easily remedied by familiarizing oneself with Spring.

On the other hand, there are some definite advantages to making the server a managed bean inside Spring.

For one thing, application properties (such as port numbers, keystores, and keystore passwords) can be managed by a PropertySourcesPlaceholderConfigurer and injected into the Server via the Environment. This frees you to use the configuration support provided by Spring, including profiles. Notice in this snippet that no custom parsing of properties files or environment variables is necessary:

@Service
public class MainServer extends Server {

    @Inject
    public MainServer(Environment environment) {
        super();
        // the injected Environment already merges properties files,
        // system properties, and profile-specific overrides
        int securePort = Integer.parseInt(environment.getProperty("secure.port"));
        ...
    }
}
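For context, a minimal configuration class could register the PropertySourcesPlaceholderConfigurer like this. The class name, base package, and properties-file name below are illustrative assumptions, not from the original application:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

@Configuration
@ComponentScan("com.example.server") // hypothetical base package
@PropertySource("classpath:application.properties")
public class AppConfig {

    // Makes the properties file (plus system properties and OS
    // environment variables) resolvable through the injected Environment
    // and through ${...} placeholders, with no hand-rolled parsing.
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
```

Activating a Spring profile (e.g. -Dspring.profiles.active=dev) then swaps property values without touching this class.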

For another thing, if the server is a Spring bean, all of its dependencies can be injected automatically, and the server can itself be injected into other beans. This makes it easier to wire the server to services that depend on it, such as a shutdown thread.

@Service
public class ShutdownMonitorThread extends Thread {

    private int shutdownPort;
    private Server server;

    @Inject
    public ShutdownMonitorThread(final Server serverToStop, Environment environment) {
        server = serverToStop;
        setDaemon(true);
        shutdownPort = Integer.parseInt(environment.getProperty("shutdownPort"));
    }

    // server.stop() can be called when shutdown request is received.
    ...
}
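The run loop elided above can be sketched with a plain java.net.ServerSocket. In this sketch a Runnable stands in for the injected Jetty Server so the example is self-contained; the class name, the port-0 convenience, and the literal "shutdown" command are illustrative assumptions, and the real bean would call server.stop() instead:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Listens on a port and runs a stop action when a "shutdown"
// command arrives; in the real bean the stop action is server.stop().
public class ShutdownMonitor extends Thread {

    private final ServerSocket socket;
    private final Runnable stopAction;

    // Pass port 0 to let the OS pick a free port (handy for testing).
    public ShutdownMonitor(int shutdownPort, Runnable stopAction) throws IOException {
        this.socket = new ServerSocket(shutdownPort);
        this.stopAction = stopAction;
        setDaemon(true);
    }

    public int getLocalPort() {
        return socket.getLocalPort();
    }

    @Override
    public void run() {
        try (ServerSocket listener = socket;
             Socket connection = listener.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(connection.getInputStream()))) {
            // Only a literal "shutdown" command triggers the stop action.
            if ("shutdown".equalsIgnoreCase(in.readLine())) {
                stopAction.run();
            }
        } catch (IOException e) {
            throw new RuntimeException("shutdown monitor failed", e);
        }
    }
}
```

Because the monitor is a daemon thread, it will not keep the JVM alive once the server itself has stopped.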

Given these advantages, overall I think the stronger case lies with making the server a Spring bean.


Filed under Software Engineering

Make Sweeping Changes in Code

In a previous post we discussed Propagating Best Practices In Code. To review quickly: if you want to make across-the-board changes to the way a certain thing is done in code, what is the best approach? There is a balance to strike between incremental updates and migrating all at once.

I recently had the opportunity to make a large-scale set of changes in code, and wanted to share the experience about how to strike that balance.

Here’s the background: my project had recently updated a library, and I wanted to replace all the instances of a certain outdated (and buggy) component with a new component supported by the new version of the library. There were 50 or so places where the old component or its subclasses were in use, some of them highly visible to users, so this would be a big change. There was potentially a lot that could go wrong, but correspondingly potentially a lot to gain.

As a rough estimate in stories and story points, I spent about 8 points to design and develop a replacement component based on the updated library, and 2 points each for the first three replacements, unit testing, refactoring, and generalizing as I went. The rule of three applied here and turned out to be surprisingly accurate: I thought the first component was reasonably generic and ready for use elsewhere, but converting two more uncovered use cases and scenarios that required more refactoring and coding.

With three replacements under my belt, I created an epic with a story for each remaining replacement, assigning one or two story points each. Toward the end I even created some 1-point stories covering several replacements at once, on the assumption that the replacements would get easier and go more quickly as the code stabilized and I grew more familiar with the component’s use cases.

The epic now appears on the backlog, so the component-replacement work has visibility along with everything else – and nobody forgets about it. We should be able to complete the replacement over the course of several months, with work done and work in progress clearly defined.

As for prioritizing, here is how I decided which components to replace first:

  • Some of the components to be replaced were both high-visibility and had known bugs, and we knew that replacing them wholesale would fix those bugs. In this case, this was part of the point of doing all this (with the new library) in the first place. Obviously, buggy code is in a prime position to be replaced.

  • Next, places with high visibility (but no known bugs) were targeted for replacement. Besides being a visible improvement, these are the places developers look to first for example code – a problem we addressed previously.

  • Code in less-frequently-used and less-visible places went next to last on the list. These are low-priority targets that will probably only be converted if they are visited for other reasons anyway; they may or may not ever get done, depending on developer bandwidth.

  • Code that is in a feature likely to be deprecated or removed goes last. With any luck, that code will go away before you get a chance to get to it!

So there you have it: one developer’s experience of propagating a best practice in code that spans many sprints. Hopefully this will inspire you to make sweeping changes of your own.


Filed under Software Engineering

Vagrant Presentation

I’m giving a talk this week about Vagrant for Java. The slides are available online; maybe you’ll find them interesting.

Filed under Software Engineering

Vagrant For Java: Here is where to compile

After much thought and experimentation, I’ve decided on where to use Vagrant and how it integrates with the Java development workflow.

For Java EE / deployed applications, configuring a web server and a database server involves enough complexity to warrant Vagrant. With two servers and the myriad ways to configure them, it’s easy for configuration to drift from one developer to another, bringing about the “works on my machine” syndrome. For this kind of software, it works best to edit and compile the code on the host and deploy to a Vagrant VM that mimics your production environment. The web server’s deployment folder could even be symlinked to a compile target on the host, removing the need to redeploy manually. So Vagrant could be an important part of your development lifecycle, but the Java cycle of code/compile on the host, then deploy and run on the VM, will always be longer than the code-on-host/run-on-VM cycle we see with PHP/Ruby/Node/etc.

For standalone Java applications (such as libraries or desktop applications) the story changes a bit. In this case it makes the most sense to edit, compile, and run on the host machine, eschewing Vagrant altogether. If you’re using one of the big Java IDEs (Eclipse, NetBeans, IntelliJ…), you already have Java installed on the host, so by the time you can edit Java in an IDE you can run everything on the host anyway. At that point Vagrant offers very little advantage relative to its overhead, and only adds an extra layer of complexity to your development process. One wrinkle is that the version of Java a project requires may not match the version running the IDE on the host. Hopefully this is not too much of a problem; as of this writing JDK6 is end-of-lifed and JDK8 is not yet released (guess where that leaves us). If you do need to run multiple versions, you should be able to set JAVA_HOME on the host as needed. That introduces some complexity, but less than maintaining a Vagrant runtime just to work with projects using different versions of Java.

The interesting question is what to do with containerless web applications. Should the web server (in this case internal to the application) run inside the VM, as the external web server did? Or on the host, as the standalone application did? For containerless web applications there is no external web server to worry about, but there is still most likely a database, so a hybrid approach works well. Running a containerless web app is essentially the same as running a standalone application, so compile and run your code on the host machine. But a database still carries enough complexity and configuration that it makes sense to give the database server its own Vagrant VM.
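A minimal Vagrantfile for such a database-only VM might look like the following sketch; the box name, database, port, and provisioning-script path are all assumptions for illustration:

```ruby
# Hypothetical Vagrantfile for a database-only VM.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"

  # Expose PostgreSQL to the host so the containerless app, compiled
  # and run on the host, can reach the database on localhost:5432.
  config.vm.network "forwarded_port", guest: 5432, host: 5432

  # Shell script that installs and configures the database, so every
  # developer gets the same database setup.
  config.vm.provision "shell", path: "provision/install-postgres.sh"
end
```

With this in place, `vagrant up` gives every developer an identical database, while the edit/compile/run loop stays entirely on the host.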

Hopefully this gives Java developers who are interested in Vagrant some context about how to use it.


Filed under Software Engineering