Monthly Archives: December 2013

Spring Boot: Provided Properties

This post is abbreviated because of the holidays! But here’s a little tidbit about spring boot properties, since we’ve been talking about that lately.

Spring Boot provides certain preset properties, such as “server.port”. A reasonable question is “What are all the properties that it provides?”

The answer, as noted in the link, is that there is no master list, because the set of properties is subject to change in the code. But you can generally find the preset properties by searching the codebase like so:

git clone https://github.com/spring-projects/spring-boot
grep -r --include="*.java" "@ConfigurationProperties" .
grep -r --include="*.java" "@Value" .

Happy Holidays!


Adding Properties to Restful Service, Part II

Last time, we looked at adding properties to an application with Spring Boot. Besides injecting simple properties, it provides additional, more powerful options for working with properties.

Properties Objects

Spring allows you to use the @ConfigurationProperties annotation to define an object bound to a subset of the properties in your properties file.

@ConfigurationProperties(name="service")
public class ServiceProperties {
    private String message;
    private int value = 0;
    ... getters and setters
}

In this example, a ServiceProperties object will have its values bound to the properties “service.message” and “service.value” from the properties file. Note that the @ConfigurationProperties name attribute is the prefix for the property in the properties file (“service”), and the bean properties in the object (“message” and “value”) supply the rest of the property name. It’s a very consistent system and an easy way to get lots of validated and parsed properties into your system.
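For reference, the matching entries in application.properties might look like this (the values shown are just placeholders):

# application.properties (example values)
service.message=Hello from the properties file
service.value=42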

Is There A Downside?

I like the concept but think there are some liabilities with this approach.

It should be easy to look at a properties file and tell where the properties are used, and it should be easy to look at code and tell where the property values are defined. I call this bidirectional discoverability, and it’s a very important concept (not just for properties) as a system gets very large. Using @Value and @Inject to get properties into your code lets you grep for a property name and always discover both the definition and the usage, because the entire string is exactly the same in both places.
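For example, a single grep on the full property name turns up both sides (the paths here are just illustrative):

grep -r "service.message" src/
# hits: application.properties (the definition) and @Value("${service.message}") (the usage)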

Additionally this approach adds to the number of classes in your system. Generally less code means less room for error and easier comprehension of your codebase.

Finally, the properties object requires setters so that Spring can bind the properties. This means that every time you use a properties object, mutation of the properties by the caller is a possibility. This is a disadvantage because configuration properties in an application are usually meant to be immutable.

Is There An Upside?

On the other hand, a reason for using @ConfigurationProperties would be if you had a large set of properties (say, more than 4 or 5) that logically made sense to stay together. The properties class more strongly implies that they are intended to be used together, aiding developer comprehension if a property needs to be used elsewhere. This also reduces the number of constructor arguments for the class using the properties, because we can inject a single configuration object instead of 4, 5, or more @Value objects.
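As a rough sketch (the GreetingService class here is invented for illustration), the consuming class then needs only one constructor argument:

@Component
public class GreetingService {

    private final ServiceProperties properties;

    @Inject
    public GreetingService(ServiceProperties properties) {
        // one injected object instead of one @Value parameter per property
        this.properties = properties;
    }
}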

Mitigation

Given the caveats, there should be a compelling reason to add this properties class over simple property injection. But if you find the benefits outweighing the costs, there are ways to work around the downsides.

We could use @ConfigurationProperties and make it our convention that each attribute be commented with the entire property name to enable bidirectional search.
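For example, the earlier ServiceProperties class might carry comments like these, so a grep for the full property name still finds the code:

@ConfigurationProperties(name="service")
public class ServiceProperties {
    // bound to "service.message"
    private String message;
    // bound to "service.value"
    private int value = 0;
    // getters and setters omitted
}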

Additionally, we can make the properties object effectively immutable by having it implement a getter interface, declaring the properties object as a @Component, and @Inject-ing the getter interface instead of the actual properties object. This would reduce the possibility of accidentally mutating the properties.
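Here is one sketch of that idea (the ServiceSettings interface name is invented): the properties object implements a read-only interface, and callers inject only the interface, so the setters Spring needs for binding are not visible at the point of use.

public interface ServiceSettings {
    String getMessage();
    int getValue();
}

@Component
@ConfigurationProperties(name="service")
public class ServiceProperties implements ServiceSettings {
    private String message;
    private int value = 0;
    public String getMessage() { return message; }
    public int getValue() { return value; }
    // setters still exist so Spring can bind the properties
    public void setMessage(String message) { this.message = message; }
    public void setValue(int value) { this.value = value; }
}

Consumers then @Inject a ServiceSettings rather than a ServiceProperties, and never see the setters.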


Adding Properties to Restful Service, Part I

The Configuration Problem

Every project needs a way to manage configuration. Things like database connections, security, web servers, and application-specific configuration all have settings that can vary from environment to environment.

A primary requirement is to store configuration information externally. If you hard-coded database credentials in your source code, it would be very easy to run the application against that one database, but very difficult to run it anywhere else (like, say, production). Not to mention that credentials in source code are a security nightmare!

JNDI?

One way Java has traditionally accomplished externalized configuration is with JNDI. I have seen enterprise-scale systems use JNDI to store database credentials. This allowed us to deploy a web application to different environments (dev, qa, prod…) and have it connect to different databases. It certainly works, and is part of the Java standard, but it is very heavyweight: it’s another technology to learn, requiring special tools to manage. And the trend in Java these days is towards lighter-weight solutions.

Properties Files!

What could be lighter weight than your humble properties file?

Properties files have been with us for a long time, but every project has its own way to find them and insert them into a program. I’ve seen legacy systems with their own custom property management framework, similar to Spring’s properties support. The main difference is that the custom framework is maintained by the company’s own developers (with limited time, of course), while Spring’s is continuously maintained by the Spring community. Spring Boot provides a nice way to add configuration to a project via properties, so let’s take a look at that.

See the Externalized Configuration section in the spring boot docs. To quote:

A SpringApplication will load properties from application.properties in the root of your classpath and add them to the Spring Environment. The actual search path for the files is:
classpath root
current directory
classpath /config package
/config subdir of the current directory.

So simply creating a file called “application.properties” and placing it in one of those locations will make the properties in the file immediately and predictably available to your application. That’s what I call convention over configuration!

The Spring Boot Actuator docs section Externalized Configuration goes into more detail. To actually use a property in your code, you can inject it into your object using the @Value annotation. Note that a field injected this way cannot be final.

For instance, this injects the “service.message” property from your application.properties:

@Value("${service.message}")
private String value;

My preferred technique would be to inject the values in the constructor. Since property values are usually immutable, I’d rather not have setters for them.

public class MessageServiceConstructor {

    private final String message;

    @Inject
    public MessageServiceConstructor(@Value("${service.message}") String message) {
        this.message = message;
    }
}

Additionally, any properties can be overridden with command line arguments directly. For example, you might override the service.message property when running the jar like so:

java -jar build/libs/my-application-1.0.jar --service.message=9999

Conclusion

Externalizing configuration is an important part of every application. Spring Boot provides facilities for easily collecting properties from a properties file or other places in your system, and getting them into your application. Next time we’ll look at other exciting ways to deal with configuration!


Adding Security to Spring Guide’s Rest Service

Recently we looked at creating containerless web applications with Spring Boot. Let’s take a spring boot application and do some experimentation to see how easy it is to build a real web application from the starter projects. Is it easy or are the starter guides exaggerating their own ease of use?

Well, For Starters…

Starter projects are nice, but they are typically trivial. This is for good reason: project authors don’t want to confuse users with details not relevant to the specific technology being taught. But it is difficult to see how a large scale application would actually pull all the pieces together, or even if the pieces will fit together at all.

We are going to take a spring boot guide and see how easy it is to pull two of the guides together by adding security to a web service project. The complete code is available on github (my version requires Java 8).

The Starter Project

The rest service guide is one of the most popular guides available from spring.io. This project is very easy to run: just download the project, run “gradle runJar”, and the service immediately starts. The service responds to requests (say, curl localhost:8080/greeting) with {"id":1,"content":"Hello, World!"}

The Security Project

Next is the security project. The most obvious place to look is the securing web project, which does in fact do a nice job of securing a web site. But does this work if we want to secure a rest service? Do the security concepts carry over??

The answer is: sort of. If you try to add security from the security web project to the web service project by adding the security dependency and copying WebSecurityConfig.java to the rest project, you will find that it is more oriented around securing web pages, and does not immediately work for securing rest endpoints. The WebSecurityConfig requires some additional changes for it to work with basic auth (as you might want for a rest service). You have to modify the configuration code to use basic auth like so:

@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // this replaces the web security http configuration
        http.authorizeRequests()
                .antMatchers("/**")
                .authenticated()
                .and().httpBasic();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder authManagerBuilder) throws Exception {
        authManagerBuilder.inMemoryAuthentication()
                .withUser("user").password("password").roles("USER");
    }
}

REST, Actually

A more appropriate approach, in my opinion, is to apply the Spring Boot Actuator’s security setup to the rest project.

Start by adding the security and actuator dependencies (here, to the gradle build file):

compile("org.springframework.boot:spring-boot-actuator:0.5.0.M6")
compile("org.springframework.boot:spring-boot-starter-security:0.5.0.M6")

Then add a Configuration class to provide an AuthenticationManager bean, per the security directions in the actuator project. As of this writing, the actuator directions are a little out of date, and the latest code does not line up with them. The code here works, however.

@Configuration
public class WebSecurityConfig {
    
    @Bean
    public AuthenticationManager authenticationManager() throws Exception {
        return new AuthenticationManagerBuilder(new NopPostProcessor())
                       .inMemoryAuthentication().withUser("user").password("password").roles("USER")
                       .and().and().build();
    }
    
    private static class NopPostProcessor implements ObjectPostProcessor {
        @Override
        @SuppressWarnings("unchecked")
        public Object postProcess(Object object) {
            return object;
        }
    };
}

With these simple changes to the popular rest project, the endpoints are now all protected by http basic auth. And of course basic auth should be done over HTTPS, but adding HTTPS is a topic for another day. We can test the endpoints like so:

curl localhost:8080/greeting

and get the response

{"timestamp":1385921426457,"error":"Unauthorized","status":401,"message":"An Authentication object was not found in the SecurityContext"}

Then try with basic auth:

curl user:password@localhost:8080/greeting

And get the response:

{"id":2,"content":"Hello, World!"}

Our endpoint is protected!

Conclusion

The authors of spring boot made every effort to make the guides easy to understand and use, and I think overall they succeeded. However, these are new (incubator) projects, and there are still some kinks being worked out, such as documentation not lining up with the code. And we must keep in mind that usually specific technologies cannot be applied directly from one project to another, because applying a concept from one context to another requires understanding how the technology actually works in the new context. That’s why the guides are so focused and small – to reduce the context needed to explain how it works.

I still think these are great projects to bring Spring technologies into even wider use, and am looking forward to spring boot and the guides becoming the go-to place for new projects.


Programming Puzzle: Time-Based Cache

Since we were talking about cache recently, it’s worth it to look at implementing our own cache as a programming exercise. Last time we implemented an LRU cache. Let’s take it a step further and implement a time-based cache.

Why?

To reiterate: there is really no need to write your own cache implementation for production use. There are a variety of existing caching options (such as EHCache, Guava, and JCache), and you put yourself at risk by reinventing the wheel.

That said, looking into how caches work is a fun exercise. It can give us a feel for the issues involved and hopefully an intuition for dealing with off-the-shelf caching implementations.
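For comparison, here is roughly what the off-the-shelf route looks like with Guava (a minimal sketch; the 30-millisecond lifetime is arbitrary):

import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class GuavaTimeBasedCacheExample {
    public static void main(String[] args) {
        // entries expire 30 milliseconds after their last read or write
        Cache<Long, String> cache = CacheBuilder.newBuilder()
                .expireAfterAccess(30, TimeUnit.MILLISECONDS)
                .build();
        cache.put(1L, "a");
        System.out.println(cache.getIfPresent(1L)); // prints "a" while still fresh
    }
}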

Out With The Old, In With The New: Now With Extreme Prejudice

Let’s look at a time-based cache. With this implementation, entries will expire and automatically be removed after a given amount of time, as opposed to the LRU cache where entries would only be removed if the cache was full.

We can maintain a map of each key’s last access time, and use that map to determine how long it has been since each entry was last touched. Additionally, Java 8 lambdas and streams make it easy to scan for and remove old entries.

Note that this cache starts its own thread in the constructor to periodically scan the map and determine which entries need to be removed.

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class TimeBasedCache<K,V> extends HashMap<K,V> {

    private final ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
    // last access time per key; ConcurrentHashMap because the scanner thread reads and updates it too
    private final Map<K, Instant> accessTime = new ConcurrentHashMap<>();

    public TimeBasedCache(final long scanPeriodMs, final long maxLifetimeMs) {
        executor.scheduleAtFixedRate(() -> doCacheInvalidation(maxLifetimeMs), scanPeriodMs, scanPeriodMs, TimeUnit.MILLISECONDS);
    }

    @Override
    @SuppressWarnings("unchecked")
    public V get(Object key) {
        accessTime.put((K) key, Instant.now());
        return super.get(key);
    }

    @Override
    public V put(K key, V value) {
        accessTime.put(key, Instant.now());
        return super.put(key, value);
    }

    protected final void doCacheInvalidation(final long maxLifetimeMs) {
        Instant oldestAllowed = Instant.now().minus(maxLifetimeMs, ChronoUnit.MILLIS);
        // collect the expired keys first, then remove them from both the cache and the access-time map
        List<K> expired = accessTime.entrySet()
                .stream()
                .filter(entry -> entry.getValue().isBefore(oldestAllowed))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
        expired.forEach(key -> {
            super.remove(key);
            accessTime.remove(key);
        });
    }

}

Room For Improvement

This cache extends HashMap but doesn’t override all of its read and write methods (containsKey, remove, putAll, and so on), so some accesses bypass the timestamp bookkeeping. Writing a cache interface and only using those methods for determining access time would be much better. Also, this cache could wrap another cache (such as the LRU cache) using the decorator pattern to compose caching behavior. Then it would be easy to create simple time-based caches, or time-based caches that also have a max size when backed by the LRU cache.
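A rough sketch of that direction (the Cache interface and decorator class names are invented here, and the scheduled invalidation is only hinted at in a comment):

import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// a minimal cache abstraction; kept package-private so it can share a file with the decorator
interface Cache<K, V> {
    V get(K key);
    void put(K key, V value);
    void remove(K key);
}

// wraps any Cache (for example, an LRU cache) and tracks access times for time-based expiry
public class TimeBasedCacheDecorator<K, V> implements Cache<K, V> {

    private final Cache<K, V> delegate;
    private final Map<K, Instant> accessTime = new ConcurrentHashMap<>();

    public TimeBasedCacheDecorator(Cache<K, V> delegate) {
        this.delegate = delegate;
    }

    @Override
    public V get(K key) {
        accessTime.put(key, Instant.now());
        return delegate.get(key);
    }

    @Override
    public void put(K key, V value) {
        accessTime.put(key, Instant.now());
        delegate.put(key, value);
    }

    @Override
    public void remove(K key) {
        accessTime.remove(key);
        delegate.remove(key);
    }

    // a scheduled doCacheInvalidation(maxLifetimeMs) would scan accessTime and call remove(key), as before
}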

The Test

The test is a little tricky because it is time-sensitive. Although time-based unit testing is generally best left for integration tests, in this case the test is simple and we can still run it in 60 milliseconds.

public class CacheTest {

    @Test
    public void testCache() throws Exception {

        TimeBasedCache<Long,String> timeCache = new TimeBasedCache<>(10,30);
        
        timeCache.put(1L, "a");
        Thread.sleep(10);
        timeCache.put(2L, "a");
        
        assertTrue(timeCache.containsKey(1L));
        assertTrue(timeCache.containsKey(2L));
        
        Thread.sleep(30);
        assertFalse(timeCache.containsKey(1L));
        assertTrue(timeCache.containsKey(2L));
        
        Thread.sleep(20);
        assertFalse(timeCache.containsKey(1L));
        assertFalse(timeCache.containsKey(2L));        
    }

}

Test Room For Improvement

To be a unit testing purist, we might test just the cache invalidation method and abstract away the creation of the current time that happens with Instant.now(). That would be closer to constituting an actual unit.
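For instance, if the cache were refactored to take a java.time.Clock instead of calling Instant.now() directly (a hypothetical change; the three-argument constructor and setClock method below do not exist in the code above), the invalidation could be tested without sleeping:

// hypothetical: assumes TimeBasedCache(scanPeriodMs, maxLifetimeMs, Clock) and setClock() were added
Clock start = Clock.fixed(Instant.parse("2013-12-01T00:00:00Z"), ZoneOffset.UTC);
TimeBasedCache<Long, String> timeCache = new TimeBasedCache<>(10, 30, start);
timeCache.put(1L, "a");

// move the clock forward 31ms and trigger invalidation directly; no Thread.sleep needed
timeCache.setClock(Clock.offset(start, Duration.ofMillis(31)));
timeCache.doCacheInvalidation(30);
assertFalse(timeCache.containsKey(1L));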

Additionally, instantiating a timer thread inside the cache itself precludes using any other scheduling mechanism for the scanning (such as Spring’s @Scheduled annotation).
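As a sketch of that alternative (names invented; it assumes the cache is a Spring bean, the internal executor is removed, and the invalidation method is reachable from this class), the scanning could be scheduled externally with Spring’s @Scheduled annotation:

@Component
public class CacheInvalidationJob {

    private final TimeBasedCache<Long, String> cache;

    public CacheInvalidationJob(TimeBasedCache<Long, String> cache) {
        this.cache = cache;
    }

    // Spring owns the scheduling instead of a thread created inside the cache constructor
    @Scheduled(fixedRate = 10)
    public void invalidate() {
        cache.doCacheInvalidation(30);
    }
}

Remember that @Scheduled only fires if scheduling is enabled, for example with @EnableScheduling on a configuration class.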

Conclusion

This was a fun little exercise exploring time-based caching and testing. We can make the code better by: using a cache interface, being able to wrap another cache, unit testing the cache invalidation alone with time abstracted away, externalizing the scanning thread creation, and testing the actual time behavior in an integration test. Maybe we’ll do that in another post!
