JShell with IntelliJ

Java 9 introduced the JShell: an interactive programming shell that can speed development.

With an IDE in hand, the obvious thing to do is wrap JShell so that it is available as a console window. Fortunately, IntelliJ does just that! The next obvious thing is to make your project code available to JShell so that you can reference classes and methods you’ve already defined. This is not as easy purely from the command line, but fortunately IntelliJ lets us reference code in the current project with just a couple of steps.

There are, however, a few gotchas when attempting this in IntelliJ. The rest of this post applies to IntelliJ IDEA 2017.3 and later, which has direct Java 9 support.

(An aside for Gradle users: if you are using IntelliJ to create a new Java 9 project, the default version of Gradle that IntelliJ IDEA 2017.3 pulls down for you is 4.0, which has known issues with Java 9. It’s best to have an existing installation of Gradle 4.3 or later when creating the project, and to specify that Gradle location.)

First, when you use the menu item (“Tools” -> “JShell Console…”), type some code like “1+1”, and press the “play” button, the output console doesn’t always appear, causing confusion about where the output went. Sometimes you need to invoke the menu item more than once, or use Ctrl-Enter to execute your code, before the output window appears. Once it appears, however, it has always stayed visible for me, even after restarts. It just caused some initial confusion for me and others who have tried it.

Next, with the JShell console visible, Java 9 needs to be available. You don’t need to run IntelliJ on Java 9, and the project doesn’t even need to be a Java 9 project, but a Java 9 installation needs to be available to IntelliJ, and you’ll need to set the JShell’s JRE from the JRE dropdown in the JShell console. Otherwise you will get a warning that “JDK 9 or higher is needed to run JShell.”

Now, adding “1+1” and assigning “int x=0;” is fun and everything, but JShell becomes a lot more practical when you can reference classes and methods in an existing project. The libraries your project depends on are available to JShell if you use the default setting (at the top of the JShell console) of “Use Classpath of:” [Whole Project]. But what is REALLY useful is to reference the classes and methods defined by your actual project. To do this, you need to make your project’s output available to IntelliJ as a Library. Go to “File > Project Structure”, then under “Project Settings > Libraries” add the location of your project’s output classes as a Library. For a Maven project, this might look like “project/target/classes.” With this addition in place, you should be able to import and reference the classes your project defines, enabling you to explore your code even faster than with TDD.
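Once the Library is added, a JShell console session might look like this (com.example.StringUtil and its capitalize method are hypothetical stand-ins for classes defined by your own project):

```
jshell> import com.example.StringUtil

jshell> StringUtil.capitalize("hello world")
```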

Hopefully you will enjoy using JShell in your project, whether you are on Java 9 or not!


Leave a comment

Filed under Uncategorized

Java 9: Why Is There A New HTTP Client?

Regarding JEP 110: HTTP/2 Client one might reasonably ask the question: “Why is Java 9 introducing a new HTTP client when there are so many high-quality and time-tested libraries out there?”

There are several answers:

  1. The first is provided by the JEP itself: “A number of existing HTTP client APIs and implementations exist, e.g., Jetty and the Apache HttpClient. Both of these are both rather heavy-weight in terms of the numbers of packages and classes, and they don’t take advantage of newer language features such as lambda expressions.”
  2. Another reason is that with a built-in HTTP client, we can easily use it in the new REPL (thanks, Java 9!) without needing to bring in another library.
  3. Finally, this way the Java platform is free to use a high-quality HTTP client for other platform features without needing a dependency on an external third-party library.
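To give a flavor of the new client, here is a minimal sketch. Note that in Java 9 the API ships as the incubator module jdk.incubator.http; the java.net.http package used below is where it was later standardized (in Java 11), but the shape of the API is the same. The URL is just an example.

```java
// Minimal sketch of the new HTTP client API (java.net.http as of Java 11;
// jdk.incubator.http in Java 9).
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class HttpClientSketch {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        // Build a GET request; nothing goes over the wire until send(...) is called
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
                .GET()
                .build();

        System.out.println(request.method() + " " + request.uri());
        // To actually send it:
        // HttpResponse<String> response =
        //     client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```

Notice how the fluent builder style leans on modern language idioms, in contrast to the heavier-weight APIs the JEP mentions.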

This question and the thoughtful answer #3 have been brought to you by The Audience from my talk on Java 9 with the Philly Java Users Group. Thanks Philly JUG!

Leave a comment

Filed under Java 9

Philly JUG Introduction to Java 9

Recently I had the pleasure of doing a presentation for the Philly Java Users Group about the exciting new features coming to Java 9.

We covered what I considered to be the highest-priority items that would have the broadest interest:

  • Module System
  • REPL
  • Multi-Release JARs
  • Milling Project Coin
  • Process API
  • HttpClient API

For those who missed the talk (or who just want to go back and review), the slides are available online and contain links to original Java 9 documentation, blog posts, and GitHub code covering much of the presentation content.

Thanks to all of the Philly JUG members who came to the talk on Java 9, and I look forward to seeing you at many JUG meetings to come!

1 Comment

Filed under Java 9

Java 9: Milling Project Coin

One of Java 9’s new language features is JEP-213: Milling Project Coin. According to the JEP description: “The small language changes included in Project Coin / JSR 334 as part of JDK 7 / Java SE 7 have been easy to use and have worked well in practice. However, a few amendments could address the rough edges of those changes.”

Milling Project Coin incorporates 5 language changes:

  • Allow @SafeVarargs on private instance methods
  • Allow effectively-final variables to be used as resources in the try-with-resources statement
  • Allow diamond with anonymous classes if the argument type of the inferred type is denotable
  • Complete the removal, begun in Java SE 8, of underscore from the set of legal identifier names
  • Support for private methods in interfaces, thereby enabling non-abstract methods of an interface to share code between them

They say a picture is worth a thousand words. I say a code example is worth a thousand words of explanation. We can actually use all of these language features at the same time in a relatively small piece of code, so let’s take a look!

import java.io.*;
import java.util.*;

public class Main {

	public static void main(String[] args) throws Exception {

		// Allow effectively-final variables to be used as resources in the try-with-resources statement
		Reader reader = new InputStreamReader(new FileInputStream("Main.java"));
		BufferedReader in = new BufferedReader(reader);
		try (in) {
			String line;
			while ((line = in.readLine()) != null) {
				System.out.println(line);
			}
		}
	}

	interface ListProcessor {

		default List<String> uniquelyFlatten(List<String>... lists) {
			return flattenStrings(lists);
		}

		// Allow @SafeVarargs on private instance methods
		// Support for private methods in interfaces, thereby enabling non-abstract methods of an interface to share code
		@SafeVarargs
		private List<String> flattenStrings(List<String>... lists) {

			// Allow diamond with anonymous classes if the argument type of the inferred type is denotable
			// Underscore removal: "_strings" is still legal, but a bare "_" is no longer a valid identifier
			Set<String> _strings = new HashSet<>() {};
			for (List<String> list : lists) {
				_strings.addAll(list);
			}
			return new ArrayList<>(_strings);
		}
	}
}

My favorite of these is the diamond operator with anonymous classes, as its absence has caught me out before. What’s your favorite new language feature in Milling Project Coin?

Leave a comment

Filed under Java 9

Database Migrations For Zero Downtime Deployments

One of the biggest challenges with blue-green deployments is migrating the database. While it’s easy to run two different versions of your software at the same time, you can only run one database (i.e. schema) at a time. For example, if you change a table to use a different column name, and then try to run both versions of your software at the same time (as you’d want to with a blue-green deployment) then the old software will break because it doesn’t know about the new column name.

This situation presents a large but not insurmountable challenge. We can resolve this challenge by following some basic principles.

Basic Principles

  • The most basic principle is that each version N of the software MUST work with both version N and version N+1 of the database.
  • Version N and version N+1 of the software must both be able to safely run at the same time on version N+1 of the database.
  • Additionally, migrating the database and migrating the code are independent steps. The database usually migrates first, because version N+1 of the software is not required to work with version N of the database.

This diagram illustrates the directions in which compatibility must be maintained between the code and the database.


The consequence of these conditions is that we can do any database refactoring we like with zero downtime; the refactorings just need to be broken down into smaller refactorings and spread across multiple deployments. The individual refactorings also don’t need to be in consecutive deployments; they can be spread across more deployments as long as they are done in order.
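As a concrete sketch, here is how one common refactoring, renaming a column, might be spread across releases. The `users` table and its columns are hypothetical, and the exact syntax will vary by database:

```sql
-- Release N+1 (expand): add the new column and backfill it.
-- Version N code still reads and writes username; version N+1 code writes both.
ALTER TABLE users ADD COLUMN login VARCHAR(255);
UPDATE users SET login = username;

-- Release N+2 (contract): once no running version reads the old column, drop it.
-- Version N+2 code reads and writes only login.
ALTER TABLE users DROP COLUMN username;
```

Note that both versions of the schema satisfy the basic principles above: version N of the software never sees a missing column it depends on.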

One complication is that this approach becomes very difficult if your releases are very far apart. Imagine a release schedule of once every three months, and imagine a database refactoring spread over three releases… It becomes increasingly likely that things will get forgotten or de-prioritized, and you will end up with a lot of half-finished database refactorings in your system.

The closer together your refactorings are in time, the easier they are to manage. In fact, if you are practicing continuous deployment, the database cleanup steps in subsequent deployments can happen almost immediately after the release, once you are confident with the release and the old code has been retired. Blue-green deployment operates in a feedback loop: it enables you to do smaller and more frequent deployments, and it works much better and is much easier if you do smaller and more frequent deployments.

In the table below we see some sample refactorings and how they can be reduced to a set of compatible changes. Note that each column represents the changes you would make for an individual release N, and the refactorings (rows) are split into database changes and software changes. Imagine overlaying the diagram above onto the table below to see in which directions compatibility needs to be maintained. Finally, the set of database changes for a single release should of course be done in a single, atomic transaction.



Migrations should always be tested in a performance environment, against production-scale data and while the database is in use. There you can see how long the schema migration takes, measure the performance impact of migrating a system under load, and find solutions to those impacts if necessary.


We’ve outlined here just one technical component of zero downtime deployments. There are many other important non-technical components, including the culture of your organization, the maturity of your devops practice, and the release cadence expected by customers or mandated by your industry. There are even other technical components, such as migrating non-database data stores, coordinating migrations across collaborating services, and handling code that depends on data rather than on schema.

However, with the basic principles above, I am confident that one of the more difficult components of zero downtime deployment (database migrations) can be solved, and that the other components that apply to your own situation can similarly be solved if you are willing to do the work.

Happy deploying!

Leave a comment

Filed under Software Engineering

Java 9: JShell, a Read-Eval-Print Loop (REPL)

What’s a REPL?

A Read-Evaluate-Print-Loop, or REPL, is a command line interface for interacting with a programming language. In the words of JEP 222, the goal of the new Java REPL (JShell) is to “provide an interactive tool to evaluate expressions of the Java programming language, together with an API so that other applications can leverage this functionality.”

Why Does Java Need This?

According to JEP 222: JShell, “The number one reason schools cite for moving away from Java as a teaching language is that other languages have a ‘REPL’ and have far lower bars to an initial ‘Hello, world!’ program.” If we want Java to have a bright future (and don’t we all?), then Java has to compete with the other languages in academia (Scala, Groovy, Clojure, Python, Ruby, Haskell…) that have REPLs and a low barrier to entry.

The REPL is a good feature for students, but what about professional developers? Ask a professional Scala developer, and it becomes clear that the REPL is an iterative development tool with even faster feedback than TDD. Additionally it lets us experiment with third-party libraries with less ceremony than setting up a new project. Finally, it allows us to execute arbitrary Java code from the command line, bringing Java much closer in usability to a scripting language. So at a professional level, the REPL aims to make us more productive in a very direct way.

What Features Does It Have?

JShell maintains history, has a built-in editor, tab-completion, and automatic addition of semicolons, and can save and load code. It supports forward references (except in method signatures), so we can reference a variable or method before it’s defined. To see everything it can do, run jshell --help from the command line, or /help from within jshell.

How Else Can I Use It?

JShell is pretty flexible and has an API, so people are likely to come up with creative uses for it.

People may use it as a Java-oriented shell for doing regular work, the way we use the bash shell right now. I can imagine Java moving at least partially into the scripting space, giving people another language at their disposal for working with files and data from the command line, connecting to servers or databases, retrieving and manipulating data, etc.

The fact that JShell includes an API means we are likely to see it included with IDEs, providing a Java “scratch pad” for our daily work. The API also means it could potentially integrate with build tools and increase the flexibility of build scripts.

Personally, I would like to see a GitHub Gist browser and downloader so it’s easy to experiment with other people’s ideas!

Example: Try A Library

We can load jars or classes to try them out in JShell: just specify a jar file with jshell --class-path from the command line, or call /classpath from within the shell.

To try out a library like Google Guava, call this from the command line:

wget http://central.maven.org/maven2/com/google/guava/guava/19.0/guava-19.0.jar
jshell --class-path guava-19.0.jar 

Then inside jshell, we can type some code like this:

import com.google.common.base.Optional;
Optional<Integer> b = Optional.of(10);

// Guava's equivalent of Java 8's Optional.ofNullable()
Optional<Integer> a = Optional.fromNullable(null);

Example: Pretend Java Is A Scripting Language

I think .jsh makes a good file extension for Java snippets intended for use with JShell. Let’s put this in a file called hello.jsh:

System.out.println("hello world!")

Then from the command line

jshell hello.jsh

Yay, a compiled language that runs like a script!

Example: Load Code At Startup

What if we wanted Linux bash shell-like capabilities inside JShell? We could write code providing that capability and have it loaded at startup.

Let’s put this code in a file called bash.jsh:

// a startup file completely replaces the default
// so need to bring in the default imports yourself

import java.util.*;
import java.io.*;
import java.math.*;
import java.net.*;
import java.util.concurrent.*;
import java.util.prefs.*;
import java.util.regex.*;

void ls() {
  File cur = new File(".");
  for (String s : cur.list())
    System.out.println(s);
}

Then from the command line:

jshell --startup bash.jsh

Now when we type “ls()” from within jshell, we get the file listing of the current directory. It would probably be a bit of work, but in theory we could replicate a complete shell environment. But, you know, we already have a shell. 🙂 What we are more likely to see in practice is freely available snippets for working with data and code at a higher level.


JShell has a lot of potential to change the way we learn and use Java. I have high hopes for its expanded use in Java 9!

How will you use JShell?

Leave a comment

Filed under Java 9

The Business Case For Zero Downtime Deployments

This post describes the business advantages of zero-downtime (a.k.a. blue-green) deployments. If you are not familiar with the technique, you can read Martin Fowler’s excellent article on blue-green deployments. In a nutshell, it amounts to releasing to production by deploying a release to a separate production environment and then, behind the scenes, gradually redirecting user activity from the old environment to the new one.

Why Do I Want This?

  • Eliminate downtime: Done correctly, your users will see literally zero downtime. Many organizations do deployments at midnight local time so that they can take the entire service offline “when nobody’s using it.” But if your software is successful in a global environment (or if your target demographic’s usage is split between midnight and mid-day), there is no time when “nobody’s using it,” and the best time for one group of users may be the worst time for another.
  • Increase support: Production deployment is potentially the single most dangerous activity your team does. Instead of midnight deployments when everybody is exhausted or asleep, deploy during business hours when the entire team is available and alert.
  • Reduce risk: Blue-green deployment allows easy and safe rollback. If something unexpected happens with your release, you can immediately and safely roll back to the last version by simply directing user traffic back to the previous environment.
  • Provide for staging: when the new environment is active, the previous environment becomes the staging environment for the next deployment. If you didn’t have a staging environment, you should probably have one anyway.
  • Hot backup for disaster recovery: after deploying, once we’re satisfied that the release is stable, we can deploy the new release to the previous environment too. This gives us a hot backup in case of disaster.

What’s The Catch?

Zero-downtime deployment isn’t free: it requires a certain diligence and maturity of process and devops, but no more than you really need anyway to play with the big kids in the world of software engineering. Details at the engineering level will follow in a future post!


The benefits of this kind of deployment are actually huge. If you can deploy with zero downtime once per month, why not once per sprint? Why not every day? Why not as you finish features and fix bugs so that your customers see their bug fixes and feature requests in a fraction of the time?

If you can do deployments this way, your users will thank you. And your boss will thank you. 🙂

Leave a comment

Filed under Software Engineering