ArchUnit

Let’s talk architecture today.

ArchUnit is a test framework for checking the architecture of an application.

It is an interesting concept that I wanted to try out for a while. So I did a little experiment on a Swing application I had.

It looks like unit tests with a fluent interface. It’s quite nice to use. I used JUnit 5.

My two rules were simple:

  • Backend should not use Frontend classes
  • Backend should not use Swing or AWT classes

Using ArchUnit, it looks like this (in Kotlin):

@AnalyzeClasses(packages = ["pro.tremblay.myapp.."])
internal class BackendSegregationTest {

    @ArchTest
    val backend_doesnt_use_frontend = layeredArchitecture()
            .layer("backend").definedBy("pro.tremblay.myapp.backend..")
            .layer("frontend").definedBy("pro.tremblay.myapp.frontend..")
            .layer("app").definedBy("pro.tremblay.myapp")
            .whereLayer("frontend").mayOnlyBeAccessedByLayers("app")

    @ArchTest
    val no_ui_framework_on_backend = noClasses().that().resideInAPackage("pro.tremblay.myapp.backend..")
        .should().accessClassesThat().resideInAnyPackage("javax.swing..", "java.awt..")
}

Two simple rules.

  • One is defining my layers and how they should interact.
  • The other is telling which packages are forbidden.

Sure enough, the test was failing. I did some refactoring to fix everything and, ta-da, from now on the architecture will be enforced.

You might want to try it out on your own project.

I do recommend it.

Mocking Time

In many cases in our code we do something like that:

LocalDate today = LocalDate.now();

The problem is that your tests then need to work based on the real current date (or time, or zoned date time, whatever).

Sometimes it’s fine, sometimes the test fails randomly, in particular at the end of a day, month or year. It also forces you to calculate the expected result instead of using a constant.

In the new Java date/time API, there is a built-in abstraction of the current time called a Clock.

So instead of the code above, you do

Clock clock = ...; // some Clock implementation
LocalDate today = LocalDate.now(clock);

Clock is an abstract class with multiple implementations.

If you want a real clock, you can do this.

Clock clock = Clock.systemDefaultZone();
Clock clock = Clock.systemUTC();
Clock clock = Clock.system(ZoneId.systemDefault());

But if you want a fake clock during your test, you can do this:

Clock clock = Clock.fixed(instant, ZoneOffset.UTC);

Sadly, I must confess, since the clock is using an Instant, it’s a bit more complicated to initialize than I would like. For example, to put it to a specific day, you do this:

LocalDate today = LocalDate.ofYearDay(2019, 200);
Clock clock = Clock.fixed(today.atStartOfDay().toInstant(ZoneOffset.UTC), ZoneOffset.UTC);

There are methods to make the clock tick or move forward. However, it’s an immutable class, so ticking the clock returns a new clock. Nothing prevents you from writing your own MutableClock. Here is an implementation.

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;
import java.time.temporal.TemporalAmount;

public class MutableClock extends Clock {

  private volatile Instant now;
  private final ZoneId zone;

  public MutableClock() {
    this(Instant.now());
  }
  
  public MutableClock(Instant now) {
    this(now, ZoneId.systemDefault());
  }

  public MutableClock(Instant now, ZoneId zone) {
    this.now = now;
    this.zone = zone;
  }

  @Override
  public ZoneId getZone() {
    return zone;
  }

  @Override
  public Clock withZone(ZoneId zone) {
    return new MutableClock(now, zone);
  }

  @Override
  public Instant instant() {
    return now;
  }

  public synchronized void plus(TemporalAmount amount) {
    now = now.plus(amount);
  }
}

I found it a really useful tool when fixing existing code in a class where you want time to move at your own pace.

  • Add a Clock attribute
  • Add a constructor taking the clock as a parameter
  • The existing constructor uses the system clock Clock.systemDefaultZone()
  • Each time a now() (e.g. LocalDate.now()) is called, use now(clock) instead
  • You can now move the time when needed in your test
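Put together, the retrofit can be sketched like this (InvoiceService and isOverdue are made-up names for illustration):

```java
import java.time.Clock;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Hypothetical service retrofitted with a Clock
class InvoiceService {

  private final Clock clock;

  // The existing constructor keeps the old behavior
  public InvoiceService() {
    this(Clock.systemDefaultZone());
  }

  // The new constructor lets tests inject any Clock
  public InvoiceService(Clock clock) {
    this.clock = clock;
  }

  public boolean isOverdue(LocalDate dueDate) {
    // LocalDate.now(clock) instead of LocalDate.now()
    return LocalDate.now(clock).isAfter(dueDate);
  }
}

public class Demo {
  public static void main(String[] args) {
    LocalDate day = LocalDate.ofYearDay(2019, 200);
    Clock clock = Clock.fixed(day.atStartOfDay().toInstant(ZoneOffset.UTC), ZoneOffset.UTC);
    InvoiceService service = new InvoiceService(clock);
    System.out.println(service.isOverdue(day.minusDays(1))); // true
    System.out.println(service.isOverdue(day.plusDays(1)));  // false
  }
}
```

In a test you would pass a fixed clock (or the MutableClock above) instead of the default one, and your expected values become constants.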

Happy clock mocking!

Refactoring Challenge

Hello everyone,

This week I have a refactoring challenge to propose. It’s something I did last year but that I should have advertised more.

I’ve coded a little service (ReportingService) that needs some love.

The idea is to make it easily testable and readable. Just open a pull request with your code to show me your ideas.

The code is on GitHub.

See where you can go!

My solution is in the ‘henri’ branch but please don’t look at it before trying it out.

Please refrain from just showcasing new Java features. That’s not what will improve readability, testability and so on.

All interesting ideas will be mentioned in a follow up post.

Have fun!

JPMS impact on accessibility

About a month ago, I had a mysterious IllegalAccessException under Java 11.

A framework was trying to perform reflection on a generated proxy (from Proxy.newProxyInstance).

I was surprised because, so far, my understanding was that when I’m not using the module system, all classes are in the unnamed module. When code tries to do reflection on something that was made accessible, it should always work, although it will print a warning on the console.

It turned out a quite deep feature of JPMS caused it.

Let’s get started.

When you are not using the Java 9 module system (aka JPMS, aka Java Platform Module System, aka Jigsaw), it means you are on the classpath, like always. When you are on the classpath, you are in a special module called “UNNAMED”. This module has special rights. One is that everything is open to it. Open means it can do reflection on anything. By default, you get a warning telling you that you are a bad citizen, but it works.

There is one pitfall.

If you try to access a class that is in a module and private to the module (not exported), it still fails.

“Why would something be in a module if I don’t use modules?”

It will be if it’s a JDK class. Because JDK classes are always in a module. For example, many com.sun classes won’t be accessible.

In this case, you can add an --add-opens flag (of the form --add-opens <module>/<package>=ALL-UNNAMED) to fix it.

But there is another pitfall.

You can create a module dynamically.

That’s what the Proxy class is doing. When you create a proxy, it creates a dynamic module for it. Even worse, it creates it in a layer above the boot layer.

Because modules are layered, a bit like class loaders. A class loader has a parent class loader, and it goes down to the bootstrap class loader (which is stacked over an infinite number of turtles). With modules, you have the boot layer, which contains all the modules loaded at startup, and then child layers that are dynamically created. OSGi uses that to work correctly in a JPMS world.

Of course, you can dynamically open a module to another. But you can’t list all the modules (because the boot layer doesn’t have access to its children, just like a class loader), and one can be created at any moment, so you can’t open everything upfront.
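You can observe which module a proxy class lands in with a few lines of plain JDK code. Whether it ends up in the unnamed module or in a dynamic jdk.proxyN module depends on the visibility of the interfaces and on the JDK version, so the sketch below prints the module rather than asserting a name:

```java
import java.lang.reflect.Proxy;

public class ProxyModuleDemo {

  public interface Greeter {
    String hello();
  }

  public static void main(String[] args) {
    Greeter greeter = (Greeter) Proxy.newProxyInstance(
        Greeter.class.getClassLoader(),
        new Class<?>[] { Greeter.class },
        (proxy, method, methodArgs) -> "hi");

    System.out.println(greeter.hello());
    // Which module did the generated proxy class land in?
    System.out.println(greeter.getClass().getModule());
  }
}
```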

“Why would I want to perform reflection on some random unknown class?”

It is sometimes useful. For example, when trying to get the size (in bytes) of a stash of objects. You have no idea what you will encounter, but you still want to size them. It’s a genuinely useful thing to do.

The only solution here is to use a self-registering JVM agent that will open any new module to the UNNAMED module. You can use ByteBuddy for that.

In conclusion, let me thank the knowledgeable friends who helped me figure this out.

Thank you.

Map best practices

Today’s topic is about Map and misuses I’ve seen during many code reviews.

The idea with a Map is to do whatever you need with as little hashing as possible. A hash occurs each time you access the Map (e.g. get, containsKey, put).
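To make that concrete, here is a small contrived key that counts its own hashCode() calls:

```java
import java.util.HashMap;
import java.util.Map;

public class HashCountDemo {

  // Contrived key that counts how many times it gets hashed
  static class CountingKey {
    static int hashes = 0;

    @Override
    public int hashCode() {
      hashes++;
      return 42;
    }

    @Override
    public boolean equals(Object other) {
      return other == this;
    }
  }

  public static void main(String[] args) {
    Map<CountingKey, String> map = new HashMap<>();
    CountingKey key = new CountingKey();

    map.put(key, "value");   // first hash
    map.containsKey(key);    // second hash
    map.get(key);            // third hash

    System.out.println(CountingKey.hashes); // 3
  }
}
```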

In Java 8, some useful new methods were added. Let’s say you want to check if something is in a Map:

  • If it is, return it
  • If it’s not, add it and return it

The classical way to do it is:

if (map.containsKey(key)) { // first hash
    return map.get(key); // second hash
}
List<String> list = new ArrayList<>();
map.put(key, list); // third hash
return list;

It is also the slowest. A better way is:

List<String> list = map.get(key); // first hash
if (list == null) {
    list = new ArrayList<>();
    map.put(key, list); // second hash
}
return list;

This is already much better. You save one hash.

Important: This isn’t valid if the value might be null. But I highly recommend never having null values.

But since Java 8, you have three better solutions.

The first one is:

map.putIfAbsent(key, new ArrayList<>()); // first hash
return map.get(key); // second hash

It is better, but not by much. You still have two hashes. And the ArrayList is instantiated even if it is already in the map.

You can improve with the longer:

List<String> list = new ArrayList<>();
List<String> result = map.putIfAbsent(key, list); // one hash only!
if(result == null) {
    return list;
}
return result;

Now we’re talking, only one hash! But still the ArrayList is instantiated uselessly.

Which brings us to another Java 8 method that does the trick.

return map.computeIfAbsent(key, unused -> new ArrayList<>()); // one hash only!

Job done. One line and the fastest we can get. The ArrayList will be instantiated only when needed.

Important: Do not do map.computeIfAbsent(key, ArrayList::new). computeIfAbsent takes a Function<KEY, VALUE> as a parameter. So this will in general not compile, unless the KEY matches the parameter of one of the ArrayList constructors. An example is when the KEY is an Integer. Passing a constructor method reference will actually call new ArrayList(KEY)… which is obviously not what you want.
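A quick sketch of the pitfall, assuming an Integer key:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ComputeIfAbsentPitfall {

  public static void main(String[] args) {
    Map<Integer, List<String>> map = new HashMap<>();

    // Compiles because ArrayList has an ArrayList(int) constructor...
    // ...but the key 5 becomes the initial *capacity*, not an element
    List<String> trap = map.computeIfAbsent(5, ArrayList::new);
    System.out.println(trap); // []

    // What you actually want: a function that ignores the key
    List<String> safe = map.computeIfAbsent(6, unused -> new ArrayList<>());
    System.out.println(safe); // []
  }
}
```

Here it happens to produce an empty list either way, so the bug is silent; with a bigger key you would just waste memory, which makes it easy to miss in review.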

In order to convince you that it’s the best solution, I have made a little benchmark using JMH. Here are the results:

Benchmark                               Mode  Cnt         Score        Error  Units
MapBenchmark.computeIfAbsent_there     thrpt   40  25134018.341 ± 687925.885  ops/s (the best!)
MapBenchmark.containsPut_there         thrpt   40  21459978.028 ± 401003.399  ops/s
MapBenchmark.getPut_there              thrpt   40  24268773.005 ± 690893.070  ops/s
MapBenchmark.putIfAbsentGet_there      thrpt   40  18230032.343 ± 238803.546  ops/s
MapBenchmark.putIfAbsent_there         thrpt   40  20579085.677 ± 527246.125  ops/s

MapBenchmark.computeIfAbsent_notThere  thrpt   40   8229212.547 ± 341295.641  ops/s (the best!)
MapBenchmark.containsPut_notThere      thrpt   40   6996790.450 ± 191176.603  ops/s
MapBenchmark.getPut_notThere           thrpt   40   8009163.041 ± 288765.384  ops/s
MapBenchmark.putIfAbsentGet_notThere   thrpt   40   6212712.165 ± 333023.068  ops/s
MapBenchmark.putIfAbsent_notThere      thrpt   40   7227880.072 ± 289581.816  ops/s

Til next time: Happy mapping.

Pragmatism applied: Avoid single implementation interface

A long time ago (think 2000), all classes in Java used to have an interface. You first started with MyInterface then added a MyInterfaceImpl.

This caused a lot of boilerplate and debugging annoyance. I used to have a code generator to make it easier.

Why were we doing that?

Two reasons. A bad and a good one.

The bad reason is decoupling

The idea was that if you depend on an interface you can swap the implementation if ever needed.

This is a “you might need it later” issue. Every sentence with “might” and “later” in it should be rephrased as “I don’t care”. Because most of the time, “later” never occurs and you are just wasting time and energy right now, just in case. Whatever happens later should be dealt with later.

That said, you might argue that “yes, but it will be much more painful to deal with it later”. Ok. Let’s check.

Let’s say you have some cheese

public class Cheese {

  private final String name;

  public Cheese(String name) {
    this.name = Objects.requireNonNull(name);
  }

  public String getName() {
    return name;
  }
}

Then you want to retrieve the cheese from a database.

public class CheeseDAO {

  private final Database database;

  public CheeseDAO(Database database) {
    this.database = database;
  }

  public Cheese findByName(String name) {
    return database.names()
        .filter(name::equals)
        .reduce((a, b) -> {
          throw new IllegalStateException("More than one entry found for " + name);
        })
        .map(Cheese::new)
        .orElse(null);
  }
}

And then you have a REST resource depending on the CheeseDAO.

public class CheeseResource {

  private final CheeseDAO cheeseDAO;

  public CheeseResource(CheeseDAO cheeseDAO) {
    this.cheeseDAO = cheeseDAO;
  }

  public Cheese get(String name) {
    return cheeseDAO.findByName(name);
  }
}

Since you are an efficient human being, you decided that no interface was needed for the CheeseDAO. It has only one implementation so far, and you are not building an open source cheese library. All this code lives in your little cheese application.

But one day, some requirements arrive and you actually do need another implementation. “Later” actually happened.

So you now turn CheeseDAO into an interface.

public interface CheeseDAO {
  Cheese findByName(String name);
}

public class CheeseDatabaseDAO implements CheeseDAO {

  private final Database database;

  public CheeseDatabaseDAO(Database database) {
    this.database = database;
  }

  @Override
  public Cheese findByName(String name) {
    return database.names()
        .filter(name::equals)
        .reduce((a, b) -> {
          throw new IllegalStateException("More than one entry found for " + name);
        })
        .map(Cheese::new)
        .orElse(null);
  }
}

And now, off you go to fix compilation errors on all the classes depending on CheeseDAO.

For instance, you modify CheeseResource to this:

public class CheeseResource {

  private final CheeseDAO cheeseDAO;

  public CheeseResource(CheeseDAO cheeseDAO) {
    this.cheeseDAO = cheeseDAO;
  }

  public Cheese get(String name) {
    return cheeseDAO.findByName(name);
  }
}

I’ll leave you 5 seconds. 1, 2, 3, 4, 5.

Yes, I’m messing with you. Nothing has changed. Not a single character.

Turning a class into an interface “later” wasn’t painful after all.

Which is why I call it a bad reason. Doing it is painful now and has no benefit later.

Now, the good reason: Testing

The problem with a concrete class is that you need to instantiate it. In a testing context, you want to mock dependencies. In order to mock a concrete class, you need two things:

  1. Extend the class to be able to mock the behavior
  2. Instantiate the class

The first requirement is easy, the second is trickier. If the class is simple and has a simple constructor to call, everything is alright. If the class is quite annoying to instantiate, you have a problem.

This is where I step in. The coolest trick would be to instantiate the class without calling any constructor.

Fortunately, Java allows that. Because serialization does it all the time. You just need to sneak under the hood a little.

Originally, I got involved in open source to solve that problem specifically. Most mocking frameworks today use Objenesis to perform this task. I talked a bit about it in a previous post.

So, since 2003, you don’t need to be afraid to use concrete classes as dependencies. You can mock them just as any interface.
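For the curious, here is the kind of trick this relies on. Objenesis picks the most appropriate strategy for the running JVM; this sketch shows just one of them, the serialization constructor from sun.reflect.ReflectionFactory (exposed through the jdk.unsupported module), as an illustration rather than Objenesis’s actual code. The Fussy class is made up for the example:

```java
import java.lang.reflect.Constructor;
import sun.reflect.ReflectionFactory;

public class NoConstructorDemo {

  // A made-up class that is annoying to instantiate directly
  static class Fussy {
    final String name;

    Fussy(String name) {
      if (name == null) {
        throw new IllegalArgumentException("no name, no Fussy");
      }
      this.name = name;
    }
  }

  public static void main(String[] args) throws Exception {
    ReflectionFactory factory = ReflectionFactory.getReflectionFactory();
    // Build a constructor that allocates a Fussy but only runs Object's constructor
    Constructor<?> ctor = factory.newConstructorForSerialization(
        Fussy.class, Object.class.getDeclaredConstructor());
    ctor.setAccessible(true);

    Fussy fussy = (Fussy) ctor.newInstance();
    // Fussy's own constructor never ran, so its final field is still null
    System.out.println(fussy.name); // null
  }
}
```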

Oracle Kubernetes Cluster

Being an Oracle Groundbreaker Ambassador, I get to use the Oracle Cloud.

They have added support for Kubernetes lately. I must say I was pleasantly surprised about it.

It works perfectly.

So, here is a little tutorial if you want to play with it.

It uses Terraform. Oracle Cloud has developed a connector for it, which makes everything easier and scriptable from the command line.

OCI Prerequisites

The first step is to correctly configure your Oracle Cloud Infrastructure (OCI). You can more or less follow this.

I will comment and summarize here.

  1. Install Terraform (brew install terraform; you probably already have brew on a mac, so no need to install it and do the chown)
  2. Generate an SSH key for oci. Follow the instructions. You could use an existing key, but other scripts assume you have a key in the .oci directory, so it’s just easier to create a new one
  3. Add the public key on the Oracle console. Your life will be easier if you log in before clicking on all the links
  4. Create an env-vars.sh. I haven’t added it to my .bash_profile; I just do a source env-vars.sh when needed. There are two fun values to find: TF_VAR_tenancy_ocid and TF_VAR_user_ocid. The tenancy is here. The user is here. You can, of course, use the region you prefer.

Done!

Create the OKE

Now we get serious and we will create the Oracle Kubernetes Engine (OKE). This is explained here.

Again, the steps with my comments

  1. Get the git repository for oke: git clone https://github.com/cloud-partners/oke-how-to.git. You might want to fork and commit since you will tweak it
  2. Init terraform: terraform init
  3. Generate the plan to make sure it works: terraform plan. You might want to modify terraform/variables.tf first. This file contains the name of your cluster, the number of nodes per subnet you want, the server instance type and the OKE version used.
  4. You can then apply the plan to create your cluster: terraform apply. It should work magically. I had one problem on my side though. I think it’s because I have an old account. My OKE limit was at 0, so I couldn’t create a cluster. I had to ask the support to fix it, which was done pretty quickly.

One thing I am not sure about is if you will need to add some policies. Just in case, here are mine (I’m in the group Administrators):

  • ListandGetVCNs: Allow group Administrators to manage vcn in tenancy
  • ListGetsubnets: Allow group Administrators to manage virtual-network-family in tenancy
  • OKE: Allow service OKE to manage all-resources in tenancy
  • PSM-root-policy: PSM managed compartment root policy
  • Tenant Admin Policy: Tenant Admin Policy

I haven’t mastered the policy system yet, so I’m not quite sure what does what.

Deploy on the cluster

You now have a running cluster so let’s deploy some stuff on it.

  1. For that you need kubectl: brew install kubectl kubernetes-helm
  2. Add the kube config for the cluster. The doc will tell you to use the config file generated by Terraform. It works, but in general you want to keep the configuration of your other clusters (e.g. docker-for-desktop-cluster or minikube), so you will probably prefer to give it another name. You can then switch from one context to another using kubectl config use-context oci

Then just deploy whatever you want to deploy.

Destroy your cluster

Really important when you are done: terraform destroy

EasyMock 4.0.1 is out!

This release adds support for Java 9, 10 and 11. It also drops support for Java 6 and 7, so EasyMock now requires Java 8+. This brings easier maintenance and some performance improvements.

Modules are partly supported with an automatic module name.

It also changes the way EasyMock determines the type of the returned mock. The idea is to solve a highly annoying problem most mocking frameworks have with generics.

To be clear, starting now List<String> list = mock(List.class); will compile perfectly without any “unchecked” warning.

However, String s = mock(List.class); will also compile. But I expect you not to be crazy enough to do such a thing. It will throw a ClassCastException at runtime anyway.

The only side effect is that in rare cases, the compiler might fail to infer the return type and give a compilation error. If that ever happens, the solution is to use a type witness, e.g. foo(EasyMock.<List<String>>mock(List.class)). It should solve the problem nicely and, again, without a warning.

Change log for Version 4.0.1 (2018-10-30)

  • Upgrade to cglib 3.2.9 to support Java 11 (#234)
  • Upgrade TestNG to version 7 (#233)
  • Update to ASM 7.0 for full Java 11 support (#232)

Change log for Version 4.0 (2018-10-27)

  • Remove most long time deprecated methods (#231)
  • Relax typing for the mocking result (#229)
  • Upgrade Objenesis to 3.0.1 (#228)
  • Update cglib to 3.2.8 and asm to 6.2.1 (#225)
  • Java 11 Compatibility check: EasyMock (#224)
  • easymock 3.6 can’t work with JDK11 EA kit (#218)
  • update testng to 6.14.3 (#216)

Objenesis 3.0.1 is out!

This release adds support for Java 9, 10 and 11. It also drops support for Java 6 and 7, so Objenesis now requires Java 8+. This brings easier maintenance and some performance improvements.

Modules are partly supported with an automatic module name.

Change log for Version 3.0.1 (2018-10-18)

  • No Automatic-Module-Name in objenesis (#66)

Change log for Version 3.0 (2018-10-07)

  • Drop JRockit support (#64)
  • Move lower support to Java 1.8 (#63)
  • Replace findbugs by spotbugs (#62)
  • ClassDefinitionUtils doesn’t compile with Java 11 (#61)
  • update pom.xml for maven plugins (#60)
  • Test errors with Java 10 (#59)
  • Please remove the hidden .mvn directory from the source tarball (#57)
  • Move Android TCK API 26 because objenesis now requires it (#65)

Java is still free

About a month ago, I was preparing my Oracle Code One talk and tripped on a slide. I was trying to explain the new delivery process and how long the support of each version will last.

It wasn’t that clear at all.

So, I asked my fellow Java Champions about it. It triggered a discussion about the fact that it is indeed quite misunderstood.

We got together. Or to be honest, Martijn led the way and wrote an article to clarify the situation. We then made multiple suggestions and corrections. Representatives of the main OpenJDK providers were also involved.

  • AdoptOpenJDK
  • Amazon
  • Azul (actual supporter of )
  • BellSoft
  • IBM
  • jClarity
  • Oracle (obviously)
  • RedHat (actual supporter of Java 8 and 11)
  • SAP

So I now consider the document a must-read for anyone interested in Java.

Is Java still free? Yes, it is.