

180. Exemplifying method reference vs. lambda

Have you ever written a lambda expression and had your IDE advise you to replace it with a method reference? Of course you have! And I'm sure you preferred to follow the suggestion, because names matter and method references are often more readable than lambdas. While this is a subjective matter, I'm pretty sure you'll agree that extracting long lambdas into methods and using/re-using them via method references is a generally accepted good practice. But, beyond some esoteric JVM internal representations, do they behave the same? Is there any difference between a lambda and a method reference that may affect how the code behaves? Well, let's assume that we have the following simple class:

public class Printer {

  Printer() {
    System.out.println("Reset printer …");
  }

  public static void printNoReset() {
    System.out.println(
      "Printing (no reset) …" + Printer.class.hashCode());
  }

  public void printReset() {
    System.out.println("Printing (with reset) …"
      + Printer.class.hashCode());
  }
}

If we assume that p1 is a method reference and p2 is the corresponding lambda then we can perform the following calls:

System.out.print("p1:"); p1.run();
System.out.print("p1:"); p1.run();
System.out.print("p2:"); p2.run();
System.out.print("p2:"); p2.run();
System.out.print("p1:"); p1.run();
System.out.print("p2:"); p2.run();

Next, let’s see two scenarios of working with p1 and p2.

Scenario 1: Call printReset()

In the first scenario, we call printReset() via p1 and p2 as follows:

Runnable p1 = new Printer()::printReset;
Runnable p2 = () -> new Printer().printReset();

If we run the code right now then we get this output (the message generated by the Printer constructor):

Reset printer …

This output is caused by the method reference, p1. The Printer constructor is invoked right away, even though we haven't called the run() method yet. Because p2 (the lambda) is lazy, the Printer constructor is not called until we call its run() method. Going further, we fire the chain of run() calls for p1 and p2. The output will be:

p1:Printing (with reset) …1159190947
p1:Printing (with reset) …1159190947
p2:Reset printer …
Printing (with reset) …1159190947
p2:Reset printer …
Printing (with reset) …1159190947
p1:Printing (with reset) …1159190947
p2:Reset printer …
Printing (with reset) …1159190947

If we analyze this output we notice that the Printer constructor is called each time the lambda (p2.run()) is executed. On the other hand, for the method reference (p1.run()) the Printer constructor is not called again; it was called a single time, at p1's declaration. So, p1 is printing without resetting the printer. This can make a significant difference!

Scenario 2: Call static printNoReset()

Next, let’s call the static method printNoReset():

Runnable p1 = Printer::printNoReset;
Runnable p2 = () -> Printer.printNoReset();

If we run the code at this point, nothing happens (no output). Next, we fire the run() calls, and we get this output:

p1:Printing (no reset) …149928006
p1:Printing (no reset) …149928006
p2:Printing (no reset) …149928006
p2:Printing (no reset) …149928006
p1:Printing (no reset) …149928006
p2:Printing (no reset) …149928006

Since printNoReset() is a static method, the Printer constructor is not invoked. We can use p1 or p2 interchangeably without any difference in behavior. So, in this case, it is just a matter of preference.

Conclusion

When calling non-static methods, there is a main difference between a method reference and a lambda. A method reference such as new Printer()::printReset evaluates the instance-creation expression immediately and only once, at declaration time; the constructor is not called again at method invocation (run()). On the other hand, lambdas are lazy: they call the constructor only at method invocation, and at each such invocation (run()).
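
The practical takeaway is that the difference lies in when the receiver expression (new Printer()) is evaluated. Here is a minimal sketch that makes both behaviors explicit (p3 and p4 are illustrative names, not part of the original example):

// Bind the method reference to an instance we create explicitly;
// "Reset printer …" is printed here, exactly once
Printer printer = new Printer();
Runnable p3 = printer::printReset;   // run() never triggers the constructor again

// If we want a fresh Printer (and thus a reset) on every run(),
// only the lambda form expresses that
Runnable p4 = () -> new Printer().printReset();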


181. Hooking lambda laziness via Supplier/Consumer

The java.util.function.Supplier is a functional interface capable of supplying a result via its get() method. The java.util.function.Consumer is another functional interface, capable of consuming the argument given to its accept() method; it returns no result (void). Both of these functional interfaces are lazy, so it is not always straightforward to analyze and understand code that involves them, especially when a snippet of code involves both. Let's give it a try! Consider the following simple class:

static class Counter {
  static int c;
  public static int count() {
    System.out.println("Incrementing c from "
      + c + " to " + (c + 1));
    return c++;                                   
  }
}

And, let’s write the following Supplier and Consumer:

Supplier<Integer> supplier = () -> Counter.count();
Consumer<Integer> consumer = c -> {
  c = c + Counter.count();
  System.out.println("Consumer: " + c);
};

So, at this point, what is the value of Counter.c?

System.out.println("Counter: " + Counter.c); // 0

The correct answer is: Counter.c is 0. The supplier and the consumer are lazy, so neither get() nor accept() was called at declaration time. Counter.count() has not been invoked so far, so Counter.c has not been incremented. Here is a tricky one … how about now?

System.out.println("Supplier: " + supplier.get()); // 0

We know that by calling supplier.get() we trigger the Counter.count() execution, so Counter.c should be incremented and become 1. However, supplier.get() returns 0. The explanation resides in the count() method, at the line return c++;. When we write c++, we use the post-increment operator, so we use the current value of c in our statement (in this case, return) and only afterward increment it by 1. This means that supplier.get() gets back the value of c as 0, while the incrementation takes place after this return, so Counter.c is now 1:

System.out.println("Counter: " + Counter.c); // 1

If we switch from post-increment (c++) to pre-increment (++c), then supplier.get() gets back the value 1, which is in sync with Counter.c. This happens because the incrementation takes place before the value is used in our statement (here, return).
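
For reference, the pre-increment variant of count() would look like this (a minimal sketch showing only the modified method):

public static int count() {
  System.out.println("Incrementing c from "
    + c + " to " + (c + 1));
  return ++c; // pre-increment: c is bumped first, then returned,
              // so supplier.get() and Counter.c stay in sync
}

Back to the post-increment version: so far we know that Counter.c is equal to 1. Next, let's call the consumer and pass in the Counter.c value: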

consumer.accept(Counter.c);       

Via this call, we push Counter.c (which is 1) into the following computation and display:

c -> {
  c = c + Counter.count();
  System.out.println("Consumer: " + c);
} // Consumer: 2

So, c = c + Counter.count() can be seen as c = 1 + Counter.count(), that is, c = 1 + 1 = 2 (the parameter c received the value of Counter.c, which was 1). The output will be: Consumer: 2. This time, Counter.c is also 2 (remember the post-increment effect: count() returned 1 and only then bumped Counter.c to 2):

System.out.println("Counter: " + Counter.c); // 2

Next, let’s invoke the supplier:

System.out.println("Supplier: " + supplier.get()); // 2

We know that get() will get back the current value of c, which is 2. Afterward, Counter.c becomes 3:

System.out.println("Counter: " + Counter.c); // 3

We can continue like this forever, but I think you got the idea of how the Supplier and the Consumer functional interfaces work.
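
For convenience, here is a minimal, self-contained sketch that puts the whole walkthrough together (the class name LazinessDemo is just illustrative):

import java.util.function.Consumer;
import java.util.function.Supplier;

public class LazinessDemo {

  static class Counter {
    static int c;
    public static int count() {
      System.out.println("Incrementing c from "
        + c + " to " + (c + 1));
      return c++;
    }
  }

  public static void main(String[] args) {
    Supplier<Integer> supplier = () -> Counter.count();
    Consumer<Integer> consumer = c -> {
      c = c + Counter.count();
      System.out.println("Consumer: " + c);
    };

    System.out.println("Counter: " + Counter.c);        // 0 (both are lazy)
    System.out.println("Supplier: " + supplier.get());  // 0 (post-increment)
    System.out.println("Counter: " + Counter.c);        // 1
    consumer.accept(Counter.c);                         // Consumer: 2
    System.out.println("Counter: " + Counter.c);        // 2
    System.out.println("Supplier: " + supplier.get());  // 2
    System.out.println("Counter: " + Counter.c);        // 3
  }
}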


Fixing in functional fashion

How about providing this fix in a functional programming fashion? (The fix in question is the lazy loading of the application's dependencies in the ApplicationDependency class; see problem 182 for the full context and the imperative fix.) Practically, all we want is to lazily download the application's dependencies. Since laziness is a functional programming specialty, and we've just become familiar with the Supplier (see the previous problem), we can start as follows:

public class ApplicationDependency {
        
  private final Supplier<String> dependencies
    = this::downloadDependencies;  
  …
  public String getDependencies() {
    return dependencies.get();
  } 
  …
  private String downloadDependencies() {
         
    return "list of dependencies downloaded from repository "
      + Math.random();
  }  
}

First, we defined a Supplier that calls the downloadDependencies() method. We know that the Supplier is lazy, so nothing happens until its get() method is explicitly called. Second, we modified getDependencies() to return dependencies.get(). So, we delay downloading the application's dependencies until they are explicitly required. Third, we changed the return type of downloadDependencies() from void to String, which is needed for Supplier.get().

This is a nice fix, but it has a serious shortcoming: we lost the caching! Now the dependencies are downloaded at every getDependencies() call. We can avoid this issue via memoization (https://en.wikipedia.org/wiki/Memoization). We covered this concept in Chapter 8 of The Complete Coding Interview Guide in Java but, in a nutshell, memoization is a technique used to avoid duplicate work by caching results that can be reused later. Memoization is commonly applied in Dynamic Programming, but there are no restrictions or limitations; for instance, we can apply it in functional programming. In our particular case, we start by defining a functional interface that extends the Supplier interface:

@FunctionalInterface
public interface FSupplier<R> extends Supplier<R> {}

Next, we provide an implementation of FSupplier that basically caches the results it hasn't seen yet and serves the already-seen ones from the cache:

public class Memoize {

  private static final Object UNDEFINED = new Object();

  public static <T> FSupplier<T> supplier(
      final Supplier<T> supplier) {

    AtomicReference<Object> cache = new AtomicReference<>(UNDEFINED);

    return () -> {

      Object value = cache.get();

      if (value == UNDEFINED) {

        synchronized (cache) {

          // re-check under the lock; another thread may have cached it already
          value = cache.get();

          if (value == UNDEFINED) {
            value = supplier.get(); // call the costly operation exactly once
            System.out.println("Caching: " + value);
            cache.set(value);
          }
        }
      }

      return (T) value;
    };
  }
}

Finally, we replace our initial Supplier with FSupplier as follows:

private final Supplier<String> dependencies
  = Memoize.supplier(this::downloadDependencies);

Done! Our functional approach takes advantage of Supplier’s laziness and can cache the results.
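
To see the caching at work, we can call getDependencies() more than once and check that the download happens a single time. Here is a minimal sketch (assuming the ApplicationDependency(long id, String name) constructor from problem 182; the id and name are illustrative):

ApplicationDependency app = new ApplicationDependency(1, "my-app");

// first call: triggers the download and caches the result
String first = app.getDependencies();
// second call: served from the cache, same value, no new download
String second = app.getDependencies();

System.out.println(first.equals(second)); // true

Since downloadDependencies() appends Math.random() to the returned string, equal values are a strong hint that the supplier was invoked only once.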


182. Refactoring code to add lambda laziness

In this problem, let's have a refactoring session meant to transform dysfunctional code into functional code. We start from the following code, a simple class holding information about an application's dependencies:

public class ApplicationDependency {
  
  private final long id;
  private final String name;
  private String dependencies;
  public ApplicationDependency(long id, String name) {
    this.id = id;
    this.name = name;
  }
  public long getId() {
    return id;
  }
  public String getName() {
    return name;
  } 
  
  public String getDependencies() {
    return dependencies;
  }
  
  private void downloadDependencies() {

    dependencies = "list of dependencies downloaded from repository "
      + Math.random();
  }
}

Why single out the getDependencies() method? Because this is the point where the application misbehaves. More precisely, the following class needs an application's dependencies in order to process them accordingly:

public class DependencyManager {
  
  private Map<Long,String> apps = new HashMap<>();
  
  public void processDependencies(ApplicationDependency appd){
      
    System.out.println();
    System.out.println("Processing app: " + appd.getName());
    System.out.println("Dependencies: "
      + appd.getDependencies());

    apps.put(appd.getId(), appd.getDependencies());
  }  
}

This class relies on the ApplicationDependency#getDependencies() method, which just returns null (the default value of the dependencies field). The expected application dependencies were never downloaded, since the downloadDependencies() method was never called. Most probably, a code reviewer will signal this issue and raise a ticket to fix it.

Fixing in imperative fashion

A possible fix will be as follows (in ApplicationDependency):

public class ApplicationDependency {
  
  private String dependencies = downloadDependencies();
  …
  public String getDependencies() {
           
    return dependencies;
  }
  …
  private String downloadDependencies() {

    return "list of dependencies downloaded from repository "
      + Math.random();
  }
}

Calling downloadDependencies() at dependencies initialization will definitely fix the problem of loading the dependencies: when DependencyManager calls getDependencies(), it will have access to the downloaded dependencies. But is this a good approach? Downloading the dependencies is a costly operation, and we do it every time an ApplicationDependency instance is created. If the getDependencies() method is never called, this costly operation doesn't pay off. So, a better approach is to postpone downloading the application's dependencies until getDependencies() is actually called:

public class ApplicationDependency {
  private String dependencies;
  …
  public String getDependencies() {
             
    downloadDependencies();      
     
    return dependencies;
  }
  …
  private void downloadDependencies() {

    dependencies = "list of dependencies downloaded from repository "
      + Math.random();
  }
}

This is better, but still not the best! This time, the application's dependencies are downloaded every time the getDependencies() method is called. Fortunately, there is a quick fix for this: we just need to add a null check before performing the download:

public String getDependencies() {
      
  if (dependencies == null) {
    downloadDependencies();
  }
       
  return dependencies;
}

Done! Now, the application’s dependencies are downloaded only at the first call of the getDependencies() method. This imperative solution works like a charm and will pass the code review.
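
As a quick check, here is a minimal sketch that reuses the DependencyManager shown earlier (the id and name are illustrative):

ApplicationDependency appd = new ApplicationDependency(1, "my-app");
DependencyManager manager = new DependencyManager();

// first call: getDependencies() finds dependencies == null and downloads them
manager.processDependencies(appd);
// second call: dependencies are already set, so no new download takes place
manager.processDependencies(appd);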


183. Writing a Function<String, T> for parsing data

Let’s assume that we have the following text:

String text = """
  test, a, 1, 4, 5, 0xf5, 0x5, 4.5d, 6, 5.6, 50000, 345,
  4.0f, 6$3, 2$1.1, 5.5, 6.7, 8, a11, 3e+1, -11199, 55
  """;

And the goal is to find a solution that extracts only the numbers from this text. Depending on the scenario, we may need only the integers, or only the doubles, and so on. Sometimes, we may need to perform some text replacements before extraction (for instance, we may want to replace the xf characters with a dot, so that 0xf5 becomes 0.5). A possible solution to this problem is to write a method (let's name it parseText()) that takes a Function<String, T> as an argument. The Function<String, T> gives us the flexibility to shape any of the following:

List<Integer> integerValues
  = parseText(text, Integer::valueOf);
List<Double> doubleValues
  = parseText(text, Double::valueOf);

List<Double> moreDoubleValues
  = parseText(text, t -> Double.valueOf(t.replaceAll(
      "\\$", "").replaceAll("xf", ".").replaceAll("x", ".")));

The parseText() method should perform several steps to reach the final result. Its signature can be as follows:

public static <T> List<T> parseText(
    String text, Function<String, T> func) {
  …
}

First, we have to split the received text by the comma delimiter and extract the items into a String[]. This way, we have access to each item from the text. Second, we can stream the String[] and filter out any empty items. Third, we can call Function.apply() to apply the given function to each item (for instance, to apply Double::valueOf). This can be done via the intermediate operation map(). Since some items may be invalid numbers, we have to catch and ignore any Exception (swallowing an exception like this is bad practice but, in this case, there is really nothing else to do); for any invalid item, we simply return null. Fourth, we filter out all null values, which means that the remaining stream contains only values that passed through Function.apply(). Fifth, we collect the stream into a List and return it. Gluing these five steps together results in the following code:

public static <T> List<T> parseText(
    String text, Function<String, T> func) {
  return Arrays.stream(text.split(",")) // steps 1 and 2
    .filter(s -> !s.isEmpty())
    .map(s -> {
       try {
         return func.apply(s.trim());   // step 3
       } catch (Exception e) {}
       return null;
    })
    .filter(Objects::nonNull)           // step 4
    .collect(Collectors.toList());      // step 5
}

Done! You can use this example to solve a wide range of similar problems.
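
Since the target type is generic, any String-based factory works just as well. For instance, we could collect BigDecimal values (this particular call is only an illustration, not part of the original example):

List<BigDecimal> decimalValues
  = parseText(text, BigDecimal::new);

Items that BigDecimal cannot parse (such as test or 6$3) simply end up as null and are filtered out, exactly as in the Integer and Double cases.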


184. Composing predicates in Stream’s filters

A predicate (basically, a condition) can be modeled as a boolean-valued function via the java.util.function.Predicate functional interface. Its functional method is named test(T t) and returns a boolean. Applying predicates in a stream pipeline can be done via several stream intermediate operations, but here we are interested only in the filter(Predicate p) operation. For instance, let's consider the following class:

public class Car {
  private final String brand;
  private final String fuel;
  private final int horsepower;
  public Car(String brand, String fuel, int horsepower) {
    this.brand = brand;
    this.fuel = fuel;
    this.horsepower = horsepower;
  }
 
  // getters, equals(), hashCode(), toString()
}

If we have a List<Car> and we want to express a filter that produces all the cars that are Chevrolets then we can start by defining the proper Predicate:

Predicate<Car> pChevrolets
  = car -> car.getBrand().equals("Chevrolet");

Next, we can use this Predicate in a stream pipeline as follows:

List<Car> chevrolets = cars.stream()              
  .filter(pChevrolets)
  .collect(Collectors.toList());

A Predicate can be negated in at least three ways. We can negate the condition via the logical not (!) operator:

Predicate<Car> pNotChevrolets
  = car -> !car.getBrand().equals("Chevrolet");

We can call the Predicate.negate() method:

Predicate<Car> pNotChevrolets = pChevrolets.negate();     

Or, we can call the Predicate.not() method:

Predicate<Car> pNotChevrolets = Predicate.not(pChevrolets);

No matter which of these three approaches you prefer, the following filter will produce all cars that are not Chevrolets:

List<Car> notChevrolets = cars.stream()              
  .filter(pNotChevrolets)              
  .collect(Collectors.toList());

In the previous examples, we have applied a single predicate in a stream pipeline. But, we can apply multiple predicates as well. For instance, we may want to express a filter that produces all the cars that are not Chevrolets and have at least 150 horsepower. For the first part of this composite predicate, we can arbitrarily use pChevrolets.negate(), while for the second part, we need the following Predicate:

Predicate<Car> pHorsepower
  = car -> car.getHorsepower() >= 150;

We can obtain a composite predicate by chaining the filter() calls as follows:

List<Car> notChevrolets150 = cars.stream()              
  .filter(pChevrolets.negate())
  .filter(pHorsepower)
  .collect(Collectors.toList());

But a shorter and more expressive alternative is to rely on Predicate#and(Predicate<? super T> other), which applies the short-circuiting logical AND between two predicates. So, the previous example is better expressed as follows:

List<Car> notChevrolets150 = cars.stream()              
  .filter(pChevrolets.negate().and(pHorsepower))
  .collect(Collectors.toList());

If we need to apply the short-circuiting logical OR between two predicates then relying on Predicate#or(Predicate<? super T> other) is the proper choice. For instance, if we want to express a filter that produces all Chevrolets or electric cars then we can do it as follows:

Predicate<Car> pElectric
  = car -> car.getFuel().equals("electric");
      
List<Car> chevroletsOrElectric = cars.stream()              
  .filter(pChevrolets.or(pElectric))
  .collect(Collectors.toList());

If we are in a scenario that heavily relies on composite predicates then we can start by creating two helpers that make our job easier:

@SuppressWarnings("unchecked")
public final class Predicates {

  private Predicates() {
    throw new AssertionError("Cannot be instantiated");
  }

  public static <T> Predicate<T> asOneAnd(
      Predicate<T>... predicates) {

    Predicate<T> theOneAnd = Stream.of(predicates)
      .reduce(p -> true, Predicate::and);

    return theOneAnd;
  }

  public static <T> Predicate<T> asOneOr(
      Predicate<T>... predicates) {

    Predicate<T> theOneOr = Stream.of(predicates)
      .reduce(p -> false, Predicate::or);

    return theOneOr;
  }
}

So, the goal of these helpers is to take several predicates and glue them into a single composite predicate via the short-circuiting logical AND, respectively OR. Let's assume that we want to express a filter that applies the following three predicates via the short-circuiting logical AND:

Predicate<Car> pLexus = car -> car.getBrand().equals("Lexus");
Predicate<Car> pDiesel = car -> car.getFuel().equals("diesel");
Predicate<Car> p250 = car -> car.getHorsepower() > 250;

First, we join these predicates in a single one:

Predicate<Car> predicateAnd = Predicates
  .asOneAnd(pLexus, pDiesel, p250);

Afterward, we express the filter:

List<Car> lexusDiesel250And = cars.stream()              
  .filter(predicateAnd)              
  .collect(Collectors.toList());

How about expressing a filter that produces a stream containing all cars having horsepower between 100 and 200 or 300 and 400? The predicates are:

Predicate<Car> p100 = car -> car.getHorsepower() >= 100;
Predicate<Car> p200 = car -> car.getHorsepower() <= 200;
      
Predicate<Car> p300 = car -> car.getHorsepower() >= 300;
Predicate<Car> p400 = car -> car.getHorsepower() <= 400;

The composite predicate can be obtained as follows:

Predicate<Car> pCombo = Predicates.asOneOr(
  Predicates.asOneAnd(p100, p200),
  Predicates.asOneAnd(p300, p400)
);

Expressing the filter is straightforward:

List<Car> comboAndOr = cars.stream()              
  .filter(pCombo)              
  .collect(Collectors.toList());

You can find all these examples in the bundled code.


179. Streaming custom code to map

Let’s assume that we have the following legacy class:

public class Post {
  
  private final int id;
  private final String title;
  private final String tags;
  public Post(int id, String title, String tags) {
    this.id = id;
    this.title = title;
    this.tags = tags;
  }
  …
  public static List<String> allTags(Post post) {
      
    return Arrays.asList(post.getTags().split("#"));
  }
}

So, we have a class that shapes some blog posts. Each post has several properties including its tags. The tags of each post are actually represented as a string of tags separated by hashtag (#). Whenever we need the list of tags for a given post, we can call the allTags() helper. For instance, here is a list of posts and their tags:

List<Post> posts = List.of(
  new Post(1, "Running jOOQ", "#database #sql #rdbms"),
  new Post(2, "I/O files in Java", "#io #storage #rdbms"),
  new Post(3, "Hibernate Course", "#jpa #database #rdbms"),
  new Post(4, "Hooking Java Sockets", "#io #network"),
  new Post(5, "Analysing JDBC transactions", "#jdbc #rdbms")
);

Our goal is to extract from this list a Map<String, List<Integer>> containing, for each tag (key), the list of posts (value). For instance, for tag #database, we have articles 1 and 3, for tag #rdbms, we have articles 1, 2, 3, and 5, and so on. Accomplishing this task in functional programming can be done via flatMap() and groupingBy(). Both of these have been covered in detail in Java Coding Problems, First Edition. In a nutshell, flatMap() is useful for flattening a nested Stream<Stream<R>> model, while groupingBy() is a collector useful for grouping data into a map by some logic/property. We need flatMap() because we have a List<Post> that, for each Post, nests a List<String> via allTags() (so, if we simply call stream(), we get back a Stream<Stream<R>>). After flattening, we wrap each tag in a Map.Entry<String, Integer>. Finally, we group these entries by tag into a Map as follows:

Map<String, List<Integer>> result = posts.stream()
  .flatMap(post -> Post.allTags(post).stream()
    .map(t -> entry(t, post.getId())))
  .collect(groupingBy(Entry::getKey,
              mapping(Entry::getValue, toList())));

But, based on the previous problem, we know that starting with JDK 16, we can use mapMulti(). So, we can re-write the previous snippet as follows:

Map<String, List<Integer>> resultMulti = posts.stream()
  .<Map.Entry<String, Integer>>mapMulti((post, consumer) -> {
      for (String tag : Post.allTags(post)) {
             consumer.accept(entry(tag, post.getId()));
      }
  })
  .collect(groupingBy(Entry::getKey,
              mapping(Entry::getValue, toList())));

This time, we saved the map() intermediate operation and intermediate streams.
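
If we want to eyeball the result, a quick print will do (a minimal sketch; the exact iteration order is not guaranteed, since groupingBy() collects into a HashMap by default):

// each tag maps to the ids of the posts that use it, e.g.,
// the database tag maps to posts 1 and 3 (see the discussion above)
resultMulti.forEach((tag, ids) ->
  System.out.println(tag + " -> " + ids));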