Sorting a map 2 – Functional style programming – extending API

So, when the stream pipeline needs only the map's values, we can start from values(), when it needs only the keys, we can start from keySet(), and when it needs both (the values and the keys), we can start from entrySet(). For instance, a stream pipeline shaped as Map -> Stream -> Filter -> Map that filters the top 5 cars by key and collects them into a resulting map needs the entrySet() starting point as follows:

Map<Integer, Car> carsTop5a = cars.entrySet().stream()
  .filter(c -> c.getKey() <= 5)
  .collect(Collectors.toMap(
     Map.Entry::getKey, Map.Entry::getValue));
  //or, .collect(Collectors.toMap(
  //   c -> c.getKey(), c -> c.getValue()));

Here is an example that returns a Map of the cars having more than 100 horsepower, sorted ascending by horsepower:

Map<Integer, Car> hp100Top5a = cars.entrySet().stream()
  .filter(c -> c.getValue().getHorsepower() > 100)
  .sorted(Entry.comparingByValue(
          Comparator.comparingInt(Car::getHorsepower)))
  .collect(Collectors.toMap(
     Map.Entry::getKey, Map.Entry::getValue,
       (c1, c2) -> c2, LinkedHashMap::new));
  //or, .collect(Collectors.toMap(
  //   c -> c.getKey(), c -> c.getValue(),
  //   (c1, c2) -> c2, LinkedHashMap::new));
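
If we want only the top 5 such cars (as the hp100Top5a variable name hints), one option is to sort in reversed order and truncate the stream via limit(5) before collecting. Here is a sketch along the same lines (the hp100Top5b name is just for illustration):

Map<Integer, Car> hp100Top5b = cars.entrySet().stream()
  .filter(c -> c.getValue().getHorsepower() > 100)
  .sorted(Entry.<Integer, Car>comparingByValue(Comparator
          .comparingInt(Car::getHorsepower).reversed()))
  .limit(5)
  .collect(Collectors.toMap(
     Map.Entry::getKey, Map.Entry::getValue,
     (c1, c2) -> c2, LinkedHashMap::new));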

If we need to express such pipelines quite often, we may prefer to write some helpers. Here is a set of four generic helpers, starting with the two for filtering and sorting a Map<K, V> by key:

public final class Filters {
  private Filters() {
    throw new AssertionError("Cannot be instantiated");
  }
  public static <K, V> Map<K, V> byKey(
        Map<K, V> map, Predicate<K> predicate) {
  return map.entrySet()
    .stream()
    .filter(item -> predicate.test(item.getKey()))
    .collect(Collectors.toMap(
       Map.Entry::getKey, Map.Entry::getValue));
  }
  public static <K, V> Map<K, V> sortedByKey(
    Map<K, V> map, Predicate<K> predicate, Comparator<K> c) {
    return map.entrySet()
      .stream()
      .filter(item -> predicate.test(item.getKey()))
      .sorted(Map.Entry.comparingByKey(c))
      .collect(Collectors.toMap(
         Map.Entry::getKey, Map.Entry::getValue,
              (c1, c2) -> c2, LinkedHashMap::new));
  }
  …

And, for filtering and sorting a Map by value:

  public static <K, V> Map<K, V> byValue(
      Map<K, V> map, Predicate<V> predicate) {
    return map.entrySet()
      .stream()
      .filter(item -> predicate.test(item.getValue()))
      .collect(Collectors.toMap(
         Map.Entry::getKey, Map.Entry::getValue));
  }
  public static <K, V> Map<K, V> sortedByValue(Map<K, V> map,
      Predicate<V> predicate, Comparator<V> c) {
  return map.entrySet()
    .stream()
    .filter(item -> predicate.test(item.getValue()))
    .sorted(Map.Entry.comparingByValue(c))
    .collect(Collectors.toMap(
       Map.Entry::getKey, Map.Entry::getValue,
           (c1, c2) -> c2, LinkedHashMap::new));
  }
}

Now, our code becomes much shorter. For instance, we can filter the top 5 cars by key and collect them into a resulting map as follows:

Map<Integer, Car> carsTop5s
  = Filters.byKey(cars, c -> c <= 5);

Or, we can filter the cars having more than 100 horsepower (and, optionally, sort them by horsepower) as follows:

Map<Integer, Car> hp100Top5s
  = Filters.byValue(cars, c -> c.getHorsepower() > 100);
Map<Integer, Car> hp100Top5d
  = Filters.sortedByValue(cars, c -> c.getHorsepower() > 100,
      Comparator.comparingInt(Car::getHorsepower));                             
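
And, since sortedByKey() follows the same pattern, we can filter and sort by key in a single call as well (the variable name is just for illustration):

Map<Integer, Car> carsTop5Desc
  = Filters.sortedByKey(cars, c -> c <= 5,
      Comparator.reverseOrder());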

Cool, right?! Feel free to extend Filters with more generic helpers for handling Map processing in stream pipelines.
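
For instance, an additional helper that filters on both the key and the value at once could look like this (a sketch only; the byKeyAndValue() name and shape are my own, not part of the Filters API above):

public static <K, V> Map<K, V> byKeyAndValue(
    Map<K, V> map, BiPredicate<K, V> predicate) {
  return map.entrySet()
    .stream()
    .filter(item -> predicate.test(
       item.getKey(), item.getValue()))
    .collect(Collectors.toMap(
       Map.Entry::getKey, Map.Entry::getValue));
}

Map<Integer, Car> dieselTop5
  = Filters.byKeyAndValue(cars,
      (k, v) -> k <= 5 && "diesel".equals(v.getFuel()));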

193. Sorting a map

Let’s assume that we have the following map:

public class Car {
  private final String brand;
  private final String fuel;
  private final int horsepower;
  …
}
Map<Integer, Car> cars = Map.of(
  1, new Car("Dacia", "diesel", 350),
  2, new Car("Lexus", "gasoline", 350),
  3, new Car("Chevrolet", "electric", 150),
  4, new Car("Mercedes", "gasoline", 150),
  5, new Car("Chevrolet", "diesel", 250),
  6, new Car("Ford", "electric", 80),
  7, new Car("Chevrolet", "diesel", 450),
  8, new Car("Mercedes", "electric", 200),
  9, new Car("Chevrolet", "gasoline", 350),
  10, new Car("Lexus", "diesel", 300)
);

Next, we want to sort this map into a List<String> as follows:

If the horsepower values are different then sort in descending order by horsepower

If the horsepower values are equal then sort in ascending order by the map keys

The result, List<String>, should contain items of type key(horsepower)

Under these statements, sorting the cars map will result in:

[7(450), 1(350), 2(350), 9(350), 10(300), 5(250), 8(200), 3(150), 4(150), 6(80)]

Obviously, this problem requires a custom comparator. Having two map entries (c1, c2), we elaborate the following logic:

Check if c2's horsepower is equal to c1's horsepower

If they are equal, then compare c1's key with c2's key

Otherwise, compare c2's horsepower with c1's horsepower

Collect the result into a List

In code lines, this can be expressed as follows:

List<String> result = cars.entrySet().stream()
  .sorted((c1, c2) -> c2.getValue().getHorsepower()
        == c1.getValue().getHorsepower()
     ? c1.getKey().compareTo(c2.getKey())
     : Integer.valueOf(c2.getValue().getHorsepower())
        .compareTo(c1.getValue().getHorsepower()))
  .map(c -> c.getKey() + "("
                       + c.getValue().getHorsepower() + ")")
  .toList();

Or, if we rely on Map.Entry.comparingByValue(), comparingByKey(), and java.util.Comparator then we can write it as follows:

List<String> result = cars.entrySet().stream()
  .sorted(Entry.<Integer, Car>comparingByValue(
            Comparator.comparingInt(
              Car::getHorsepower).reversed())
  .thenComparing(Entry.comparingByKey()))
  .map(c -> c.getKey() + "("
    + c.getValue().getHorsepower() + ")")
  .toList();

This approach is more readable and expressive.

194. Filtering a map

Let’s consider the following map:

public class Car {
  private final String brand;
  private final String fuel;
  private final int horsepower;
  …
}
Map<Integer, Car> cars = Map.of(
  1, new Car("Dacia", "diesel", 100),
  …
  10, new Car("Lexus", "diesel", 300)
);

In order to stream a map, we can start from Map's entrySet(), values(), or keySet(), followed by a stream() call. For instance, if we want to express a pipeline as Map -> Stream -> Filter -> String that returns a String joining all the electric brands, then we can rely on entrySet() as follows:

String electricBrands = cars.entrySet().stream()
  .filter(c -> "electric".equals(c.getValue().getFuel()))
  .map(c -> c.getValue().getBrand())
  .collect(Collectors.joining(", "));

But, as you can see, this stream pipeline doesn’t use the map’s keys. This means that we can better express it via values() instead of entrySet() as follows:

String electricBrands = cars.values().stream()
  .filter(c -> "electric".equals(c.getFuel()))
  .map(c -> c.getBrand())
  .collect(Collectors.joining(", "));

This is more readable and it clearly expresses its intention. Here is another example that you should be able to follow without further details:

Car newCar = new Car("No name", "gasoline", 350);
String carsAsNewCar1 = cars.entrySet().stream()
 .filter(c -> (c.getValue().getFuel().equals(newCar.getFuel())
   && c.getValue().getHorsepower() == newCar.getHorsepower()))
 .map(c -> c.getValue().getBrand())
 .collect(Collectors.joining(", "));

String carsAsNewCar2 = cars.values().stream()
 .filter(c -> (c.getFuel().equals(newCar.getFuel())
   && c.getHorsepower() == newCar.getHorsepower()))
 .map(c -> c.getBrand())
 .collect(Collectors.joining(", "));

195. Creating a custom collector via Collector.of()

Creating a custom collector is a topic that we covered in detail in Java Coding Problems, First Edition, Chapter 9, Problem 193. More precisely, in that problem, you saw how to write a custom collector by implementing the java.util.stream.Collector interface. In this problem, we continue this journey and create several custom collectors. This time, we will rely on the two Collector.of() methods having the following signatures:

static <T,R> Collector<T,R,R> of(
  Supplier<R> supplier,
  BiConsumer<R,T> accumulator,
  BinaryOperator<R> combiner,
  Collector.Characteristics... characteristics)
static <T,A,R> Collector<T,A,R> of(
  Supplier<A> supplier,
  BiConsumer<A,T> accumulator,
  BinaryOperator<A> combiner,
  Function<A,R> finisher,
  Collector.Characteristics... characteristics)

In this context, T, A, and R represent the following (a recap from Java Coding Problems, First Edition):

T represents the type of elements from the Stream (elements that will be collected)

A represents the type of the object used during the collection process (the mutable result container in which the stream elements are accumulated)

R represents the type of the object after the collection process (the final result)

Moreover, a Collector is characterized by four functions and an enumeration. Again, here is a short recap from Java Coding Problems, First Edition: these functions work together to accumulate entries into a mutable result container and optionally perform a final transformation on the result. They are as follows:

Creating a new empty mutable result container (the supplier argument)

Incorporating a new data element into the mutable result container (the accumulator argument)

Combining two mutable result containers into one (the combiner argument)

Performing an optional final transformation on the mutable result container to obtain the final result (the finisher argument)

In addition, we have the Collector.Characteristics enumeration, which defines the collector's behavior. Possible values are UNORDERED (the collection order is not preserved), CONCURRENT (multiple threads may accumulate elements into the same result container), and IDENTITY_FINISH (the finisher is the identity function, so no further transformation takes place). In this context, let's fire up a few examples. But, first, let's assume that we have the following model:

public interface Vehicle {}
public class Car implements Vehicle {
  private final String brand;
  private final String fuel;
  private final int horsepower;
  …
}
public class Submersible implements Vehicle {
  
  private final String type;
  private final double maxdepth;
  …
}

And, some data:

Map<Integer, Car> cars = Map.of(
  1, new Car("Dacia", "diesel", 100),
  …
  10, new Car("Lexus", "diesel", 300)
);

Next, let’s have some collectors in a helper class named MyCollectors.

Writing a custom collector that collects into a TreeSet

For a custom collector that collects into a TreeSet, the supplier is TreeSet::new, the accumulator is TreeSet#add(), the combiner relies on TreeSet#addAll(), and the finisher is the identity function:

public static <T>
    Collector<T, TreeSet<T>, TreeSet<T>> toTreeSet() {
  return Collector.of(TreeSet::new, TreeSet::add,
    (left, right) -> {
       left.addAll(right);
       return left;
  }, Collector.Characteristics.IDENTITY_FINISH);
}

In the following example, we use this collector to collect all electric brands in a TreeSet<String>:

TreeSet<String> electricBrands = cars.values().stream()
  .filter(c -> "electric".equals(c.getFuel()))
  .map(c -> c.getBrand())
  .collect(MyCollectors.toTreeSet());

That was easy!

Writing a custom collector that collects into a LinkedHashSet

For a custom collector that collects into a LinkedHashSet, the supplier is LinkedHashSet::new, the accumulator is HashSet::add (which LinkedHashSet inherits), the combiner relies on HashSet#addAll(), and the finisher is the identity function:

public static <T> Collector<T, LinkedHashSet<T>,
    LinkedHashSet<T>> toLinkedHashSet() {
  return Collector.of(LinkedHashSet::new, HashSet::add,
    (left, right) -> {
       left.addAll(right);
       return left;
  }, Collector.Characteristics.IDENTITY_FINISH);
}

In the following example, we use this collector to collect the sorted cars’ horsepower:

LinkedHashSet<Integer> hpSorted = cars.values().stream()
  .map(c -> c.getHorsepower())
  .sorted()
  .collect(MyCollectors.toLinkedHashSet());

Done! The LinkedHashSet<Integer> contains the horsepower values in ascending order.

Writing a custom collector that excludes elements of another collector

The goal of this section is to provide a custom collector that takes as arguments a Predicate and a Collector. It applies the given predicate to the elements to be collected and excludes from the given collector the elements that pass the predicate (in other words, the predicate describes what to exclude).

public static <T, A, R> Collector<T, A, R> exclude(
    Predicate<T> predicate, Collector<T, A, R> collector) {
  return Collector.of(
    collector.supplier(),
    (l, r) -> {
       if (predicate.negate().test(r)) {
         collector.accumulator().accept(l, r);
       }
    },
    collector.combiner(),
    collector.finisher(),
    collector.characteristics()
     .toArray(Collector.Characteristics[]::new)
  );
}

The custom collector uses the supplier, combiner, finisher, and characteristics of the given collector. It only influences the accumulator of the given collector. Basically, it explicitly calls the accumulator of the given collector only for the elements that do not pass the given predicate (the ones that pass it are excluded). For instance, if we want to obtain the sorted horsepower values that don't exceed 200 via this custom collector, then we call it as follows (the predicate tells what to exclude):

LinkedHashSet<Integer> excludeHp200 = cars.values().stream()
  .map(c -> c.getHorsepower())
  .sorted()
  .collect(MyCollectors.exclude(c -> c > 200,
           MyCollectors.toLinkedHashSet()));

Here, we use two custom collectors, but we can easily replace the toLinkedHashSet() with a built-in collector as well. Challenge yourself to write the counterpart of this custom collector. Write a collector that includes the elements that pass the given predicate.
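
For reference, here is one possible sketch of such an include() counterpart (mirroring exclude(), but this time calling the accumulator only for the elements that pass the predicate):

public static <T, A, R> Collector<T, A, R> include(
    Predicate<T> predicate, Collector<T, A, R> collector) {
  return Collector.of(
    collector.supplier(),
    (l, r) -> {
       // accumulate only the elements that pass the predicate
       if (predicate.test(r)) {
         collector.accumulator().accept(l, r);
       }
    },
    collector.combiner(),
    collector.finisher(),
    collector.characteristics()
     .toArray(Collector.Characteristics[]::new)
  );
}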

Writing a custom collector that collects elements by type

Let's suppose that we have the following List<Vehicle>:

Vehicle mazda = new Car("Mazda", "diesel", 155);
Vehicle ferrari = new Car("Ferrari", "gasoline", 500);

Vehicle hov = new Submersible("HOV", 3000);
Vehicle rov = new Submersible("ROV", 7000);

List<Vehicle> vehicles = List.of(mazda, hov, ferrari, rov);

Our goal is to collect only the cars or only the submersibles, but not both. For this, we can write a custom collector that collects by type into the given supplier as follows:

public static
  <T, A extends T, R extends Collection<A>> Collector<T, ?, R>
    toType(Class<A> type, Supplier<R> supplier) {
  return Collector.of(supplier,
      (R r, T t) -> {
         if (type.isInstance(t)) {
           r.add(type.cast(t));
         }
      },
    (R left, R right) -> {
       left.addAll(right);
       return left;
    },
    Collector.Characteristics.IDENTITY_FINISH
  );
}

Now, we can collect only the cars from the List<Vehicle> into an ArrayList as follows:

List<Car> onlyCars = vehicles.stream()
  .collect(MyCollectors.toType(
    Car.class, ArrayList::new));

And, we can collect only the submersibles into a HashSet as follows:

Set<Submersible> onlySubmersible = vehicles.stream()
  .collect(MyCollectors.toType(
    Submersible.class, HashSet::new));

Finally, let’s write a custom collector for a custom data structure.

Writing a custom collector for Splay Tree

In Chapter 5, Problem 120, we implemented a Splay Tree data structure. Now, let's write a custom collector capable of collecting elements into a Splay Tree. Obviously, the supplier is SplayTree::new. Moreover, the accumulator is SplayTree#insert(), while the combiner is SplayTree#insertAll():

public static
    Collector<Integer, SplayTree, SplayTree> toSplayTree() {
  return Collector.of(SplayTree::new, SplayTree::insert,
    (left, right) -> {
       left.insertAll(right);
       return left;
  },
  Collector.Characteristics.IDENTITY_FINISH);
}

Here is an example that collects the cars' horsepower values into a SplayTree:

SplayTree st = cars.values().stream()
  .map(c -> c.getHorsepower())
  .collect(MyCollectors.toSplayTree());

Done! Challenge yourself to implement a custom collector.

196. Throwing checked exceptions from lambdas

Let’s suppose that we have the following lambda:

static void readFiles(List<Path> paths) {
  paths.forEach(p -> {
    try {
      readFile(p);
    } catch (IOException e) {
      … // what can we throw here?
    }
  });
}

What can we throw in the catch block? Everybody knows the answer … we can throw an unchecked exception such as RuntimeException:

static void readFiles(List<Path> paths) {
  paths.forEach(p -> {
    try {
      readFile(p);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  });
}

Also, everybody knows that we cannot throw a checked exception such as IOException. The following snippet of code will not compile:

static void readFiles(List<Path> paths) {
  paths.forEach(p -> {
    try {
      readFile(p);
    } catch (IOException e) {
      throw new IOException(e);
    }
  });
}

Can we change this rule? Can we come up with a hack that allows us to throw checked exceptions from lambdas? Short answer: sure, we can! Long answer: sure, we can, if we simply hide the checked exception from the compiler as follows:

public final class Exceptions {
  private Exceptions() {
    throw new AssertionError("Cannot be instantiated");
  }
  public static void throwChecked(Throwable t) {
    Exceptions.<RuntimeException>throwIt(t);
  }
  @SuppressWarnings({"unchecked"})
  private static <X extends Throwable> void throwIt(
      Throwable t) throws X {
    throw (X) t;
  }
}

That’s all! Now, we can throw any checked exception. Here, we throw an IOException:

static void readFiles(List<Path> paths) throws IOException {
  paths.forEach(p -> {
    try {
      readFile(p);
    } catch (IOException e) {              
      Exceptions.throwChecked(new IOException(
        "Some files are corrupted", e));
    }
  });
}

And, we can catch it as follows:

List<Path> paths = List.of(…);
try {
  readFiles(paths);
} catch (IOException e) {
  System.out.println(e + " \n " + e.getCause());
}

If a certain path was not found then the reported error message will be:

java.io.IOException: Some files are corrupted
java.io.FileNotFoundException: …
(The system cannot find the path specified)

Cool, right?!
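
If we want to avoid repeating the try/catch in every lambda, we could also package the hack into a dedicated functional interface. The following ThrowingConsumer is just a sketch of mine (the name is hypothetical and not part of the Exceptions helper above):

@FunctionalInterface
public interface ThrowingConsumer<T> extends Consumer<T> {

  void acceptThrows(T t) throws Exception;

  @Override
  default void accept(T t) {
    try {
      acceptThrows(t);
    } catch (Exception e) {
      // re-throw the checked exception via the sneaky-throw hack
      Exceptions.throwChecked(e);
    }
  }
}

With it, the forEach() call can be written as paths.forEach((ThrowingConsumer<Path>) p -> readFile(p)).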

197. Implementing distinctBy() for Stream API

Let’s suppose that we have the following model and data:

public class Car {
  private final String brand;
  private final String fuel;
  private final int horsepower;
  …
}
List<Car> cars = List.of(
  new Car("Chevrolet", "diesel", 350),
  …
  new Car("Lexus", "diesel", 300)
);

We know that the Stream API contains the distinct() intermediate operation, which is capable of keeping only the distinct elements based on the equals() method:

cars.stream()
    .distinct()
    .forEach(System.out::println);

While this code prints the distinct cars, we may want a distinctBy() intermediate operation capable of keeping only the distinct elements based on a given property/key. For instance, we may need all the cars distinct by brand. For this, we can rely on the toMap() collector and the identity function as follows:

cars.stream()
    .collect(Collectors.toMap(Car::getBrand,
             Function.identity(), (c1, c2) -> c1))
    .values()
    .forEach(System.out::println);

We can extract this idea into a helper method as follows:

public static <K, T> Collector<T, ?, Map<K, T>>
  distinctByKey(Function<? super T, ? extends K> function) {
  return Collectors.toMap(
    function, Function.identity(), (t1, t2) -> t1);
}

And use it as follows:

cars.stream()
  .collect(Streams.distinctByKey(Car::getBrand))
  .values()
  .forEach(System.out::println);

While this approach does a nice job and also works for null values, we can come up with other ideas that don't work for null values. For instance, we can rely on ConcurrentHashMap and putIfAbsent() as follows (again, this doesn't work for null values):

public static <T> Predicate<T> distinctByKey(
    Function<? super T, ?> function) {
  Map<Object, Boolean> seen = new ConcurrentHashMap<>();
  return t -> seen.putIfAbsent(function.apply(t),
    Boolean.TRUE) == null;
}

Or, we can optimize this approach a little bit and use a Set:

public static <T> Predicate<T> distinctByKey(
    Function<? super T, ?> function) {
  Set<Object> seen = ConcurrentHashMap.newKeySet();
  return t -> seen.add(function.apply(t));
}

We can use these two approaches as in the following examples:

cars.stream()
    .filter(Streams.distinctByKey(Car::getBrand))
    .forEach(System.out::println);
cars.stream()
    .filter(Streams.distinctByKey(Car::getFuel))
    .forEach(System.out::println);

Challenge yourself to implement a distinctByKeys() operation, that is, distinct by multiple keys.
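
One possible sketch of such a distinctByKeys() (my own take, not the bundled solution; it builds a composite key as a List of the extracted key values) could be:

@SafeVarargs
public static <T> Predicate<T> distinctByKeys(
    Function<? super T, ?>... functions) {
  Set<List<?>> seen = ConcurrentHashMap.newKeySet();
  return t -> {
    // the composite key is the list of all extracted key values
    List<?> keys = Arrays.stream(functions)
      .map(f -> f.apply(t))
      .collect(Collectors.toList());
    return seen.add(keys);
  };
}

cars.stream()
    .filter(Streams.distinctByKeys(
       Car::getBrand, Car::getFuel))
    .forEach(System.out::println);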

198. Writing a custom collector that takes/skips a given number of elements

In Problem 195, we wrote a handful of custom collectors grouped in the MyCollectors class. Now, let's continue our journey and add two more custom collectors for keeping/skipping a given number of elements from the current stream. Let's assume the following model and data:

public class Car {
  private final String brand;
  private final String fuel;
  private final int horsepower;
  …
}
List<Car> cars = List.of(
  new Car("Chevrolet", "diesel", 350),
  … // 10 more
  new Car("Lexus", "diesel", 300)
);

The Stream API provides an intermediate operation named limit(long n) which can be used to truncate the stream to n elements. So, if this is exactly what we want then we can use it out of the box. For instance, we can limit the resulting stream to the first 5 cars as follows:

List<Car> first5CarsLimit = cars.stream()
  .limit(5)
  .collect(Collectors.toList());

Moreover, the Stream API provides an intermediate operation named skip(long n) which can be used to skip the first n elements in the stream pipeline. For instance, we can skip the first 5 cars as follows:

List<Car> last5CarsSkip = cars.stream()
  .skip(5)
  .collect(Collectors.toList());

However, there are cases when we need to compute different things and collect only the first/last 5 results. In such cases, a custom collector is welcome. Relying on the Collector.of() method (details in Problem 195), we can write a custom collector that keeps/collects the first n elements as follows (just for fun, let's collect these n elements in an unmodifiable list):

public static <T> Collector<T, List<T>, List<T>>  
    toUnmodifiableListKeep(int max) {
  return Collector.of(ArrayList::new,
    (list, value) -> {
       if (list.size() < max) {
         list.add(value);
       }
    },
    (left, right) -> {
       left.addAll(right);
       return left;
    },
    Collections::unmodifiableList);
}

So, the supplier is ArrayList::new, the accumulator is List#add(), the combiner is List#addAll(), and the finisher is Collections::unmodifiableList. Basically, the accumulator's job is to accumulate elements only until the given max is reached. From that point forward, nothing gets accumulated. This way, we can keep only the first 5 cars as follows:

List<Car> first5Cars = cars.stream()
  .collect(MyCollectors.toUnmodifiableListKeep(5));

On the other hand, if we want to skip the first n elements and collect the rest, then we can try to accumulate null elements until we reach the given index. From this point forward, we accumulate the real elements. In the end, the finisher removes the part of the list containing null values (from 0 to the given index) and returns an unmodifiable list of the remaining elements (from the given index to the end):

public static <T> Collector<T, List<T>, List<T>>
    toUnmodifiableListSkip(int index) {
  return Collector.of(ArrayList::new,
    (list, value) -> {
       if (list.size() >= index) {
         list.add(value);
       } else {
         list.add(null);
       }
    },
    (left, right) -> {
       left.addAll(right);
 
       return left;
    },
    list -> Collections.unmodifiableList(
      list.subList(index, list.size())));
}

Alternatively, we can optimize this approach a little bit and use a small local class (created via the collector's supplier) that contains the resulting list and a counter. While the given index is not reached, we simply increase the counter. Once the given index has been reached, we start to accumulate elements:

public static <T> Collector<T, ?, List<T>>
    toUnmodifiableListSkip(int index) {
  class Sublist {
    int index;
    List<T> list = new ArrayList<>();          
  }
  return Collector.of(Sublist::new,
    (sublist, value) -> {
       if (sublist.index >= index) {
         sublist.list.add(value);
       } else {
         sublist.index++;
       }
     },
     (left, right) -> {
        left.list.addAll(right.list);
        left.index = left.index + right.index;
       return left;
     },
     sublist -> Collections.unmodifiableList(sublist.list));
}

Both of these approaches can be used as in the following example:

List<Car> last5Cars = cars.stream()
  .collect(MyCollectors.toUnmodifiableListSkip(5));

Challenge yourself to implement a custom collector that collects in a given range.
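
As a starting point, here is one way such a range collector could be sketched (my own variant, combining the keep and skip ideas; fromIndex is inclusive, toIndex is exclusive, and, like the collectors above, the combiner is a simplification intended for sequential streams):

public static <T> Collector<T, ?, List<T>>
    toUnmodifiableListRange(int fromIndex, int toIndex) {
  class Sublist {
    int index;
    List<T> list = new ArrayList<>();
  }
  return Collector.of(Sublist::new,
    (sublist, value) -> {
       // keep only the elements whose position falls
       // inside [fromIndex, toIndex)
       if (sublist.index >= fromIndex
           && sublist.index < toIndex) {
         sublist.list.add(value);
       }
       sublist.index++;
    },
    (left, right) -> {
       left.list.addAll(right.list);
       left.index = left.index + right.index;
       return left;
    },
    sublist -> Collections.unmodifiableList(sublist.list));
}

List<Car> carsInRange = cars.stream()
  .collect(MyCollectors.toUnmodifiableListRange(2, 7));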

199. Implementing a Function that takes 5 (or any other arbitrary number of) arguments

We know that Java already has java.util.function.Function and its specialization, java.util.function.BiFunction. The Function interface defines the method apply(T t), while BiFunction has apply(T t, U u). In this context, we can define a TriFunction, a FourFunction, or, why not, a FiveFunction functional interface as follows (all of these are specializations of Function):

@FunctionalInterface
public interface FiveFunction <T1, T2, T3, T4, T5, R> {
  
  R apply(T1 t1, T2 t2, T3 t3, T4 t4, T5 t5);
}

As its name suggests, this functional interface takes 5 arguments. Now, let's use it! Let's assume that we have the following model:

public class PL4 {
  private final double a;
  private final double b;
  private final double c;
  private final double d;
  private final double x;
  public PL4(double a, double b,
             double c, double d, double x) {      
    this.a = a;
    this.b = b;
    this.c = c;
    this.d = d;
    this.x = x;
  }
  // getters
  public double compute() {
    return d + ((a - d) / (1 + (Math.pow(x / c, b))));
  }
  // equals(), hashCode(), toString()
}

The compute() method shapes a formula known as the 4-Parameter Logistic (4PL – https://www.myassays.com/four-parameter-logistic-regression.html). Without getting into irrelevant details, we pass as inputs four variables (a, b, c, and d), and for different values of the x coordinate we compute the y coordinate. The (x, y) pairs of coordinates describe a curve (a line graph). We need a PL4 instance for each x coordinate, and for each such instance, we call the compute() method. This means that we can use the FiveFunction interface in a Logistics class via the following helper:

public final class Logistics {
  …
  public static <T1, T2, T3, T4, X, R> R create(
      T1 t1, T2 t2, T3 t3, T4 t4, X x,
      FiveFunction<T1, T2, T3, T4, X, R> f) {
       
    return f.apply(t1, t2, t3, t4, x);
  }
  …
}

This acts as a factory for PL4:

PL4 pl4_1 = Logistics.create(
    4.19, -1.10, 12.65, 0.03, 40.3, PL4::new);
PL4 pl4_2 = Logistics.create(
    4.19, -1.10, 12.65, 0.03, 100.0, PL4::new);

PL4 pl4_8 = Logistics.create(
    4.19, -1.10, 12.65, 0.03, 1400.6, PL4::new);
System.out.println(pl4_1.compute());
System.out.println(pl4_2.compute());

System.out.println(pl4_8.compute());

But, if all we need is just the list of y coordinates then we can write a helper method in Logistics as follows:

public final class Logistics {
  …
  public static <T1, T2, T3, T4, X, R> List<R> compute(
      T1 t1, T2 t2, T3 t3, T4 t4, List<X> allX,
      FiveFunction<T1, T2, T3, T4, X, R> f) {
    List<R> allY = new ArrayList<>();
    for (X x : allX) {
      allY.add(f.apply(t1, t2, t3, t4, x));
    }
    return allY;
  }
  …
}

We can call this method as follows (here, we pass the 4PL formula, but it can be any other formula with 5 double parameters):

FiveFunction<Double, Double, Double, Double, Double, Double>
    pl4 = (a, b, c, d, x) -> d + ((a - d) /
                            (1 + (Math.pow(x / c, b))));      
List<Double> allX = List.of(40.3, 100.0, 250.2, 400.1,
                            600.6, 800.4, 1150.4, 1400.6);      
List<Double> allY = Logistics.compute(4.19, -1.10, 12.65,
                                      0.03, allX, pl4);

You can find the complete example in the bundled code.

200. Implementing a Consumer that takes 5 (or any other arbitrary number of) arguments

Before continuing with this problem, I strongly recommend reading Problem 199. Writing a custom Consumer that takes 5 arguments can be done as follows:

@FunctionalInterface
public interface FiveConsumer <T1, T2, T3, T4, T5> {
  
  void accept (T1 t1, T2 t2, T3 t3, T4 t4, T5 t5);
}

This is the five-arity specialization of the Java Consumer, just as the built-in BiConsumer is the two-arity specialization of Consumer. We can use FiveConsumer in conjunction with the PL4 formula as follows (here, we compute y for x = 40.3):

FiveConsumer<Double, Double, Double, Double, Double>
  pl4c = (a, b, c, d, x) -> Logistics.pl4(a, b, c, d, x);
      
pl4c.accept(4.19, -1.10, 12.65, 0.03, 40.3);

The Logistics.pl4() is the method that contains the formula and displays the result:

public static void pl4(Double a, Double b,
                       Double c, Double d, Double x) {
      
  System.out.println(d + ((a - d) / (1
                       + (Math.pow(x / c, b)))));
}

Next, let’s see how we can partially apply a Function.

201. Partially applying a Function

A partially applied Function is a Function that applies only a part of its arguments and returns another Function. For instance, here is a TriFunction (a functional interface with three arguments) that contains the apply() method next to two default methods that partially apply this function:

@FunctionalInterface
public interface TriFunction <T1, T2, T3, R> {
  
  R apply(T1 t1, T2 t2, T3 t3);
 
  default BiFunction<T2, T3, R> applyOnly(T1 t1) {
    return (t2, t3) -> apply(t1, t2, t3);
  }
  
  default Function<T3, R> applyOnly(T1 t1, T2 t2) {
    return (t3) -> apply(t1, t2, t3);
  }
}

As you can see, applyOnly(T1 t1) applies only the t1 argument and returns a BiFunction. On the other hand, applyOnly(T1 t1, T2 t2) applies only t1 and t2, and returns a Function. Let's see how we can use these methods. For instance, let's consider the formula (a+b+c)² = a² + b² + c² + 2ab + 2bc + 2ca, which can be shaped via the TriFunction as follows:

TriFunction<Double, Double, Double, Double> abc2 = (a, b, c)
  -> Math.pow(a, 2) + Math.pow(b, 2) + Math.pow(c, 2)
     + 2.0*a*b + 2*b*c + 2*c*a;      
System.out.println("abc2 (1): " + abc2.apply(1.0, 2.0, 1.0));
System.out.println("abc2 (2): " + abc2.apply(1.0, 2.0, 2.0));
System.out.println("abc2 (3): " + abc2.apply(1.0, 2.0, 3.0));

Here, we call apply(T1 t1, T2 t2, T3 t3) three times. As you can see, only the c term has a different value per call, while a and b are constant at 1.0 and 2.0, respectively. This means that we can use applyOnly(T1 t1, T2 t2) to fix a and b, and then call the returned Function's apply() for c as follows:

Function<Double, Double> abc2Only1 = abc2.applyOnly(1.0, 2.0);
      
System.out.println("abc2Only1 (1): " + abc2Only1.apply(1.0));
System.out.println("abc2Only1 (2): " + abc2Only1.apply(2.0));
System.out.println("abc2Only1 (3): " + abc2Only1.apply(3.0));

If we assume that only a is constant (1.0), while b and c have different values per call, then we can use applyOnly(T1 t1) to fix a, and then call the returned BiFunction's apply() for b and c as follows:

BiFunction<Double, Double, Double> abc2Only2
  = abc2.applyOnly(1.0);
      
System.out.println("abc2Only2 (1): "
  + abc2Only2.apply(2.0, 3.0));
System.out.println("abc2Only2 (2): "
  + abc2Only2.apply(1.0, 2.0));
System.out.println("abc2Only2 (3): "
  + abc2Only2.apply(3.0, 2.0));

Mission accomplished!

Summary

This chapter covered 24 problems. Most of them were focused on working with predicates, functions, and collectors, but we also covered JDK 16 mapMulti(), refactoring imperative code to functional, and much more.

209. Explaining concurrency vs. parallelism

Before tackling the main topic of this chapter, structured concurrency, let's forget about structure and keep only concurrency. Next, let's put concurrency side by side with parallelism, since these two notions are often a source of confusion.

Both concurrency and parallelism have tasks as their main unit of work. But the way they handle these tasks makes them quite different.

In the case of parallelism, a task is split into subtasks across multiple CPU cores. These subtasks are computed in parallel, and each of them represents a partial solution for the given task. By joining these partial solutions, we obtain the final solution. Ideally, solving a task in parallel should result in less wall-clock time than solving the same task sequentially. In a nutshell, in parallelism, at least two threads are running at the same time, which means that parallelism can solve a single task faster.

In the case of concurrency, we try to solve as many tasks as possible via several threads that compete with each other to make progress in a time-slicing fashion. This means that concurrency can solve multiple tasks faster. This is why concurrency is also referred to as virtual parallelism.

The following figure depicts parallelism vs. concurrency:

Figure 10.1 – Concurrency vs. parallelism

In parallelism, tasks (subtasks) are part of the implemented solution/algorithm. We write the code, set/control the number of tasks, and use them in a context that has parallel computational capabilities. On the other hand, in concurrency, tasks are part of the problem.

Typically, we measure parallelism efficiency in latency (the amount of time needed to complete the task), while the efficiency of concurrency is measured in throughput (the number of tasks that we can solve).

Moreover, in parallelism, tasks control resource allocation (CPU time, I/O operations, and so on). On the other hand, in concurrency, multiple threads compete with each other to gain as many resources (I/O) as possible; they cannot control resource allocation.

In parallelism, threads operate on CPU cores in such a way that every core is busy. In concurrency, threads operate on tasks in such a way that, ideally, each thread has its own task.

Commonly, when parallelism and concurrency are compared, somebody asks: how about asynchronous methods? It is important to understand that asynchrony is a separate concept. Asynchrony is about the capability to accomplish non-blocking operations. For instance, an application sends an HTTP request, but it doesn't just wait for the response; it goes and solves something else (other tasks) while waiting for the response. We do asynchronous tasks every day. For instance, we start the washing machine and then go and clean the house. We don't just wait next to the washing machine until it has finished.
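
As a minimal illustration of asynchrony (a sketch assuming Java's built-in java.net.http.HttpClient; doSomethingElse() is a hypothetical placeholder for other work):

HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
    .uri(URI.create("https://example.com"))
    .build();

// sendAsync() returns immediately with a CompletableFuture;
// the current thread doesn't block waiting for the response
CompletableFuture<HttpResponse<String>> response =
    client.sendAsync(request,
      HttpResponse.BodyHandlers.ofString());

doSomethingElse(); // other tasks progress while we wait

// react to the response whenever it arrives
response.thenAccept(r -> System.out.println(r.statusCode()));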