Let's compare the performance of using functional and imperative programming styles to process collections in Java.
Here's our benchmark: Given a collection of random numbers, get the distinct evens in sorted order.
Both approaches will have the same asymptotic time complexity: sorting dominates, so each runs in O(n log n).
We'll split this into two parts. First, we'll build the test client. Then, we'll run the benchmarks and examine the results.
Let's create a wrapper containing a collection of random numbers.
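A minimal sketch of such a wrapper might look like the following; the class name, method names, and the bound on the random values are assumptions, not necessarily the originals.

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical wrapper: holds n random integers (names and bound are assumed).
public class RandomNumberWrapper {
    private final List<Integer> numbers;

    public RandomNumberWrapper(int n) {
        Random random = new Random();
        this.numbers = Stream.generate(() -> random.nextInt(1_000_000))
                .limit(n)
                .collect(Collectors.toList());
    }

    public List<Integer> getNumbers() {
        return numbers;
    }
}
```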
Next we'll define a class where we'll run our benchmarks.
From here on out, the Java snippets will be part of the benchmark class. We'll use 10 RandomNumberWrappers in each test, so for a given n, the test will process n * 10 numbers.
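The setup could be sketched as follows. For self-containment, each wrapper is modeled here as a plain `List<Integer>`; the class name and constant are assumptions.

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical benchmark class: holds 10 "wrappers" of n random numbers each,
// so a test over this data touches n * 10 numbers in total.
public class StreamVsLoopBenchmark {
    private static final int WRAPPER_COUNT = 10;

    private final List<List<Integer>> wrappers;

    public StreamVsLoopBenchmark(int n) {
        Random random = new Random();
        this.wrappers = Stream.generate(
                        () -> random.ints(n, 0, 1_000_000)
                                .boxed()
                                .collect(Collectors.toList()))
                .limit(WRAPPER_COUNT)
                .collect(Collectors.toList());
    }

    public List<List<Integer>> getWrappers() {
        return wrappers;
    }
}
```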
The benchmark needs to output even numbers. Let's create a utility for that.
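A tiny sketch of that predicate; the utility's name is an assumption.

```java
// Hypothetical utility holding the even-number predicate.
public final class NumberUtils {
    private NumberUtils() {
    }

    public static boolean isEven(int n) {
        return n % 2 == 0;
    }
}
```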
Now we're ready to start the functional and imperative implementations.
We'll leverage Java's Stream API for the functional approach.
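A sketch of what that Stream pipeline could look like, assuming the wrappers are flattened into one stream (class and method names are mine, not the originals):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the functional implementation: flatten the wrappers,
// keep the distinct evens, and sort them.
public final class FunctionalApproach {
    private FunctionalApproach() {
    }

    public static List<Integer> distinctSortedEvens(List<List<Integer>> wrappers) {
        return wrappers.stream()
                .flatMap(List::stream)     // merge all wrappers into one stream
                .filter(n -> n % 2 == 0)   // keep the evens
                .distinct()                // drop duplicates
                .sorted()                  // ascending order
                .collect(Collectors.toList());
    }
}
```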
We'll do the same for the imperative approach.
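A sketch of the imperative equivalent, using explicit loops with a `TreeSet` to handle both deduplication and ordering (again, the names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Sketch of the imperative implementation: nested loops plus a TreeSet,
// which keeps elements sorted and rejects duplicates.
public final class ImperativeApproach {
    private ImperativeApproach() {
    }

    public static List<Integer> distinctSortedEvens(List<List<Integer>> wrappers) {
        TreeSet<Integer> evens = new TreeSet<>(); // sorted, no duplicates
        for (List<Integer> wrapper : wrappers) {
            for (int n : wrapper) {
                if (n % 2 == 0) {
                    evens.add(n);
                }
            }
        }
        return new ArrayList<>(evens);
    }
}
```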
Each test method returns a long: the number of milliseconds taken to process the data.
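One way to sketch that timing, measuring with `System.nanoTime()` for resolution and converting to milliseconds (the helper's name is assumed):

```java
// Hypothetical timing helper: runs one implementation and reports
// elapsed wall-clock time in milliseconds.
public final class BenchmarkTimer {
    private BenchmarkTimer() {
    }

    public static long timeMillis(Runnable test) {
        long start = System.nanoTime();
        test.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```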
We'll create a new method that computes the average of 100 test runs for a given implementation.
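A sketch of that averaging step, taking a supplier that performs one timed run (names are assumptions):

```java
import java.util.function.LongSupplier;
import java.util.stream.LongStream;

// Hypothetical averaging helper: invokes the timed test 100 times
// and returns the mean runtime in milliseconds.
public final class BenchmarkRunner {
    private static final int RUNS = 100;

    private BenchmarkRunner() {
    }

    public static double averageMillis(LongSupplier singleRunMillis) {
        return LongStream.range(0, RUNS)
                .map(i -> singleRunMillis.getAsLong())
                .average()
                .orElse(0.0);
    }
}
```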
Finally, we want to be able to write our benchmarks to a CSV file, so we'll write a utility for that.
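A minimal sketch of that utility, writing one line per row with `java.nio.file.Files` (the class name and row format are assumptions):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical CSV utility: writes each row as a line, e.g. a header
// followed by "n,functionalMillis,imperativeMillis" rows.
public final class CsvWriter {
    private CsvWriter() {
    }

    public static void write(Path path, List<String> rows) throws IOException {
        Files.write(path, rows); // UTF-8, one row per line
    }
}
```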
Now we have the pieces in place to build the test client.
The client gets the average time to process the collection using the functional and imperative approaches for each value of n.
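A self-contained sketch of that client, with the helpers inlined so the snippet compiles on its own; all names here are assumptions, and the upper bound on n is kept small to finish quickly (raise it to explore larger inputs).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.TreeSet;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch of the test client: generate 10 lists of n random numbers,
// then time both implementations and print a CSV-style row per n.
public class BenchmarkClient {
    static List<Integer> functional(List<List<Integer>> data) {
        return data.stream().flatMap(List::stream)
                .filter(n -> n % 2 == 0).distinct().sorted()
                .collect(Collectors.toList());
    }

    static List<Integer> imperative(List<List<Integer>> data) {
        TreeSet<Integer> evens = new TreeSet<>();
        for (List<Integer> list : data)
            for (int n : list)
                if (n % 2 == 0) evens.add(n);
        return new ArrayList<>(evens);
    }

    static long time(Runnable test) {
        long start = System.nanoTime();
        test.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        Random random = new Random();
        System.out.println("n,functionalMillis,imperativeMillis");
        for (int n = 1_000; n <= 100_000; n *= 10) {
            final int size = n;
            List<List<Integer>> data = Stream.generate(
                            () -> random.ints(size, 0, 1_000_000)
                                    .boxed()
                                    .collect(Collectors.toList()))
                    .limit(10)
                    .collect(Collectors.toList());
            System.out.printf("%d,%d,%d%n", n,
                    time(() -> functional(data)),
                    time(() -> imperative(data)));
        }
    }
}
```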
The imperative approach slightly outperforms the functional approach for n < 1e5. As n increases, streams are the clear winner.
In my opinion, the declarative Stream API is more readable and easier to maintain. When n is small, the performance difference is probably negligible for most apps.
For that reason, I prefer the functional approach unless I know the dataset is relatively small and I need to squeeze out as much performance as possible.