TEST list_traversal
N == 1000 (10 runs)
         list -> queue get:      245.4 µs (  7.9)
    list comprehension get:       31.2 µs (  1.0)
         lists:foldr/3 get:       54.4 µs (  1.7)
           lists:map/2 get:       59.4 µs (  1.9)
        queue traverse get:      193.4 µs (  6.2)
      reverse/traverse get:       53.8 µs (  1.7)
       traverse/append get:     3070.1 µs ( 98.4)
N == 10000 (10 runs)
         list -> queue get:     3607.6 µs (  8.0)
    list comprehension get:      449.2 µs (  1.0)
         lists:foldr/3 get:      993.5 µs (  2.2)
           lists:map/2 get:      681.8 µs (  1.5)
        queue traverse get:     3062.9 µs (  6.8)
      reverse/traverse get:      929.5 µs (  2.1)
       traverse/append get:   333260.8 µs (741.9)
N == 50000 (10 runs)
         list -> queue get:    19467.0 µs (  7.8)
    list comprehension get:     2505.1 µs (  1.0)
         lists:foldr/3 get:     2813.3 µs (  1.1)
           lists:map/2 get:     2844.0 µs (  1.1)
        queue traverse get:    15101.6 µs (  6.0)
      reverse/traverse get:     5721.9 µs (  2.3)
       traverse/append get: 13025345.9 µs (*****)
The results show that any traversal that keeps the list size the same should use a list comprehension. A traversal that must alter the size of the list can use lists:foldr/3 with the [H|T] cons syntax, but that approach is not tail-recursive. A tail-recursive solution must use the reverse/traverse method at slightly slower performance (the trade-off is described in the Erlang Efficiency Guide). Robert Virding mentioned that the reverse/traverse method should be preferred, since it is tail-recursive and keeps memory consumption low when processing large lists. This benchmark currently tests only the variant of reverse/traverse where the reversal is performed before the traversal, since performing the reversal after the traversal should have very similar characteristics.
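The three recommended styles can be sketched as follows (module and function names are illustrative, not taken from the benchmark source; the doubling and filtering operations are arbitrary examples):

```erlang
-module(traversal).
-export([keep_size/1, filter_foldr/1, filter_tail/1]).

%% Size-preserving traversal: the fastest option, a list comprehension.
keep_size(L) ->
    [X * 2 || X <- L].

%% Size-altering traversal, body-recursive: lists:foldr/3 builds the
%% result in order with [H|T] cons cells, but is not tail-recursive.
filter_foldr(L) ->
    lists:foldr(fun(X, Acc) when X rem 2 =:= 0 -> [X | Acc];
                   (_, Acc) -> Acc
                end, [], L).

%% Size-altering traversal, tail-recursive ("reverse/traverse"):
%% accumulate in reverse order, then reverse once at the end.
filter_tail(L) ->
    filter_tail(L, []).

filter_tail([], Acc) ->
    lists:reverse(Acc);
filter_tail([X | T], Acc) when X rem 2 =:= 0 ->
    filter_tail(T, [X | Acc]);
filter_tail([_ | T], Acc) ->
    filter_tail(T, Acc).
```

Both filter variants produce the same result; the tail-recursive form trades a single lists:reverse/1 pass for a constant-size call stack.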
The important point to remember from the benchmark is to avoid the "traverse/append" method, which does an in-order traversal that relies on the ++ [Element] syntax to append each element to the new list (a new list is created on every append operation, making the traversal quadratic). Avoiding "traverse/append" may seem contrary to the Erlang Efficiency Guide, which states that '++' is not inherently slow; the cost here comes from repeatedly copying a growing left operand, not from '++' itself. With the other options available, the method is easy to avoid.
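For contrast, a minimal sketch of the traverse/append anti-pattern (again with illustrative names, not from the benchmark source):

```erlang
-module(slow_traversal).
-export([double_slow/1]).

%% "traverse/append" anti-pattern: Acc ++ [X * 2] copies the entire
%% accumulator on every step, so the whole traversal is O(N^2) even
%% though each individual '++' call is unremarkable.
double_slow(L) ->
    double_slow(L, []).

double_slow([], Acc) ->
    Acc;
double_slow([X | T], Acc) ->
    double_slow(T, Acc ++ [X * 2]).
```

The result is identical to the list-comprehension version; only the cost differs, which is why the benchmark's traverse/append column degrades so sharply as N grows.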
Thanks go to Robert Virding and Andrew Thompson for their suggestions for improving the benchmark.