maxColumnsToMerge and compaction jobs

A Druid user noticed that some compaction jobs kept failing with this type of error:

ERROR [task-runner-0-priority-0] org.apache.druid.indexing.common.task.IndexTask - Encountered exception in BUILD_SEGMENTS.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Cannot reserve 65536 bytes of direct buffer memory

For every failed compaction job, the reported size was always the same 65536 bytes, which led to the question:

What can I do to reduce the DirectMemory consumption?

The answer was to tune maxColumnsToMerge, a tuningConfig parameter which the Druid documentation describes as:

Limit of the number of segments to merge in a single phase when merging segments for publishing. This limit affects the total number of columns present in a set of segments to merge. If the limit is exceeded, segment merging occurs in multiple phases. Druid merges at least 2 segments per phase, regardless of this setting.
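To see why phased merging bounds memory, here is a rough sketch of the idea (a hypothetical simplification, not Druid's actual merge code): segments are grouped so that each group's column count stays under the limit, at least 2 segments per group, and the process repeats until one segment remains.

```python
def merge_phases(column_counts, max_columns_to_merge):
    """Count the merge phases needed to reduce a list of segments
    (given as per-segment column counts) to a single segment.

    Illustrative only: assumes the worst case where segments share no
    columns, so a merged segment's column count is the sum of its inputs.
    """
    phases = 0
    segments = list(column_counts)
    while len(segments) > 1:
        merged = []
        group_cols = 0
        group_size = 0
        for cols in segments:
            # Close the current group once it has >= 2 segments and
            # adding another would exceed the column limit.
            if group_size >= 2 and group_cols + cols > max_columns_to_merge:
                merged.append(group_cols)
                group_cols, group_size = 0, 0
            group_cols += cols
            group_size += 1
        merged.append(group_cols)
        segments = merged
        phases += 1
    return phases
```

With a generous limit, everything merges in one phase; with a tight limit, Druid falls back to pairwise merging and the phase count grows, trading extra passes for a smaller direct-memory footprint per pass.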

A reasonable value might be 10000 to 100000. Setting it makes Druid perform a hierarchical merge, which bounds direct memory use. Here’s more context.
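For concreteness, a minimal compaction task spec with the parameter set might look like the following (the datasource name and interval are placeholders):

```json
{
  "type": "compact",
  "dataSource": "my_datasource",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2023-01-01/2023-02-01"
    }
  },
  "tuningConfig": {
    "type": "index_parallel",
    "maxColumnsToMerge": 10000
  }
}
```

The same parameter can also be set in the tuningConfig of an automatic compaction configuration, so it applies to every compaction run for the datasource rather than a one-off task.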