What happens when a Historical is overcommitted?

As an example, let's say my Historical has 100 GB of RAM.

What happens if I set:

  • druid.server.maxSize
  • druid.segmentCache.locations

both to a value of 150 GB? (See the sketch after the questions below.)
  1. Does anything go slower?
  2. Does anything crash?
  3. Do the values of both properties always have to be the same? What if I set them differently?
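
For concreteness, the configuration being asked about would look roughly like this in runtime.properties (the cache path is hypothetical):

  # 150 GB expressed in bytes
  druid.server.maxSize=161061273600
  druid.segmentCache.locations=[{"path": "/var/druid/segments", "maxSize": 161061273600}]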

Data that is not in use will be paged out. Segments are memory-mapped,
meaning the OS handles paging their data in and out of memory.

1) It depends on your working set. If your working set is larger than
the memory you have, you will pay the cost of paging data in and out
of memory at query time.
2) No. If you run out of disk space, though, segments will stop
loading and things might go wonky.
3) They do not have to be the same; they are separate properties for
a reason. You can set multiple storage locations for the segments
(i.e. put them on multiple different mount points) and give each of
those locations a max size to stay within. druid.server.maxSize is
then the total amount of data the node should be assigned, regardless
of how much each individual storage location can hold. Note that the
max size should be at least a bit smaller than the sum of the storage
locations, because at the very limit you might have 50MB free on each
of 4 mount points and be handed a 200MB segment. A segment cannot be
split amongst the mount points, so it would fail to load.
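
For example, here is a minimal sketch of what that could look like in runtime.properties for a two-mount-point Historical; the paths and byte values are made up for illustration:

  # Two cache locations of 50 GB each (hypothetical mount points)
  druid.segmentCache.locations=[{"path": "/mnt1/druid/segments", "maxSize": 50000000000}, {"path": "/mnt2/druid/segments", "maxSize": 50000000000}]
  # Total assignment kept a bit below the 100 GB sum, so the node is
  # never handed a segment larger than any single location has free
  druid.server.maxSize=95000000000

With settings like these, the Coordinator stops assigning segments once the node holds roughly 95 GB, leaving headroom on each mount point.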

--Eric