Can I decommission a historical node with no downtime?

Yes (assuming you are running multiple historicals). The coordinator dynamic config property decommissioningNodes is what you want. Per the Druid docs, it takes a:

List of historical servers to 'decommission'. Coordinator will not assign new segments to 'decommissioning' servers, and segments will be moved away from them to be placed on non-decommissioning servers at the maximum rate specified by decommissioningMaxPercentOfMaxSegmentsToMove.

Here's a sample coordinator dynamic config JSON with decommissioningNodes set. Note how the two properties interact: with maxSegmentsToMove at 5 and decommissioningMaxPercentOfMaxSegmentsToMove at 70, moves off the decommissioning servers are capped at 70% of the overall per-run move budget, so the drain is throttled rather than all-at-once:

{
  "millisToWaitBeforeDeleting": 900000,
  "mergeBytesLimit": 100000000,
  "mergeSegmentsLimit" : 1000,
  "maxSegmentsToMove": 5,
  "useBatchedSegmentSampler": false,
  "percentOfSegmentsToConsiderPerMove": 100,
  "replicantLifetime": 15,
  "replicationThrottleLimit": 10,
  "emitBalancingStats": false,
  "killDataSourceWhitelist": ["wikipedia", "testDatasource"],
  "decommissioningNodes": ["localhost:8182", "localhost:8282"],
  "decommissioningMaxPercentOfMaxSegmentsToMove": 70,
  "pauseCoordination": false,
  "replicateAfterLoadTimeout": false,
  "maxNonPrimaryReplicantsToLoad": 2147483647
}
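
If it helps, here's a minimal sketch of applying that config through the Coordinator API and watching the drain, assuming a Coordinator at localhost:8081 and the Python requests library (the hosts/ports are placeholders for your cluster):

import requests

COORDINATOR = "http://localhost:8081"  # placeholder Coordinator address

# POSTing to this endpoint replaces the whole dynamic config, so fetch
# the current config first and only change the decommissioning fields.
config = requests.get(f"{COORDINATOR}/druid/coordinator/v1/config").json()
config["decommissioningNodes"] = ["localhost:8182", "localhost:8282"]
config["decommissioningMaxPercentOfMaxSegmentsToMove"] = 70

resp = requests.post(f"{COORDINATOR}/druid/coordinator/v1/config", json=config)
resp.raise_for_status()

# Watch the decommissioning servers drain: segments move to the other
# historicals while the segments stay queryable the whole time.
for server in requests.get(f"{COORDINATOR}/druid/coordinator/v1/servers?simple").json():
    print(server["host"], server["currSize"])

Once currSize reaches 0 on the decommissioning historicals, you can shut them down with no loss of availability.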

OMG @Sergio_Ferragut I feel a discovery slide deck on this particular nugget in my bones :slight_smile: