All segments replicating to a single host

We are seeing some odd behavior in our Druid cluster right now. A few large datasources were underreplicated due to a bad load rule. We fixed the rule, which flooded the cluster with segments that need to be replicated. However, instead of replicating across the cluster in a reasonably balanced way, nearly all of the segments are going to a single host, which is becoming a problem as that node's disk usage climbs. Rebalancing can't move segments off the node as fast as they arrive. We are wondering why exactly this could be happening.

We had also begun rolling out an increase to druid.segmentCache.locations and druid.server.maxSize. We stopped after 2 nodes when we caught the underreplication problem. Could the difference in these settings across nodes in the same tier be the root of the problem, possibly confusing the replication algorithm? Other than that, nothing has changed configuration-wise, and we don't recall seeing this behavior before. Any help would be appreciated.
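For reference, this is roughly what the mismatch across the tier looks like right now (paths and sizes are illustrative, not our exact values):

```properties
# runtime.properties on the 2 historicals that got the update (illustrative)
druid.server.maxSize=10000000000000
druid.segmentCache.locations=[{"path":"/mnt/druid/segments","maxSize":10000000000000}]

# runtime.properties on the rest of the tier (old, smaller values)
druid.server.maxSize=5000000000000
druid.segmentCache.locations=[{"path":"/mnt/druid/segments","maxSize":5000000000000}]
```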