I configured the coordinator's druid.coordinator.period as PT10S, but the coordinator applies the LoadRule very slowly. As the coordinator log below shows, it takes more than 5 minutes (much longer than 10 seconds):
2023-01-05T02:28:44,944 INFO [qtp1522549999-137] org.apache.druid.metadata.IndexerSQLMetadataStorageCoordinator - Published segments to DB: [test-juhong-segment-loading_2023-01-01T00:00:00.000Z_2023-01-02T00:00:00.000Z_2023-01-05T02:28:43.948Z]
2023-01-05T02:35:22,039 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Assigning 'primary' for segment [test-juhong-segment-loading_2023-01-01T00:00:00.000Z_2023-01-02T00:00:00.000Z_2023-01-05T02:28:43.948Z] to server [ip-172-31-201-235.ap-northeast-1.compute.internal:8083] in tier [_default_tier]
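For reference, the period is set in the coordinator's runtime.properties; this is only a sketch of the relevant line, the rest of my configuration is omitted:

druid.coordinator.period=PT10S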
Is the source test-juhong fully available within 10 seconds after ingestion?
@Vijay_Narayanan1
I didn’t understand your question.
I found that the coordinator/time and coordinator/global/time metrics are very high, and coordinator/time for org.apache.druid.server.coordinator.duty.BalanceSegments is notably high (more than 500 seconds). I tried increasing the dynamic config balancerComputeThreads from 4 to 16 (roughly as shown below), but it didn't help.
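This is approximately how I changed it, via the coordinator dynamic configuration API (the hostname is a placeholder; also, I'm not sure whether fields omitted from the POST body keep their previous values or revert to defaults, so it may be safer to GET the current config first and re-post the full object with the changed field):

curl -X POST 'http://COORDINATOR_HOST:8081/druid/coordinator/v1/config' \
  -H 'Content-Type: application/json' \
  -d '{"balancerComputeThreads": 16}'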
My question was to find out whether there is a network issue between the cluster and deep storage that is slowing down segment fetching. How many segments does the ingestion create, and what is their total size?
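One way to check the segment count and total size is Druid SQL against the sys.segments system table; this is only a sketch, with the datasource name taken from the log above:

SELECT COUNT(*) AS num_segments, SUM("size") AS total_bytes
FROM sys.segments
WHERE datasource = 'test-juhong-segment-loading'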
I found that the emitBalancingStats config is the root cause. I checked the coordinator/time and coordinator/global/time metrics and they were very high; the BalanceSegments duty took more than 500 seconds. When I disabled emitBalancingStats, the BalanceSegments duty took less than 1 second, and segment loading also became faster.
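For completeness, this is roughly how I disabled it through the coordinator dynamic configuration API (hostname is a placeholder; as above, it may be safer to GET the current config first and re-post the full object with only this field changed):

curl -X POST 'http://COORDINATOR_HOST:8081/druid/coordinator/v1/config' \
  -H 'Content-Type: application/json' \
  -d '{"emitBalancingStats": false}'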
cc. @Vijay_Narayanan1