Ingestion takes longer in Prod than in our DR setup, even though the Prod and DR environments are the same

We use the same Druid configuration and system resources for the Prod and DR setups; even the Druid common runtime properties are identical in both environments.

Master - 3 node cluster - 4 CPU / 64 GB RAM
Data - 5 node cluster - 32 CPU / 128 GB RAM
Query - 3 node cluster - 16 CPU / 64 GB RAM
Zookeeper - 3 node cluster - 4 CPU / 8 GB RAM

When we ingest the wikipedia datasource (the default datasource shipped with Druid) from the Druid Console, the Prod environment takes 45 seconds to complete the ingestion task, but DR takes only 15 seconds.

Please help us with this scenario. We don't know how to troubleshoot this case.

Below are the differences between the Prod and DR Indexer logs.

Note → *Left side = DR node (orange colour)
Note → *Right side = Prod node (green colour)

Are the ingestion specs identical in both environments?

@Mark_Herrera
Yes, we ingested the wikipedia datasource (the default datasource shipped with Druid), not our own data.


I’m sorry I wasn’t more clear, and I apologize for my delay in replying. I was offsite.

I understand that your prod and DR configurations and system resources are identical, and I understand that you ingested the wikipedia datasource in both environments. I was wondering if your ingestion specifications are identical in both environments. For reference I’m including the following example:

{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "timestampSpec": {
        "column": "timestamp",
        "format": "auto"
      },
      "dimensionsSpec": {
        "dimensions": [
          "page",
          "language",
          { "type": "long", "name": "userId" }
        ]
      },
      "metricsSpec": [
        { "type": "count", "name": "count" },
        { "type": "doubleSum", "name": "bytes_added_sum", "fieldName": "bytes_added" },
        { "type": "doubleSum", "name": "bytes_deleted_sum", "fieldName": "bytes_deleted" }
      ],
      "granularitySpec": {
        "segmentGranularity": "day",
        "queryGranularity": "none",
        "intervals": [
          "2013-08-31/2013-09-01"
        ]
      }
    },
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": {
        "type": "local",
        "baseDir": "examples/indexing/",
        "filter": "wikipedia_data.json"
      },
      "inputFormat": {
        "type": "json",
        "flattenSpec": {
          "useFieldDiscovery": true,
          "fields": [
            { "type": "path", "name": "userId", "expr": "$.user.id" }
          ]
        }
      }
    },
    "tuningConfig": {
      "type": "index_parallel"
    }
  }
}
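If you want to rule out spec drift systematically rather than by eye, you could diff the two specs programmatically. Here is a minimal sketch; the file names `prod_spec.json` and `dr_spec.json` are placeholders for wherever you export each environment's spec:

```python
import json

def spec_diff(prod, dr, path=""):
    """Recursively compare two ingestion-spec structures and
    return a list of JSON paths where they differ."""
    diffs = []
    if isinstance(prod, dict) and isinstance(dr, dict):
        for key in sorted(set(prod) | set(dr)):
            if key not in prod:
                diffs.append(f"{path}/{key}: only in DR")
            elif key not in dr:
                diffs.append(f"{path}/{key}: only in Prod")
            else:
                diffs.extend(spec_diff(prod[key], dr[key], f"{path}/{key}"))
    elif prod != dr:
        diffs.append(f"{path}: Prod={prod!r} DR={dr!r}")
    return diffs

# Usage (placeholder file names -- export the spec from each Console):
# with open("prod_spec.json") as f1, open("dr_spec.json") as f2:
#     for line in spec_diff(json.load(f1), json.load(f2)):
#         print(line)
```

An empty result means the specs are byte-for-byte equivalent, which would point the investigation toward the environments themselves (deep storage latency, metadata store, task slots) rather than the ingestion definition.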

Are you using rollup, transformations, metrics, etc., in the production environment that might not be present in the DR environment?
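If the specs do turn out to be identical, a next step is to compare the task durations each Overlord reports, using the documented endpoint `GET /druid/indexer/v1/task/{taskId}/status` (its response includes a `status.duration` field in milliseconds). A sketch, with hypothetical hostnames and task IDs you would substitute:

```python
import json
from urllib.request import urlopen

def extract_duration_ms(status_payload):
    """Pull the task duration (milliseconds) out of an Overlord status
    response, which has the shape {"task": ..., "status": {"duration": ...}}."""
    return status_payload["status"]["duration"]

def task_duration_ms(overlord_url, task_id):
    """Fetch GET /druid/indexer/v1/task/{taskId}/status and return the duration."""
    with urlopen(f"{overlord_url}/druid/indexer/v1/task/{task_id}/status") as resp:
        return extract_duration_ms(json.load(resp))

# Hypothetical hosts and task IDs -- substitute your own:
# prod_ms = task_duration_ms("http://prod-overlord:8081", "<prod-task-id>")
# dr_ms = task_duration_ms("http://dr-overlord:8081", "<dr-task-id>")
# print(f"Prod {prod_ms} ms vs DR {dr_ms} ms")
```

Comparing the reported durations alongside the per-phase timestamps in the Indexer logs should show whether the extra 30 seconds is spent reading input, building segments, or publishing to deep storage.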