Problem with JSON ingest


I have a problem ingesting a JSON file. When I use MySQL Workbench to extract the data, everything completes fine, and the JSON structure looks like:

[{"var1":"dsada","var2":"dsada"}] — this parses as delimited and segments are created.

The problem is when I try to index a JSON file created by DBeaver; the structure of that file is:

I can't select "delimited" in the extract options, and I can't use Workbench because the dataset is too large and it crashes.

Sample logs from an ingest of the file exported by DBeaver (the task has been indexing for 1.5 hours but doesn't create any segments):


[{"feed":"metrics","timestamp":"2019-06-04T09:54:44.358Z","service":"druid/overlord","host":"localhost:8090","version":"0.13.0-incubating","metric":"jvm/gc/mem/max","value":29360128,"gcGen":["young"],"gcGenSpaceName":" s0\u0000 string [internal]","gcName":["parallel"]}] — repeating many times


2019-06-04T09:55:05,750 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorSegmentInfoLoader - Starting coordination. Getting available segments.
2019-06-04T09:55:05,750 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorSegmentInfoLoader - Found [0] available segments.
2019-06-04T09:55:05,750 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.ReplicationThrottler - [_default_tier]: Replicant create queue is empty.
2019-06-04T09:55:05,750 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorCleanupUnneeded - Found 0 availableSegments, skipping the cleanup of segments from historicals. This is done to prevent a race condition in which the coordinator would drop all segments if it started running cleanup before it finished polling the metadata storage for available segments for the first time.


When ingesting JSON with the json parser, the input needs to be newline-delimited: one standalone JSON object per line, not a single top-level array wrapping all the records.
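If DBeaver only exports a single JSON array, one option is to convert the export to newline-delimited JSON before ingesting. A minimal sketch (the function name and paths are illustrative; it assumes the whole export fits in memory):

```python
import json

def array_to_ndjson(src_path, dst_path):
    """Convert a file containing one JSON array into
    newline-delimited JSON (one object per line)."""
    with open(src_path) as src:
        records = json.load(src)  # the export is a single JSON array
    with open(dst_path, "w") as dst:
        for record in records:
            dst.write(json.dumps(record) + "\n")
```

After conversion, each line is a standalone object such as {"var1": "dsada", "var2": "dsada"}, which the json parseSpec can read row by row.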

To troubleshoot tasks that ingest no rows and create no segments, the ingestion reports can be useful: they show how many rows were processed, thrown away, or unparseable (in recent Druid versions they are available from the Overlord at /druid/indexer/v1/task/{taskId}/reports).
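As a quick local sanity check before re-running the task, the export can be scanned for lines that are not standalone JSON objects — the usual symptom when the file is one big array rather than newline-delimited. A sketch of such a check (the helper name is illustrative, not a Druid API):

```python
import json

def count_unparseable_lines(path):
    """Count non-empty lines that are not standalone JSON objects."""
    bad = 0
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                obj = json.loads(line)
                if not isinstance(obj, dict):
                    bad += 1  # e.g. the line is an array, not an object
            except json.JSONDecodeError:
                bad += 1
    return bad
```

A nonzero count on the DBeaver export (with a zero count on the working Workbench export) would confirm the file format, rather than the data itself, is the problem.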