Segments not handed off and no metadata entries for them

Hi All,

We have set up Druid 18.04 as follows:
1 node (Coordinator + Overlord)
1 node (Historical + MiddleManager)
1 node (Broker + Router)

I successfully loaded data into the ‘Wikipedia’ data source as described in the Druid docs.

But for my other data source, segments are not getting published. Segments get created and are available for query for some time; later, the same data becomes unavailable.

I can see the data is present in deep storage, but no metadata is present in MySQL. Also, all these segments have entries in MySQL’s pending segments table.
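For reference, this is roughly how I am comparing the two tables (the host, credentials, database name and datasource name below are placeholders, and `druid_` is the default table prefix, which may differ in your setup):

```python
import pymysql

# Placeholder connection details; replace with your metadata store settings.
conn = pymysql.connect(host="localhost", user="druid", password="druid", database="druid")

try:
    with conn.cursor() as cur:
        # Published segments: rows the Coordinator uses to assign segments to Historicals.
        cur.execute(
            "SELECT COUNT(*) FROM druid_segments WHERE dataSource = %s AND used = 1",
            ("my_datasource",),  # placeholder datasource name
        )
        print("published (used) segments:", cur.fetchone()[0])

        # Pending segments: identifiers allocated by indexing tasks but not yet published.
        cur.execute(
            "SELECT COUNT(*) FROM druid_pendingSegments WHERE dataSource = %s",
            ("my_datasource",),
        )
        print("pending segments:", cur.fetchone()[0])
finally:
    conn.close()
```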

What could be causing segments to be in an ‘unpublished and available’ state? How can I make the data available?
In the image below, the first segment can be seen in an unpublished state.


Are there any errors in the logs?

@rachel
No, there are no errors in any log except lines like ‘[GC (Allocation Failure) 182.887: [ParNew: 310335K->11198K(314560K’, which I guess can be ignored. All the indexing tasks are also completing with status ‘SUCCESS’.

FYI, I restored a MySQL dump from my old 0.12 setup and also migrated the deep storage data to the newly configured storage.
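In case the migration is related, this is roughly how I checked that the restored segment rows still point at the new deep storage (same placeholder connection details as above; the payload column holds the segment descriptor JSON, including the loadSpec):

```python
import json
import pymysql

# Same placeholder connection details as in the earlier sketch.
conn = pymysql.connect(host="localhost", user="druid", password="druid", database="druid")

try:
    with conn.cursor() as cur:
        # The payload column is the segment descriptor JSON; its loadSpec tells
        # Historicals where in deep storage to pull the segment from.
        cur.execute("SELECT id, payload FROM druid_segments WHERE used = 1 LIMIT 10")
        for segment_id, payload in cur.fetchall():
            descriptor = json.loads(payload)
            print(segment_id, descriptor.get("loadSpec"))
finally:
    conn.close()
```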

Hi Avinash

Could you check and let me know how much RAM and max direct memory size you have allocated for the indexing tasks?
This usually comes down to a mismatch between resource allocation and configuration.

Regards
Shashank

Each indexing task has 1 GB of memory allocated, and the indexing jobs are completing with a ‘SUCCESS’ status.
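To sanity-check Shashank's point about direct memory, this is roughly the arithmetic I used (the buffer size and thread counts below are assumed values, not my actual task configs; the formula is the usual Druid sizing rule of buffer size * (numThreads + numMergeBuffers + 1)):

```python
# Rough sanity check of the direct memory one indexing task needs, using the
# usual Druid sizing rule:
#   direct memory >= buffer.sizeBytes * (numThreads + numMergeBuffers + 1)
# The values below are assumptions, not my actual runtime properties.
buffer_size_bytes = 100 * 1024 * 1024  # druid.indexer.fork.property.druid.processing.buffer.sizeBytes
num_threads = 2                        # druid.indexer.fork.property.druid.processing.numThreads
num_merge_buffers = 2                  # druid.indexer.fork.property.druid.processing.numMergeBuffers

required_direct = buffer_size_bytes * (num_threads + num_merge_buffers + 1)
print(f"direct memory needed per task: {required_direct / 1024 ** 2:.0f} MiB")

# Compare against the -Xmx / -XX:MaxDirectMemorySize passed to the peon JVMs
# via druid.indexer.runner.javaOptsArray (or javaOpts) on the MiddleManager.
max_direct_memory = 1 * 1024 ** 3      # e.g. -XX:MaxDirectMemorySize=1g
print("fits within MaxDirectMemorySize:", required_direct <= max_direct_memory)
```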

FYI, I created a new database, initialized the metadata, and restored the MySQL backup for the Druid tables. Not sure what the issue was, but segments are now getting published and are available for query.

The only problem now is that the ‘DataSource’ link in the Druid console isn’t working. It just shows a blank page :frowning:

Are retention rules configured for your data sources? It has been noted that if they are deleted, the console goes blank.
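A quick way to check is to ask the Coordinator for the current rules; a rough sketch below, assuming the Coordinator runs on its default port 8081 and using a placeholder datasource name:

```python
import json
import urllib.request

# Coordinator address and datasource name are placeholders; adjust to your setup.
coordinator = "http://localhost:8081"
datasource = "my_datasource"

# All retention rules keyed by datasource; the "_default" entry holds the
# cluster-wide defaults (normally a single loadForever rule).
with urllib.request.urlopen(f"{coordinator}/druid/coordinator/v1/rules") as resp:
    print("all rules:", json.load(resp))

# Rules for one datasource; an empty list means it falls back to the defaults.
with urllib.request.urlopen(f"{coordinator}/druid/coordinator/v1/rules/{datasource}") as resp:
    print(f"rules for {datasource}:", json.load(resp))
```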

Regards
Shashank