Hello, everyone. I am slowly moving towards a working Druid cluster. Emphasis on slowly.
I have successfully completed the Wikipedia data import task on my cluster. However, when I run the query command (on my query node):
curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-top-pages.json http://localhost:8082/druid/v2?pretty
I don't get the expected query results as a response.
I have checked the metadata db, and druid_segments has data from the successful tasks, but there is nothing in druid_datasource.
Any recommendations as to what I should check? I will attach the indexing log for the task as well.
log-19 (139 KB)
Also, in the console, the wikipedia datasource does show up, but it shows as <99% available. When I hover over the red dot it states: “100% to load until available.” Additionally, in the unified console, all of the tabs report 404 errors. I have SQL enabled.
Not sure if this is related, but I imagine it could be.
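Given the 404s, a minimal check of whether the SQL endpoint responds at all may help narrow things down. This is only a sketch: the port (8082, matching the query command above) is an assumption taken from the tutorial configs.

```shell
# Sketch: POST a trivial SQL query to the Broker's SQL endpoint.
# A JSON result means SQL is reachable; a 404 or connection error
# points at a routing or service problem rather than the datasource.
curl -s --max-time 5 -X POST -H 'Content-Type: application/json' \
  -d '{"query":"SELECT 1"}' \
  http://localhost:8082/druid/v2/sql || echo "SQL endpoint unreachable"
```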
> When I hover over the red dot it states: “100% to load until available.”
Since you’ve configured it to use S3 deep storage:
2019-04-29T18:05:04,394 INFO [main] org.apache.druid.cli.CliPeon - * druid.storage.type: s3
I would recommend checking that the bucket you’ve configured contains the segments you expect, and check your historical logs to see if there are any issues when downloading the segments.
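A rough way to scan a Historical log for segment-load problems is sketched below; the log path is an assumption based on the tutorial's var/sv layout, so adjust it for your deployment.

```shell
# Hypothetical log path from the tutorial layout; change to your actual location.
LOG=var/sv/historical.log
if [ -f "$LOG" ]; then
  # Surface recent load failures or S3 download errors.
  grep -iE 'exception|error|failed' "$LOG" | tail -n 20
else
  echo "no historical log found at $LOG"
fi
```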
The segments are there. I will dive into the Historical logs today and report back. Thanks!
I don’t see any logs after startup on the Historical; I’m just looking at stdout.
By default, the tutorial setup puts the service logs in files under var/sv. Do you see anything there?
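A quick way to check is to list that directory, assuming you run this from the same directory Druid was started in:

```shell
# List the per-service log files the tutorial scripts write under var/sv.
ls -l var/sv/ 2>/dev/null || echo "var/sv not found; run this from the directory Druid was started in"
```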
I changed the way I started each process, and querying works now. Pretty sure my Historical process was not running at all.
Thanks for the help; hunting for the logs led to the fix!
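For anyone who lands here with the same symptom, a quick sanity check that the Historical JVM is actually up; the process-name match is an assumption about how the services are launched on your machine.

```shell
# The [h] trick keeps grep from matching its own command line.
ps aux | grep -i '[h]istorical' || echo "no Historical process running"
```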