Overwriting kafka-indexing-task data via 'http' firehose corrupted the cluster? (can't get /tasks)

First off:

  1. running 0.14.0 (Imply)

  2. 1 Master Node, 1 Query Node, 5 Data Nodes

I have loaded 1 day of data, with 15min segments.

I am now attempting to update/overwrite the first 15-minute segment using an 'http' firehose.

my spec is attached…
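The attached spec is not reproduced inline, but for illustration, here is a rough sketch of what such an overwrite submission can look like: a native index task with an 'http' firehose, POSTed to the Overlord the same way post-index-task does. The firehose URI, interval, and schema details below are placeholders, not my actual spec.

```python
# Hypothetical sketch only -- the real spec is in the attached
# updates-overwrite-index.json. Python 2, matching the environment shown below.
import json
import urllib2

spec = {
    "type": "index",
    "spec": {
        "dataSchema": {
            "dataSource": "test_1day",
            # parser / metricsSpec omitted here; they must match the original ingestion
            "granularitySpec": {
                "type": "uniform",
                "segmentGranularity": "FIFTEEN_MINUTE",
                "queryGranularity": "NONE",
                # placeholder: the one 15-minute interval being overwritten
                "intervals": ["2019-05-27T00:00:00Z/2019-05-27T00:15:00Z"],
            },
        },
        "ioConfig": {
            "type": "index",
            "firehose": {
                "type": "http",
                # placeholder URI for the replacement data
                "uris": ["http://example.com/updated-first-15min.json"],
            },
            "appendToExisting": False,  # false = overwrite the interval
        },
        "tuningConfig": {"type": "index"},
    },
}

req = urllib2.Request(
    "http://localhost:8090/druid/indexer/v1/task",
    json.dumps(spec),
    {"Content-Type": "application/json"},
)
print(urllib2.urlopen(req, None, 30).read())  # Overlord replies with {"task": "<task id>"}
```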

I have been able to 'hang' 2 clusters this way.

Once I call:

```
[centos@uswest2-dev-aldruidmaster-001 ~]$ ./imply-2.9.10/bin/post-index-task --file updates-overwrite-index.json
Beginning indexing data for test_1day
Task started: index_test_1day_2019-05-27T12:47:48.843Z
Task log: http://localhost:8090/druid/indexer/v1/task/index_test_1day_2019-05-27T12%3A47%3A48.843Z/log
Task status: http://localhost:8090/druid/indexer/v1/task/index_test_1day_2019-05-27T12%3A47%3A48.843Z/status
Traceback (most recent call last):
  File "/home/centos/imply-2.9.10/bin/post-index-task-main", line 171, in <module>
    main()
  File "/home/centos/imply-2.9.10/bin/post-index-task-main", line 161, in main
    task_status = await_task_completion(args, task_id, complete_timeout_at)
  File "/home/centos/imply-2.9.10/bin/post-index-task-main", line 86, in await_task_completion
    response = urllib2.urlopen(req, None, response_timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1244, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib64/python2.7/urllib2.py", line 1217, in do_open
    r = h.getresponse(buffering=True)
  File "/usr/lib64/python2.7/httplib.py", line 1113, in getresponse
    response.begin()
  File "/usr/lib64/python2.7/httplib.py", line 444, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python2.7/httplib.py", line 400, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib64/python2.7/socket.py", line 476, in readline
    data = self._sock.recv(self._rbufsize)
socket.timeout: timed out
```

The cluster will no longer show task info: both the consolidated UI and the CLI time out and never return.
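When the CLI hangs like this, one thing I can do is hit the Overlord's task-status endpoint directly with a short, explicit timeout so the client fails fast instead of blocking forever. A minimal sketch (Python 2 to match the traceback above; host and task id are taken from the output above):

```python
# Minimal sketch: poll the task-status endpoint with an explicit timeout so the
# client fails fast instead of hanging. Host and task id are from the run above.
import json
import socket
import urllib2

OVERLORD = "http://localhost:8090"
TASK_ID = "index_test_1day_2019-05-27T12:47:48.843Z"

url = "%s/druid/indexer/v1/task/%s/status" % (OVERLORD, urllib2.quote(TASK_ID, safe=""))
try:
    response = urllib2.urlopen(url, None, 10)  # 10-second timeout
    print(json.dumps(json.load(response), indent=2))
except socket.timeout:
    print("Overlord did not answer within 10s -- it is hung, not just slow")
except urllib2.HTTPError as e:
    print("HTTP %d from Overlord: %s" % (e.code, e.read()))
```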

updates-overwrite-index.json (1.32 KB)

I will be retrying this shortly with a new cluster, but looking at the docs I may have discovered part of the issue: updates are NOT production ready? (unless Hadoop?)

based on: http://druid.io/docs/latest/ingestion/update-existing-data.htm

Note that IndexTask is to be used for prototyping purposes only as it has to do all processing inside a single process and can’t scale. Please use Hadoop batch ingestion for production scenarios dealing with more than 1GB of data.

Does this mean I should not use the http firehose to load anything over 1GB? The data file in my use case is in fact 1.2GB in size.

Does this mean that to update data one MUST use Hadoop? Or?
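For reference, my understanding of the Hadoop alternative the docs point to: it is the same kind of spec, but with the "index_hadoop" task type and a "hadoop" ioConfig that reads from static input paths instead of a firehose. A rough sketch (paths and schema below are placeholders):

```python
# Rough sketch of the Hadoop batch variant (placeholders, not a working spec):
# same dataSchema, but task type "index_hadoop" and a "hadoop" ioConfig reading
# static input paths instead of an "http" firehose.
hadoop_spec = {
    "type": "index_hadoop",
    "spec": {
        "dataSchema": {
            "dataSource": "test_1day",
            # parser / metricsSpec / granularitySpec as in the original ingestion
        },
        "ioConfig": {
            "type": "hadoop",
            "inputSpec": {
                "type": "static",
                "paths": "hdfs://namenode:8020/path/to/updated-data.json",  # placeholder
            },
        },
        "tuningConfig": {"type": "hadoop"},
    },
}
```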

Resolved: the JSON in the 'update' payload contained malformed rows.
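In case it helps anyone else, a quick hypothetical pre-flight check I could have run: parse every line of the (newline-delimited JSON) update file before pointing the firehose at it, and report any malformed rows.

```python
# Hypothetical pre-flight check: parse each line of a newline-delimited JSON file
# and report any rows that fail to parse before submitting the ingestion task.
import json
import sys

bad = 0
with open(sys.argv[1]) as f:  # e.g. the 1.2 GB update file
    for lineno, line in enumerate(f, 1):
        line = line.strip()
        if not line:
            continue
        try:
            json.loads(line)
        except ValueError as e:
            bad += 1
            print("line %d: %s" % (lineno, e))
print("%d malformed row(s) found" % bad)
```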

Glad you got it figured out!