Drop Datasource not Working

Hi,

Deleting a datasource is not working with version 0.15.0.

Disabling and removing segments from the datasource works as described (https://druid.apache.org/docs/latest/tutorials/tutorial-delete-data.html). When the last segment is marked unused, the metadata in the MySQL table druid_segments correctly reflects it as unused.
However, the segments cannot be removed by the kill command. The number of segments always stays at what it was before the complete disable (of all remaining segments) was triggered.

The same effect occurs when I trigger this via the web interface action “Drop datasource (disable)”.

Thanks and any hints are appreciated,

Thomas

Hi Thomas,

Can you follow the steps below once you “Drop datasource (disable)” from the Druid console:

Navigate back to the Druid console and enable “show disabled”. We can then permanently delete the datasource from the Druid cluster. Once we submit “Permanently delete” for the datasource, Druid will submit a kill task with the interval “1000-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z”. Once the task completes successfully, we can confirm the segments are removed completely from deep storage and also from the metadata DB.
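For reference, the same kill can also be submitted directly to the overlord as a task. This is only a sketch, assuming the overlord listens on localhost:8090 and the datasource is named MyDataSource:

curl -X POST -H 'Content-Type: application/json' -d '{"type":"kill","dataSource":"MyDataSource","interval":"1000-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"}' http://localhost:8090/druid/indexer/v1/task

The task then shows up in the Tasks view of the console, where you can follow its status.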

Can you try the above steps and let me know if you are able to remove the datasource completely?

Thanks,

Hemanth

Hi Hemanth,

I am already failing at the first step, “Drop datasource (disable)”. After clicking it, the datasource remains enabled and nothing happens, even after waiting half an hour.

The notification ‘Data drop request acknowledged, next time the coordinator runs data will be dropped’ appears on the UI after triggering this action.

Best regards and thanks,

Thomas

Thomas,

I tried this in my Druid 0.15 cluster and was able to see the datasource disabled within 5 minutes.

Can you check the coordinator log for “DruidCoordinatorCleanupUnneeded”? In my case I do see it got triggered.
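Something like this should show whether that helper ran (the log path is just an assumption based on the quickstart layout; adjust it to wherever your coordinator writes its log):

grep "DruidCoordinatorCleanupUnneeded" var/sv/coordinator.log

You can also list which datasources the coordinator still considers to have used segments (assuming the coordinator listens on localhost:8081):

curl http://localhost:8081/druid/coordinator/v1/datasources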

Thanks,

Hemanth

Hi Hemanth,

Good to know that it works in your case with version 0.15. I do not find “DruidCoordinatorCleanupUnneeded” in the coordinator log.

Here is part of my log and an excerpt of the segment metadata (I have only one datasource in Druid, which is the one I am trying to delete). Maybe my issue is related to the warning ‘No segments found in the database!’ or to the info ‘No good moves found in tier…’?

2019-08-16T06:33:49,776 INFO [LookupCoordinatorManager--6] org.apache.druid.server.lookup.cache.LookupCoordinatorManager - Not updating lookups because no data exists

2019-08-16T06:33:52,908 INFO [DatabaseRuleManager-Exec--0] org.apache.druid.metadata.SQLMetadataRuleManager - Polled and found 1 rule(s) for 1 datasource(s)

2019-08-16T06:34:14,302 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorSegmentInfoLoader - Starting coordination. Getting available segments.

2019-08-16T06:34:14,303 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorSegmentInfoLoader - Found [486] available segments.

2019-08-16T06:34:14,305 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.ReplicationThrottler - [_default_tier]: Replicant create queue is empty.

2019-08-16T06:34:14,308 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - Found 2 active servers, 0 decommissioning servers

2019-08-16T06:34:14,308 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - Processing 4 segments for moving from decommissioning servers

2019-08-16T06:34:14,308 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - All servers to move segments from are empty, ending run.

2019-08-16T06:34:14,308 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - Processing 5 segments for balancing between active servers

2019-08-16T06:34:14,310 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - No good moves found in tier [_default_tier]

2019-08-16T06:34:14,310 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - [_default_tier]: Segments Moved: [0] Segments Let Alone: [5]

2019-08-16T06:34:14,310 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - [_default_tier] : Assigned 0 segments among 2 servers

2019-08-16T06:34:14,310 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - [_default_tier] : Moved 0 segment(s)

2019-08-16T06:34:14,310 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - [_default_tier] : Let alone 5 segment(s)

2019-08-16T06:34:20,804 WARN [DatabaseSegmentManager-Exec--0] org.apache.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!

Metadata:

Hi Thomas,

Seems like the segments are removed from the historical nodes. The metadata DB shows these segments as “unused”. You might also want to check the segment cache location on the historical nodes for these segments; it is configured in _common/common.runtime.properties.

Are you still seeing this data-source in the druid console?

Hi Hemanth,

Yes, the datasource is still in the Druid console (not in a disabled state).

I am not sure which property you refer to regarding the segments.

What I have for the historicals are the following runtime properties:

Segment storage

druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize":130000000000}]

druid.server.maxSize=130000000000

When I check this segment cache, I do still have files in there.

Example output:

druid@druid-historical-0:/$ find /var/druid/segment-cache/MyDataSource/ -maxdepth 1 -type d -print | wc -l

119

Hi there,

I tried the same procedure with version 0.15.1. I still see the same effect: the datasource cannot be dropped.

The segments are marked as unused and the datasource is no longer listed in the metadata table. Yet I can still query it and see it in the management console.
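A query like the following shows the per-flag segment counts in the metadata DB (just a sketch; the connection details and the datasource name MyDataSource are placeholders for my setup):

mysql -u druid -p druid -e "SELECT used, COUNT(*) FROM druid_segments WHERE dataSource = 'MyDataSource' GROUP BY used;"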

Could that be related to the cluster configuration?

BR,

Thomas

Thomas,

I believe that until the segments are removed from the historical nodes (var/druid/segment-cache), you will still be able to see the datasource in the Druid console.

Can you check the coordinator process when you submit the drop datasource from the console? You might get some errors or clues on what’s going on.

Thanks,

Hemanth

Hi Hemanth,

Is there a REST call to explicitly trigger the segment drop on the historical nodes?

I do not see an error or warning in the coordinator console. What I see before loading data and after dropping the datasource is the following warning every minute:

WARN [DatabaseSegmentManager-Exec--0] org.apache.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!

This log line is also written when I still see the datasource and the segments. The segments are apparently only in the historical nodes’ cache.

BR,

Thomas

Thomas,

We can use coordinator APIs to mark the segments unused. In your testing the segments are already unused in the metadata DB, but you can use the call below anyway.

curl -X 'POST' -H 'Content-Type:application/json' -d '{ "interval" : "2000-09-12T00:00:00.000Z/2030-09-12T20:00:00.000Z" }' http://localhost:8081/druid/coordinator/v1/datasources/<datasource_name>/markUnused

<datasource_name> - Name of the datasource.

The interval can be changed as well.

Can you set the used flag back to true (1) for those segments in the druid_segments table for the datasource? Then we will try to mark them unused using the above API call and check again.
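As a sketch of what that could look like in MySQL (the connection details and the datasource name MyDataSource are placeholders; adjust them to your setup):

mysql -u druid -p druid -e "UPDATE druid_segments SET used = 1 WHERE dataSource = 'MyDataSource';"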

Thanks,

Hemanth

Hi Hemanth,

No success with that approach.

Could the issue be related to the retention rules? I will try to set the retention to drop forever before I trigger the ‘markUnused’ and ‘kill’.
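A drop-forever rule can be set through the coordinator rules API, for example (a sketch, assuming the coordinator listens on localhost:8081 and the datasource is named MyDataSource):

curl -X POST -H 'Content-Type: application/json' -d '[{"type":"dropForever"}]' http://localhost:8081/druid/coordinator/v1/rules/MyDataSource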

Thank you for your hints and investigation,

Thomas

Hi Hemanth,

The retention rule did not help either.

I took the pragmatic route and started over with the Wikipedia tutorial datasource.

The logging has changed a bit; I see the following exception in the overlord log:

2019-08-22T07:14:00,692 INFO [TaskQueue-StorageSync] org.apache.druid.indexing.overlord.TaskQueue - Synced 1 tasks from storage (0 tasks added, 0 tasks removed).

8/22/2019 9:14:10 AM 2019-08-22T07:14:10,943 INFO [qtp1709225221-79] org.apache.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[kill_Wikipedia_2012-02-20T00:00:00.000Z_2020-03-25T00:00:00.000Z_2019-08-22T07:12:54.906Z]: SegmentNukeAction{segments=[wikipedia_2015-09-12T00:00:00.000Z_2015-09-13T00:00:00.000Z_2019-08-22T06:31:41.249Z]}

8/22/2019 9:14:10 AM 2019-08-22T07:14:10,944 WARN [qtp1709225221-79] org.apache.druid.indexing.overlord.http.OverlordResource - Failed to perform task action

8/22/2019 9:14:10 AM org.apache.druid.java.util.common.ISE: Segments[[DataSegment{size=4817630, shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, metrics=[], dimensions=[added, channel, cityName, comment, countryIsoCode, countryName, deleted, delta, isAnonymous, isMinor, isNew, isRobot, isUnpatrolled, metroCode, namespace, page, regionIsoCode, regionName, user], version='2019-08-22T06:31:41.249Z', loadSpec={type=>s3_zip, bucket=>sq-data-storage, key=>druid/segments/wikipedia/2015-09-12T00:00:00.000Z_2015-09-13T00:00:00.000Z/2019-08-22T06:31:41.249Z/0/index.zip, S3Schema=>s3n}, interval=2015-09-12T00:00:00.000Z/2015-09-13T00:00:00.000Z, dataSource='wikipedia', binaryVersion='9'}]] are not covered by locks[[TaskLock{type=EXCLUSIVE, groupId=kill_Wikipedia_2012-02-20T00:00:00.000Z_2020-03-25T00:00:00.000Z_2019-08-22T07:12:54.906Z, dataSource=Wikipedia, interval=2012-02-20T00:00:00.000Z/2020-03-25T00:00:00.000Z, version=2019-08-22T07:12:54.917Z, priority=0, revoked=false}]] for task[kill_Wikipedia_2012-02-20T00:00:00.000Z_2020-03-25T00:00:00.000Z_2019-08-22T07:12:54.906Z]

8/22/2019 9:14:10 AM 	at org.apache.druid.indexing.common.actions.TaskActionPreconditions.checkLockCoversSegments(TaskActionPreconditions.java:49) ~[druid-indexing-service-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.apache.druid.indexing.common.actions.SegmentNukeAction.perform(SegmentNukeAction.java:71) ~[druid-indexing-service-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.apache.druid.indexing.common.actions.SegmentNukeAction.perform(SegmentNukeAction.java:41) ~[druid-indexing-service-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.apache.druid.indexing.common.actions.LocalTaskActionClient.submit(LocalTaskActionClient.java:74) ~[druid-indexing-service-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.apache.druid.indexing.overlord.http.OverlordResource$4.apply(OverlordResource.java:481) [druid-indexing-service-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.apache.druid.indexing.overlord.http.OverlordResource$4.apply(OverlordResource.java:470) [druid-indexing-service-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.apache.druid.indexing.overlord.http.OverlordResource.asLeaderWith(OverlordResource.java:802) [druid-indexing-service-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.apache.druid.indexing.overlord.http.OverlordResource.doAction(OverlordResource.java:467) [druid-indexing-service-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source) ~[?:?]

8/22/2019 9:14:10 AM 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_212]

8/22/2019 9:14:10 AM 	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_212]

8/22/2019 9:14:10 AM 	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) [jersey-server-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) [jersey-servlet-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) [jersey-servlet-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) [jersey-servlet-1.19.3.jar:1.19.3]

8/22/2019 9:14:10 AM 	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [javax.servlet-api-3.1.0.jar:3.1.0]

8/22/2019 9:14:10 AM 	at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:286) [guice-servlet-4.1.0.jar:?]

8/22/2019 9:14:10 AM 	at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:276) [guice-servlet-4.1.0.jar:?]

8/22/2019 9:14:10 AM 	at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:181) [guice-servlet-4.1.0.jar:?]

8/22/2019 9:14:10 AM 	at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) [guice-servlet-4.1.0.jar:?]

8/22/2019 9:14:10 AM 	at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85) [guice-servlet-4.1.0.jar:?]

8/22/2019 9:14:10 AM 	at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:120) [guice-servlet-4.1.0.jar:?]

8/22/2019 9:14:10 AM 	at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:135) [guice-servlet-4.1.0.jar:?]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.apache.druid.server.http.RedirectFilter.doFilter(RedirectFilter.java:71) [druid-server-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.apache.druid.server.security.PreResponseAuthorizationCheckFilter.doFilter(PreResponseAuthorizationCheckFilter.java:82) [druid-server-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.apache.druid.server.security.AllowOptionsResourceFilter.doFilter(AllowOptionsResourceFilter.java:75) [druid-server-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.apache.druid.server.security.AllowAllAuthenticator$1.doFilter(AllowAllAuthenticator.java:84) [druid-server-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.apache.druid.server.security.AuthenticationWrappingFilter.doFilter(AuthenticationWrappingFilter.java:59) [druid-server-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.apache.druid.server.security.SecuritySanityCheckFilter.doFilter(SecuritySanityCheckFilter.java:86) [druid-server-0.15.1-incubating.jar:0.15.1-incubating]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473) [jetty-servlet-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:724) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:61) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.Server.handle(Server.java:531) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) [jetty-server-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281) [jetty-io-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) [jetty-io-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) [jetty-io-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:760) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:678) [jetty-util-9.4.10.v20180503.jar:9.4.10.v20180503]

8/22/2019 9:14:10 AM 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]

Another thing I noticed in the loading tutorial is the Cleanup section (https://druid.apache.org/docs/latest/tutorials/tutorial-batch.html).

It says the cluster should be shut down and the contents of the var directory removed. Why not just delete the datasource?

For our use-case it is crucial that we do not have to shut down the cluster to drop a datasource.

BR and thanks in advance,

Thomas

Hi all,

I figured out the issue. This case occurs only with the last remaining datasource in Druid. It seems to be a bug that the last datasource does not get deleted as it should (it is properly removed from the metadata and deep storage when killed).

My workaround is to have one dummy datasource in the list.

Is that issue already in the bug list?

BR,

Thomas
