Getting CACHE_NOT_INITIALIZED error when using globally cached lookup via Postgres

Hi,

I am pretty new to Druid (0.13.0) and I am exploring globally cached lookups via Postgres.

During the query, I got a CACHE_NOT_INITIALIZED error:

{
  "error" : "Unknown exception",
  "errorMessage" : "namespace [JdbcExtractionNamespace{connectorConfig=DbConnectorConfig{createTables=true, connectURI='jdbc:postgresql://postgres:5432/druid', user='druid', passwordProvider=org.apache.druid.metadata.DefaultPasswordProvider}, table='lookupTable', keyColumn='key', valueColumn='value', tsColumn='null', filter='null', pollPeriod=PT0S}] : org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl@4066292d: CACHE_NOT_INITIALIZED, extractorID = namespace-factory-JdbcExtractionNamespace{connectorConfig=DbConnectorConfig{createTables=true, connectURI='jdbc:postgresql://postgres:5432/druid', user='druid', passwordProvider=org.apache.druid.metadata.DefaultPasswordProvider}, table='lookupTable', keyColumn='key', valueColumn='value', tsColumn='null', filter='null', pollPeriod=PT0S}-3c69baf7-c3ca-4d7c-a1bb-e068165280bd",
  "errorClass" : "org.apache.druid.java.util.common.ISE",
  "host" : "localhost:8103"
}
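
For reference, the lookup spec I registered (reconstructed here from the error message above, with the password left out) is roughly:

{
  "type": "cachedNamespace",
  "extractionNamespace": {
    "type": "jdbc",
    "connectorConfig": {
      "createTables": true,
      "connectURI": "jdbc:postgresql://postgres:5432/druid",
      "user": "druid",
      "password": "..."
    },
    "table": "lookupTable",
    "keyColumn": "key",
    "valueColumn": "value",
    "pollPeriod": "PT0S"
  }
}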

Do I need to set up the cache, or does anyone know how to fix this?

Thanks,

Syi

Hi Syi,

I don't think you need to set up any cache configuration. How does your query look? Another thing you can test is the introspection API on your broker, for example:

curl -X GET http://broker-host:port/druid/v1/lookups/introspect/{lookupId}/keys

and see if you are getting your keys back.
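
If I remember the endpoints correctly, you can also fetch the values or the whole map:

curl -X GET http://broker-host:port/druid/v1/lookups/introspect/{lookupId}/values

curl -X GET http://broker-host:port/druid/v1/lookups/introspect/{lookupId}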

Thanks,

Surekha

Hi Surekha,

Thank you for the reply.

curl -X GET "http://…/druid/v1/lookups/introspect/…/keys" -H 'Content-Type: application/json' -H 'Accept: application/json'

{"error":"namespace [JdbcExtractionNamespace{connectorConfig=DbConnectorConfig{createTables=true, connectURI='jdbc:postgresql://postgres:5432/druid', user='druid', passwordProvider=org.apache.druid.metadata.DefaultPasswordProvider}, table='lookupTable', keyColumn='key', valueColumn='value', tsColumn='null', filter='null', pollPeriod=PT0S}] : org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl@1844be0e: CACHE_NOT_INITIALIZED, extractorID = namespace-factory-JdbcExtractionNamespace{connectorConfig=DbConnectorConfig{createTables=true, connectURI='jdbc:postgresql://postgres:5432/druid', user='druid', passwordProvider=org.apache.druid.metadata.DefaultPasswordProvider}, table='lookupTable', keyColumn='key', valueColumn='value', tsColumn='null', filter='null', pollPeriod=PT0S}-2ffe3c56-dd91-40c2-be18-ef47c732695f"}

I get the same error via the introspect API. Is there anything else I can try?

Thanks,

Syi

Hi guys,

I'm getting the same error, even though I set "firstCacheTimeout": 0 in the lookup config.
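
For reference, I set it directly in the lookup spec, next to the extractionNamespace (trimmed here):

{
  "type": "cachedNamespace",
  "extractionNamespace": { ... },
  "firstCacheTimeout": 0
}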

Hi,

The CACHE_NOT_INITIALIZED error normally shows up when the lookup cannot be accessed from its remote location. You can check the broker log for clues.

If there are no connection problems, a quick fix might be to restart the cluster. You might also consider moving away from JDBC lookups (which are single-threaded) to URI lookups:

http://druid.io/docs/latest/development/extensions-core/lookups-cached-global.html
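
A URI lookup spec looks roughly like this (the path and column names here are just placeholders):

{
  "type": "cachedNamespace",
  "extractionNamespace": {
    "type": "uri",
    "uri": "file:///path/to/lookup.csv.gz",
    "namespaceParseSpec": {
      "format": "csv",
      "columns": ["key", "value"],
      "keyColumn": "key",
      "valueColumn": "value"
    },
    "pollPeriod": "PT5M"
  }
}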

Best,

Caroline

Hi Caroline,

Thank you for the help.

We eventually made it work by manually copying the PostgreSQL JDBC driver jar into the lib folder:

cp /usr/local/apache-druid-$DRUID_VERSION-incubating/extensions/postgresql-metadata-storage/postgresql-9.4.1208.jre7.jar /usr/local/apache-druid-$DRUID_VERSION-incubating/lib

We found that this jar was not on the classpath.

Not sure if this is a bug or a missing configuration step.

Thanks,

Syi

Thank you Caroline for the response!

I realised that too; when I looked at the broker log, it was complaining about some of the config parameters:

2019-03-07T22:36:38,070 WARN [NamespaceExtractionCacheManager-0] org.apache.druid.java.util.common.RetryUtils - Retrying (2 of 2) in 1,418ms.
java.lang.UnsupportedOperationException: hasHeaderRow or maxSkipHeaderRows is not supported. Please check the indexTask supports these options.

2019-03-07T22:36:39,493 ERROR [NamespaceExtractionCacheManager-0] org.apache.druid.server.lookup.namespace.cache.CacheScheduler - Failed to update namespace [UriExtractionNamespace{uri=file:///MY/PATH/TO/CSV-LOOKUPS/MYLOOKUP.csv.gz, uriPrefix=null, namespaceParseSpec=CSVFlatDataParser{columns=[col_id, col_name, col_uuid], keyColumn='col_uuid', valueColumn='col_name'}, fileRegex='null', pollPeriod=PT5M}] : org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl@3cb581d6
java.lang.UnsupportedOperationException: hasHeaderRow or maxSkipHeaderRows is not supported. Please check the indexTask supports these options.


So I deleted the lookup and issued a new one without those configuration options. Now I see the lookup has been loaded by brokers and historicals:

2019-03-07T22:48:36,754 INFO [NamespaceExtractionCacheManager-1] org.apache.druid.server.lookup.namespace.UriCacheGenerator - Finished loading 137 values from 137 lines for [namespace [UriExtractionNamespace{uri=file:///MY/PATH/TO/CSV-LOOKUPS/MYLOOKUP.csv.gz, uriPrefix=null, namespaceParseSpec=CSVFlatDataParser{columns=[col_id, col_name, col_uuid], keyColumn='col_uuid', valueColumn='col_name'}, fileRegex='null', pollPeriod=PT5M}] : org.apache.druid.server.lookup.namespace.cache.CacheScheduler$EntryImpl@41fe85b7] in 13,532,042 ns
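
For completeness, the parse spec that loads fine is essentially this, with the columns listed explicitly and without hasHeaderRow or maxSkipHeaderRows (column names are from my CSV):

"namespaceParseSpec": {
  "format": "csv",
  "columns": ["col_id", "col_name", "col_uuid"],
  "keyColumn": "col_uuid",
  "valueColumn": "col_name"
}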


But when I introspect the lookup I get a 200 response with no data, and the broker log reports the following error:

2019-03-07T22:50:16,608 ERROR [qtp1043822951-123] org.apache.druid.server.security.PreResponseAuthorizationCheckFilter - Request did not have an authorization check performed.: {class=org.apache.druid.server.security.PreResponseAuthorizationCheckFilter, uri=/druid/v1/lookups/introspect/col_uuid/, method=GET, remoteAddr=127.0.0.1, remoteHost=127.0.0.1}
2019-03-07T22:50:16,610 WARN [qtp1043822951-123] org.eclipse.jetty.server.HttpChannel - /druid/v1/lookups/introspect/col_uuid/
org.apache.druid.java.util.common.ISE: Request did not have an authorization check performed.


So I tried restarting the cluster but I still get the same error. Not sure what I need to do next.

Please let me know how I can solve this issue.

Kind regards,

Sergio

Hi,

We had the same problem with MySQL and also had to copy the JDBC connector jar to the druid/lib directory.
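
In our case it was something like this (the paths and the connector jar name/version will differ depending on your setup):

cp /path/to/mysql-connector-java-<version>.jar /path/to/druid/lib/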

I do not think this is a bug, because using globally cached lookups does not mean that you are also using JDBC-based metadata storage (MySQL, Postgres, etc.).

The global-cached-lookup extension works with different types of SQL backends, so it would make no sense to bundle connector jars for all of them.

…but you are right, it is not well documented :slight_smile:

Regards, Alex