Hi all,
Is there any documentation on which S3 APIs are used/required by the S3-compatible deep storage system? I’m looking to deploy on OCI (Oracle Cloud), which implements a subset of the API:
https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/s3compatibleapi.htm
At a glance I would guess the implementation is complete enough, but I’m wondering if there are any gotchas I should be aware of beforehand.
Thanks!
Alex
We did get this working with a pre-release build of Druid 0.13, which uses the aws-java-sdk for S3. Current and previous versions of Druid use jets3t, which doesn’t seem to be compatible with Oracle Object Storage. (At least as far as we could tell.)
So, if you want to try it out, either wait for Druid 0.13.0 to be released or hop on over to https://lists.apache.org/thread.html/17b509c359509ff450904ab53500a5b08067bae9640f669056d06884@%3Cdev.druid.apache.org%3E for a release candidate. Then add the following to your common.runtime.properties:
druid.storage.type=s3
druid.storage.bucket=yourBucket
druid.storage.baseKey=druid/segments
druid.storage.disableAcl=true
druid.s3.endpoint.url=:443
druid.s3.endpoint.signingRegion=
druid.s3.accessKey=
druid.s3.secretKey=
druid.s3.disableChunkedEncoding=true
druid.s3.enablePathStyleAccess=true
druid.s3.forceGlobalBucketAccessEnabled=false
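For reference, a filled-in version for OCI might look something like the below. These are hypothetical placeholder values; your actual endpoint host depends on your tenancy’s namespace and region (OCI’s S3-compatible endpoints follow the <namespace>.compat.objectstorage.<region>.oraclecloud.com pattern, if I remember right):

# Hypothetical example values; substitute your own namespace, region, and credentials.
druid.s3.endpoint.url=https://mynamespace.compat.objectstorage.us-ashburn-1.oraclecloud.com:443
druid.s3.endpoint.signingRegion=us-ashburn-1
druid.s3.accessKey=yourAccessKey
druid.s3.secretKey=yourSecretKey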
Hi Gian,
I am using the current version of Druid with S3-compatible IBM Cloud Object Storage as deep storage, and batch ingestion is failing to write segments.
Do you know whether jets3t supports it?
Thanks,
Thanks, Gian! I will check it out as soon as I can.
Best,
Alex
Hey Christine,
I’m not sure; I haven’t tried with IBM cloud storage before. It’s more likely that the aws-java-sdk will work, since I bet that’s what vendors like IBM test with.
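If you want to narrow it down, one quick check is to hit the vendor’s endpoint directly with the aws-java-sdk, using the same settings Druid’s S3 extension uses. Here’s a minimal sketch; the endpoint, region, bucket, and credentials are hypothetical placeholders, not real IBM values:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3CompatCheck {
  public static void main(String[] args) {
    // Hypothetical endpoint and credentials; substitute your vendor's values.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "https://s3.example-vendor.com", "us-east-1"))
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("yourAccessKey", "yourSecretKey")))
        .withPathStyleAccessEnabled(true)   // mirrors druid.s3.enablePathStyleAccess=true
        .withChunkedEncodingDisabled(true)  // mirrors druid.s3.disableChunkedEncoding=true
        .build();

    // Write, read back, and delete a test object to exercise the basic
    // operations Druid needs for pushing and pulling segments.
    s3.putObject("yourBucket", "druid-smoke-test", "hello");
    System.out.println(s3.getObjectAsString("yourBucket", "druid-smoke-test"));
    s3.deleteObject("yourBucket", "druid-smoke-test");
  }
}

If that round trip works, the same endpoint and region settings should carry straight over to the druid.s3.* properties above.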