Druid on ECS Fargate: problem parsing .parquet files

Hello community,

I'm currently working on a Druid POC on AWS ECS Fargate.

For this, I executed docker compose up with the ecs context pointing to the cluster I created previously.
I’m using the code from branch 0.22.1.

All Druid Fargate tasks seem to be up and running, I can go to the GUI and browse the different pages with no errors.
The problem happens when I try to parse a single .parquet file (or multiple files, it doesn't matter): it fails with the following error:

Error: Failed to sample data: var/tmp/druid7535129051230999014

I tried other test .parquet files from another source and received this error:

Error: Failed to sample data: java.io.IOException: can not read class org.apache.parquet.format.FileMetaData: Required field 'codec' was not present! Struct: ColumnMetaData(type:BYTE_ARRAY, encodings:[PLAIN, RLE], path_in_schema:[a], codec:null, num_values:10000, total_uncompressed_size:400103, total_compressed_size:380480, data_page_offset:4, statistics:Statistics(null_count:0, max_value:66 66 66 66 65 36 61 30 2D 65 30 63 30 2D 34 65 36 35 2D 61 39 64 34 2D 66 37 66 34 63 31 37 36 61 65 61 32, min_value:30 30 30 38 37 64 65 37 2D 31 30 64 66 2D 34 39 37 39 2D 39 34 63 66 2D 37 39 32 37 39 66 39 37 34 35 63 65), encoding_stats:[PageEncodingStats(page_type:DATA_PAGE, encoding:PLAIN, count:1)])

I have included the required Parquet extension in the extensions list passed through the container's environment variables:

'["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-s3-extensions", "druid-basic-security", "druid-kinesis-indexing-service", "druid-parquet-extensions", "druid-aws-rds-extensions", "druid-stats", "druid-avro-extensions", "druid-hdfs-storage", "druid-kafka-extraction-namespace", "druid-kafka-indexing-service", "druid-orc-extensions"]' 
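For reference, this is roughly how that list reaches Druid when using the official Docker image (a sketch; the variable name assumes the stock image's mapping of environment variables of the form `druid_a_b` onto the runtime property `druid.a.b`):

```yaml
# Sketch of the environment entry in a compose/task definition. The variable
# name assumes the stock apache/druid image's env-var-to-property mapping
# (druid_extensions_loadList -> druid.extensions.loadList).
environment:
  - druid_extensions_loadList=["druid-parquet-extensions", "druid-s3-extensions", "postgresql-metadata-storage"]
```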

I'm not sure whether these extensions are actually being loaded when I redeploy the task.
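One way to confirm is to grep a service's startup log: Druid logs each extension as it loads it. A minimal sketch (the log path and exact message wording are illustrative, so the log content is simulated here):

```shell
# Simulate a couple of startup log lines of the kind Druid emits while
# loading extensions (the real path and wording will differ per deployment):
cat > /tmp/coordinator.log <<'EOF'
INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-s3-extensions]
INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-parquet-extensions]
EOF

# Count mentions of the parquet extension; 0 would mean the loadList
# never reached this service:
grep -c "druid-parquet-extensions" /tmp/coordinator.log
```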

We have another test cluster running on EC2 instances, and that cluster can parse .parquet files with no errors.

I wonder if anyone who has deployed Druid on Fargate could share some useful tips, as I've run out of ideas. :slight_smile:

Thanks in advance.


Hey Tono – starting with a silly question – what’s var/tmp/druidblaaaaaaaaaaaah in this context? Is that what you’re pointing your ingestion to? Or is it after that? I wondered if it could be perms or something, too…

Hi Peter,

The var/tmp/druid####### error comes after the ingestion.
For example, I put in an S3 file URI, hit Apply and get a new window with symbols and stuff, then I hit ‘Parse data’ and get this error right away.
Yes, I feel this might be permissions but I haven’t been able to find anything on the logs.
In which component's logs would I see anything related to this error? The MiddleManager logs?
The cluster tasks use EFS storage, and I have already set 0777 permissions on the EFS access point (I was hitting other permission errors during installation, and this fixed them).
For example:

        PosixUser:
          Gid: "0"
          SecondaryGids:
            - "0"
          Uid: "0"
        RootDirectory:
          CreationInfo:
            OwnerGid: "0"
            OwnerUid: "0"
            Permissions: "0777"
          Path: "/"

Could it be that I need different Gid/Uid values?
I set the same permissions on all the other components' access points.
Where else could I be missing permissions?

I think this same problem is biting me when trying to use the basic authentication extension as well.



I just did some digging on that specific error you’re getting:

Required field 'codec' was not present!

I can’t claim to be a Parquet expert (!), but I wondered whether you may be hitting the same issue other databases run into when ingesting data whose codec is not specified or not supported.

I notice in the specific error text it says codec:null – which might explain it?

If so, this would mean it isn’t a perms issue in var – the Parquet extension would appear not to like your incoming Parquet file?

Just linking to some docs for future readers :slight_smile:

I received the codec error when testing with publicly available parquet test files, I tried ingesting and parsing the parquet files I found here:

The var/tmp/druid####### error happens with our own parquet files – the same files I tested on the other stack built on EC2 instances, where they parsed with no errors.

Do you know if Fargate is going to be ‘officially’ supported in the future? It does run on my test Fargate cluster, but it feels like there isn't much info on how to make it work properly.

Hi Peter,

I did some more research, and it doesn’t seem to be a codec problem or a Parquet format problem, since the same files work on the other EC2 cluster.
The error complains about not being able to parse a file that is supposed to be within var/tmp/druidXXXXXXX.
I finally found this directory within the ‘coordinator’ EFS volume but it’s empty.
The same happens whenever I try to parse a parquet file:

Any idea what could be happening here? It doesn’t seem like a permissions problem since Druid is able to create the directory in EFS.
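One thing that might be worth checking (an assumption on my side, based on the stock distribution): Druid's bundled jvm.config files set `java.io.tmpdir` to the relative path `var/tmp`, which matches the `var/tmp/druidXXXXXXX` path in the error. If that relative path resolves onto the shared EFS mount for one task but not for the service that tries to read the sampled file back, the file would appear "missing". A sketch of pinning the temp dir to task-local storage instead:

```properties
# Sketch of a jvm.config for the service handling sampling (heap sizes and
# surrounding flags are illustrative, in the style of the stock distribution).
-server
-Xms1g
-Xmx1g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
# Point the temp dir at task-local ephemeral storage instead of the
# relative var/tmp path, to rule out the shared EFS mount:
-Djava.io.tmpdir=/tmp
```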