Can a GroupBy query fail-fast if too many groups would be returned?

Most of our GroupBy queries are very fast, completing in less than one second.

One of our dimensions may have a very high cardinality (100k+ values).
If I include that dimension in a groupBy query, it can run for 30+ seconds and then fail with a “Resource limit exceeded” error.

Is there a way to make it fail faster, so I can warn the user that they need to add more filtering to reduce the number of groups returned?

I tried reducing maxMergingDictionarySize to 1000, but the query still spent 30+ seconds before failing.

Similarly, setting limit:1 in the limitSpec did not help.
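For context, here is a minimal sketch of the kind of groupBy query I'm running, with the two settings I tried. The datasource, dimension, and interval are made-up placeholders, not our real schema:

```python
import json

# Hypothetical groupBy query; datasource, dimension, and interval
# names are placeholders for illustration only.
query = {
    "queryType": "groupBy",
    "dataSource": "events",                 # placeholder datasource
    "granularity": "all",
    "intervals": ["2019-01-01/2019-01-02"],
    "dimensions": ["highCardinalityDim"],   # the 100k+ value dimension
    "aggregations": [{"type": "count", "name": "rows"}],
    # Neither of these made the query fail any faster:
    "limitSpec": {"type": "default", "limit": 1, "columns": []},
    "context": {"maxMergingDictionarySize": 1000},
}

print(json.dumps(query, indent=2))
```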

Thanks for any ideas.

This is a rough one; I've never seen a solution that is actually
faster than just running the query. One example of a thing that sounds
good on paper but doesn't really work in practice is running a by-row
cardinality query first
(http://druid.io/docs/latest/querying/aggregations) to estimate the
number of groups. Often the cardinality query takes just as long as
the groupBy itself, or at least a comparable length of time.
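For reference, the pre-flight check described above would look roughly like this: a timeseries query with a cardinality aggregator over the same dimension(s), whose result you'd compare against a threshold before issuing the real groupBy. The names here are hypothetical, and again, this often isn't meaningfully cheaper than the groupBy it is trying to guard:

```python
import json

# Sketch of a pre-flight cardinality estimate; datasource, dimension,
# and interval names are hypothetical.
cardinality_query = {
    "queryType": "timeseries",
    "dataSource": "events",
    "granularity": "all",
    "intervals": ["2019-01-01/2019-01-02"],
    "aggregations": [{
        "type": "cardinality",
        "name": "numGroups",
        "fields": ["highCardinalityDim"],
        # byRow counts distinct combinations of the listed dimensions,
        # i.e. an estimate of how many groups the groupBy would return.
        "byRow": True,
    }],
}

print(json.dumps(cardinality_query, indent=2))
```

The idea would be: if `numGroups` comes back above your limit, reject the groupBy up front and tell the user to filter more. In practice, as noted above, the scan cost tends to dominate both queries.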

Hopefully someone else has a clever solution for this one because it
is annoying.