Schema Rollover/Evolution

Is there an example or docs on how schema updates work if you're using Storm? It wasn't apparent what the best method is, other than perhaps tearing down the beam and rebuilding it. Would that kill any pending tasks/segment generation?

*sigh* I mean tranquility + Storm, but didn't call that out in the subject. Sorry.

Hey Mitch,

The built-in tranquility-storm adapter (this one: https://github.com/druid-io/tranquility/blob/master/docs/storm.md) creates the beam once, when the topology starts up, with a particular schema. With that adapter you'd change your schema by killing and re-deploying your Storm topology with a different BeamFactory. As long as the environmental details stay the same (ZK cluster, Druid cluster, Druid datasource, etc.), tranquility's normal schema evolution applies: old tasks keep being used for a while, then new ones are eventually created with the new schema. See here for more on how that works: https://github.com/druid-io/tranquility/blob/master/docs/overview.md#schema-evolution
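For reference, the BeamFactory you'd be swapping out looks roughly like the example in the storm.md doc linked above. This is only a sketch: the ZooKeeper address, overlord service name, datasource, dimensions, and metric are placeholders, and the exact DruidBeams builder methods (and QueryGranularity vs. QueryGranularities, DruidLocation.create vs. DruidLocation, etc.) vary a bit between tranquility versions.

```scala
import backtype.storm.task.IMetricsContext
import com.metamx.common.Granularity
import com.metamx.tranquility.beam.{Beam, ClusteredBeamTuning}
import com.metamx.tranquility.druid.{DruidBeams, DruidLocation, DruidRollup, SpecificDruidDimensions}
import com.metamx.tranquility.storm.BeamFactory
import io.druid.granularity.QueryGranularity
import io.druid.query.aggregation.LongSumAggregatorFactory
import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.BoundedExponentialBackoffRetry
import org.joda.time.{DateTime, Period}
import java.{util => ju}

class MyBeamFactory extends BeamFactory[Map[String, Any]]
{
  // Called once when the topology starts up; the schema (dimensions + aggregators)
  // is fixed here, which is why a schema change means re-deploying with a new factory.
  def makeBeam(conf: ju.Map[_, _], metrics: IMetricsContext): Beam[Map[String, Any]] = {
    val curator = CuratorFrameworkFactory.newClient(
      "zk.example.com:2181",                              // placeholder ZK connect string
      new BoundedExponentialBackoffRetry(100, 3000, 5)
    )
    curator.start()

    DruidBeams
      .builder((event: Map[String, Any]) => new DateTime(event("timestamp")))
      .curator(curator)
      .discoveryPath("/druid/discovery")                  // your druid.discovery.curator.path
      .location(DruidLocation.create("overlord", "mydatasource"))
      .rollup(DruidRollup(
        SpecificDruidDimensions(Seq("page", "user")),     // placeholder dimensions
        Seq(new LongSumAggregatorFactory("edits", "edits")),
        QueryGranularity.MINUTE
      ))
      .tuning(ClusteredBeamTuning(
        segmentGranularity = Granularity.HOUR,
        windowPeriod = new Period("PT10M")
      ))
      .buildBeam()
  }
}

// Wired into the topology with the bolt that ships with tranquility-storm:
//   builder.setBolt("druid", new BeamBolt(new MyBeamFactory)).shuffleGrouping("events")
```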

If you want to evolve your schema without re-deploying your topology, you could do that, but you'd have to write your own Bolt instead of using the one that comes with tranquility-storm. In that case, yes, you'd close the old Beam and make a new one. Normal schema evolution would still apply.
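A custom Bolt along those lines might look something like the sketch below. None of this is part of tranquility-storm: `buildBeamForCurrentSchema` and `schemaHasChanged` are placeholders for however you construct beams and detect a schema change, the blocking `Await.result` calls are only there to keep the example short, and the packages assume the old backtype.storm namespace.

```scala
import backtype.storm.task.{OutputCollector, TopologyContext}
import backtype.storm.topology.OutputFieldsDeclarer
import backtype.storm.topology.base.BaseRichBolt
import backtype.storm.tuple.Tuple
import com.metamx.tranquility.beam.Beam
import com.twitter.util.Await
import java.{util => ju}

// The beam-building function is captured in the bolt, so it needs to be serializable.
class SchemaSwappingBeamBolt(buildBeamForCurrentSchema: () => Beam[Map[String, Any]])
  extends BaseRichBolt
{
  @transient private var beam: Beam[Map[String, Any]] = _
  @transient private var collector: OutputCollector = _

  override def prepare(conf: ju.Map[_, _], context: TopologyContext, collector: OutputCollector): Unit = {
    this.collector = collector
    this.beam = buildBeamForCurrentSchema()
  }

  override def execute(tuple: Tuple): Unit = {
    val event = tuple.getValue(0).asInstanceOf[Map[String, Any]]

    // However you detect a schema change: config poll, control stream, a field in the event...
    if (schemaHasChanged(event)) {
      Await.result(beam.close())            // close the old beam
      beam = buildBeamForCurrentSchema()    // new beam, new schema; normal schema evolution applies
    }

    Await.result(beam.propagate(Seq(event)))
    collector.ack(tuple)
  }

  override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = {}

  override def cleanup(): Unit = Await.result(beam.close())

  private def schemaHasChanged(event: Map[String, Any]): Boolean = false  // placeholder
}
```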

Perfect, just wanted to make sure there wasn't something I was missing. We have logic that looks at a field in our event and spins up multiple beams as needed to route each event to its own datasource, so we can hook this into that (detect a need to change schema, spin up a new beam, shut the old one down, etc.).
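For what it's worth, the routing piece is roughly shaped like this (simplified sketch; `buildBeam` stands in for our DruidBeams construction and the routing field name is made up):

```scala
import com.metamx.tranquility.beam.Beam
import com.twitter.util.Await
import scala.collection.mutable

// Keeps one Beam per datasource, created lazily from the routing field in each event.
class BeamRouter(buildBeam: String => Beam[Map[String, Any]]) {
  private val beams = mutable.Map[String, Beam[Map[String, Any]]]()

  def send(event: Map[String, Any]): Unit = {
    val dataSource = event("dataSourceField").toString   // field that picks the datasource
    val beam = beams.getOrElseUpdate(dataSource, buildBeam(dataSource))
    Await.result(beam.propagate(Seq(event)))
  }

  // Swap in a beam with a new schema for one datasource; old tasks wind down on their own.
  def reload(dataSource: String): Unit = {
    beams.remove(dataSource).foreach(b => Await.result(b.close()))
    beams.put(dataSource, buildBeam(dataSource))
  }

  def shutdown(): Unit = beams.values.foreach(b => Await.result(b.close()))
}
```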

Thanks!