Hadoop indexing speed is mostly bounded by the speed of your Hadoop cluster, not the Druid cluster. If ingestion seems to be taking much longer than it should, double-check that the number of reducers being generated is reasonable, and that none of them are running at or near their memory (OOM) limits.
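As a sketch of where these knobs live, the `tuningConfig` of a Hadoop-based ingestion spec controls both partitioning (which drives reducer count) and the Hadoop job properties (which drive reducer memory). The specific sizes below are illustrative assumptions, not recommendations:

```json
"tuningConfig": {
  "type": "hadoop",
  "partitionsSpec": {
    "type": "hashed",
    "targetPartitionSize": 5000000
  },
  "jobProperties": {
    "mapreduce.reduce.memory.mb": "8192",
    "mapreduce.reduce.java.opts": "-Xmx6g"
  }
}
```

A smaller `targetPartitionSize` yields more (and smaller) reducers; if reducers are hitting OOM limits, raising the reducer memory settings via `jobProperties` is the usual fix.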
The coordinator can run on a single machine, but you may want two in production for failover. For small clusters, one broker is enough, although you might want two for availability. Run as many historicals as you need to store your data; the coordinator web console will tell you how full your historicals are.
The broker doesn't need 244GB of RAM for most typical workloads; you can make do with far less. For a moderately sized cluster with some query concurrency, 8 CPUs / 64GB RAM should be enough. For a small testing cluster, less is fine. You can run an entire Druid cluster, with every node type, on a machine with just a few GB of RAM.
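As a rough sketch, a modest broker on an 8 CPU / 64GB machine might use runtime properties along these lines (the values are illustrative assumptions, not tuned recommendations):

```properties
# Each processing thread gets one buffer of this size in direct
# (off-heap) memory, so buffer size * (numThreads + 1) must fit
# in the RAM you leave outside the JVM heap.
druid.processing.numThreads=7
druid.processing.buffer.sizeBytes=500000000
```

The main point is that memory needs scale with processing threads and buffer size, so a small broker stays well under the headroom of a 244GB machine.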