disclaimer: I work primarily with SQL Server and have almost no direct experience with MySQL, but…
In general I think temp tables are a legit approach to the sort of reporting scenario you describe. For particularly gnarly queries, they can help with readability, if nothing else, which is sometimes worth it all by itself.
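To make that concrete, here's roughly what the staged approach looks like in MySQL — a hedged sketch with made-up table and column names (`orders`, `customers`, `tmp_monthly_totals`), since I don't know your schema:

```sql
-- Hypothetical schema: orders(order_id, customer_id, total, created_at)
-- Stage 1: materialize the expensive aggregation once
CREATE TEMPORARY TABLE tmp_monthly_totals AS
SELECT customer_id,
       DATE_FORMAT(created_at, '%Y-%m') AS month,
       SUM(total) AS monthly_total
FROM orders
GROUP BY customer_id, DATE_FORMAT(created_at, '%Y-%m');

-- Stage 2: the report query joins against the much smaller intermediate result
SELECT c.name, t.month, t.monthly_total
FROM tmp_monthly_totals AS t
JOIN customers AS c ON c.customer_id = t.customer_id
ORDER BY t.month, t.monthly_total DESC;
```

In MySQL a temporary table is scoped to your session and dropped automatically when the connection closes, which is handy for reporting jobs; each stage is also individually testable, which is where the readability win comes from.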
It sounds like you’ve already picked off the low-hanging fruit (nested loops in the application, etc.), and if you’re seeing a 10x improvement in execution time, I’d call that a victory. It’s not uncommon to see variations in performance of the same query for different volumes of data, or even similar volumes with different distributions of values. That’s what makes performance tuning so much fun!
Another thing to consider is that sometimes you can only do so much with the query itself. If the database was poorly designed – and I’ll admit statements like this
make me think that’s a distinct possibility – you might have bigger problems than can be solved by rewriting queries.
Even so, there are things you can do without altering the schema. For example, it might be worthwhile to look at your indexes and statistics; sometimes adding the right index can make a huge difference for a particular workload without even changing your queries. Again, I don’t know specifics for MySQL, but I can’t imagine the core concepts are that different from what I know.
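As a rough illustration (MySQL syntax, hypothetical table and column names again — verify against your actual access patterns):

```sql
-- Hypothetical: the slow reports filter orders by customer and date range,
-- so a composite index on those columns may let MySQL skip a full table scan
CREATE INDEX ix_orders_customer_created
    ON orders (customer_id, created_at);

-- Refresh the optimizer's statistics for the table
ANALYZE TABLE orders;

-- EXPLAIN shows whether the new index is actually used by the query plan
EXPLAIN SELECT SUM(total)
FROM orders
WHERE customer_id = 42
  AND created_at >= '2024-01-01';
```

The key habit is checking the plan with `EXPLAIN` before and after: an index that isn't used is pure write overhead.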
Finally, if you really wanted to do instantiated denormalized tables, I’d probably be tempted to just go all the way and build a datamart or data warehouse or whatever they’re calling it these days for your platform of choice. But that’s a whole different scenario, with its own set of trade-offs, and likely a fair bit more work than was envisioned when your boss asked you to fix these reports.
Or you could just throw hardware at the problem.
(seriously, though, there’s no shame in a tactical approach to tackling scaling issues.)