1. generate files for providers and for the backend
Similarly, the backend is defined in a file that is best generated too. One reason is that different use cases require different configurations: automation uses one set of AWS credentials, while running locally uses a different state file path and different credential parameters; manually modifying files for every run is too expensive, and generation also prevents the mistakes of manual edits. Another reason is that once the file is generated, it is easy to substitute a different state file path.
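A minimal sketch of such a generator, assuming an S3 backend; the bucket and key names here are made up, not the real project layout:

```shell
#!/bin/sh
# Generate backend.tf for a given environment (hypothetical naming scheme).
ENV="${1:-dev}"

cat > backend.tf <<EOF
terraform {
  backend "s3" {
    bucket = "my-terraform-state-${ENV}"
    key    = "${ENV}/terraform.tfstate"
    region = "us-east-1"
  }
}
EOF
```

Running it with a different environment name rewrites the state file path in one step, with no hand edits.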
2. generate files for different environments
As in #1, variable values can be generated. A setting like "vpc" can be defined in one place, then substituted and copied into the "root" module folder for use, saving a lot of duplication and manual work.
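A sketch of that substitution step; the variable names, the hard-coded vpc id, and the root/ folder are all assumptions for illustration:

```shell
#!/bin/sh
# Render a per-environment tfvars file from one shared setting, then copy it
# into the "root" module folder (hypothetical layout).
ENV="${1:-dev}"
VPC_ID="vpc-0abc1234"   # in practice, read from a single shared settings file

mkdir -p root

cat > "terraform.${ENV}.tfvars" <<EOF
environment = "${ENV}"
vpc_id      = "${VPC_ID}"
EOF

cp "terraform.${ENV}.tfvars" root/terraform.tfvars
```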
3. generate unimportant resources
Since shell scripts are already being run to generate files, why not generate everything?
Actually, it is because Terraform lacks the ability to define macros. A resource must either exist or not; one cannot declare a different number of resources in different environments. Suppose a list of S3 bucket names is given and Terraform has to define one resource per name: even though "count" can be used as a workaround, the resulting indexed resource names are awkward to reference.
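A sketch of generating one resource block per name, so each resource gets a readable name instead of an index under "count"; the bucket list and naming scheme are made up:

```shell
#!/bin/sh
# Emit an aws_s3_bucket resource for each name in a list (hypothetical names).
BUCKETS="logs assets backups"

: > buckets.tf   # truncate the generated file
for b in $BUCKETS; do
  cat >> buckets.tf <<EOF
resource "aws_s3_bucket" "${b}" {
  bucket = "my-project-${b}"
}

EOF
done
```

Each bucket can then be referenced as e.g. `aws_s3_bucket.logs` rather than `aws_s3_bucket.all[0]`.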
4. by the way, it is also easy to generate command-line arguments such as "-var-file", because a shell script can test whether a file named like "terraform.PROD.tfvars" actually exists.
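For example (the PROD tfvars file is created here just so the demo has something to find):

```shell
#!/bin/sh
# Only pass -var-file when the per-environment file exists.
ENV="PROD"
touch "terraform.${ENV}.tfvars"   # demo only: pretend the file is there

VAR_ARGS=""
if [ -f "terraform.${ENV}.tfvars" ]; then
  VAR_ARGS="-var-file=terraform.${ENV}.tfvars"
fi
echo "terraform plan ${VAR_ARGS}"   # echoed here instead of executed
```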
5. all the sh scripts have to be checked to avoid bash-only syntax ("bashisms"), since we run them in a Docker environment where /bin/sh may not be bash. The tool I used under Cygwin is checkbashisms.
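A sketch of such a check, assuming the checkbashisms tool (shipped in Debian's devscripts package) is on PATH:

```shell
#!/bin/sh
# Scan every .sh script in the current directory for bash-only syntax.
STATUS=0
if command -v checkbashisms >/dev/null 2>&1; then
  for f in *.sh; do
    [ -e "$f" ] || continue          # no .sh files at all
    checkbashisms "$f" || STATUS=1   # nonzero exit means a bashism was found
  done
else
  echo "checkbashisms not installed; skipping check"
fi
```

This fits naturally into a pre-commit hook or CI step so bashisms are caught before they reach the Docker environment.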
6. all the sh scripts, when added to the Git repository, must have the "executable" bit set; it is not set by default when adding files from Windows.
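The bit can be set directly in Git's index with `git update-index --chmod=+x`, which works even when the Windows filesystem does not carry it. A sketch, demonstrated in a throwaway repo with a made-up script name:

```shell
#!/bin/sh
# Mark a script executable in Git's index (works regardless of the OS).
DIR=$(mktemp -d)
cd "$DIR" || exit 1
git init -q .

echo '#!/bin/sh' > deploy.sh        # hypothetical script
git add deploy.sh
git update-index --chmod=+x deploy.sh

# The mode column of ls-files --stage shows 100755 when the bit is set.
MODE=$(git ls-files --stage deploy.sh | cut -d' ' -f1)
echo "$MODE"
```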