So, one of the lectures discusses the scale of models and how to scale things up/down in Blender. But doesn’t this heavily depend on what objects will be in the scene? I mean, if you have a 400 m long oil tanker, it’s a completely different deal whether it’s a distant shot of the tanker passing under a 3 km bridge, in which case you’d want a kilometer/hectometer-scale scene, or a bunch of people standing on its deck, in which case you need more of a meter-scale scene. And since these scales don’t convert… you can make additional models, sure, but at that point you’re still making very large/small models.
What exactly is the problem with large models, anyway? Is it just the camera clipping issue? I mean, very large coordinates could cause floating-point imprecision, I suppose, but unless you’re building a scale model of the universe… Would it be practical to model objects at their own scale and then use Object Scale to bring them to the desired magnitude?
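For reference, as far as I know Blender stores mesh vertex coordinates as single-precision (32-bit) floats, so you can roughly estimate how bad the imprecision actually gets at a given distance from the origin. Here’s a small Python sketch (plain NumPy, nothing Blender-specific) that prints the smallest representable coordinate step at a few magnitudes:

```python
import numpy as np

# Gap between adjacent representable float32 values near a given coordinate.
# This is roughly the best positional precision a vertex can have at that
# distance from the origin, assuming single-precision storage.
def float32_spacing(coord_m: float) -> float:
    return float(np.spacing(np.float32(coord_m)))

for coord in [1.0, 400.0, 3_000.0, 1_000_000.0]:
    print(f"at {coord:>12,.0f} m from origin: step ≈ {float32_spacing(coord):.1e} m")
```

At 3 km the step is on the order of a quarter of a millimeter, so a tanker-under-a-bridge scene should still be fine; it only really starts to hurt once coordinates reach hundreds of kilometers. And if I understand the Object Scale idea correctly, scaling the object only changes its transform while the local vertex coordinates stay small, so precision-wise that approach seems sound; the usual caveats would be things like physics, modifiers, and the camera clip range expecting “real” meter-sized units.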