I find it difficult to see how BIC could find "tiny" effects that AIC would not also find (and more). If the sample size is greater than 7 (which it essentially always is - who would use a "non-informative" criterion like AIC or BIC with 7 or fewer data points?), then BIC will always favor a simpler model compared to AIC. BIC can never find a "tiny" effect which AIC does not also find, if the search is done over the same set of models. That is, if you calculate AIC and BIC for a fixed set of models, AIC can never prefer a model of lower dimensionality than BIC unless the sample size is below 8, simply because the dimension penalty (2 per parameter for AIC, log(n) per parameter for BIC, and log(n) > 2 once n ≥ 8) is the only difference in how they rank models.
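A small sketch makes the point concrete. The log-likelihoods below are made-up numbers purely for illustration; the only thing that matters is that both criteria see the same fixed candidate set, so BIC's larger per-parameter penalty can only push the choice toward a simpler (or the same) model:

```python
import numpy as np

# Hypothetical maximized log-likelihoods for a fixed set of nested
# candidate models (illustrative numbers, not from real data).
n = 100                                         # sample size
ks = np.array([1, 2, 3, 4])                     # parameters per model
logL = np.array([-60.0, -55.0, -53.5, -53.2])   # fitted log-likelihoods

aic = 2 * ks - 2 * logL          # penalty: 2 per parameter
bic = ks * np.log(n) - 2 * logL  # penalty: log(n) per parameter

# For n >= 8, log(n) > 2, so BIC's chosen model can never have MORE
# parameters than AIC's over the same candidate set.
print("AIC picks k =", ks[np.argmin(aic)])  # the 3-parameter model
print("BIC picks k =", ks[np.argmin(bic)])  # the simpler 2-parameter model
```

Here BIC trims the "tiny" fourth-digit improvement in fit that AIC still accepts; it can never go the other way.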
However, if you use AIC in a "stepwise", "forward", or any other path-dependent model selection routine, then your final model may differ from that of a BIC "stepwise" approach. I think this says more about stepwise and other path-dependent model selection routines than it does about AIC or BIC. If you do a "stepwise" AIC, then getting BIC for the same sequence of models searched requires only a simple adjustment of the AIC values (and similarly for a BIC stepwise run).
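That adjustment is just re-penalizing: since AIC = 2k - 2logL and BIC = k·log(n) - 2logL for a model with k parameters, BIC = AIC + k(log n - 2). A sketch, with a hypothetical stepwise-AIC trace (the parameter counts and AIC values are invented for illustration):

```python
import numpy as np

# Suppose a stepwise-AIC run recorded, at each step, the current
# model's parameter count and AIC value (hypothetical numbers):
n = 100
ks = np.array([2, 3, 4, 5])                  # parameters at each step
aic = np.array([240.0, 236.5, 235.9, 237.1])  # AIC at each step

# BIC for the very same models is a re-penalized AIC:
#   BIC = AIC + k * (log(n) - 2)
bic = aic + ks * (np.log(n) - 2.0)
print(bic.round(2))
```

Note this only converts the criterion values for the models the AIC path actually visited; a stepwise routine driven by BIC from the start could have walked a different path entirely, which is the real source of the discrepancy.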
One other thing to note is that neither BIC nor AIC is particularly good when you have prior information about parameters within a model - such as knowing that variable X has a large effect, variable Y has a small effect, variable Z has a tiny effect, and so on. Both methods effectively assume that you know nothing about the effect sizes and require them to be estimated entirely from the data (if they did account for such information, there would be some place where you could put it in, and there isn't).
Additionally, both are approximate tools, so they will necessarily break down in particular problems. For BIC this happens when you run into "identifiability" problems, when MLEs of parameters lie on or close to boundaries of the parameter space, or when the likelihood is multi-modal (all of which can quite easily happen in structural equation modeling because of the many latent variables, which perhaps explains why BIC can perform poorly there). These conditions make the Laplace integral approximation perform poorly, and so BIC will also perform poorly.
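To see why those are exactly the failure conditions, recall the standard derivation: BIC comes from a Laplace approximation to the marginal likelihood of a model $M$ with $k$ free parameters,

$$
\log p(y \mid M) \;=\; \log \int p(y \mid \theta, M)\, p(\theta \mid M)\, d\theta
\;\approx\; \log p(y \mid \hat{\theta}, M) \;-\; \frac{k}{2}\log n \;+\; O(1),
$$

where the approximation expands the log-integrand to second order around a single interior maximum $\hat{\theta}$ with a well-conditioned Hessian. A boundary MLE, non-identified parameters (a singular Hessian), or multiple modes each violate precisely those assumptions, so the $-\tfrac{k}{2}\log n$ penalty no longer approximates the true integral.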