Yes, Brian Thomas is today using a similar study, which (apparently) shows that the “Universe’s Matter Is Too Clumpy”. Amusingly, the study’s primary author’s name is Shaun Thomas, which is going to make this rather difficult… Brian Thomas says:
[Shaun] Thomas and his colleagues used data from the Sloan Digital Sky Survey, which represents an unprecedented “zoom out” view of the universe, to analyze the 3-D distribution of hundreds of thousands of galaxies. Seen from such a great distance, and assuming a naturalistic origin, matter should appear to be twice as smooth (i.e., evenly distributed) as it actually is. However, the matter is “clumpier than astronomers expected.”
About that “twice as smooth”… In the book Bad Science, Ben Goldacre touches on how people abuse statistics by saying that something has ‘doubled’ when it has only increased by a small absolute amount (though it has still technically doubled). In this case, the Wired article itself says:
The clumpiness of the universe is expected to vary by about 1 percent from one spot to another on these length scales. The new analysis saw a universe that varies by nearly double that amount. It’s still basically smooth, but much clumpier than current cosmological models predict.
In other words, we thought it was 1% but in reality it’s more like 2%. Please explain why this is important, [B.] Thomas…
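To make the relative-versus-absolute distinction concrete, here's a throwaway sketch using the rounded figures from the Wired quote (1% expected, roughly 2% observed) – the point being that a scary-sounding "doubled" can hide a tiny absolute change:

```python
# Illustrative numbers only, rounded from the Wired quote -- not values
# taken from the paper itself.
expected = 0.01   # predicted clumpiness variation (~1%)
observed = 0.02   # measured variation (~2%, "nearly double")

ratio = observed / expected          # the headline "doubled" figure
absolute_gap = observed - expected   # the actual difference

print(f"relative: {ratio:.1f}x")                 # 2.0x -- sounds dramatic
print(f"absolute: {absolute_gap:.0%} point gap")  # a single percentage point
```

Both numbers describe the same measurement; only the framing differs.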
He doesn’t, instead going on to criticise the hastily-drawn-together ideas about what it could all mean:
In an effort to model the level of “clumpiness” that had already been known up to this point, researchers invoked unobserved entities called “dark matter” and “dark energy.” Supposedly, the gravity of some invisible matter or the force of some undetectable energy could have attracted the matter that makes up galaxies into their current clumped arrangements, leaving gigantic voids where no galaxies reside.
But invoking the distribution of dark matter or energy to solve the problem of the distribution of real matter raises even more questions than before. What is dark matter and where did it come from? And what process distributed the dark matter into clumps and voids so that real matter would follow its lead?
[R]ather than consider the possibility that galaxies have been put in place on purpose, which best fits the data, these scientists instead questioned the fundamental laws of physics, such as general relativity. What would cause gravity to behave differently over very large distances, and where would that cause have originated?
I bet he’d say the same thing if we were still using Newtonian gravity… (And I take it that the ICR doesn’t share Conservapedia’s opinion on relativity.) At any rate, physicists will speculate about new physics at the drop of a hat – there’s often a more mundane explanation.
As for the “put in place on purpose” thing, I have been saying for a while that Creationism “explains” everything and predicts nothing. It’s only a change from one percent to two, after all – aren’t you jumping the gun a bit?…
I won’t comment further on the ICR article except to say that [B.] Thomas really needs to stop conflating “randomness” (or the lack thereof) with “clumpiness”…
What can we make of the study itself?
As I can’t see the paper itself, here’s the end of a review of it:
If the inhomogeneity is confirmed, the implications for the standard model would be severe, and might entail a reconsideration of the nature of dark matter and dark energy, or even of the applicability of general relativity on cosmological scales. But given the resounding success of the standard model to date, we should be hesitant before discarding it, and instead consider alternative explanations.
The measurements of galaxy clustering are challenging because the signal—a percent-level fluctuation—is small, so systematic effects must be extremely well controlled. In particular, light from distant galaxies passes through our own Milky Way galaxy on the way to the telescope. Dust clouds in the outer parts of the Milky Way may absorb or scatter the light of distant galaxies, making some of them appear too dim to measure. These effects are known, but may not be perfectly calibrated. Faint stars in the outskirts of the Milky Way have two subtle effects on the data: they can be mistaken for distant galaxies, or their presence can hide the existence of distant galaxies that are nearby in the sky. If any of these effects varies across the sky, the variation could be mistaken for a variation in intrinsic distribution of background galaxies. Although Thomas et al. examined some of these issues, a recent preprint claims that this “masking” effect of stars has significantly contaminated their results.
Future large-scale surveys will produce an avalanche of data. These surveys will allow the methods employed by Thomas et al. and others to be extended to still larger scales. Of course, the challenge for these future surveys will be to correct for the systematic effects to even greater accuracy.
First bold: This is one failure of the standard model. It has succeeded before – it could still succeed again.
Second bold: The conclusions of the study itself may be wrong, rendering all this speculation irrelevant. The preprint does look convincing…
Third bold: More data will be out soon. Then we can determine if this is a real effect.
Brian Thomas has a wonderful tendency to underplay the weaknesses of studies that agree with him, and to overplay those of studies that disagree. So long as he does, there’ll still be something for people like me to blog about…