The fine-tuning argument relies on the idea that all these different constants line up so perfectly to make the universe viable. But how can we draw that kind of probabilistic conclusion when we don't know how other possible combinations of these constants might have worked? For example, maybe even if the universal gravitational constant were too low, things could still work out if the expansion rate of the universe were also lowered appropriately. And since we don't know how many other viable combinations are out there, we can't say that our current universe is particularly amazing or improbable.
This was a great question. There is a more sophisticated way of thinking through this.
It's true that there are combinations of constants that WOULD work. But you have to recognize that once you bring in 2 constants (like the expansion rate and gravity), you've actually enlarged the "possibility space" to 2 dimensions.
When we were talking about a single constant, it was just 1 dimension, so you could get a range like 0.001%: a very short line (the solution space) on a much longer line (the possibility space). But with 2 constants, the possibility space becomes 2-dimensional (imagine an x-y graph, but one that doesn't go on forever). So while it is correct to say that there are combinations of those 2 constants that WORK, that "solution" is actually a thin LINE through the 2-dimensional space. And that line is finite and discontinuous, because at certain values of the expansion rate it won't matter what gravity is; it just won't work. For example, if the expansion rate is 0, then obviously no value of the gravitational constant would help. So while it is TRUE that there are other combinations of constants that could work, the probability now has to be calculated over a 2-dimensional space rather than a single line. The calculation becomes more complicated, but the outcome does NOT suddenly become more likely, because the total number of possible combinations has also increased. (You can call that the domain space.)
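If it helps, here's a tiny Monte Carlo sketch of that idea. The "viability" rule below is completely made up (NOT real physics): it just assumes, for illustration, that a universe works only when expansion and gravity sit on a thin line, and only above some minimum gravity, so the solution line is finite and discontinuous as described above.

```python
import random

# Toy rule (hypothetical, not real physics): "viable" means
# expansion_rate is within 0.001 of 0.5 * gravity, and gravity
# is above a minimum (below it, nothing works regardless).
def viable(gravity, expansion):
    if gravity < 0.1:
        return False
    return abs(expansion - 0.5 * gravity) < 0.001

random.seed(0)
trials = 1_000_000
hits = sum(
    viable(random.uniform(0, 1), random.uniform(0, 1))
    for _ in range(trials)
)
fraction = hits / trials
print(fraction)  # a tiny fraction: the solution "line" has almost no area
```

Even though infinitely many (gravity, expansion) pairs lie on that line, the line occupies a vanishingly small share of the 2-dimensional domain, so a random draw almost never lands on it.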
That's like seeing 2 pictures:
Picture #1: There's a LINE 1 mile long, with a small 2-inch red string somewhere on that line.
Picture #2: There's a PLANE 1 x 2 miles, with 2 red strings on it, each about 100 feet long (MUCH longer than in the first scenario).
Now you're asked: in which picture is the probability of hitting a red string higher? The point isn't to answer that question; the point is that any increase in the solution space has to be weighed against the increase in the larger domain space.
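Just to put rough numbers on the two pictures: a string by itself has no area, so assume (hypothetically) each red string is 1 inch wide so that "hitting" it means something. Then:

```python
MILE_IN = 63360  # inches in a mile

# Picture #1: a 2-inch red string on a 1-mile line
p1 = 2 / MILE_IN

# Picture #2: a 1 x 2 mile plane with two 100-foot strings,
# each assumed (hypothetically) to be 1 inch wide
string_area = 2 * (100 * 12) * 1      # two strings, 1200 in x 1 in
plane_area = MILE_IN * (2 * MILE_IN)
p2 = string_area / plane_area

print(p1)  # about 3.2e-05
print(p2)  # about 3.0e-07 -- far smaller, despite much longer strings
```

So even though the strings in Picture #2 are hundreds of times longer, the plane grew far faster than the strings did, and the hit probability actually drops by two orders of magnitude.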
So what that means is: as we increase the number of constants, we are increasing the dimensions. With 3 constants, we get a 3-dimensional space, and you could imagine the solution space as small 3-dimensional bubbles here and there within that space.
Go up to 50 constants or so, and you get a 50-dimensional space, with different combination solutions scattered within it (which we cannot picture with our 3-dimensional minds).
Yes, you will be able to find multiple combinations that work together to form a viable universe, but compared to the size of that 50-dimensional space, we're still talking about a very, very small solution space, so the probability argument still holds. One might even say that adding constants makes the outcome even MORE unlikely, because the number of possible combinations grows exponentially with every additional constant, at a rate higher than the growth in the number of combinations that could work.
Hope that's understandable.