Does it really?
Are you aware of the unverifiable assumptions necessary to come to the conclusion that you just stated?
the assumption of the initial state of the measured sample (the initial amounts of parent and daughter elements present),
the assumption of a constant decay rate (shown recently to be not constant),
the assumption of no contamination and no leaching of elements from other sources.
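For reference, all three of those assumptions feed into the standard parent/daughter age formula. Here is a rough sketch of that calculation (a generic illustration, not any lab's actual procedure; the half-life is the commonly cited U-238 value, and the system is idealized as closed with zero initial daughter):

```python
import math

# Sketch of the textbook radiometric age calculation, assuming a simple
# parent -> daughter system, no initial daughter, and a closed system.
HALF_LIFE_U238_YEARS = 4.468e9
decay_const = math.log(2) / HALF_LIFE_U238_YEARS  # lambda, in 1/years

def age_from_ratio(daughter_per_parent):
    """Age implied by a measured daughter/parent atom ratio,
    under the assumptions listed above."""
    return math.log(1 + daughter_per_parent) / decay_const

# A sample with 1 daughter atom per parent atom implies an age of
# exactly one half-life:
print(round(age_from_ratio(1.0) / 1e9, 3))  # 4.468 (billion years)
```

If any of the three assumptions fails (wrong initial daughter amount, non-constant rate, open system), the computed age shifts accordingly, which is what the list above is pointing at.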
Different isotope decay measurements on the same sample can give wildly varying "ages" - do you know how they determine which one is right? By the assumed age of the fossils "near" the sample. Circular reasoning.
Are you aware that "new" rocks from Mt St Helens dated to "millions of years" when they were only a few months old? (The "clock" is assumed to start ticking when the rock cools from molten state, on new and "old" rocks.)
Of course, even though this method doesn't work on rocks of known age, we have to assume that it works on rocks of unknown age.
They've been able to show some small variations in the rate of decay, but they're many orders of magnitude smaller than what would be required to fit 4.5 billion years' worth of apparent decay into just 6,000 years. The thermodynamics of compressing that much radioactive decay into a timespan of a few thousand years would mean the Earth would still be millions of years away from having cooled down enough to even have a crust.
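A quick back-of-the-envelope version of that heat argument (order-of-magnitude only; the ~20 TW radiogenic heat figure is a commonly cited present-day estimate, and simple linear scaling is assumed):

```python
# Back-of-the-envelope heat argument: compressing 4.5 billion years of
# decay into 6,000 years means decay ran ~750,000x faster, and the
# radiogenic heat output scales up by the same factor.
CONVENTIONAL_AGE_YEARS = 4.5e9
YOUNG_EARTH_YEARS = 6_000

speedup = CONVENTIONAL_AGE_YEARS / YOUNG_EARTH_YEARS
print(f"required decay speed-up: {speedup:,.0f}x")  # 750,000x

RADIOGENIC_HEAT_W = 2e13  # ~20 TW, commonly cited present-day estimate
accelerated_heat_w = RADIOGENIC_HEAT_W * speedup
print(f"implied heat output: {accelerated_heat_w:.1e} W")  # 1.5e+19 W
```

That implied output is far beyond anything the crust could radiate away, which is the point about the Earth still being molten.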
Trying to attribute it to "leaching" doesn't explain how samples taken from deep within solid rock formations show the same ratios as samples taken near the surface of the formation.
If the Earth is only 6,000 years old, it should be relatively easy to produce a sample of uranium ore consistent with only having undergone 6,000 years of decay. I haven't heard of anyone finding one. Have you?
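To put a number on what such a sample would look like: assuming the commonly cited U-238 half-life of about 4.468 billion years, 6,000 years of decay converts under one part per million of the parent, so the ore would be essentially undecayed:

```python
import math

# Fraction of U-238 that decays in 6,000 years, assuming the commonly
# cited half-life. A genuinely 6,000-year-old ore body would show
# roughly this (tiny) daughter fraction.
HALF_LIFE_U238_YEARS = 4.468e9
decay_const = math.log(2) / HALF_LIFE_U238_YEARS

fraction_decayed = 1 - math.exp(-decay_const * 6_000)
print(f"{fraction_decayed:.1e}")  # 9.3e-07, i.e. under 1 part per million
```

No known uranium ore comes anywhere close to that ratio, which is the challenge being posed.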