Test-driven analysis of social sweepstakes casino bonuses
ClaimlyLab
ClaimlyLab is a testing lab. We claim bonuses, play them through, attempt redemption, and publish the resulting honored rate alongside the published terms, not in place of them. The marketing claim is what the operator promises; the honored rate is what was actually earnable in our hands. Each page surfaces both and shows the gap. The methodology is public, and every datapoint traces back to an entry in the testing log.
What you get on this site
Honored rate, measured.
The published bonus value is a marketing input. The honored rate — the redeemable value clearable through the documented mechanics — is the output. We publish the input, the output, and the delta.
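The input/output/delta relationship above can be sketched as a small data model. This is an illustrative sketch, not the site's actual code; the field names (`marketed_value`, `honored_value`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BonusDatapoint:
    """One tested bonus: the marketed input vs. the measured output."""
    marketed_value: float  # dollar value the operator advertises (the input)
    honored_value: float   # redeemable value actually cleared in testing (the output)

    @property
    def delta(self) -> float:
        """Gap between what was promised and what was earnable."""
        return self.marketed_value - self.honored_value

    @property
    def honored_rate(self) -> float:
        """Fraction of the marketed value that was actually earnable."""
        if self.marketed_value == 0:
            return 0.0
        return self.honored_value / self.marketed_value

# Example: a $100 marketed bonus of which $62.50 cleared to redemption.
offer = BonusDatapoint(marketed_value=100.0, honored_value=62.5)
print(offer.honored_rate)  # 0.625
print(offer.delta)         # 37.5
```

Publishing all three numbers, rather than just the rate, keeps the marketing input auditable next to the measured output.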
Methodology before result.
Every honored-rate datapoint links to the methodology section that produced it, so the result is auditable rather than asserted. If the methodology is wrong, the result is wrong, and the methodology page is where the disagreement should land.
Per-operator, per-bonus-class.
Bonuses are not interchangeable. We test welcome offers, no-deposit claims, reload promotions, and tier-keyed offers as separate classes, and we publish the honored rate per class rather than rolling them into a single summary number.
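Per-class reporting amounts to grouping testing-log rows by (operator, bonus class) and never pooling across classes. A minimal sketch, with hypothetical log rows and class names:

```python
from collections import defaultdict

# Hypothetical testing-log rows: (operator, bonus_class, honored_rate).
LOG = [
    ("OperatorA", "welcome",    0.62),
    ("OperatorA", "welcome",    0.58),
    ("OperatorA", "no_deposit", 0.41),
    ("OperatorA", "reload",     0.70),
]

def per_class_rates(log):
    """Mean honored rate per (operator, class); no single rolled-up number."""
    buckets = defaultdict(list)
    for operator, bonus_class, rate in log:
        buckets[(operator, bonus_class)].append(rate)
    return {key: sum(rates) / len(rates) for key, rates in buckets.items()}

rates = per_class_rates(LOG)
print(rates[("OperatorA", "welcome")])     # ~0.60
print(rates[("OperatorA", "no_deposit")])  # 0.41
```

The point of the structure is that a strong welcome offer can't mask a weak no-deposit class in a blended average.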
Coverage
What we test in the lab
End-to-end bonus testing — claim, play through, attempt redemption, publish the honored rate.
Welcome-bonus tests
Standard welcome offers played through to the documented playthrough requirement.
No-deposit tests
Claim → eligibility → playthrough → redemption sequence.
Reload-bonus tests
Promotional-calendar reload offers tested at snapshot date.
Tier-keyed tests
Loyalty-tier-conditional offers run from inside the tier.
Honored-rate measurement
Marketed value, documented constraints, and actual clearance.
Methodology-linked results
Every datapoint cites the methodology section that produced it.
How we score
A 10-axis weighted rubric, published before the verdict.
Every operator review grades against the same ten axes. The axes carry fixed weights. The weights are public. Every datapoint links to a row in the testing log. If a claim isn't sourced, it doesn't ship.
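A fixed-weight rubric of this shape is just a weighted sum over ten graded axes. The axis names and weights below are invented for illustration; the real rubric publishes its own.

```python
# Hypothetical axes and weights (must sum to 1.0); the published rubric defines the real ones.
WEIGHTS = {
    "honored_rate":       0.20,
    "terms_clarity":      0.15,
    "redemption_speed":   0.12,
    "eligibility_rules":  0.10,
    "playthrough_burden": 0.10,
    "support_quality":    0.08,
    "payment_options":    0.08,
    "promo_consistency":  0.07,
    "tier_transparency":  0.05,
    "account_friction":   0.05,
}

def rubric_score(axis_grades: dict[str, float]) -> float:
    """Weighted sum of per-axis grades (each 0-10) under fixed, public weights."""
    if set(axis_grades) != set(WEIGHTS):
        raise ValueError("every axis must be graded, and nothing else")
    return sum(WEIGHTS[axis] * grade for axis, grade in axis_grades.items())

# A review that grades 7.0 on every axis scores 7.0 overall.
print(round(rubric_score({axis: 7.0 for axis in WEIGHTS}), 2))
```

Because the weights are fixed before grading, two reviewers with the same axis grades always produce the same verdict.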
Editorial standards
What you can hold us to.
First-hand testing protocol
Bonuses are claimed and played through, and redemption is attempted, before any honored-rate datapoint ships.
10-axis weighted rubric
Every operator profile is graded against ten published axes — each with a fixed, public weight.
Public methodology
The framework, weights, and refresh cadence are published before any verdict ships.
Independent editorial
Operator promotion does not reorder reviews. Comparison rankings are computed mechanically from sourced data.
Reviews
Operator reviews launch with our public methodology.
We don't ship operator profiles before the rubric, the cadence rules, and the testing log are public. The slots below land first — then the reviews.