Where are the testers from?
We tap into various panel services via an API, pre-vetting each participant and conducting in-house quality control after each test to ensure results you can trust.
How are you calculating the statistical probability of the results?
Each test result is modeled as a binomial distribution: every tester either likes or dislikes a content block, and we estimate the underlying like/dislike proportion for the wider population from the panel's feedback. Based on our testing, the threshold for dependable results is a panel of 20 testers.
If we know with high likelihood that the test block is either really good or really bad, we display the score accordingly. If the result falls in the "mediocre" middle range, where it is not statistically distinguishable from average, we display it as such.
If a text block returns an average score, then as a marketer you don't need to know exactly "how average" it was. It's enough to know that the block was not good enough and therefore needs improvement.
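The classification described above can be sketched with a standard binomial confidence interval. This is an illustrative assumption, not our actual implementation: the Wilson score interval, the `classify_block` helper, and the 0.4/0.6 cutoffs are all hypothetical choices made for the example.

```python
import math

def wilson_interval(likes: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for the true like-proportion."""
    p = likes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def classify_block(likes: int, n: int) -> str:
    # Hypothetical cutoffs, for illustration only.
    low, high = wilson_interval(likes, n)
    if low > 0.6:         # confidently above average
        return "good"
    if high < 0.4:        # confidently below average
        return "bad"
    return "mediocre"     # interval overlaps the middle range

print(classify_block(18, 20))  # → good
print(classify_block(10, 20))  # → mediocre
```

With only 20 testers the interval is wide, which is why a split verdict like 10/20 lands in the "mediocre" bucket rather than being reported as a precise score.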