Conversation

davharris

I made a few tweaks to the explanation for prop.test. These include:

  • Rounding several estimated proportions to 3 significant digits
  • Attempting to make the explanations for confidence intervals and especially p-values more concrete

It's possible that I mangled the results for a corner case or two, so I think this should be checked carefully before merging: it's entirely possible that I failed to test one or more combinations of arguments to prop.test, and that the new explanation now fails sometimes.

Here's what the output looks like for a two-sample test that the proportions are equal.

prop.test(c(6, 7), c(12, 12)) %>% explain()

This was a two-sample proportion test of the null hypothesis that the true population proportions are equal. Using a significance level of 0.05, we do not reject the null hypothesis, and cannot conclude that the two population proportions are different from one another. The observed difference in proportions is 0.0833. The observed proportion for the first group is 0.5 (6 events out of a total sample size of 12). For the second group, the observed proportion is 0.583 (7 events out of a total sample size of 12).

The confidence interval for the true difference in population proportions is (-0.564, 0.397). Intervals generated with this procedure will contain the true difference in population proportions 95 times out of 100.
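For reference, the interval above can be reproduced by hand. The sketch below (in Python rather than R, purely for illustration) assumes prop.test's default behavior for a 95% interval: a Wald standard error for the difference in proportions plus a continuity correction capped at 0.5.

```python
from math import sqrt
from statistics import NormalDist

# Observed counts from the example above: 6/12 events vs. 7/12 events.
x1, n1 = 6, 12
x2, n2 = 7, 12

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2  # observed difference in proportions

# Wald standard error of the difference in two sample proportions.
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# 97.5% normal quantile (~1.96) for a two-sided 95% interval.
z = NormalDist().inv_cdf(0.975)

# Continuity correction, capped at 0.5 (assumed to match prop.test's default).
cc = min(0.5, (1 / n1 + 1 / n2) / 2)

lo, hi = diff - (z * se + cc), diff + (z * se + cc)
print(round(lo, 3), round(hi, 3))  # → -0.564 0.397
```

Rounded to 3 digits this matches the (-0.564, 0.397) quoted above.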

The p-value for this test is 1. In other words: if the true difference in population proportions were exactly 0, and we collected 100 replicate data sets, we would find a discrepancy this large (or larger) in about 100 of these 100 cases.
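The p-value of exactly 1 looks suspicious but is correct for these counts: with Yates' continuity correction (prop.test's default), every |observed − expected| cell deviation in the 2×2 table is 0.5, so the corrected chi-square statistic is 0. A stdlib-only Python sketch of that calculation, for illustration:

```python
from math import sqrt
from statistics import NormalDist

# 2x2 table of events / non-events for the two groups in the example above.
table = [[6, 12 - 6], [7, 12 - 7]]

row = [sum(r) for r in table]        # row totals: [12, 12]
col = [sum(c) for c in zip(*table)]  # column totals: [13, 11]
n = sum(row)                         # grand total: 24

# Pearson chi-square statistic with Yates' continuity correction.
stat = 0.0
for i in range(2):
    for j in range(2):
        expected = row[i] * col[j] / n
        stat += max(abs(table[i][j] - expected) - 0.5, 0) ** 2 / expected

# With 1 degree of freedom, P(chi2 > x) = 2 * P(Z > sqrt(x)).
p_value = 2 * NormalDist().cdf(-sqrt(stat))
print(stat, p_value)  # → 0.0 1.0
```

Every deviation is exactly 0.5 here, the correction zeroes the statistic, and the upper tail of a chi-square distribution at 0 is 1.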
