
[Feature Request][RM ANOVA]: Add Siegel-Castellan post hoc test to Friedman #3426

Open
2 of 3 tasks
PerPalmgren opened this issue May 4, 2025 · 13 comments

@PerPalmgren

JASP Version

0.19.3

Commit ID

No response

JASP Module

ANOVA

What analysis are you seeing the problem on?

Friedman's test (non-parametric ANOVA)

What OS are you seeing the problem on?

Windows 11

Bug Description

When I run a Friedman's test (non-parametric ANOVA) in JASP, the post hoc analysis gives strange p-values (both unadjusted and corrected) compared to other statistical software such as SPSS (and DATAtab). See the screenshots below. Is there a bug in JASP?

[Screenshots: post hoc results in JASP and SPSS/DATAtab]

Expected Behaviour

Give p-values in the post hoc analysis similar to those from SPSS and DATAtab.

Steps to Reproduce

Here is the output from DATAtab (similar results to SPSS), which differs from JASP:

[Screenshot: DATAtab post hoc output]

Log (if any)

No response

More Debug Information

No response

Final Checklist

  • I have included a screenshot showcasing the issue, if possible.
  • I have included a JASP file (zipped) or data file that causes the crash/bug, if applicable.
  • I have accurately described the bug, and steps to reproduce it.
@JohnnyDoorn

Thanks for your report @PerPalmgren - can you share the data you used here? I verified against the PMCMRplus package and the results in Field's SPSS book, and there the results seem to be in order, so I'm curious to see where the discrepancy could lie.

@tomtomme added the "Waiting for requester" label and removed the "Bug" and "OS: Windows 11" labels May 6, 2025
@tomtomme changed the title from "[Bug]: Friedman's test gives strange p-values in post-hoc analysis" to "[Task]: Friedman's test gives strange p-values in post-hoc analysis" May 6, 2025
@PerPalmgren
Author

Hi @JohnnyDoorn. I used the dataset from Mark Goss-Sampson's manual “Statistical Analysis with JASP: A Guide for Students” and I attach it here. It might be the dataset, but it would be great if we could verify this. Some students pointed this out when we were doing group work using three different software programs (JASP, SPSS, and DATAtab).

Friedman RM-ANOVA 1.csv

@github-actions bot removed the "Waiting for requester" label May 6, 2025
@JohnnyDoorn

Thanks @PerPalmgren. I've looked into it some more, and it seems there are two ways of conducting the Conover test:

  • the method used by SPSS/DATAtab is based on a very simple formula for the standard error of the test statistic,
    SE <- sqrt(k * (k + 1) / (6 * n)), with k groups and n participants (a rough sketch of this approach is given below);
  • the method used by JASP, which in turn uses the PMCMRplus package, is based on a more involved computation of the standard error that uses the observed within-block ranks (see page 16 of this paper by Conover) - the R code for the JASP calculation starts around here.
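
For illustration, a rough sketch of that simpler approach (comparing mean ranks with the fixed standard error above, using a normal approximation and a Bonferroni correction) could look like the following. It is only meant to show the idea, not to reproduce SPSS/DATAtab exactly; the column indices are assumptions based on the attached file:

# Sketch of the "simple SE" pairwise comparisons after a Friedman test.
# Column indices (2-4) are assumptions based on the attached file.
dat <- read.csv("~/Downloads/Friedman.RM-ANOVA.1.csv", sep = ";")
x <- as.matrix(dat[, 2:4])                  # n participants x k conditions
n <- nrow(x); k <- ncol(x)

r        <- t(apply(x, 1, rank))            # rank within each participant (block)
meanRank <- unname(colMeans(r))             # mean rank per condition
se       <- sqrt(k * (k + 1) / (6 * n))     # the simple standard error from above

cmb <- combn(k, 2)                          # all pairs of conditions
z   <- abs(meanRank[cmb[1, ]] - meanRank[cmb[2, ]]) / se
p   <- 2 * pnorm(z, lower.tail = FALSE)     # two-sided normal approximation
data.frame(comparison   = paste(colnames(x)[cmb[1, ]], "-", colnames(x)[cmb[2, ]]),
           z            = z,
           p.unadjusted = p,
           p.bonferroni = pmin(1, p * ncol(cmb)))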

Since I am not an expert here, I will reach out to the author of the PMCMRplus package. Maybe it would have been easier if it had "just" been a JASP bug :p

@PerPalmgren
Author

PerPalmgren commented May 6, 2025

@JohnnyDoorn
OK, I understand. But plotting the data (see image), I think the results from the post hoc analysis in SPSS and DATAtab make more sense.
All the best
Per the JASP lover 😎

[Plot of the data]

@PerPalmgren
Author

@JohnnyDoorn
Was it different in JASP before? Because the results in Goss-Sampson's JASP handbook also seem more reasonable.
Per the JASP lover

@PerPalmgren
Author

@JohnnyDoorn it seems like SPSS and DATAtab are not using Conover but rather post hoc comparisons using Wilcoxon signed-rank tests between all pairs of conditions, with Bonferroni correction. Maybe it could be an option for JASP to offer this as an alternative!
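
For reference, a minimal sketch of that approach in R (pairwise Wilcoxon signed-rank tests with a Bonferroni correction) might look like this, assuming the same three-column data layout as the attached file:

# Sketch: Wilcoxon signed-rank tests between all pairs of conditions, Bonferroni-corrected.
# Column indices are assumptions; rows are assumed to be in the same participant order.
dat  <- read.csv("~/Downloads/Friedman.RM-ANOVA.1.csv", sep = ";")
long <- data.frame(score     = unname(unlist(dat[, 2:4])),
                   condition = factor(rep(names(dat)[2:4], each = nrow(dat))))
pairwise.wilcox.test(long$score, long$condition,
                     paired = TRUE, p.adjust.method = "bonferroni")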

@JohnnyDoorn

JohnnyDoorn commented May 6, 2025

There was a previous issue where I did not implement the code from PMCMRplus correctly (see here, towards the end), which could explain the different result in the book.

I find the results interesting, and I think a raincloud plot is perhaps a better illustration of the data, because there you can see the individual data points, with the lines indicating increases/decreases:

[Raincloud plot of the data]

Here you can see what is also confirmed by the rank-biserial correlations from the Conover test: for the 18-36 and 36-48 differences, all values are lower/higher in one condition (rrb = 1), and for the 18-48 difference most are higher in one condition. Perhaps the lower p-value in the Conover test stems from the added power of considering all three conditions during the pairwise comparisons?
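
For anyone who wants to check the rrb = 1 observation by hand, here is a minimal sketch of the matched-pairs rank-biserial correlation for a single comparison (this is one common definition of the effect size and may not match JASP's exact computation; the column indices are assumptions):

# Sketch: matched-pairs rank-biserial correlation for one pairwise comparison.
dat <- read.csv("~/Downloads/Friedman.RM-ANOVA.1.csv", sep = ";")
d <- dat[, 3] - dat[, 2]                 # difference scores for one pair of conditions
d <- d[d != 0]                           # drop ties (zero differences)
r <- rank(abs(d))
(sum(r[d > 0]) - sum(r[d < 0])) / sum(r) # 1 (or -1) means every change goes the same way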

@tomtomme added the "Waiting for requester" label May 6, 2025
@PerPalmgren
Author

@JohnnyDoorn I need to check this with some other people. I do not feel 100% comfortable with the very low p-values and large effect sizes between 18 and 48.
Per the JASP lover

@github-actions bot removed the "Waiting for requester" label May 7, 2025
@JohnnyDoorn

@PerPalmgren Please do - I'm curious to find out more and I feel this is one of those topics where it gets a bit murky to browse online.

@JohnnyDoorn

@PerPalmgren I've checked out the package some more, and it seems that SPSS and Datatab are using the Siegel-Castellan follow-up test. Here is the code to first reproduce the JASP results (with the Conover test), and then the SPSS/Datatab results (with the Siegel test):

# Load the attached data (semicolon-separated); columns 2-4 hold the three conditions
dat <- read.csv("~/Downloads/Friedman.RM-ANOVA.1.csv", sep = ";")
y  <- unlist(dat[, 2:4])                   # stack the three measurement columns
gr <- factor(rep(letters[1:3], each = 15)) # condition labels
id <- factor(rep(1:15, 3))                 # participant IDs (blocks)

# JASP results (Conover test)
PMCMRplus::frdAllPairsConoverTest(y, gr, id, p.adjust.method = "holm")
PMCMRplus::frdAllPairsConoverTest(y, gr, id, p.adjust.method = "bonferroni")
# SPSS/Datatab results (Siegel-Castellan test)
PMCMRplus::frdAllPairsSiegelTest(y, gr, id, p.adjust.method = "none")
PMCMRplus::frdAllPairsSiegelTest(y, gr, id, p.adjust.method = "bonferroni")

Now, when to use which is not a question I can confidently answer, unfortunately... For such small data sets it can be so tricky because just a single pair of observations can have a big impact on the SE/t/p.

@PerPalmgren
Author

@JohnnyDoorn Thank you for looking into this and clarifying. I am not so much into coding, but I think for the non-parametric option in the RM-ANOVA module in JASP it would be great to have both the Conover test (as you say, probably higher power) and a more traditional option such as the Siegel-Castellan approach, just to harmonise with other statistical software.

@JohnnyDoorn

Hi @PerPalmgren,

The PMCMRplus creator was very responsive and informative, and also pointed me to this article with some simulation studies about Friedman post hoc tests; the Conover test indeed seems to be the most liberal. Based on this, I agree that it would be nice to add the Siegel-Castellan post hoc test.

In response to the different result in the book, he also wrote

Please note that prior to CHANGES IN PMCMRPLUS VERSION 1.9.10 (2023-12-10) the function frdAllPairsConoverTest had a bug which lead to wrong p-values. (Maybe your handbook was compiled at these days.)

Cheers
Johnny

@JohnnyDoorn changed the title from "[Task]: Friedman's test gives strange p-values in post-hoc analysis" to "[Feature Request][RM ANOVA]: Add Siegel-Castellan post hoc test to Friedman" May 8, 2025
@PerPalmgren
Author

@JohnnyDoorn @tomtomme
👍👍👍
For consistency with many other software packages, I believe it would be highly beneficial to include the Siegel-Castellan post hoc test as an option alongside Conover when performing a non-parametric Friedman's test in JASP.
Per the JASP lover🤩
