This exercise suite requires the mirt, psychotree, difR and mixRasch packages to be loaded.
Load the Exercise_07.Rdata file and inspect the defined objects. The object named datex7 contains responses to a 20-item, dichotomously scored test. The group object distinguishes two groups: individuals who had been trained in general test-taking skills (with_training) and individuals without any training (without_training). Explore the dataset and determine whether a unidimensional 2PL model is adequate (best done with mirt()).
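This first step could be sketched as follows (a sketch only, assuming Exercise_07.Rdata sits in the working directory and defines datex7 and group):

```r
library(mirt)

load("Exercise_07.Rdata")     # defines datex7 and group
str(datex7)                   # 20 dichotomous items
table(group)                  # with_training / without_training

# Unidimensional 2PL model
mod2pl <- mirt(datex7, model = 1, itemtype = "2PL")
M2(mod2pl)        # limited-information overall fit statistic
itemfit(mod2pl)   # item-level fit statistics
```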
Fit a completely independent multiple-group 2PL model to the data with multipleGroup() from mirt, as well as the completely equal 2PL model (i.e., ignoring group membership by constraining all parameters to be equal across groups; see the invariance argument), and view the estimated coefficients. Plot the test-score and test-information functions (type = 'score' and type = 'info') for the completely independent model, and plot the IRFs of the first three items with itemplot().
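A sketch of the two multiple-group fits and the plots (the model names mod_ind and mod_eq are illustrative):

```r
library(mirt)

# Completely independent: all item parameters estimated per group
mod_ind <- multipleGroup(datex7, model = 1, group = group)

# Completely equal: slopes and intercepts constrained across groups
mod_eq <- multipleGroup(datex7, model = 1, group = group,
                        invariance = c("slopes", "intercepts"))

coef(mod_ind, simplify = TRUE)
coef(mod_eq, simplify = TRUE)

plot(mod_ind, type = "score")   # test-score functions per group
plot(mod_ind, type = "info")    # test-information functions per group
itemplot(mod_ind, item = 1)     # IRF of item 1; repeat for items 2 and 3
```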
The items appear to behave differently for each group in the completely independent 2PL model. This may be due to a population difference between the groups, to items showing DIF, or both. First, adjust for group differences by fitting a less constrained version of the 'completely equal' model, in which the mean and variance of the second group are freely estimated but the intercepts and slopes are kept equal across groups (hint: see the invariance argument in ?multipleGroup). Use likelihood ratio tests to determine whether this groupdiff model fits better than the equal model and the completely independent model.
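The groupdiff model and the likelihood ratio tests might look like this (assuming the equal and independent models were fit earlier as mod_eq and mod_ind; names are illustrative):

```r
library(mirt)

# groupdiff: item parameters equal across groups, but the latent mean
# and variance of the second group are freely estimated
mod_gd <- multipleGroup(datex7, model = 1, group = group,
                        invariance = c("slopes", "intercepts",
                                       "free_means", "free_var"))

anova(mod_eq, mod_gd)    # equal vs. groupdiff (nested)
anova(mod_gd, mod_ind)   # groupdiff vs. completely independent
```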
You should have found that completely independent > groupdiff > equal (> means better fit). The difference between the independent and groupdiff models indicates that some items may not be functioning the same within each group (i.e., they contain DIF). In a 2PL model, such items may differ in their slopes, their intercepts, or both. Use the dichoDif() function from difR to look for uniform DIF between the two groups with method = c("Lord", "Raju"), i.e., the two IRT methods Lord and Raju (it is best to use a p-value multiple-comparison adjustment via p.adjust.method, either the Holm ("holm") or the Benjamini-Hochberg ("BH") procedure). Inspect the result. Which items show DIF?
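A sketch of the difR call (the focal.name value assumes the trained group is treated as the focal group):

```r
library(difR)

dif_res <- dichoDif(Data = datex7, group = group,
                    focal.name = "with_training",
                    method = c("Lord", "Raju"),
                    model = "2PL",
                    p.adjust.method = "holm")   # or "BH"
dif_res   # shows which items each method flags as DIF
```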
Corroborate the results with the DIF() function from mirt (this is similar to difR::difLRT()). Locate DIF items by testing all parameters (i.e., which.par = c("a1", "d")) in each item one at a time (scheme = "drop"). For the items flagged as DIF by all methods, plot the IRFs (use the items2test argument and set return_models = TRUE; you can then plot them with itemplot()). Try to determine whether DIF occurs in only a subset of the parameters (e.g., Item 3 may show DIF only in the slope parameter, while the intercepts are equivalent).
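With mirt's drop scheme, one starts from a model in which the tested parameters vary across groups and constrains them one item at a time; a sketch (mod_ind as fit earlier, and the flagged item numbers purely illustrative):

```r
library(mirt)

# Test slope (a1) and intercept (d) jointly, one item at a time
dif_all <- DIF(mod_ind, which.par = c("a1", "d"), scheme = "drop")
dif_all

# Refit only the flagged items and keep the models for plotting
dif_mods <- DIF(mod_ind, which.par = c("a1", "d"), scheme = "drop",
                items2test = c(3, 7),        # illustrative item numbers
                return_models = TRUE)
itemplot(dif_mods[[1]], item = 3)

# Test slope and intercept separately, e.g. for item 3
DIF(mod_ind, which.par = "a1", scheme = "drop", items2test = 3)
DIF(mod_ind, which.par = "d",  scheme = "drop", items2test = 3)
```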
As we have seen, searching for DIF can be quite tedious even with a single grouping variable, and even more so with several categorical or metric covariates. What if we have many different groups and covariates? We can fit a tree for greedy, automatic DIF detection with psychotree. Load the familiness data set. It contains a number of additional variables (it would be prudent to first explore the association of the items with these additional variables). Fit a Rating Scale Model to the OMC items and plot the IRFs (you can use rsmodel() so that the plot methods match those of the tree later). Now fit a Rating Scale Tree with rstree(), where all covariates are used as possible splitting variables. Check the help page of rstree() to see how to set up the model (the left-hand side of the formula must be the matrix of item answers). Inspect and plot the tree. Which items show DIF, and for which variables?
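A sketch, assuming familiness is a data frame whose OMC item-response matrix is stored as omc, and with illustrative covariate names (age, gender, education):

```r
library(psychotools)   # rsmodel()
library(psychotree)    # rstree()

# Rating Scale Model for the OMC items
rsm_fit <- rsmodel(familiness$omc)
plot(rsm_fit)   # plot method matching the tree's node plots

# Rating Scale Tree: item matrix on the left-hand side of the formula,
# candidate splitting covariates on the right-hand side
rst <- rstree(omc ~ age + gender + education, data = familiness)
plot(rst)
rst
```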
Now fit a mixture Rating Scale model to the OMC scale of the FIFS. For this we use the mixRasch package and its function mixRasch(). Try two, three, four, and six classes (n.c argument). Note that you need to specify the number of thresholds to be estimated (the steps argument); since the categories run from 1 to 6, there are at most 5 thresholds. Which model fits best?
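A sketch with mixRasch (omc is again assumed to hold the OMC response matrix; inspect the returned fit component for information criteria):

```r
library(mixRasch)

# Categories 1-6, hence at most 5 thresholds (steps = 5);
# fit 2-, 3-, 4-, and 6-class mixture Rating Scale models
fits <- lapply(c(2, 3, 4, 6), function(k)
  mixRasch(omc, steps = 5, model = "RSM", n.c = k))

# Compare the fit information (e.g., AIC/BIC) across class solutions
lapply(fits, function(f) f$info.fit)
```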
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.