Questionnaire on Well-Being (QWB)

Psychometric analysis

Author
Affiliation

Magnus Johansson

Published

2024-10-01

1 Reproducing CFA

The Questionnaire on Well-Being (QWB) consists of 18 items, each scored on a scale of 0 to 4 (Hlynsson et al. 2024), yielding total scores from 0 to 72. The data come from the same paper; we’ll use the second-study dataset, which was used for the CFA reported in the paper.

Code and data were retrieved from the paper’s OSF page. It is really great to see these materials made available; sharing them is such an important step towards improving the standards of science!

Code
# Read in study two data -------------------------------------------------------
dd <- read_excel("data/study_two.xlsx")

onefactor <- 'f1 =~ swb1 + swb2 + swb3 + swb4 + swb5 + swb6 + swb7 + swb8 +
                    swb9 + swb10 + swb11 + swb12 + swb13 + swb14 + swb15 + 
                    swb16 + swb17 + swb18'

# Fit the model to the data
cfamodel <- sem(model = onefactor, data = dd, estimator = "WLSMV") 
Warning: lavaan->lav_options_est_dwls():  
   estimator "DWLS" is not recommended for continuous data. Did you forget to 
   set the ordered= argument?

This warning message is important! For WLSMV to work properly, one also needs to specify ordered = TRUE.
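For reference, ordered = TRUE declares all endogenous items as ordinal; lavaan’s ordered argument also accepts a character vector naming just the ordinal items. A minimal sketch, assuming the dd data and onefactor model from the chunks above:

```r
# Declare the ordinal items explicitly by name instead of ordered = TRUE
# (assumes `dd` and `onefactor` as defined above)
cfamodel_ord <- sem(model = onefactor, data = dd,
                    estimator = "WLSMV",
                    ordered = paste0("swb", 1:18))
```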

Let’s see if we can reproduce the fit metrics reported in the paper (p.15), using the output from the misspecified function call above.

Code
cfamodel %>% summary(standardized = TRUE, ci = FALSE, fit.measures = TRUE)
lavaan 0.6-18 ended normally after 33 iterations

  Estimator                                       DWLS
  Optimization method                           NLMINB
  Number of model parameters                        36

                                                  Used       Total
  Number of observations                          1561        1795

Model Test User Model:
                                              Standard      Scaled
  Test Statistic                               603.028    1576.509
  Degrees of freedom                               135         135
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  0.391
  Shift parameter                                           33.384
    simple second-order correction                                

Model Test Baseline Model:

  Test statistic                             40316.305    8701.238
  Degrees of freedom                               153         153
  P-value                                        0.000       0.000
  Scaling correction factor                                  4.698

User Model versus Baseline Model:

  Comparative Fit Index (CFI)                    0.988       0.831
  Tucker-Lewis Index (TLI)                       0.987       0.809
                                                                  
  Robust Comparative Fit Index (CFI)                         0.986
  Robust Tucker-Lewis Index (TLI)                            0.984

Root Mean Square Error of Approximation:

  RMSEA                                          0.047       0.083
  90 Percent confidence interval - lower         0.043       0.079
  90 Percent confidence interval - upper         0.051       0.086
  P-value H_0: RMSEA <= 0.050                    0.887       0.000
  P-value H_0: RMSEA >= 0.080                    0.000       0.892
                                                                  
  Robust RMSEA                                               0.052
  90 Percent confidence interval - lower                     0.049
  90 Percent confidence interval - upper                     0.054
  P-value H_0: Robust RMSEA <= 0.050                         0.106
  P-value H_0: Robust RMSEA >= 0.080                         0.000

Standardized Root Mean Square Residual:

  SRMR                                           0.053       0.053

Parameter Estimates:

  Standard errors                           Robust.sem
  Information                                 Expected
  Information saturated (h1) model        Unstructured

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  f1 =~                                                                 
    swb1              1.000                               0.639    0.695
    swb2              0.861    0.035   24.575    0.000    0.550    0.544
    swb3              0.550    0.036   15.121    0.000    0.352    0.413
    swb4              0.876    0.034   25.809    0.000    0.560    0.647
    swb5              0.929    0.039   23.849    0.000    0.594    0.635
    swb6              0.967    0.040   24.054    0.000    0.618    0.656
    swb7              1.064    0.036   29.625    0.000    0.680    0.765
    swb8              1.142    0.034   33.522    0.000    0.730    0.808
    swb9              1.124    0.036   30.957    0.000    0.718    0.780
    swb10             1.182    0.040   29.564    0.000    0.755    0.751
    swb11             1.202    0.045   26.933    0.000    0.768    0.730
    swb12             0.872    0.037   23.385    0.000    0.557    0.654
    swb13             0.707    0.039   17.969    0.000    0.452    0.493
    swb14             0.980    0.035   27.907    0.000    0.626    0.666
    swb15             1.202    0.043   28.182    0.000    0.768    0.743
    swb16             1.104    0.036   30.255    0.000    0.706    0.715
    swb17             1.097    0.040   27.296    0.000    0.701    0.728
    swb18             0.981    0.045   21.770    0.000    0.627    0.587

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .swb1              0.436    0.018   24.187    0.000    0.436    0.517
   .swb2              0.721    0.025   28.874    0.000    0.721    0.704
   .swb3              0.601    0.024   24.670    0.000    0.601    0.829
   .swb4              0.435    0.017   25.780    0.000    0.435    0.581
   .swb5              0.523    0.019   27.276    0.000    0.523    0.597
   .swb6              0.506    0.019   26.097    0.000    0.506    0.570
   .swb7              0.327    0.014   22.718    0.000    0.327    0.414
   .swb8              0.283    0.012   23.155    0.000    0.283    0.347
   .swb9              0.332    0.013   25.432    0.000    0.332    0.392
   .swb10             0.441    0.018   24.399    0.000    0.441    0.436
   .swb11             0.518    0.022   23.263    0.000    0.518    0.468
   .swb12             0.415    0.017   24.855    0.000    0.415    0.572
   .swb13             0.634    0.022   28.570    0.000    0.634    0.757
   .swb14             0.493    0.019   26.580    0.000    0.493    0.557
   .swb15             0.480    0.020   24.323    0.000    0.480    0.449
   .swb16             0.476    0.019   25.534    0.000    0.476    0.489
   .swb17             0.436    0.019   22.826    0.000    0.436    0.470
   .swb18             0.749    0.028   26.616    0.000    0.749    0.656
    f1                0.408    0.026   15.931    0.000    1.000    1.000

The “Standard” column in the output matches what was reported in the paper (see quote below) for χ2, RMSEA, and CFI. Good to see that the analysis is reproducible.

A single-factor solution for the Confirmatory factor analysis for the Questionnaire on Well-Being. QWB resulted in a good fit for the data: χ2(135) = 603.03, p < 0.001, CFI = 0.988, SRMR = 0.053, RMSEA = 0.047 [90% CI: 0.043, 0.051]. Thus, our single-factor model for the QWB exhibits all of our predetermined criteria for a good model fit.

Let’s re-run the CFA with ordered = TRUE added, so that the WLSMV estimator (which was correctly described in the paper) works as intended.

Code
cfamodel2 <- sem(model = onefactor, data = dd, estimator = "WLSMV", ordered = TRUE)
cfamodel2 %>% summary(standardized = TRUE, ci = FALSE, fit.measures = TRUE)
lavaan 0.6-18 ended normally after 34 iterations

  Estimator                                       DWLS
  Optimization method                           NLMINB
  Number of model parameters                        90

                                                  Used       Total
  Number of observations                          1561        1795

Model Test User Model:
                                              Standard      Scaled
  Test Statistic                              1832.971    2940.978
  Degrees of freedom                               135         135
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  0.630
  Shift parameter                                           32.413
    simple second-order correction                                

Model Test Baseline Model:

  Test statistic                            131413.959   40457.340
  Degrees of freedom                               153         153
  P-value                                        0.000       0.000
  Scaling correction factor                                  3.257

User Model versus Baseline Model:

  Comparative Fit Index (CFI)                    0.987       0.930
  Tucker-Lewis Index (TLI)                       0.985       0.921
                                                                  
  Robust Comparative Fit Index (CFI)                         0.831
  Robust Tucker-Lewis Index (TLI)                            0.808

Root Mean Square Error of Approximation:

  RMSEA                                          0.090       0.115
  90 Percent confidence interval - lower         0.086       0.112
  90 Percent confidence interval - upper         0.093       0.119
  P-value H_0: RMSEA <= 0.050                    0.000       0.000
  P-value H_0: RMSEA >= 0.080                    1.000       1.000
                                                                  
  Robust RMSEA                                               0.126
  90 Percent confidence interval - lower                     0.122
  90 Percent confidence interval - upper                     0.130
  P-value H_0: Robust RMSEA <= 0.050                         0.000
  P-value H_0: Robust RMSEA >= 0.080                         1.000

Standardized Root Mean Square Residual:

  SRMR                                           0.060       0.060

Parameter Estimates:

  Parameterization                               Delta
  Standard errors                           Robust.sem
  Information                                 Expected
  Information saturated (h1) model        Unstructured

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  f1 =~                                                                 
    swb1              1.000                               0.740    0.740
    swb2              0.788    0.022   35.977    0.000    0.583    0.583
    swb3              0.614    0.029   21.080    0.000    0.454    0.454
    swb4              0.946    0.021   44.763    0.000    0.700    0.700
    swb5              0.944    0.022   43.672    0.000    0.699    0.699
    swb6              0.940    0.023   41.605    0.000    0.695    0.695
    swb7              1.123    0.020   56.041    0.000    0.831    0.831
    swb8              1.187    0.019   62.155    0.000    0.878    0.878
    swb9              1.112    0.019   59.591    0.000    0.823    0.823
    swb10             1.061    0.020   54.326    0.000    0.785    0.785
    swb11             1.076    0.021   50.727    0.000    0.796    0.796
    swb12             0.954    0.022   42.924    0.000    0.706    0.706
    swb13             0.713    0.026   27.920    0.000    0.527    0.527
    swb14             0.946    0.020   46.765    0.000    0.700    0.700
    swb15             1.064    0.020   52.990    0.000    0.787    0.787
    swb16             1.009    0.019   54.088    0.000    0.747    0.747
    swb17             1.066    0.020   52.085    0.000    0.789    0.789
    swb18             0.840    0.024   35.203    0.000    0.622    0.622

Thresholds:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    swb1|t1          -2.252    0.088  -25.646    0.000   -2.252   -2.252
    swb1|t2          -1.076    0.039  -27.312    0.000   -1.076   -1.076
    swb1|t3          -0.102    0.032   -3.213    0.001   -0.102   -0.102
    swb1|t4           1.117    0.040   27.866    0.000    1.117    1.117
    swb2|t1          -1.949    0.067  -29.073    0.000   -1.949   -1.949
    swb2|t2          -0.847    0.036  -23.368    0.000   -0.847   -0.847
    swb2|t3          -0.020    0.032   -0.633    0.527   -0.020   -0.020
    swb2|t4           1.082    0.039   27.392    0.000    1.082    1.082
    swb3|t1          -2.613    0.129  -20.271    0.000   -2.613   -2.613
    swb3|t2          -1.627    0.053  -30.772    0.000   -1.627   -1.627
    swb3|t3          -0.886    0.037  -24.149    0.000   -0.886   -0.886
    swb3|t4           0.316    0.032    9.776    0.000    0.316    0.316
    swb4|t1          -2.341    0.096  -24.399    0.000   -2.341   -2.341
    swb4|t2          -1.189    0.041  -28.723    0.000   -1.189   -1.189
    swb4|t3          -0.075    0.032   -2.353    0.019   -0.075   -0.075
    swb4|t4           1.212    0.042   28.968    0.000    1.212    1.212
    swb5|t1          -2.044    0.073  -28.162    0.000   -2.044   -2.044
    swb5|t2          -0.908    0.037  -24.557    0.000   -0.908   -0.908
    swb5|t3           0.191    0.032    5.993    0.000    0.191    0.191
    swb5|t4           1.257    0.043   29.397    0.000    1.257    1.257
    swb6|t1          -2.232    0.086  -25.911    0.000   -2.232   -2.232
    swb6|t2          -1.151    0.041  -28.285    0.000   -1.151   -1.151
    swb6|t3          -0.133    0.032   -4.174    0.000   -0.133   -0.133
    swb6|t4           0.948    0.037   25.272    0.000    0.948    0.948
    swb7|t1          -2.317    0.094  -24.744    0.000   -2.317   -2.317
    swb7|t2          -1.312    0.044  -29.847    0.000   -1.312   -1.312
    swb7|t3          -0.208    0.032   -6.498    0.000   -0.208   -0.208
    swb7|t4           0.978    0.038   25.798    0.000    0.978    0.978
    swb8|t1          -2.294    0.092  -25.065    0.000   -2.294   -2.294
    swb8|t2          -1.105    0.040  -27.710    0.000   -1.105   -1.105
    swb8|t3          -0.038    0.032   -1.189    0.234   -0.038   -0.038
    swb8|t4           1.145    0.041   28.210    0.000    1.145    1.145
    swb9|t1          -1.852    0.062  -29.836    0.000   -1.852   -1.852
    swb9|t2          -0.602    0.034  -17.755    0.000   -0.602   -0.602
    swb9|t3           0.534    0.033   15.978    0.000    0.534    0.534
    swb9|t4           1.542    0.050   30.794    0.000    1.542    1.542
    swb10|t1         -1.664    0.054  -30.703    0.000   -1.664   -1.664
    swb10|t2         -0.645    0.034  -18.832    0.000   -0.645   -0.645
    swb10|t3          0.311    0.032    9.625    0.000    0.311    0.311
    swb10|t4          1.375    0.045   30.257    0.000    1.375    1.375
    swb11|t1         -2.018    0.071  -28.422    0.000   -2.018   -2.018
    swb11|t2         -1.099    0.040  -27.631    0.000   -1.099   -1.099
    swb11|t3         -0.279    0.032   -8.668    0.000   -0.279   -0.279
    swb11|t4          0.598    0.034   17.657    0.000    0.598    0.598
    swb12|t1         -2.423    0.104  -23.194    0.000   -2.423   -2.423
    swb12|t2         -1.367    0.045  -30.210    0.000   -1.367   -1.367
    swb12|t3         -0.092    0.032   -2.909    0.004   -0.092   -0.092
    swb12|t4          1.082    0.039   27.392    0.000    1.082    1.082
    swb13|t1         -2.070    0.074  -27.876    0.000   -2.070   -2.070
    swb13|t2         -1.236    0.042  -29.203    0.000   -1.236   -1.236
    swb13|t3         -0.183    0.032   -5.741    0.000   -0.183   -0.183
    swb13|t4          1.029    0.039   26.611    0.000    1.029    1.029
    swb14|t1         -1.994    0.070  -28.659    0.000   -1.994   -1.994
    swb14|t2         -0.915    0.037  -24.692    0.000   -0.915   -0.915
    swb14|t3          0.105    0.032    3.314    0.001    0.105    0.105
    swb14|t4          1.275    0.043   29.553    0.000    1.275    1.275
    swb15|t1         -1.718    0.056  -30.542    0.000   -1.718   -1.718
    swb15|t2         -0.842    0.036  -23.275    0.000   -0.842   -0.842
    swb15|t3          0.112    0.032    3.516    0.000    0.112    0.112
    swb15|t4          1.099    0.040   27.631    0.000    1.099    1.099
    swb16|t1         -1.834    0.061  -29.952    0.000   -1.834   -1.834
    swb16|t2         -0.901    0.037  -24.422    0.000   -0.901   -0.901
    swb16|t3          0.075    0.032    2.353    0.019    0.075    0.075
    swb16|t4          1.179    0.041   28.616    0.000    1.179    1.179
    swb17|t1         -2.098    0.076  -27.561    0.000   -2.098   -2.098
    swb17|t2         -1.260    0.043  -29.429    0.000   -1.260   -1.260
    swb17|t3         -0.323    0.032   -9.977    0.000   -0.323   -0.323
    swb17|t4          0.756    0.035   21.440    0.000    0.756    0.756
    swb18|t1         -1.658    0.054  -30.718    0.000   -1.658   -1.658
    swb18|t2         -0.828    0.036  -22.996    0.000   -0.828   -0.828
    swb18|t3          0.010    0.032    0.329    0.742    0.010    0.010
    swb18|t4          1.034    0.039   26.695    0.000    1.034    1.034

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .swb1              0.453                               0.453    0.453
   .swb2              0.660                               0.660    0.660
   .swb3              0.794                               0.794    0.794
   .swb4              0.510                               0.510    0.510
   .swb5              0.512                               0.512    0.512
   .swb6              0.517                               0.517    0.517
   .swb7              0.310                               0.310    0.310
   .swb8              0.229                               0.229    0.229
   .swb9              0.323                               0.323    0.323
   .swb10             0.384                               0.384    0.384
   .swb11             0.366                               0.366    0.366
   .swb12             0.502                               0.502    0.502
   .swb13             0.722                               0.722    0.722
   .swb14             0.511                               0.511    0.511
   .swb15             0.380                               0.380    0.380
   .swb16             0.442                               0.442    0.442
   .swb17             0.378                               0.378    0.378
   .swb18             0.614                               0.614    0.614
    f1                0.547    0.018   29.873    0.000    1.000    1.000

Before looking closer at the results and making comparisons to the published/reported metrics, we need to address the issue of reporting the correct, scaled model fit metrics.

The often-used Hu & Bentler (1999) cutoff values (also applied in the paper) are based on simulations with continuous data and ML estimation. As such, they are not appropriate for ordinal data analyzed with the WLSMV estimator (McNeish 2023; Savalei 2018). The R package dynamic can produce appropriate cutoff values for model fit indices. We’ll get into that after reviewing the scaled fit metrics and modification indices.

1.1 Scaled fit metrics

For WLSMV, the .scaled metrics should be reported.

Code
fit_metrics_scaled <- c("chisq.scaled", "df", "pvalue.scaled", 
                        "cfi.scaled", "tli.scaled", "rmsea.scaled", 
                        "rmsea.ci.lower.scaled","rmsea.ci.upper.scaled",
                        "srmr")

fitmeasures(cfamodel2, fit_metrics_scaled) %>% 
  rbind() %>% 
  as.data.frame() %>% 
  mutate(across(where(is.numeric),~ round(.x, 3))) %>%
  rename(Chi2.scaled = chisq.scaled,
         p.scaled = pvalue.scaled,
         CFI.scaled = cfi.scaled,
         TLI.scaled = tli.scaled,
         RMSEA.scaled = rmsea.scaled,
         CI_low.scaled = rmsea.ci.lower.scaled,
         CI_high.scaled = rmsea.ci.upper.scaled,
         SRMR = srmr) %>% 
  knitr::kable()
Chi2.scaled df p.scaled CFI.scaled TLI.scaled RMSEA.scaled CI_low.scaled CI_high.scaled SRMR
2940.978 135 0 0.93 0.921 0.115 0.112 0.119 0.06

Again, these were the metrics reported in the paper:

A single-factor solution for the Confirmatory factor analysis for the Questionnaire on Well-Being. QWB resulted in a good fit for the data: χ2(135) = 603.03, p < 0.001, CFI = 0.988, SRMR = 0.053, RMSEA = 0.047 [90% CI: 0.043, 0.051]. Thus, our single-factor model for the QWB exhibits all of our predetermined criteria for a good model fit.

The differences between the model fit metrics in the table above and those found in the paper are partially due to the missing ordered = TRUE option, but also stem from reporting the wrong metrics for the WLSMV estimator.

The correct model fit metrics indicate problems, no matter which cutoffs one would use, especially regarding RMSEA. Let us review the modification indices.

1.2 Modification indices

We’ll filter the list and only present modification indices (χ2) greater than 30.

Code
modificationIndices(cfamodel2,
                    standardized = T) %>% 
  as.data.frame(row.names = NULL) %>% 
  filter(mi > 30) %>% 
  arrange(desc(mi)) %>% 
  mutate(across(where(is.numeric),~ round(.x, 3))) %>%
  knitr::kable()
lhs op rhs mi epc sepc.lv sepc.all sepc.nox
swb4 ~~ swb5 213.550 -0.242 -0.242 -0.474 -0.474
swb5 ~~ swb12 155.477 -0.211 -0.211 -0.416 -0.416
swb7 ~~ swb8 132.329 -0.143 -0.143 -0.536 -0.536
swb11 ~~ swb17 102.493 -0.144 -0.144 -0.388 -0.388
swb1 ~~ swb2 78.974 -0.172 -0.172 -0.315 -0.315
swb1 ~~ swb16 68.421 -0.138 -0.138 -0.309 -0.309
swb2 ~~ swb16 68.162 -0.161 -0.161 -0.298 -0.298
swb5 ~~ swb6 63.609 -0.146 -0.146 -0.284 -0.284
swb15 ~~ swb17 62.165 -0.117 -0.117 -0.308 -0.308
swb12 ~~ swb13 58.963 -0.162 -0.162 -0.269 -0.269
swb13 ~~ swb14 48.027 -0.148 -0.148 -0.244 -0.244
swb9 ~~ swb18 42.649 -0.120 -0.120 -0.269 -0.269
swb4 ~~ swb12 39.141 -0.118 -0.118 -0.233 -0.233
swb2 ~~ swb17 34.139 0.137 0.137 0.275 0.275
swb8 ~~ swb12 33.173 0.123 0.123 0.363 0.363
swb2 ~~ swb3 31.034 -0.132 -0.132 -0.182 -0.182

There are many very large modification index (χ2) values, due to residual correlations.
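As a diagnostic (not a recommended fix, since post-hoc freed residual correlations capitalize on chance), one could free the largest flagged residual correlation and refit, to see how much that pair alone affects fit. A sketch assuming the dd data and onefactor syntax from above:

```r
# Free the largest residual correlation flagged above (swb4 ~~ swb5)
# and refit; diagnostic only, not a modeling recommendation
onefactor_resid <- paste(onefactor, "swb4 ~~ swb5", sep = "\n")
cfamodel3 <- sem(model = onefactor_resid, data = dd,
                 estimator = "WLSMV", ordered = TRUE)
fitmeasures(cfamodel3, c("cfi.scaled", "rmsea.scaled", "srmr"))
```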

1.3 Dynamic cutoff values

In order to establish useful cutoff values for the WLSMV estimator with ordinal data, we need to run simulations relevant to the current set of items and response data (McNeish 2023). This has been implemented in the development version of dynamic.

Code
library(dynamic) # devtools::install_github("melissagwolf/dynamic") for development version
Beta version. Please report bugs: https://github.com/melissagwolf/dynamic/issues.
Code
dyncut <- catOne(cfamodel2, reps = 500)
Code
dyncut
Your DFI cutoffs: 
            SRMR  RMSEA CFI  
Level-0     0.015 0.011 0.999
Specificity 95%   95%   95%  
                             
Level-1     0.026 0.04  0.991
Sensitivity 95%   95%   95%  
                             
Level-2     0.031 0.056 0.983
Sensitivity 95%   95%   95%  
                             
Level-3     0.034 0.066 0.977
Sensitivity 95%   95%   95%  

Empirical fit indices: 
 Chi-Square  df p-value   SRMR   RMSEA    CFI
   2940.978 135       0   0.06   0.115   0.93

 Notes:
  -'Sensitivity' is % of hypothetically misspecified models correctly identified by cutoff in DFI simulation
  -Cutoffs with 95% sensitivity are reported when possible
  -If sensitivity is <50%, cutoffs will be suppressed

Explanations on Levels 0-3 from the dynamic package vignette:

When there are 6 or more items, cfaOne will consider three levels of misspecification. As in catHB, the Level-0 row corresponds to the anticipated fit index values if the fitted model were the exact underlying population model. The Level-1 row corresponds to the anticipated fit index values if the fitted model omitted 0.30 residual correlations between approximately 1/3 of item pairs. The Level-2 row corresponds to the anticipated fit index values if the fitted model omitted 0.30 residual correlations between approximately 2/3 of item pairs. The Level-3 row corresponds to the anticipated fit index values if the fitted model omitted 0.30 residual correlations between all item pairs.

As we can see, the observed/empirical fit metrics from the data do not come close even to the Level-3 simulation-based cutoff values.

2 Summary comments

The 18 items do not fit a unidimensional model, due to issues with residual correlations and potential multidimensionality.

3 Exploratory factor analysis

Let’s look at the data using EFA. A lot of the code for this analysis was borrowed from https://solomonkurz.netlify.app/blog/2021-05-11-yes-you-can-fit-an-exploratory-factor-analysis-with-lavaan/

Code
f1 <- 'efa("efa")*f1 =~ swb1 + swb2 + swb3 + swb4 + swb5 + swb6 + swb7 + swb8 +
                    swb9 + swb10 + swb11 + swb12 + swb13 + swb14 + swb15 + 
                    swb16 + swb17 + swb18'

# 2-factor model
f2 <- 'efa("efa")*f1 + 
       efa("efa")*f2 =~ swb1 + swb2 + swb3 + swb4 + swb5 + swb6 + swb7 + swb8 +
                    swb9 + swb10 + swb11 + swb12 + swb13 + swb14 + swb15 + 
                    swb16 + swb17 + swb18'

# 3-factor
f3 <- '
efa("efa")*f1 +
efa("efa")*f2 +
efa("efa")*f3 =~ swb1 + swb2 + swb3 + swb4 + swb5 + swb6 + swb7 + swb8 +
                    swb9 + swb10 + swb11 + swb12 + swb13 + swb14 + swb15 + 
                    swb16 + swb17 + swb18'

# 4-factor
f4 <- '
efa("efa")*f1 +
efa("efa")*f2 +
efa("efa")*f3 +
efa("efa")*f4 =~ swb1 + swb2 + swb3 + swb4 + swb5 + swb6 + swb7 + swb8 +
                    swb9 + swb10 + swb11 + swb12 + swb13 + swb14 + swb15 + 
                    swb16 + swb17 + swb18'

# 5-factor
f5 <- '
efa("efa")*f1 +
efa("efa")*f2 +
efa("efa")*f3 +
efa("efa")*f4 + 
efa("efa")*f5 =~ swb1 + swb2 + swb3 + swb4 + swb5 + swb6 + swb7 + swb8 +
                    swb9 + swb10 + swb11 + swb12 + swb13 + swb14 + swb15 + 
                    swb16 + swb17 + swb18'

efa_f1 <- 
  cfa(model = f1,
      data = dd,
      rotation = "oblimin",
      estimator = "WLSMV",
      ordered = TRUE)
efa_f2 <- 
  cfa(model = f2,
      data = dd,
      rotation = "oblimin",
      estimator = "WLSMV",
      ordered = TRUE)
efa_f3 <- 
  cfa(model = f3,
      data = dd,
      rotation = "oblimin",
      estimator = "WLSMV",
      ordered = TRUE)
efa_f4 <- 
  cfa(model = f4,
      data = dd,
      rotation = "oblimin",
      estimator = "WLSMV",
      ordered = TRUE)
efa_f5 <- 
  cfa(model = f5,
      data = dd,
      rotation = "oblimin",
      estimator = "WLSMV",
      ordered = TRUE)

3.1 Model fit table

Code
rbind(
  fitmeasures(efa_f1, fit_metrics_scaled),
  fitmeasures(efa_f2, fit_metrics_scaled),
  fitmeasures(efa_f3, fit_metrics_scaled),
  fitmeasures(efa_f4, fit_metrics_scaled),
  fitmeasures(efa_f5, fit_metrics_scaled)
  ) %>% 
  as.data.frame() %>% 
  mutate(across(where(is.numeric),~ round(.x, 3))) %>%
  rename(Chi2.scaled = chisq.scaled,
         p.scaled = pvalue.scaled,
         CFI.scaled = cfi.scaled,
         TLI.scaled = tli.scaled,
         RMSEA.scaled = rmsea.scaled,
         CI_low.scaled = rmsea.ci.lower.scaled,
         CI_high.scaled = rmsea.ci.upper.scaled,
         SRMR = srmr) %>% 
  add_column(Model = paste0(1:5,"-factor"), .before = "Chi2.scaled") %>% 
  knitr::kable()
Model Chi2.scaled df p.scaled CFI.scaled TLI.scaled RMSEA.scaled CI_low.scaled CI_high.scaled SRMR
1-factor 2940.978 135 0 0.930 0.921 0.115 0.112 0.119 0.060
2-factor 1984.981 118 0 0.954 0.940 0.101 0.097 0.105 0.047
3-factor 1255.923 102 0 0.971 0.957 0.085 0.081 0.089 0.033
4-factor 858.626 87 0 0.981 0.966 0.075 0.071 0.080 0.026
5-factor 441.334 73 0 0.991 0.981 0.057 0.052 0.062 0.018
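A complementary way to probe dimensionality would be parallel analysis on polychoric correlations; a sketch assuming the psych package is installed and dd is available from above:

```r
library(psych)
# Parallel analysis of the 18 items using polychoric correlations
# (`cor = "poly"` treats the items as ordinal)
fa.parallel(dd[, paste0("swb", 1:18)], cor = "poly", fa = "fa")
```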

3.2 Plot 4-factor EFA

Code
standardizedsolution(efa_f4) %>% 
  filter(op == "=~") %>% 
  mutate(item  = str_remove(rhs, "swb") %>% as.double(),
         factor = str_remove(lhs, "f")) %>% 
  # plot
  ggplot(aes(x = est.std, xmin = ci.lower, xmax = ci.upper, y = item)) +
  annotate(geom = "rect",
           xmin = -1, xmax = 1,
           ymin = -Inf, ymax = Inf,
           fill = "grey90") +
  annotate(geom = "rect",
           xmin = -0.7, xmax = 0.7,
           ymin = -Inf, ymax = Inf,
           fill = "grey93") +
  annotate(geom = "rect",
           xmin = -0.4, xmax = 0.4,
           ymin = -Inf, ymax = Inf,
           fill = "grey96") +
  geom_vline(xintercept = 0, color = "white") +
  geom_pointrange(aes(alpha = abs(est.std) < 0.4),
                  fatten = 10) +
  geom_text(aes(label = item, color = abs(est.std) < 0.4),
            size = 4) +
  scale_color_manual(values = c("white", "transparent")) +
  scale_alpha_manual(values = c(1, 1/3)) +
  scale_x_continuous(expression(lambda[standardized]), 
                     expand = c(0, 0), limits = c(-1, 1),
                     breaks = c(-1, -0.7, -0.4, 0, 0.4, 0.7, 1),
                     labels = c("-1", "-.7", "-.4", "0", ".4", ".7", "1")) +
  scale_y_continuous(breaks = 1:18, sec.axis = sec_axis(~ . * 1, breaks = 1:18)) +
  ggtitle("Factor loadings for the 4-factor model") +
  theme(legend.position = "none") +
  facet_wrap(~ factor, labeller = label_both) 

3.3 EFA comments

As we saw in the CFA modification indices, I think most issues stem from residual correlations: some items are too similar, and one item in each correlated pair needs to be removed.

Looking at the 4-factor solution, we have one factor with 1 item, two with 4 items, and one with 7 items.

Let’s review the 7 items with standardized loadings > 0.4 from the factor with the most items in the 4-factor solution.

Code
items <- standardizedsolution(efa_f4) %>% 
  filter(op == "=~",
         lhs == "f3",
         est.std > 0.4) %>% 
  pull(rhs)

itemlabels <- read_csv("data/itemlabels_swb.csv") %>% 
  mutate(itemnr = paste0("swb",1:18))

standardizedsolution(efa_f4) %>% 
  filter(op == "=~",
         lhs == "f3",
         est.std > 0.4) %>% 
  arrange(desc(est.std)) %>% 
  mutate_if(is.numeric, ~ round(.x, 3)) %>% 
  dplyr::select(!c(lhs,op,z,pvalue)) %>% 
  dplyr::rename(itemnr = rhs,
                loading = est.std) %>% 
  left_join(itemlabels, by = "itemnr") %>% 
  knitr::kable()
itemnr loading se ci.lower ci.upper item
swb11 0.868 0.024 0.821 0.914 … have been able to be focused and concentrated on today’s tasks?
swb8 0.838 0.026 0.786 0.889 … felt satisfied with your life in its present situation?
swb17 0.793 0.029 0.736 0.849 … had the power to recover one’s strength if something has been stressful or difficult?
swb7 0.779 0.026 0.727 0.830 … been able to object and to assert yourself when this is needed?
swb15 0.652 0.032 0.589 0.715 … been able to make decisions and carry them out?
swb10 0.552 0.030 0.493 0.611 … slept well and got the right amount of sleep?
swb9 0.419 0.039 0.343 0.496 … felt that your life is meaningful?

4 Brief Rasch analysis

For fun, let’s see how the 7 items from the EFA above work as a unidimensional scale using Rasch Measurement Theory.

Code
library(RISEkbmRasch) # install first with `devtools::install_github("pgmj/RISEkbmRasch")`
df <- dd %>% 
  dplyr::select(all_of(items)) %>% 
  na.omit()
itemnr item
swb7 … been able to object and to assert yourself when this is needed?
swb8 … felt satisfied with your life in its present situation?
swb9 … felt that your life is meaningful?
swb10 … slept well and got the right amount of sleep?
swb11 … have been able to be focused and concentrated on today’s tasks?
swb15 … been able to make decisions and carry them out?
swb17 … had the power to recover one’s strength if something has been stressful or difficult?
Code
simfit1 <- RIgetfit(df, iterations = 500, cpu = 8) 
RIitemfit(df, simfit1)
Item InfitMSQ Infit thresholds OutfitMSQ Outfit thresholds Infit diff Outfit diff
swb7 0.927 [0.921, 1.082] 0.933 [0.92, 1.084] no misfit no misfit
swb8 0.758 [0.923, 1.073] 0.749 [0.908, 1.106] 0.165 0.159
swb9 1.1 [0.926, 1.076] 1.094 [0.919, 1.084] 0.024 0.01
swb10 1.09 [0.931, 1.084] 1.087 [0.912, 1.096] 0.006 no misfit
swb11 0.997 [0.922, 1.074] 0.966 [0.918, 1.076] no misfit no misfit
swb15 1.083 [0.931, 1.072] 1.115 [0.928, 1.081] 0.011 0.034
swb17 1.042 [0.917, 1.08] 1.069 [0.919, 1.083] no misfit no misfit
Note:
MSQ values based on conditional calculations (n = 1561 complete cases).
Simulation based thresholds from 500 simulated datasets.
Code
RIgetfitPlot(simfit1, df)

Code
ICCplot(as.data.frame(df), 
        itemnumber = 2, 
        method = "cut", 
        itemdescrip = c("item 8"))

Code
### also suggested:
library(RASCHplot) # install first with `devtools::install_github("ERRTG/RASCHplot")`
CICCplot(PCM(df),
         which.item = 2,
         lower.groups = c(0,6,12,17,23),
         grid.items = FALSE)
$swb8

Code
Item Observed value Model expected value Absolute difference Adjusted p-value (BH) Significance level
swb7 0.76 0.73 0.03 0.028 *
swb8 0.82 0.73 0.09 0.000 ***
swb9 0.70 0.73 0.03 0.067 .
swb10 0.71 0.73 0.02 0.195
swb11 0.74 0.73 0.01 0.661
swb15 0.72 0.73 0.01 0.297
swb17 0.72 0.73 0.01 0.620
Code

PCA of Rasch model residuals

Eigenvalues Proportion of variance
1.63 24%
1.45 20.7%
1.21 17.2%
0.99 14.8%
0.88 13.1%
Code
simcor1 <- RIgetResidCor(df, iterations = 500, cpu = 8)
RIresidcorr(df, cutoff = simcor1$p99)
swb7 swb8 swb9 swb10 swb11 swb15 swb17
swb7
swb8 0.16
swb9 -0.12 0.02
swb10 -0.23 -0.21 -0.02
swb11 -0.14 -0.17 -0.23 -0.18
swb15 -0.25 -0.21 -0.23 -0.11 -0.24
swb17 -0.16 -0.2 -0.34 -0.26 0.01 0
Note:
Relative cut-off value (highlighted in red) is -0.06, which is 0.089 above the average correlation (-0.148).
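The relative cutoff logic can be sketched in base R: average the off-diagonal residual correlations and flag pairs that exceed that average plus a simulation-derived offset (here 0.089, as reported in the note above). This is a minimal illustration using a 3x3 subset of the matrix above, not the package's actual implementation.

```r
# Sketch of a relative cutoff for residual correlations: flag item pairs whose
# residual correlation exceeds the average off-diagonal value plus an offset.
flag_residcorr <- function(rmat, offset) {
  off <- rmat[lower.tri(rmat)]      # off-diagonal residual correlations
  cutoff <- mean(off) + offset      # relative (not absolute) cutoff
  which(rmat > cutoff & lower.tri(rmat), arr.ind = TRUE)
}

# Toy matrix using the swb7/swb8/swb9 values from the output above
items <- c("swb7", "swb8", "swb9")
rmat <- matrix(c( 1.00, 0.16, -0.12,
                  0.16, 1.00,  0.02,
                 -0.12, 0.02,  1.00),
               nrow = 3, dimnames = list(items, items))
flag_residcorr(rmat, offset = 0.089)  # flags the swb7-swb8 pair
```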
Code

Code
mirt(df, model=1, itemtype='Rasch', verbose = FALSE) %>% 
  plot(type="trace", as.table = TRUE, 
       theta_lim = c(-8,8))

Code
# increase fig-height above as needed, if you have many items
RItargeting(df)

Code

Code
iarm::score_groups(as.data.frame(df)) %>% 
  as.data.frame(nm = "score_group") %>% 
  dplyr::count(score_group)
  score_group   n
1           1 786
2           2 775
Code
dif_plots <- df %>% 
  add_column(dif = iarm::score_groups(.)) %>% 
  mutate(dif = factor(dif, labels = c("Below median score","Above median score"))) %>% 
  split(.$dif) %>% # split the data using the DIF variable
  map(~ RItileplot(.x %>% dplyr::select(!dif)) + labs(title = .x$dif))
dif_plots[[1]] + dif_plots[[2]]

4.1 Rasch analysis 1 comments

Item 8 shows misfit and has a residual correlation with item 7.

  • swb7 - been able to object and to assert yourself when this is needed?
  • swb8 - felt satisfied with your life in its present situation?

We’ll remove item 8 and run the analysis again. Several other item pairs also have problematic residual correlations, but we’ll start with removing one item and see how that affects the others.

We have no demographic information, which makes invariance/DIF difficult to evaluate. I tried splitting the data into score groups based on the median score, but the high-scoring group had too much missing data in the lower response categories for analysis to be feasible.


Code
df$swb8 <- NULL

5 Rasch analysis 2

itemnr item
swb7 … been able to object and to assert yourself when this is needed?
swb9 … felt that your life is meaningful?
swb10 … slept well and got the right amount of sleep?
swb11 … have been able to be focused and concentrated on today’s tasks?
swb15 … been able to make decisions and carry them out?
swb17 … had the power to recover one’s strength if something has been stressful or difficult?
Code
simfit2 <- RIgetfit(df, iterations = 500, cpu = 8) 
RIitemfit(df, simfit2)
Item InfitMSQ Infit thresholds OutfitMSQ Outfit thresholds Infit diff Outfit diff
swb7 0.955 [0.916, 1.073] 0.959 [0.918, 1.072] no misfit no misfit
swb9 1.092 [0.919, 1.072] 1.091 [0.912, 1.102] 0.02 no misfit
swb10 1.018 [0.922, 1.092] 1.015 [0.918, 1.094] no misfit no misfit
swb11 0.941 [0.935, 1.086] 0.906 [0.913, 1.093] no misfit 0.007
swb15 1.011 [0.931, 1.08] 1.032 [0.928, 1.086] no misfit no misfit
swb17 0.974 [0.927, 1.073] 0.99 [0.924, 1.075] no misfit no misfit
Note:
MSQ values based on conditional calculations (n = 1561 complete cases).
Simulation based thresholds from 500 simulated datasets.
Code
RIgetfitPlot(simfit2, df)

Code
ICCplot(as.data.frame(df), 
        itemnumber = c(2,4), 
        method = "cut", 
        itemdescrip = c("item 9","item 11"))

Code
### also suggested:
library(RASCHplot) # install first with `devtools::install_github("ERRTG/RASCHplot")`
CICCplot(PCM(df),
         which.item = c(2,4),
         lower.groups = c(0,6,12,18),
         grid.items = TRUE)

Code
Item Observed value Model expected value Absolute difference Adjusted p-value (BH) Significance level
swb7 0.74 0.71 0.03 0.133
swb9 0.68 0.71 0.03 0.133
swb10 0.71 0.71 0.00 0.964
swb11 0.74 0.71 0.03 0.143
swb15 0.72 0.71 0.01 0.926
swb17 0.72 0.71 0.01 0.532
Code

PCA of Rasch model residuals

Eigenvalues Proportion of variance
1.56 26.8%
1.36 22.3%
1.17 18.9%
0.98 17%
0.92 14.9%
Code
simcor2 <- RIgetResidCor(df, iterations = 500, cpu = 8)
RIresidcorr(df, cutoff = simcor2$p99)
swb7 swb9 swb10 swb11 swb15 swb17
swb7
swb9 -0.08
swb10 -0.22 -0.03
swb11 -0.12 -0.23 -0.22
swb15 -0.23 -0.24 -0.15 -0.28
swb17 -0.15 -0.35 -0.3 -0.02 -0.04
Note:
Relative cut-off value (highlighted in red) is -0.089, which is 0.089 above the average correlation (-0.177).
Code

Code
# increase fig-height above as needed, if you have many items
RItargeting(df)

Code

Code
RIpfit(df)

5.1 Rasch analysis 2 comments

Item 9 shows slightly high item fit and has residual correlations with both items 7 and 10. We’ll remove it next.

  • 9: felt that your life is meaningful?
  • 10: slept well and got the right amount of sleep?
  • 7: been able to object and to assert yourself when this is needed?

These residual correlation pairs are, from my perspective, a bit unexpected. While sleep certainly is important for quality of life, I would not expect a strong residual correlation with “life is meaningful”. And the pairing of items 9 and 7 seems even more unexpected. But these are just my reflections.

Code
df$swb9 <- NULL

6 Rasch analysis 3

itemnr item
swb7 … been able to object and to assert yourself when this is needed?
swb10 … slept well and got the right amount of sleep?
swb11 … have been able to be focused and concentrated on today’s tasks?
swb15 … been able to make decisions and carry them out?
swb17 … had the power to recover one’s strength if something has been stressful or difficult?
Code
simfit3 <- RIgetfit(df, iterations = 1000, cpu = 8) 
RIitemfit(df, simfit3)
Item InfitMSQ Infit thresholds OutfitMSQ Outfit thresholds Infit diff Outfit diff
swb7 1.012 [0.932, 1.077] 1.01 [0.933, 1.078] no misfit no misfit
swb10 1.109 [0.927, 1.077] 1.11 [0.913, 1.105] 0.032 0.005
swb11 0.941 [0.928, 1.073] 0.917 [0.927, 1.075] no misfit 0.01
swb15 1.009 [0.926, 1.083] 1.016 [0.92, 1.089] no misfit no misfit
swb17 0.925 [0.925, 1.084] 0.93 [0.924, 1.085] no misfit no misfit
Note:
MSQ values based on conditional calculations (n = 1561 complete cases).
Simulation based thresholds from 1000 simulated datasets.
Code
RIgetfitPlot(simfit3, df)

Code
ICCplot(as.data.frame(df), 
        itemnumber = c(2), 
        method = "cut", 
        itemdescrip = c("item 10"))

Code
CICCplot(PCM(df),
         which.item = c(2),
         lower.groups = c(0,5,10,15),
         grid.items = FALSE)
$swb10

Code
Item Observed value Model expected value Absolute difference Adjusted p-value (BH) Significance level
swb7 0.72 0.72 0.00 0.748
swb10 0.69 0.72 0.03 0.081 .
swb11 0.75 0.72 0.03 0.081 .
swb15 0.72 0.72 0.00 0.748
swb17 0.75 0.72 0.03 0.081 .
Code

PCA of Rasch model residuals

Eigenvalues Proportion of variance
1.44 29.4%
1.39 27.8%
1.18 23.7%
0.99 19%
0.01 0.1%
Code
simcor3 <- RIgetResidCor(df, iterations = 500, cpu = 8)
RIresidcorr(df, cutoff = simcor3$p99)
swb7 swb10 swb11 swb15 swb17
swb7
swb10 -0.21
swb11 -0.14 -0.24
swb15 -0.26 -0.17 -0.35
swb17 -0.2 -0.36 -0.11 -0.12
Note:
Relative cut-off value (highlighted in red) is -0.132, which is 0.086 above the average correlation (-0.217).
Code

Code
# increase fig-height above as needed, if you have many items
RItargeting(df)

Code

Code
RItif(df, cutoff = 2, samplePSI = TRUE)

Code
Separation Reliability: 0.8556
Code
RIpfit(df)

Code
Threshold 1 Threshold 2 Threshold 3 Threshold 4 Item location
swb7 -4.72 -1.86 0.76 3.65 -0.54
swb10 -2.68 -0.21 1.99 4.62 0.93
swb11 -3.62 -1.18 0.61 2.57 -0.41
swb15 -2.70 -0.71 1.54 3.86 0.5
swb17 -3.77 -1.67 0.47 3.06 -0.48
Note:
Item location is the average of the thresholds for each item.
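The note above can be verified directly: the item location is simply the mean of the item’s four thresholds, e.g. for swb7:

```r
# Item location as the mean of the item's thresholds (swb7 values from the table above)
swb7_thresholds <- c(-4.72, -1.86, 0.76, 3.65)
round(mean(swb7_thresholds), 2)  # -0.54, matching the "Item location" column
```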
Code
RIscoreSE(df, output = "figure")

Code
Ordinal sum score Logit score Logit std.error
0 -6.200 0.750
1 -4.802 0.927
2 -3.990 0.866
3 -3.355 0.790
4 -2.804 0.739
5 -2.301 0.708
6 -1.825 0.690
7 -1.365 0.679
8 -0.913 0.672
9 -0.466 0.669
10 -0.022 0.669
11 0.423 0.672
12 0.872 0.678
13 1.333 0.688
14 1.810 0.701
15 2.309 0.720
16 2.835 0.748
17 3.400 0.795
18 4.036 0.865
19 4.830 0.916
20 6.195 0.737
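A table like the one above can serve as a lookup to convert ordinal sum scores to interval-level logit scores. A minimal sketch, reproducing only the first few rows of the table:

```r
# Sketch: ordinal sum score -> logit score lookup, using the first rows
# of the conversion table above.
score_table <- data.frame(
  ordinal = 0:4,
  logit   = c(-6.200, -4.802, -3.990, -3.355, -2.804)
)
sum_scores <- c(0, 3, 4)
score_table$logit[match(sum_scores, score_table$ordinal)]
# -6.200 -3.355 -2.804
```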

6.1 Rasch analysis 3 comments

Item 10 now shows slightly high item fit, while item-restscore looks ok for all items.

Item 17 has two residual correlations slightly above the cutoff, with items 11 and 15.

  • 17: had the power to recover one’s strength if something has been stressful or difficult?
  • 11: have been able to be focused and concentrated on today’s tasks?
  • 15: been able to make decisions and carry them out?

Model fit could probably be improved further by removing item 10 or 17, but we’ll leave it for now.

Overall, these are five items with decent psychometric properties. The TIF curve could be better and targeting shows a minor ceiling effect, which all well-being questionnaires seem to have to some degree.
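To put the TIF cutoff of 2 (used in the `RItif()` call above) in context: by the standard IRT result, the conditional standard error of measurement is the inverse square root of the test information, and under the additional assumption of unit person variance, reliability is approximately 1 - 1/TIF. A small sketch:

```r
# Relation between test information (TIF), conditional SEM, and reliability.
# SEM(theta) = 1 / sqrt(I(theta)) is standard IRT; the reliability
# approximation additionally assumes unit person variance.
tif_to_sem <- function(info) 1 / sqrt(info)
tif_to_reliability <- function(info) 1 - 1 / info

tif_to_sem(2)             # ~0.71 logits at the cutoff used above
tif_to_reliability(2)     # 0.5
tif_to_reliability(3.33)  # ~0.7, a common minimum for group-level use
```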

7 CFA with 5 items

Based on the previous Rasch analysis.

Code
fiveitems <- 'f1 =~ swb7 + swb10 + swb11 + swb15 + swb17'
m5 <- cfa(model = fiveitems,
    data = df,
    estimator = "WLSMV", ordered = TRUE)
summary(m5, standardized = TRUE)
lavaan 0.6-18 ended normally after 16 iterations

  Estimator                                       DWLS
  Optimization method                           NLMINB
  Number of model parameters                        25

  Number of observations                          1561

Model Test User Model:
                                              Standard      Scaled
  Test Statistic                                30.935      76.720
  Degrees of freedom                                 5           5
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  0.404
  Shift parameter                                            0.112
    simple second-order correction                                

Parameter Estimates:

  Parameterization                               Delta
  Standard errors                           Robust.sem
  Information                                 Expected
  Information saturated (h1) model        Unstructured

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  f1 =~                                                                 
    swb7              1.000                               0.796    0.796
    swb10             0.977    0.018   53.170    0.000    0.778    0.778
    swb11             1.071    0.017   61.268    0.000    0.852    0.852
    swb15             1.027    0.017   58.740    0.000    0.817    0.817
    swb17             1.060    0.017   61.137    0.000    0.844    0.844

Thresholds:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    swb7|t1          -2.317    0.094  -24.744    0.000   -2.317   -2.317
    swb7|t2          -1.312    0.044  -29.847    0.000   -1.312   -1.312
    swb7|t3          -0.208    0.032   -6.498    0.000   -0.208   -0.208
    swb7|t4           0.978    0.038   25.798    0.000    0.978    0.978
    swb10|t1         -1.664    0.054  -30.703    0.000   -1.664   -1.664
    swb10|t2         -0.645    0.034  -18.832    0.000   -0.645   -0.645
    swb10|t3          0.311    0.032    9.625    0.000    0.311    0.311
    swb10|t4          1.375    0.045   30.257    0.000    1.375    1.375
    swb11|t1         -2.018    0.071  -28.422    0.000   -2.018   -2.018
    swb11|t2         -1.099    0.040  -27.631    0.000   -1.099   -1.099
    swb11|t3         -0.279    0.032   -8.668    0.000   -0.279   -0.279
    swb11|t4          0.598    0.034   17.657    0.000    0.598    0.598
    swb15|t1         -1.718    0.056  -30.542    0.000   -1.718   -1.718
    swb15|t2         -0.842    0.036  -23.275    0.000   -0.842   -0.842
    swb15|t3          0.112    0.032    3.516    0.000    0.112    0.112
    swb15|t4          1.099    0.040   27.631    0.000    1.099    1.099
    swb17|t1         -2.098    0.076  -27.561    0.000   -2.098   -2.098
    swb17|t2         -1.260    0.043  -29.429    0.000   -1.260   -1.260
    swb17|t3         -0.323    0.032   -9.977    0.000   -0.323   -0.323
    swb17|t4          0.756    0.035   21.440    0.000    0.756    0.756

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .swb7              0.366                               0.366    0.366
   .swb10             0.395                               0.395    0.395
   .swb11             0.273                               0.273    0.273
   .swb15             0.332                               0.332    0.332
   .swb17             0.288                               0.288    0.288
    f1                0.634    0.018   35.020    0.000    1.000    1.000

All factor loadings are around 0.80.
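With the Delta parameterization used here, each item’s squared standardized loading plus its residual variance should sum to (approximately) 1, which checks out against the output above:

```r
# Delta parameterization sanity check: std. loading^2 + residual variance ~ 1,
# using the Std.all values from the lavaan output above.
loadings  <- c(swb7 = 0.796, swb10 = 0.778, swb11 = 0.852,
               swb15 = 0.817, swb17 = 0.844)
residuals <- c(swb7 = 0.366, swb10 = 0.395, swb11 = 0.273,
               swb15 = 0.332, swb17 = 0.288)
round(loadings^2 + residuals, 2)  # all approximately 1
```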

7.1 Model fit

We need the simulation-based cutoff values to make sense of the model fit metrics.

Code
dyncut2 <- catOne(m5, reps = 250)
Code
dyncut2
Your DFI cutoffs: 
            SRMR  RMSEA CFI  
Level-0     0.01  0.026 1    
Specificity 95%   95%   95%  
                             
Level-1     0.018 0.066 0.997
Sensitivity 95%   95%   95%  
                             
Level-2     0.027 0.108 0.993
Sensitivity 95%   95%   95%  

Empirical fit indices: 
 Chi-Square  df p-value   SRMR   RMSEA    CFI
      76.72   5       0  0.022   0.096  0.994

 Notes:
  -'Sensitivity' is % of hypothetically misspecified models correctly identified by cutoff in DFI simulation
  -Cutoffs with 95% sensitivity are reported when possible
  -If sensitivity is <50%, cutoffs will be supressed 

The empirical fit indices fall between the level-1 and level-2 cutoffs, slightly better than level-2, which seems acceptable.

7.2 Modification indices

Code
modificationIndices(m5,
                    standardized = T) %>% 
  as.data.frame(row.names = NULL) %>% 
  filter(mi > 3) %>% 
  arrange(desc(mi)) %>% 
  mutate(across(where(is.numeric),~ round(.x, 3))) %>%
  knitr::kable()
lhs op rhs mi epc sepc.lv sepc.all sepc.nox
swb11 ~~ swb15 17.226 0.090 0.090 0.298 0.298
swb10 ~~ swb17 16.541 0.090 0.090 0.268 0.268
swb10 ~~ swb15 8.805 -0.060 -0.060 -0.165 -0.165
swb15 ~~ swb17 6.290 -0.051 -0.051 -0.163 -0.163
swb11 ~~ swb17 4.560 -0.044 -0.044 -0.156 -0.156
swb7 ~~ swb11 3.240 -0.037 -0.037 -0.117 -0.117

As in the Rasch analysis, there are some residual correlations. Item 17 is involved in three of the six correlated item pairs.

7.3 CTT reliability

Classical test theory assumes reliability to be a constant value across the latent continuum and across all participants.
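For reference, coefficient alpha can be computed directly from the item covariance matrix. A minimal sketch with toy data (three perfectly correlated items, so alpha comes out as exactly 1):

```r
# Cronbach's alpha from the item covariance matrix:
# alpha = k/(k-1) * (1 - sum(diag(C)) / sum(C))
cronbach_alpha <- function(items) {
  C <- cov(items)
  k <- ncol(items)
  k / (k - 1) * (1 - sum(diag(C)) / sum(C))
}

# Toy data: i3 is just i1 shifted, so all items correlate perfectly
toy <- data.frame(i1 = 1:4, i2 = 1:4, i3 = 2:5)
cronbach_alpha(toy)  # 1
```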

Code
omega(df, nfactors = 1, poly = TRUE)
Omega_h for 1 factor is not meaningful, just omega_t
Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
Omega_h and Omega_asymptotic are not meaningful with one factor
Warning in cov2cor(t(w) %*% r %*% w): diag(V) had non-positive or NA entries;
the non-finite result may be dubious
Omega 
Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
    digits = digits, title = title, sl = sl, labels = labels, 
    plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
    covar = covar)
Alpha:                 0.91 
G.6:                   0.89 
Omega Hierarchical:    0.91 
Omega H asymptotic:    1 
Omega Total            0.91 

Schmid Leiman Factor loadings greater than  0.2 
         g  F1*   h2   h2   u2 p2 com
swb7  0.80      0.64 0.64 0.36  1   1
swb10 0.78      0.60 0.60 0.40  1   1
swb11 0.85      0.72 0.72 0.28  1   1
swb15 0.82      0.67 0.67 0.33  1   1
swb17 0.84      0.70 0.70 0.30  1   1

With Sums of squares  of:
  g F1*  h2 
3.3 0.0 2.2 

general/max  1.5   max/min =   Inf
mean percent general =  1    with sd =  0 and cv of  0 
Explained Common Variance of the general factor =  1 

The degrees of freedom are 5  and the fit is  0.07 
The number of observations was  1561  with Chi Square =  111.07  with prob <  0.00000000000000000000024
The root mean square of the residuals is  0.03 
The df corrected root mean square of the residuals is  0.04
RMSEA index =  0.117  and the 10 % confidence intervals are  0.098 0.136
BIC =  74.31

Compare this with the adequacy of just a general factor and no group factors
The degrees of freedom for just the general factor are 5  and the fit is  0.07 
The number of observations was  1561  with Chi Square =  111.07  with prob <  0.00000000000000000000024
The root mean square of the residuals is  0.03 
The df corrected root mean square of the residuals is  0.04 

RMSEA index =  0.117  and the 10 % confidence intervals are  0.098 0.136
BIC =  74.31 

Measures of factor score adequacy             
                                                 g F1*
Correlation of scores with factors            0.95   0
Multiple R square of scores with factors      0.91   0
Minimum correlation of factor score estimates 0.82  -1

 Total, General and Subset omega for each subset
                                                 g  F1*
Omega total for total scores and subscales    0.91 0.91
Omega general for total scores and subscales  0.91 0.91
Omega group for total scores and subscales    0.00 0.00
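The omega total of 0.91 reported above can be reproduced by hand from the Schmid-Leiman loadings (g) and uniquenesses (u2) in the output, using omega_t = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses):

```r
# Omega total from the g loadings and u2 uniquenesses in the output above
g  <- c(0.80, 0.78, 0.85, 0.82, 0.84)
u2 <- c(0.36, 0.40, 0.28, 0.33, 0.30)
omega_total <- sum(g)^2 / (sum(g)^2 + sum(u2))
round(omega_total, 2)  # 0.91
```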
Code
alpha(df)

Reliability analysis   
Call: alpha(x = df)

  raw_alpha std.alpha G6(smc) average_r S/N    ase mean   sd median_r
      0.88      0.88    0.86       0.6 7.6 0.0047  2.5 0.82      0.6

    95% confidence boundaries 
         lower alpha upper
Feldt     0.87  0.88  0.89
Duhachek  0.87  0.88  0.89

 Reliability if an item is dropped:
      raw_alpha std.alpha G6(smc) average_r S/N alpha se   var.r med.r
swb7       0.86      0.86    0.83      0.61 6.3   0.0057 0.00190  0.60
swb10      0.86      0.87    0.83      0.62 6.4   0.0056 0.00137  0.61
swb11      0.85      0.85    0.81      0.59 5.7   0.0061 0.00129  0.58
swb15      0.86      0.86    0.82      0.60 6.0   0.0059 0.00190  0.59
swb17      0.85      0.85    0.82      0.59 5.9   0.0060 0.00048  0.60

 Item statistics 
         n raw.r std.r r.cor r.drop mean   sd
swb7  1561  0.80  0.81  0.74   0.70  2.6 0.89
swb10 1561  0.81  0.80  0.73   0.69  2.2 1.01
swb11 1561  0.85  0.84  0.80   0.75  2.7 1.05
swb15 1561  0.83  0.83  0.77   0.72  2.3 1.03
swb17 1561  0.84  0.84  0.79   0.74  2.7 0.96

Non missing response frequency for each item
         0    1    2    3    4 miss
swb7  0.01 0.08 0.32 0.42 0.16    0
swb10 0.05 0.21 0.36 0.29 0.08    0
swb11 0.02 0.11 0.25 0.34 0.27    0
swb15 0.04 0.16 0.34 0.32 0.14    0
swb17 0.02 0.09 0.27 0.40 0.22    0

8 Software used

Code
R version 4.4.1 (2024-06-14)
Platform: aarch64-apple-darwin20
Running under: macOS Sonoma 14.6.1

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib 
LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.0

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: Europe/Stockholm
tzcode source: internal

attached base packages:
 [1] parallel  grid      stats4    stats     graphics  grDevices utils    
 [8] datasets  methods   base     

other attached packages:
 [1] RASCHplot_0.1.0      ggdist_3.3.2         doParallel_1.0.17   
 [4] iterators_1.0.14     foreach_1.5.2        RISEkbmRasch_0.2.4.3
 [7] janitor_2.2.0        iarm_0.4.3           hexbin_1.28.4       
[10] catR_3.17            glue_1.7.0           ggrepel_0.9.6       
[13] reshape_0.8.9        matrixStats_1.4.1    psychotree_0.16-1   
[16] psychotools_0.7-4    partykit_1.2-22      mvtnorm_1.3-1       
[19] libcoin_1.0-10       psych_2.4.6.26       mirt_1.42           
[22] lattice_0.22-6       eRm_1.0-6            kableExtra_1.4.0    
[25] formattable_0.2.1    knitr_1.48           dynamic_1.1.0       
[28] patchwork_1.3.0      lavaan_0.6-18        lubridate_1.9.3     
[31] forcats_1.0.0        stringr_1.5.1        dplyr_1.1.4         
[34] purrr_1.0.2          readr_2.1.5          tidyr_1.3.1         
[37] tibble_3.2.1         ggplot2_3.5.1        tidyverse_2.0.0     
[40] readxl_1.4.3        

loaded via a namespace (and not attached):
  [1] later_1.3.2          splines_4.4.1        R.oo_1.26.0         
  [4] cellranger_1.1.0     rpart_4.1.23         lifecycle_1.0.4     
  [7] Rdpack_2.6.1         rstatix_0.7.2        rprojroot_2.0.4     
 [10] globals_0.16.3       MASS_7.3-61          backports_1.5.0     
 [13] magrittr_2.0.3       vcd_1.4-12           Hmisc_5.1-3         
 [16] rmarkdown_2.28       yaml_2.3.10          httpuv_1.6.15       
 [19] sessioninfo_1.2.2    pbapply_1.7-2        abind_1.4-5         
 [22] audio_0.1-11         quadprog_1.5-8       R.utils_2.12.3      
 [25] nnet_7.3-19          listenv_0.9.1        GenOrd_1.4.0        
 [28] testthat_3.2.1.1     RPushbullet_0.3.4    vegan_2.6-8         
 [31] parallelly_1.38.0    svglite_2.1.3        permute_0.9-7       
 [34] codetools_0.2-20     DT_0.33              xml2_1.3.6          
 [37] tidyselect_1.2.1     farver_2.1.2         base64enc_0.1-3     
 [40] jsonlite_1.8.9       progressr_0.14.0     Formula_1.2-5       
 [43] survival_3.7-0       systemfonts_1.1.0    tools_4.4.1         
 [46] gnm_1.1-5            snow_0.4-4           Rcpp_1.0.13         
 [49] mnormt_2.1.1         gridExtra_2.3        xfun_0.46           
 [52] here_1.0.1           mgcv_1.9-1           distributional_0.4.0
 [55] Bayesrel_0.7.7       ca_0.71.1            withr_3.0.1         
 [58] beepr_2.0            fastmap_1.2.0        fansi_1.0.6         
 [61] digest_0.6.37        mime_0.12            timechange_0.3.0    
 [64] R6_2.5.1             colorspace_2.1-1     simstandard_0.6.3   
 [67] R.methodsS3_1.8.2    inum_1.0-5           utf8_1.2.4          
 [70] generics_0.1.3       data.table_1.16.0    SimDesign_2.17.1    
 [73] htmlwidgets_1.6.4    semTools_0.5-6       pkgconfig_2.0.3     
 [76] gtable_0.3.5         lmtest_0.9-40        brio_1.1.5          
 [79] htmltools_0.5.8.1    carData_3.0-5        scales_1.3.0        
 [82] corrplot_0.92        snakecase_0.11.1     rstudioapi_0.16.0   
 [85] reshape2_1.4.4       tzdb_0.4.0           checkmate_2.3.2     
 [88] nlme_3.1-165         curl_5.2.3           cachem_1.1.0        
 [91] zoo_1.8-12           relimp_1.0-5         vcdExtra_0.8-5      
 [94] foreign_0.8-87       pillar_1.9.0         vctrs_0.6.5         
 [97] promises_1.3.0       ggpubr_0.6.0         car_3.1-2           
[100] xtable_1.8-4         Deriv_4.1.3          cluster_2.1.6       
[103] dcurver_0.9.2        GPArotation_2024.3-1 htmlTable_2.4.3     
[106] evaluate_0.24.0      pbivnorm_0.6.0       cli_3.6.3           
[109] compiler_4.4.1       rlang_1.1.4          future.apply_1.11.2 
[112] ggsignif_0.6.4       plyr_1.8.9           stringi_1.8.4       
[115] viridisLite_0.4.2    munsell_0.5.1        Matrix_1.7-0        
[118] qvcalc_1.0.3         hms_1.1.3            future_1.34.0       
[121] shiny_1.9.1          rbibutils_2.2.16     memoise_2.0.1       
[124] broom_1.0.7          ggstance_0.3.7      

9 References

Hlynsson, Jón Ingi, Anders Sjöberg, Lars Ström, and Per Carlbring. 2024. “Evaluating the Reliability and Validity of the Questionnaire on Well-Being: A Validation Study for a Clinically Informed Measurement of Subjective Well-Being.” Cognitive Behaviour Therapy 0 (0): 1–23. https://doi.org/10.1080/16506073.2024.2402992.
Hu, Li‐tze, and Peter M. Bentler. 1999. “Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives.” Structural Equation Modeling: A Multidisciplinary Journal 6 (1): 1–55. https://doi.org/10.1080/10705519909540118.
McNeish, Daniel. 2023. “Dynamic Fit Index Cutoffs for Categorical Factor Analysis with Likert-Type, Ordinal, or Binary Responses.” American Psychologist 78 (9): 1061–75. https://doi.org/10.1037/amp0001213.
Savalei, Victoria. 2018. “On the Computation of the RMSEA and CFI from the Mean-And-Variance Corrected Test Statistic with Nonnormal Data in SEM.” Multivariate Behavioral Research 53 (3): 419–29. https://doi.org/10.1080/00273171.2018.1455142.
