#1
Which of the following best defines measurement reliability?
Consistency or stability of a measuring instrument
Explanation: Measurement reliability refers to the consistency or stability of a measuring instrument.
#2
What does test-retest reliability measure?
Consistency of results over repeated administrations of the same test
Explanation: Test-retest reliability measures the consistency of results over repeated administrations of the same test.
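As a supplementary illustration (not part of the original quiz), test-retest reliability is commonly estimated as the Pearson correlation between two administrations of the same test. The scores below are hypothetical:

```python
import numpy as np

# Hypothetical scores for 6 participants on the same test at two time points.
time1 = np.array([12.0, 15.0, 11.0, 18.0, 14.0, 16.0])
time2 = np.array([13.0, 14.0, 12.0, 17.0, 15.0, 16.0])

# Test-retest reliability: Pearson correlation between the two administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 3))
```

A correlation near 1.0 indicates that participants' relative standing is stable across administrations.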
#3
Internal consistency reliability is assessed using which statistical measure?
Cronbach's alpha
Explanation: Internal consistency reliability is assessed using Cronbach's alpha.
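As a supplementary illustration (not part of the original quiz), Cronbach's alpha can be computed directly from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The item responses below are hypothetical:

```python
import numpy as np

# Hypothetical item responses: rows = respondents, columns = test items.
items = np.array([
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
], dtype=float)

def cronbach_alpha(data):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)      # sample variance of each item
    total_var = data.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(round(cronbach_alpha(items), 3))
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, though the threshold depends on the application.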
#4
Which of the following is NOT a type of reliability?
Concurrent validity
Explanation: Concurrent validity is not a type of reliability; it is a type of validity.
#5
What is the primary method used to assess inter-rater reliability?
Intraclass correlation coefficient (ICC)
Explanation: The primary method used to assess inter-rater reliability is the intraclass correlation coefficient (ICC).
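As a supplementary illustration (not part of the original quiz), one simple form of the ICC is the one-way random-effects ICC(1,1), computed from ANOVA mean squares. The ratings below are hypothetical, and note that several ICC variants exist (one-way vs. two-way, single vs. average measures), so this is a sketch of just one of them:

```python
import numpy as np

# Hypothetical ratings: rows = subjects, columns = raters.
ratings = np.array([
    [7, 8, 7],
    [5, 5, 6],
    [9, 9, 8],
    [4, 5, 4],
    [6, 7, 6],
], dtype=float)

def icc_oneway(data):
    """One-way random-effects ICC(1,1):
    (MS_between - MS_within) / (MS_between + (k - 1) * MS_within)."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    # Mean square between subjects (each row mean based on k ratings).
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    # Mean square within subjects (disagreement among raters).
    ms_within = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(round(icc_oneway(ratings), 3))
```

A high ICC here means that most of the variance in ratings comes from genuine differences between subjects rather than disagreement among raters.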
#6
Which of the following statements best describes parallel forms reliability?
Consistency of results across different forms of a test
Explanation: Parallel forms reliability refers to the consistency of results across different forms of a test.
#7
Which of the following best describes internal consistency reliability?
Consistency of responses within the same test
Explanation: Internal consistency reliability refers to the consistency of responses within the same test.
#8
What is the main purpose of assessing measurement reliability in behavioral research?
To ensure that the results are consistent and replicable
Explanation: Assessing measurement reliability in behavioral research aims to ensure that results are consistent and replicable.
#9
Which of the following is NOT a threat to measurement reliability?
Sampling error
Explanation: Sampling error is not a threat to measurement reliability; it is a factor affecting the representativeness of samples.
#10
What is the primary difference between test-retest reliability and parallel forms reliability?
Test-retest reliability measures consistency over time, while parallel forms reliability measures consistency across different versions of the same test
Explanation: Test-retest reliability measures consistency over time, whereas parallel forms reliability measures consistency across different versions of the same test.
#11
What is the term used to describe the extent to which the results of a study can be generalized to other populations, settings, or times?
Generalizability
Explanation: Generalizability refers to the extent to which the results of a study can be generalized to other populations, settings, or times.
#12
Which of the following is a common method used to assess inter-rater reliability?
Intraclass correlation coefficient (ICC)
Explanation: The intraclass correlation coefficient (ICC) is a common method used to assess inter-rater reliability.
#13
Which of the following best describes split-half reliability?
Consistency of responses within the same test
Explanation: Split-half reliability refers to the consistency of responses within the same test, estimated by splitting the test into two halves and correlating scores on the two halves.
#14
Which of the following is a measure of inter-rater reliability that adjusts for chance agreement?
Cohen's kappa
Explanation: Cohen's kappa is a measure of inter-rater reliability that adjusts for chance agreement.
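As a supplementary illustration (not part of the original quiz), Cohen's kappa is defined as (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected by chance from each rater's marginal rates. The labels below are hypothetical:

```python
import numpy as np

# Hypothetical category labels assigned by two raters to the same 10 cases.
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "no"]

def cohens_kappa(a, b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), adjusting observed
    agreement p_o for the chance agreement p_e."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.unique(np.concatenate([a, b]))
    p_o = (a == b).mean()  # observed proportion of agreement
    # Chance agreement: product of each rater's marginal rate, per category.
    p_e = sum((a == c).mean() * (b == c).mean() for c in categories)
    return (p_o - p_e) / (1 - p_e)

print(round(cohens_kappa(rater_a, rater_b), 3))
```

Here the raters agree on 8 of 10 cases (p_o = 0.8), but since chance alone predicts 0.5 agreement, kappa credits only the agreement beyond chance.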
#15
What is the term used to describe the consistency of results across different items within the same test?
Inter-item reliability
Explanation: Inter-item reliability describes the consistency of results across different items within the same test.
#16
Which of the following is NOT a factor that influences measurement reliability?
Sample size
Explanation: Sample size does not directly influence measurement reliability; it affects the precision of estimates.
#17
What is the primary aim of using split-half reliability?
To assess the consistency of results within the same test
Explanation: The primary aim of using split-half reliability is to assess the consistency of results within the same test.
#18
In behavioral research, what does inter-rater reliability assess?
Consistency of ratings assigned by different observers or raters
Explanation: Inter-rater reliability assesses the consistency of ratings assigned by different observers or raters.
#19
What is the main drawback of using test-retest reliability?
It may be affected by practice effects or memory recall
Explanation: Test-retest reliability may be affected by practice effects or memory recall, compromising its accuracy.
#20
What statistical measure is typically used to assess split-half reliability?
Cronbach's alpha
Explanation: Split-half reliability is typically assessed using Cronbach's alpha, which is equivalent to the average of all possible split-half coefficients; the Spearman-Brown formula is also commonly used to correct the half-test correlation for full test length.
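As a supplementary illustration (not part of the original quiz), the classical split-half procedure correlates scores on two halves of a test (here an odd/even split) and then applies the Spearman-Brown correction, since each half is shorter, and therefore less reliable, than the full test. The responses below are hypothetical:

```python
import numpy as np

# Hypothetical item responses: rows = respondents, columns = 6 items.
items = np.array([
    [4, 3, 4, 4, 3, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 5, 4],
    [3, 3, 3, 4, 3, 3],
    [1, 2, 1, 2, 1, 2],
    [4, 5, 4, 4, 5, 5],
], dtype=float)

# Split into odd-numbered and even-numbered items and score each half.
odd = items[:, ::2].sum(axis=1)
even = items[:, 1::2].sum(axis=1)

# Correlate the half scores, then project to full test length with the
# Spearman-Brown prophecy formula: 2r / (1 + r).
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)
print(round(split_half, 3))
```

The corrected coefficient exceeds the raw half-test correlation, reflecting the principle that longer tests tend to be more reliable.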
#21
What is the primary difference between reliability and validity?
Reliability refers to consistency, while validity refers to accuracy
Explanation: Reliability refers to consistency, whereas validity refers to accuracy, that is, whether the instrument measures what it is intended to measure.
#22
What is the main limitation of using Cronbach's alpha to assess internal consistency reliability?
It assumes all items within the test measure the same underlying construct
Explanation: Cronbach's alpha assumes all items within the test measure the same underlying construct, which may not always be true.
#23
What is the term used to describe the degree to which a measurement instrument accurately reflects the underlying concept or construct it is intended to measure?
Validity
Explanation: Validity refers to the degree to which a measurement instrument accurately reflects the underlying concept or construct it is intended to measure.
#24
What is the main concern when using a single-item measure to assess a construct in behavioral research?
Reliability
Explanation: The main concern with a single-item measure is its reliability, since internal consistency cannot be estimated from a single item and measurement error cannot be averaged out across multiple items.
#25
What statistical measure is typically used to assess inter-rater reliability for ordinal data?
Cohen's kappa
Explanation: Cohen's kappa (typically its weighted form, which gives partial credit for disagreements between adjacent categories) is used to assess inter-rater reliability for ordinal data.