Measurement error represents the discrepancies between scores obtained on tests and the corresponding true scores. The goal of reliability theory is to estimate errors in measurement and to suggest ways of improving tests so that errors are minimized. The central assumption of reliability theory is that measurement errors are essentially random.
This does not mean that errors arise from random processes. For any individual, an error in measurement is not a completely random event. However, across a large number of individuals, the causes of measurement error are assumed to be so varied that measurement errors act as random variables. If errors have the essential characteristics of random variables, then it is reasonable to assume that errors are equally likely to be positive or negative, and that they are not correlated with true scores or with errors on other tests.
Specifically, it is assumed that the mean error of measurement is zero, that true scores and errors are uncorrelated, and that errors on different tests are uncorrelated. Under these assumptions, reliability theory shows that the variance of obtained scores is simply the sum of the variance of true scores plus the variance of errors of measurement:

\sigma^2_X = \sigma^2_T + \sigma^2_E

In its general form, the reliability coefficient is defined as the ratio of true score variance to the total variance of test scores or, equivalently, as one minus the ratio of the error-score variance to the observed-score variance:

\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X} = 1 - \frac{\sigma^2_E}{\sigma^2_X}
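This variance decomposition can be checked numerically: simulate true scores and independent errors, and the variances should add. A minimal sketch, where the sample size, the score distributions (mean 100, SD 15 for true scores; SD 5 for errors), and hence the resulting reliability of about 0.9 are all invented for illustration:

```python
import random

random.seed(0)

# Hypothetical model: observed = true + error, with errors drawn
# independently of the true scores.
n = 100_000
true_scores = [random.gauss(100, 15) for _ in range(n)]
errors = [random.gauss(0, 5) for _ in range(n)]
observed = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_t, var_e, var_x = variance(true_scores), variance(errors), variance(observed)

# Because errors are uncorrelated with true scores,
# var(observed) ~= var(true) + var(error), and the reliability
# coefficient is the true-score share of the observed variance.
reliability = var_t / var_x
print(round(var_x, 1), round(var_t + var_e, 1), round(reliability, 3))
```

With these parameters the reliability comes out near 225 / 250 = 0.9, and the observed variance matches the sum of the other two up to sampling noise.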
Unfortunately, there is no way to directly observe or calculate the true score, so a variety of methods are used to estimate the reliability of a test. Some examples of methods to estimate reliability include test-retest reliability, internal consistency reliability, and parallel-test reliability. Each method comes at the problem of figuring out the source of error in the test somewhat differently.
It was well-known to classical test theorists that measurement precision is not uniform across the scale of measurement. Tests tend to distinguish better for test-takers with moderate trait levels and worse among high- and low-scoring test-takers.
Item response theory extends the concept of reliability from a single index to a function called the information function. The IRT information function is the inverse of the conditional observed score standard error at any given test score.

Four practical strategies have been developed that provide workable methods of estimating test reliability: the test-retest, parallel-forms, split-half, and internal consistency methods. In the test-retest method, the correlation between scores on the first test and scores on the retest is used to estimate the reliability of the test, using the Pearson product-moment correlation coefficient. In the parallel-forms method, the key is the development of alternate test forms that are equivalent in terms of content, response processes, and statistical characteristics.
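The test-retest estimate mentioned above is simply the Pearson product-moment correlation between the two sets of scores. A minimal sketch, with invented scores for five examinees:

```python
# Hypothetical test-retest data for five examinees (scores are made up).
time1 = [82, 90, 75, 88, 67]
time2 = [80, 92, 74, 85, 70]

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(time1, time2), 3))
```

For these made-up scores the correlation is about 0.96, which would indicate high stability over time.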
For example, alternate forms exist for several tests of general intelligence, and these tests are generally seen as equivalent. If both forms of the test were administered to a number of people, differences between scores on form A and form B may be due to errors in measurement only. The correlation between scores on the two alternate forms is used to estimate the reliability of the test. This method provides a partial solution to many of the problems inherent in the test-retest reliability method.
For example, since the two forms of the test are different, the carryover effect is less of a problem. Reactivity effects are also partially controlled, although taking the first test may change responses to the second test.
However, it is reasonable to assume that the effect will not be as strong with alternate forms of the test as with two administrations of the same test. The split-half method treats the two halves of a single measure as alternate forms.
It provides a simple solution to the problem that the parallel-forms method faces: the difficulty of developing genuinely equivalent alternate forms. The correlation between these two split halves is used in estimating the reliability of the test. This half-test reliability estimate is then stepped up to the full test length using the Spearman–Brown prediction formula. There are several ways of splitting a test to estimate reliability.
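The Spearman–Brown step-up can be written directly from the formula; this sketch covers the classic two-half case, and the 0.70 input correlation is just an illustrative value:

```python
def spearman_brown(r_half, factor=2.0):
    """Step up a half-test correlation to the reliability of a test
    `factor` times as long (factor=2 for the classic split-half case)."""
    return factor * r_half / (1 + (factor - 1) * r_half)

# A split-half correlation of 0.70 steps up to about 0.82 for the full test.
print(round(spearman_brown(0.70), 2))
```

Note that the stepped-up estimate is always at least as large as the half-test correlation, reflecting the fact that longer tests are more reliable under the classical model.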
For example, a 40-item vocabulary test could be split into two subtests, the first one made up of items 1 through 20 and the second made up of items 21 through 40. However, the responses from the first half may be systematically different from responses in the second half due to an increase in item difficulty and fatigue.
In splitting a test, the two halves would need to be as similar as possible, both in terms of their content and in terms of the probable state of the respondent. The simplest method is to adopt an odd-even split, in which the odd-numbered items form one half of the test and the even-numbered items form the other. This arrangement guarantees that each half will contain an equal number of items from the beginning, middle, and end of the original test.
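The odd-even split and the Spearman–Brown step-up can be sketched together as follows; the 0/1 item-response matrix is invented purely for illustration:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 0/1 item responses: five respondents by ten items.
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 0, 1, 0, 1, 1],
]

# Odd-even split: odd-numbered items (1st, 3rd, ...) form one half,
# even-numbered items (2nd, 4th, ...) form the other.
odd_totals = [sum(row[0::2]) for row in responses]
even_totals = [sum(row[1::2]) for row in responses]

r_half = pearson_r(odd_totals, even_totals)
full_reliability = 2 * r_half / (1 + r_half)  # Spearman-Brown step-up
print(round(r_half, 3), round(full_reliability, 3))
```

For this invented data the half-test correlation is about 0.68 and steps up to roughly 0.81 for the full ten-item test.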
The most common internal consistency measure is Cronbach's alpha, which is usually interpreted as the mean of all possible split-half coefficients. These measures of reliability differ in their sensitivity to different sources of error and so need not be equal. Also, reliability is a property of the scores of a measure rather than of the measure itself, and is thus said to be sample dependent. Reliability estimates from one sample might differ from those of a second sample, beyond what might be expected due to sampling variation, if the second sample is drawn from a different population, because the true variability is different in this second population.
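Cronbach's alpha can be computed directly from the item variances and the variance of the total scores. A sketch, using a hypothetical three-item scale with invented scores for four respondents:

```python
def cronbach_alpha(items):
    """Cronbach's alpha, given one list of scores per item:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical scores on a three-item scale for four respondents.
items = [
    [2, 4, 3, 5],  # item 1
    [3, 5, 4, 5],  # item 2
    [1, 4, 2, 4],  # item 3
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

Here the items rise and fall together across respondents, so alpha comes out high (about 0.97); items that disagreed with one another would drive it down.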
This is true of measures of all types: yardsticks might measure houses well yet have poor reliability when used to measure the lengths of insects. Reliability may be improved by clarity of expression (for written assessments), lengthening the measure, and other informal means. However, formal psychometric analysis, called item analysis, is considered the most effective way to increase reliability. This assumption, that the variable being measured is stable or constant, is central to the concept of reliability.
In principle, a measurement procedure that is stable or constant should produce the same or nearly the same results if the same individuals and conditions are used. So what do we mean when we say that a measurement procedure is constant or stable?
Some variables are more stable (constant) than others; that is, some change significantly, whilst others are reasonably constant. Therefore, the score measured for a given variable consists of two components: the true score and an error component. The true score is the actual score that would reliably reflect the measurement (e.g., a person's actual level of intelligence).
The error component reflects conditions that result in the score we are measuring deviating from the true score. This error component within a measurement procedure will vary from one measurement to the next, increasing or decreasing the score for the variable. It is assumed that this happens randomly, with the error averaging zero over time; that is, the increases and decreases in error over a number of measurements even themselves out, so that we end up with the true score.
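This true-score-plus-random-error model can be illustrated with a small simulation: repeated measurements of one person scatter around the true score, and their mean converges toward it. The true score of 100 and the error standard deviation are invented for the sketch:

```python
import random

random.seed(1)

# Sketch of the observed = true + error model for one person:
# each measurement adds a fresh random error to the same true score.
true_score = 100   # hypothetical true IQ
error_sd = 3       # small random error component

measurements = [true_score + random.gauss(0, error_sd) for _ in range(10_000)]
mean_obs = sum(measurements) / len(measurements)
print(round(mean_obs, 1))  # close to the true score of 100
```

Because the errors average toward zero, the mean of many measurements lands very near the true score even though any single measurement may miss it by a few points.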
Provided that the error component within a measurement procedure is relatively small, the scores attained over a number of measurements will be relatively consistent; that is, there will be small differences in the scores between measurements. As such, we can say that the measurement procedure is reliable. Take the following example:

Measure: Intelligence, using an IQ test.
True score: Actual level of intelligence.
Error: Caused by factors including current mood, level of fatigue, general health, and luck in guessing answers to questions you don't know.
Impact of error on scores: We would expect measurements of IQ to be a few points up or down of your actual IQ, not wildly different from one measurement to the next.
By comparison, where the error component within a measurement procedure is relatively large, the scores obtained over a number of measurements will be relatively inconsistent; that is, there will be large differences in the scores between measurements. As such, we can say that the measurement procedure is not reliable. For example:

Measure: Reaction time, measured as the speed of pressing a button when a light bulb goes on.
True score: Actual reaction speed of the person.
Error: Potential for the measured time to be significantly different from one measurement to the next.
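The contrast between a small and a large error component can be sketched the same way: the spread of repeated measurements is small for the reliable measure and large for the unreliable one. All numbers below (true scores, error SDs, units) are invented for illustration:

```python
import random

random.seed(2)

def spread(true_score, error_sd, n=1000):
    """Standard deviation of n simulated measurements of one person."""
    xs = [true_score + random.gauss(0, error_sd) for _ in range(n)]
    m = sum(xs) / n
    return (sum((x - m) ** 2 for x in xs) / n) ** 0.5

# Hypothetical comparison: a reliable measure (small error component)
# versus an unreliable one (large error component).
iq_spread = spread(100, 3)    # IQ points: scores cluster tightly
rt_spread = spread(250, 60)   # reaction time in ms: scores scatter widely
print(round(iq_spread, 1), round(rt_spread, 1))
```

The first measure's scores differ by only a few points between measurements, while the second's differ by tens of milliseconds, matching the reliable/unreliable contrast described above.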
Reliability in research

Reliability, like validity, is a way of assessing the quality of the measurement procedure used to collect data in a dissertation. In order for the results from a study to be considered valid, the measurement procedure must first be reliable.
Internal validity dictates how an experimental design is structured and encompasses all of the steps of the scientific research method. Even if your results are great, sloppy and inconsistent design will compromise your integrity in the eyes of the scientific community. Internal validity and reliability are at the core of any experimental design.
Test-retest reliability is a measure of reliability obtained by administering the same test twice over a period of time to a group of individuals. The scores from Time 1 and Time 2 can then be correlated in order to evaluate the test for stability over time. If findings from research are replicated consistently, they are reliable. A correlation coefficient can be used to assess the degree of reliability. If a test is reliable, it should show a high positive correlation.
Reliability has to do with the quality of measurement. In its everyday sense, reliability is the "consistency" or "repeatability" of your measures. Before we can define reliability precisely, we have to lay the groundwork: first, you have to learn about the foundation of reliability, the true score theory of measurement. The use of reliability and validity is common in quantitative research, and it is now being reconsidered in the qualitative research paradigm. Since reliability and validity are rooted in a positivist perspective, they should be redefined for their use in a naturalistic approach.