
Validity and reliability in quantitative studies




Roberta Heale,¹ Alison Twycross²

Evidence-based practice includes, in part, implementation of the findings of well-conducted quality research studies. So being able to critique quantitative research is an important skill for nurses. Consideration must be given not only to the results of the study but also to the rigour of the research. Rigour refers to the extent to which the researchers worked to enhance the quality of the studies. In quantitative research, this is achieved through measurement of the validity and reliability. Validity is defined as the extent to which a concept is accurately measured in a quantitative study. For example, a survey designed to explore depression but which actually measures anxiety would not be considered valid.

The second measure of quality in a quantitative study is reliability, or the accuracy of an instrument; in other words, the extent to which a research instrument consistently has the same results if it is used in the same situation on repeated occasions. A simple example of validity and reliability is an alarm clock that rings at 7:00 each morning, but is set for 6:30. It is very reliable (it consistently rings at the same time each day), but is not valid (it is not ringing at the desired time). It is important to consider the validity and reliability of the data collection tools (instruments) when either conducting or critiquing research. There are three major types of validity. These are described in table 1.

The first category is content validity. This category looks at whether the instrument adequately covers all the content that it should with respect to the variable.

In other words, does the instrument cover the entire domain related to the variable, or construct, it was designed to measure? In an undergraduate nursing course with instruction about public health, an examination with content validity would cover all the content in the course, with greater emphasis on the topics that had received greater coverage or more depth. A subset of content validity is face validity, where experts are asked their opinion about whether an instrument measures the concept intended.

Construct validity refers to whether you can draw inferences about test scores related to the concept being studied. For example, if a person has a high score on a survey that measures anxiety, does this person truly have a high degree of anxiety? In another example, a test of knowledge of medications that requires dosage calculations may instead be testing maths knowledge. There are three types of evidence that can be used to demonstrate a research instrument has construct validity:
1. Homogeneity, meaning that the instrument measures one construct.
2. Convergence, which occurs when the instrument measures concepts similar to those of other instruments. If there are no similar instruments available, this will not be possible to do.
3. Theory evidence, which is evident when behaviour is similar to theoretical propositions of the construct measured in the instrument.

For example, when an instrument measures anxiety, one would expect to see that participants who score high on the instrument for anxiety also demonstrate symptoms of anxiety in their day-to-day lives.

The final measure of validity is criterion validity. A criterion is any other instrument that measures the same variable. Correlations can be conducted to determine the extent to which the different instruments measure the same variable. Criterion validity is measured in three ways:
1. Convergent validity shows that an instrument is highly correlated with instruments measuring similar variables.
2. Divergent validity shows that an instrument is poorly correlated with instruments that measure different variables. In this case, for example, there should be a low correlation between an instrument that measures motivation and one that measures self-efficacy.
3. Predictive validity means that the instrument should have high correlations with future criteria. For example, a score of high self-efficacy related to performing a task should predict the likelihood of a participant completing the task.

Reliability relates to the consistency of a measure.
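Before turning to reliability, the correlation checks for criterion validity above can be sketched in code. This is a minimal illustration, assuming entirely hypothetical scores from the same ten participants on a new motivation scale, an established motivation instrument (similar variable) and a self-efficacy scale (different variable); the variable names and data are not from the article.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical total scores from the same ten participants on three instruments.
new_motivation = [12, 15, 9, 20, 14, 18, 11, 16, 13, 19]
old_motivation = [13, 16, 10, 19, 15, 17, 10, 17, 12, 20]  # measures a similar variable
self_efficacy  = [6, 7, 5, 6, 8, 5, 7, 6, 8, 5]            # measures a different variable

# Convergent validity: high correlation with the instrument measuring a similar variable.
print(f"convergent r = {pearson_r(new_motivation, old_motivation):.2f}")
# Divergent validity: low correlation with the instrument measuring a different variable.
print(f"divergent r = {pearson_r(new_motivation, self_efficacy):.2f}")
```

With this toy data the new scale correlates strongly with the established motivation instrument and only weakly with the self-efficacy scale, the pattern convergent and divergent validity look for.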

A participant completing an instrument meant to measure motivation should have approximately the same responses each time the test is completed. Although it is not possible to give an exact calculation of reliability, an estimate of reliability can be achieved through different measures. The three attributes of reliability are outlined in table 2. How each attribute is tested for is described below.

Table 1 Types of validity
Content validity: the extent to which a research instrument accurately measures all aspects of a construct.
Construct validity: the extent to which a research instrument (or tool) measures the intended construct.
Criterion validity: the extent to which a research instrument is related to other instruments that measure the same variables.

¹ School of Nursing, Laurentian University, Sudbury, Ontario, Canada
² Faculty of Health and Social Care, London South Bank University, London, UK
Correspondence to: Dr Roberta Heale, School of Nursing, Laurentian University, Ramsey Lake Road, Sudbury, Ontario, Canada
Evid Based Nurs July 2015 | volume 18 | number 3 | Research made simple. First published 15 May 2015.

Homogeneity (internal consistency) is assessed using item-to-total correlation, split-half reliability, the Kuder-Richardson coefficient and Cronbach's α. In split-half reliability, the results of a test, or instrument, are divided in half. Correlations are calculated comparing both halves. Strong correlations indicate high reliability, while weak correlations indicate the instrument may not be reliable. The Kuder-Richardson test is a more complicated version of the split-half test. In this process the average of all possible split-half combinations is determined and a correlation between 0 and 1 is generated. This test is more accurate than the split-half test, but can only be completed on questions with two answers (eg, yes or no, 0 or 1).
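A minimal sketch of the two internal-consistency checks just described, using hypothetical yes/no answers scored 1/0 (rows are participants, columns are items). The odd/even split and the function names are illustrative choices, not from the article.

```python
# Hypothetical answers: each row is one participant, each column one yes/no (1/0) item.
answers = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
]

def split_half(rows):
    """Each participant's total on odd-numbered vs even-numbered items;
    correlating these two lists of totals gives the split-half reliability."""
    return [sum(r[0::2]) for r in rows], [sum(r[1::2]) for r in rows]

def kr20(rows):
    """Kuder-Richardson 20 coefficient for dichotomous (0/1) items,
    equivalent to averaging over all possible split-half combinations."""
    n, k = len(rows), len(rows[0])
    totals = [sum(r) for r in rows]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # p = proportion answering 1 on each item; q = 1 - p
    sum_pq = 0.0
    for j in range(k):
        p = sum(r[j] for r in rows) / n
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

print(f"KR-20 = {kr20(answers):.2f}")
```

Because every item is binary, KR-20 applies here; an instrument with multi-response items would need Cronbach's α instead, as the next section describes.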

Cronbach's α is the most commonly used test to determine the internal consistency of an instrument. In this test, the average of all correlations in every combination of split-halves is determined. Instruments with questions that have more than two responses can be used in this test. The Cronbach's α result is a number between 0 and 1. An acceptable reliability score is one that is 0.7 or higher.

Stability is tested using test-retest and parallel or alternate-form reliability testing. Test-retest reliability is assessed when an instrument is given to the same participants more than once under similar circumstances. A statistical comparison is made between participants' test scores for each of the times they have completed it. This provides an indication of the reliability of the instrument.
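Cronbach's α as described above can be computed from the item variances and the variance of the total scores. A minimal sketch, assuming hypothetical responses from six participants on a four-item, five-point scale; the data are invented for illustration.

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """Cronbach's alpha: rows are participants, columns are item responses.
    Unlike KR-20, items may have more than two possible responses."""
    k = len(rows[0])
    sum_item_vars = sum(variance([r[j] for r in rows]) for j in range(k))
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical responses: six participants, four Likert items scored 1-5.
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 1],
    [4, 4, 5, 4],
]

alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")  # an acceptable score is commonly taken as 0.7 or higher
```

Here the four items track each other closely, so α comes out well above the conventional 0.7 threshold; items measuring unrelated things would pull it down.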

Parallel-form reliability (or alternate-form reliability) is similar to test-retest reliability, except that a different form of the original instrument is given to participants in subsequent tests. The domain, or concepts, being tested are the same in both versions of the instrument, but the wording of items is different. For an instrument to demonstrate stability there should be a high correlation between the scores each time a participant completes the test. Generally speaking, a correlation coefficient of less than 0.3 signifies a weak correlation, 0.3-0.5 is moderate and greater than 0.5 is strong.

Equivalence is assessed through inter-rater reliability. This test includes a process for qualitatively determining the level of agreement between two or more observers. A good example of the process used in assessing inter-rater reliability is the scores of judges for a skating competition.

The level of consistency across all judges in the scores given to skating participants is the measure of inter-rater reliability. An example in research is when researchers are asked to give a score for the relevancy of each item on an instrument. Consistency in their scores relates to the level of inter-rater reliability.

Assessment of how rigorously the issues of reliability and validity have been addressed in a study is an essential component in the critique of research, as well as influencing the decision about whether to implement the study findings into nursing practice. In quantitative studies, rigour is determined through an evaluation of the validity and reliability of the tools or instruments utilised in the study. A good quality research study will provide evidence of how all these factors have been addressed.
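The inter-rater agreement described above can also be quantified. A simple sketch computes the proportion of items on which two raters agree, plus Cohen's kappa, a common chance-corrected agreement statistic that the article itself does not name; the raters, labels and data are hypothetical.

```python
# Two raters score the relevancy of ten instrument items as "R" (relevant) or "N" (not).
rater_a = ["R", "R", "N", "R", "N", "R", "R", "N", "R", "R"]
rater_b = ["R", "R", "N", "N", "N", "R", "R", "N", "R", "R"]

def percent_agreement(a, b):
    """Proportion of items on which the two raters gave the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for agreement expected by chance."""
    n = len(a)
    observed = percent_agreement(a, b)
    labels = set(a) | set(b)
    expected = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (observed - expected) / (1 - expected)

print(f"agreement = {percent_agreement(rater_a, rater_b):.2f}")
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```

Raw agreement alone can overstate reliability when one label dominates, which is why a chance-corrected statistic such as kappa is often reported alongside it.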

This will help you to assess the validity and reliability of the research and help you decide whether or not you should apply the findings in your area of clinical practice.

Follow Roberta Heale at @robertaheale and Alison Twycross at @alitwy

Competing interests: None.

Table 2 Attributes of reliability
Homogeneity (or internal consistency): the extent to which all the items on a scale measure one construct.
Stability: the consistency of results using an instrument with repeated testing.
Equivalence: consistency among responses of multiple users of an instrument, or among alternate forms of an instrument.

