
Analysing Likert Scale/Type Data: an Ordinal Logistic Regression Example in R



Likert items are used to measure respondents' attitudes to a particular question or statement. One must recall that Likert-type data is ordinal data: we can only say that one score is higher than another, not the distance between the points. Let us imagine we are interested in analysing responses to some assertion, answered on a Likert scale as below:

1 = Strongly disagree
2 = Disagree
3 = Neutral
4 = Agree
5 = Strongly agree

Due to the ordinal nature of the data we cannot use parametric techniques to analyse Likert-type data. Suitable nonparametric tests include the Mann-Whitney test and the Kruskal-Wallis test; suitable modelling techniques include ordered logistic regression or multinomial logistic regression. Alternatively, we can collapse the levels of the dependent variable into two levels and run a binary logistic regression.

The data consists of respondents' answers to the question of interest, their sex (Male, Female), highest post-school degree achieved (Bachelors, Masters, PhD, Other, None), and a standardised income-related score. The score column contains the numerical equivalent scores to the respondents' answers, and the nominal column relates to a binning of respondents' answers (where Strongly disagree or Disagree = 0, Neutral = 1, and Strongly agree or Agree = 2). The first 6 respondents' data are shown below:

> head(dat)
          Answer sex    degree income score nominal
1        Neutral   F       PhD            3       1
2       Disagree   F   Masters            2       0
3          Agree   F Bachelors            4       2
4 Strongly agree   F   Masters            5       2
5        Neutral   F       PhD            3       1
6       Disagree   F Bachelors            2       0

Do Males and Females answer differently?
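As a sketch of how such a response column might be set up in R (the values below are made up for illustration, not taken from the survey data), an ordered factor records the ranking of the Likert levels without asserting equal distances between them, and the nominal binning described above can be derived from the numeric scores:

```r
# Made-up Likert scores 1-5 (illustrative, not the survey data)
score <- c(3, 2, 4, 5, 3, 2)
labs  <- c("Strongly disagree", "Disagree", "Neutral",
           "Agree", "Strongly agree")

# An ordered factor keeps only the ranking of the levels,
# not the distance between them
Answer <- factor(labs[score], levels = labs, ordered = TRUE)

# Ordinal comparisons are meaningful; arithmetic on the codes is not
Answer[3] > Answer[2]   # "Agree" ranks above "Disagree": TRUE

# Collapse into the nominal bins described above:
# 0 = Strongly disagree/Disagree, 1 = Neutral, 2 = Agree/Strongly agree
nominal <- cut(score, breaks = c(0, 2, 3, 5), labels = c(0, 1, 2))
table(nominal)
```

The `ordered = TRUE` flag is what tells R (and modelling functions such as `polr` later on) to treat the levels as ranked rather than as unordered categories.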

Imagine we were interested in statistically testing whether there is a significant difference between the answering tendencies of Males and Females. Unofficially, we may conclude from the barplot below that Males seem to have a higher tendency to Strongly disagree with the assertion made, while Females seem to have a higher tendency to Strongly agree with it.

> barplot(table(dat$sex, dat$Answer), beside = T,
+         legend.text = c("Female", "Male"),
+         args.legend = list(x = 12, y = 25),
+         col = c("pink", "light blue"))

[Barplot: answer frequencies (Agree, Disagree, Neutral, Strongly disagree, Strongly agree) by sex]

To officially test for a difference in scoring tendencies between Males and Females we use a Mann-Whitney test (this is the same as a two-sample Wilcoxon test).
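Before running the test on the survey data, the rank-sum statistic behind the Mann-Whitney test can be illustrated by hand on a small made-up sample (the scores below are illustrative, not the survey data):

```r
# Toy ordinal scores for two groups (assumed example values)
f <- c(5, 4, 5, 3, 4)   # "Female" scores
m <- c(2, 1, 3, 2, 4)   # "Male" scores

# Rank the pooled scores (ties receive average ranks), then sum
# the ranks belonging to the first group
r <- rank(c(f, m))
W <- sum(r[seq_along(f)]) - length(f) * (length(f) + 1) / 2

W                           # the Mann-Whitney statistic
wilcox.test(f, m)$statistic # wilcox.test reports the same value
```

Note that the statistic depends only on the ranks, which is why the test is appropriate for ordinal data where the distances between scores are not meaningful.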

> wilcox.test(score ~ sex, data = dat)

This runs a Wilcoxon rank sum test with continuity correction (data: score by sex; alternative hypothesis: true location shift is not equal to 0), giving W = 3007 and a p-value below 0.05, hence we can reject the null hypothesis that Males and Females have the same scoring tendency at the 5% level. This is also evident from the bar chart, which indicates that far more Females answer with Strongly agree, and far more Males answer with Strongly disagree.

Do scoring tendencies differ by degree level?

If we were interested in statistically testing whether there is a significant difference between the scoring tendencies of people with different post-school degrees, we may conclude from the barplot that there is seemingly no difference in the scoring tendencies of people having achieved any one of the listed degrees.

Using a Kruskal-Wallis test we can officially test for a difference.

> barplot(table(dat$degree, dat$Answer),
+         beside = T,
+         legend.text = c("Bachelors", "Masters", "PhD", "None", "Other"))

[Barplot: answer frequencies (Agree, Disagree, Neutral, Strongly disagree, Strongly agree) by degree]

To officially test for a difference in the scoring tendencies of people with different post-school degree achievements we use a Kruskal-Wallis test.

> kruskal.test(Answer ~ degree, data = dat)

The Kruskal-Wallis rank sum test (data: Answer by degree) gives a chi-squared statistic on 4 degrees of freedom with a p-value above 0.05, hence we have no evidence to reject our null hypothesis. We are therefore likely to believe that there is no difference in scoring tendency between people with different post-school levels of education.

One way of treating this type of data, if there is a normally distributed continuous independent variable, is to flip the variables around.
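A minimal self-contained sketch of this flipped analysis, using simulated values rather than the survey data (the group names and effect size below are assumed for illustration):

```r
set.seed(42)
# Simulated continuous outcome for three answer groups
# (made-up data, not the survey)
answer <- factor(rep(c("Disagree", "Neutral", "Agree"), each = 20))
income <- rnorm(60, mean = 2) + 0.2 * (answer == "Agree")

# One-way ANOVA: does mean income differ between the answer groups?
fit <- anova(lm(income ~ answer))
fit$Df        # 2 df between groups, 57 residual
fit$`Pr(>F)`  # the p-value for the group effect
```

Here the roles are reversed: the continuous variable becomes the response and the Likert answer becomes the grouping factor, which is exactly the setup used for the income analysis below.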

Hence, to officially test for a difference in mean income between people scoring differently we use a one-way ANOVA (as the samples are independent).

> anova(lm(income ~ Answer, data = dat))
Analysis of Variance Table

Response: income
           Df  Sum Sq Mean Sq F value Pr(>F)
Answer      4   6.699 1.67468  1.8435 0.1239
Residuals 139 126.273 0.90844

The ANOVA gives us a p-value of 0.1239, hence we have no evidence to reject our null hypothesis. We are therefore likely to believe that there is no difference in the average income of people who score in each of the answer categories.

A Chi-square test can be used if we combine the data into nominal categories. This compares the observed numbers in each category with those expected (equal proportions); we assess whether any observed discrepancies (from our theory of equal proportions) can reasonably be put down to chance. The numbers in each nominal category (as described above) are shown below.

> table(dat$nominal, dat$sex)

     F  M
  0 16 14
  1 40 45
  2 28  1

> table(dat$nominal, dat$degree)

    Bachelors Masters None Other PhD
  0         6       5   11     5   3
  1         7       5   27    30  16
  2         3      11    7     4   4

Output from each Chi-square test is shown below. Initially we test whether there is a significant difference between the expected frequencies and the observed frequencies across the specified (nominal) scoring categories of the sexes. The second Chi-squared test tests whether there is a significant difference between the expected frequencies and the observed frequencies across the specified (nominal) scoring categories of people with different post-school education levels.
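As a check on the mechanics, the first of these tests can be reproduced directly from the tabulated counts above (a sketch; for tables larger than 2x2, chisq.test applies Pearson's test without continuity correction):

```r
# Observed counts of the three nominal categories by sex
# (taken from the table above)
obs <- matrix(c(16, 40, 28,    # Female column
                14, 45, 1),    # Male column
              ncol = 2,
              dimnames = list(nominal = 0:2, sex = c("F", "M")))

res <- chisq.test(obs)
res$parameter      # df = (3 - 1) * (2 - 1) = 2
res$p.value < 0.01 # significant at the 1% level
```

Building the matrix by hand makes the degrees-of-freedom calculation explicit: (rows - 1) x (columns - 1).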

> chisq.test(table(dat$nominal, dat$sex))
> chisq.test(table(dat$nominal, dat$degree))

Both are Pearson's Chi-squared tests: the first (nominal score by sex) has df = 2 and the second (nominal score by degree) has df = 8. The first Chi-squared test gives us a p-value below 0.01, hence we have a significant result at the 1% level, allowing us to reject the null hypothesis (of equal proportions). We would therefore believe that there are unequal proportions of Males and Females scoring in each of the three (nominal) categories. The second Chi-squared test gives us a p-value below 0.02, hence we have a significant result at the 2% level, allowing us to reject the null hypothesis (of equal proportions).

We would therefore believe that there are unequal proportions of people with different post-school education levels scoring in each of the three (nominal) categories.

Ordinal logistic regression

Ordinal logistic regression (ordinal regression) is used to predict an ordinal dependent variable given one or more independent variables.

> library(MASS)
> mod <- polr(Answer ~ sex + degree + income, data = dat, Hess = T)
> summary(mod)
Call:
polr(formula = Answer ~ sex + degree + income, data = dat, Hess = T)

The summary output in R gives us the estimated log-odds coefficients of each of the predictor variables (the Value, Std. Error and t value for sexM, the degree dummies and income), shown in the Coefficients section of the output, and the cut-points for the adjacent levels of the response variable (Agree|Disagree, Disagree|Neutral, Neutral|Strongly disagree and Strongly disagree|Strongly agree), shown in the Intercepts section, followed by the residual deviance.

The interpretation of an ordered log-odds coefficient is that for a one-unit increase in the predictor, the response variable level is expected to change by its respective regression coefficient on the ordered log-odds scale, while the other variables in the model are held constant.
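Since the coefficients are reported on the log-odds scale, exponentiating them gives proportional odds ratios. A minimal self-contained sketch (using simulated data, not the survey above):

```r
library(MASS)  # for polr()

set.seed(1)
# Simulated predictor and ordinal response (assumed example data)
x <- rnorm(200)
latent <- 0.8 * x + rlogis(200)
y <- cut(latent, breaks = c(-Inf, -1, 0, 1, Inf),
         labels = c("SD", "D", "A", "SA"), ordered_result = TRUE)

fit <- polr(y ~ x, Hess = TRUE)

# Exponentiated coefficients are odds ratios: the multiplicative
# change in the odds of falling in a higher response category
# per unit increase in x, holding other predictors constant
exp(coef(fit))
```

An odds ratio above 1 means higher values of the predictor shift responses toward the upper end of the scale; below 1, toward the lower end.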

