Interpreting SPSS Correlation Output



Correlations estimate the strength of the linear relationship between two (and only two) variables. Correlation coefficients range from -1 (a perfect negative correlation) to +1 (a perfect positive correlation). The closer a correlation coefficient gets to -1 or +1, the stronger the correlation; the closer it gets to zero, the weaker the correlation between the two variables. Ordinal or ratio data (or a combination) must be used; the types of correlations we study do not use nominal data. SPSS permits calculation of many correlations at a time and presents the results in a correlation matrix. A sample correlation matrix is given below. The variables are:

Optimism: Compared to now, I expect that my family will be better off financially a year from now.
Life Satisfaction: Overall, life is good for me and my family right now.
Entrepreneurial Interest: I am interested in starting a business or investing in a business in the next six months.
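Before turning to the SPSS output, it can help to see the same computation done outside SPSS. Below is a minimal sketch of building a correlation matrix with pandas; the column names and responses are hypothetical stand-ins for the three survey items above, not data from this study.

```python
# A minimal sketch of a correlation matrix like the one SPSS produces.
# The data and column names are hypothetical stand-ins for the three
# survey items described above.
import pandas as pd

df = pd.DataFrame({
    "optimism":     [4, 3, 5, 2, 4, 3, 5, 1, 4, 2],  # family better off in a year
    "satisfaction": [4, 3, 4, 2, 5, 3, 4, 2, 4, 1],  # life is good right now
    "interest":     [3, 2, 4, 1, 3, 2, 5, 1, 3, 2],  # interested in starting a business
})

# Pearson correlations for every pair of variables. The diagonal is 1
# because each variable correlates perfectly with itself.
print(df.corr(method="pearson"))
```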

All measures were recorded on five-point Likert scales anchored by Strongly Disagree (1) and Strongly Agree (5).

Correlations

                                                     Family's income   Life is good   Consider
                                                     will improve      right now      business
Family's income will improve   Pearson Correlation   1                 .494**         .248**
                               Sig. (2-tailed)                         .000           .000
                               N                     491               488            469
Life is good right now         Pearson Correlation   .494**            1              .228**
                               Sig. (2-tailed)       .000                             .000
                               N                     488               501            475
Consider business              Pearson Correlation   .248**            .228**         1
                               Sig. (2-tailed)       .000              .000
                               N                     469               475            479

The correlation coefficient for Optimism and Life Satisfaction is .494; for survey-scale data this is pretty large. The number of respondents in the sample answering both items is 488. The p-value for this correlation coefficient is .000. It's not technically zero; SPSS simply does not give p-values to more than three decimal places. Notice the diagonal of ones: these are the perfect correlations between variables and themselves. The matrix is symmetrical on either side of the diagonal, meaning all correlations are given twice.

The statistical hypothesis test for this p-value is:

H0: There is no significant relationship between Optimism and Life Satisfaction.
Ha: There is a statistically significant relationship between Optimism and Life Satisfaction.

Because p < .05, reject the null of no relationship and conclude that the relationship is statistically significant.
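For a single pair of variables, the coefficient and its two-tailed p-value can be reproduced with scipy. A minimal sketch, again using hypothetical responses rather than the study's data:

```python
# A sketch of the hypothesis test SPSS reports for one cell of the matrix.
# scipy.stats.pearsonr returns the coefficient and its two-tailed p-value.
# The arrays are hypothetical stand-ins for the two survey items.
from scipy.stats import pearsonr

optimism     = [4, 3, 5, 2, 4, 3, 5, 1, 4, 2]
satisfaction = [4, 3, 4, 2, 5, 3, 4, 2, 4, 1]

r, p = pearsonr(optimism, satisfaction)
print(f"r = {r:.3f}, two-tailed p = {p:.3f}")

# Same decision rule as in the text: reject H0 when p < .05.
if p < 0.05:
    print("Reject H0: the relationship is statistically significant.")
else:
    print("Do not reject H0: no significant relationship detected.")
```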

Interpreting SPSS ANOVA Output

Analysis of variance (ANOVA) tests for differences in the mean of a variable across two or more groups. The dependent (Y) variable is always ordinal or ratio data, while the independent (X) variable is always nominal data (or other data that's converted to be nominal). With ANOVA, the independent variable can have as many levels as desired. A sample of SPSS ANOVA output follows. The variables in this example are:

Entrepreneurial Interest (Y): I am interested in starting a business or investing in a business in the next six months.
State of Residence (X): Florida, Nevada, Texas

Descriptive Statistics
Dependent Variable: Consider starting your own business

data collection location   Mean   Std. Deviation   N
Nevada                                             29
Florida                                            49
Texas                                              50
Total                                              128

This table simply provides the means, standard deviations, and group sizes for the dependent variable for all three levels of the independent variable.
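The same kind of descriptive table can be sketched with pandas' groupby. The data below are hypothetical, with group sizes chosen freely rather than matching the SPSS output:

```python
# A sketch of the descriptive-statistics table: mean, standard deviation,
# and N of the dependent variable at each level of the nominal independent
# variable. The responses are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "location": ["Nevada"] * 4 + ["Florida"] * 4 + ["Texas"] * 4,
    "interest": [2, 1, 2, 3,   4, 3, 4, 5,   4, 4, 3, 5],
})

desc = df.groupby("location")["interest"].agg(["mean", "std", "count"])
print(desc)
```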

Tests of Between-Subjects Effects
Dependent Variable: Consider starting your own business

Source            Type III Sum of Squares   df    Mean Square   F   Sig.
Corrected Model                             2                       .000
Intercept                                   1                       .000
location                                    2                       .000
Error                                       125
Total                                       128
Corrected Total                             127

This table gives the main effects of the ANOVA test. This is where you look first to see if any significant differences exist in the dependent variable between levels of the independent variable. For this ANOVA application, the only Source variable we're interested in is our independent variable, location. If you follow across to the right to the Sig. column, you find the p-value for this hypothesis test. That test is:

H0: There is no difference between the three states in respondent interest in starting a business.
Ha: At least one state will differ from the others in interest in starting a business.

The alternative hypothesis is worded this way because there are three levels of the independent variable. The p-value indicates that the null should be rejected, but it does not say how the states differ.
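The omnibus F test itself can be sketched with scipy.stats.f_oneway, which returns the F statistic and the p-value SPSS prints in the Sig. column. The three groups below are hypothetical:

```python
# A sketch of the between-subjects (one-way) F test. f_oneway returns
# the F statistic and its p-value; the three groups are hypothetical.
from scipy.stats import f_oneway

nevada  = [2, 1, 2, 3, 2]
florida = [4, 3, 4, 5, 3]
texas   = [4, 4, 3, 5, 4]

f_stat, p = f_oneway(nevada, florida, texas)
print(f"F = {f_stat:.2f}, p = {p:.4f}")
# As in the text: p < .05 says at least one state's mean differs,
# but not which one; that requires pairwise follow-up tests.
```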

We use multiple group comparisons to do that (even though in this case it's pretty obvious). There are many ways to calculate multiple difference tests. The one used below is called the Least Significant Difference (LSD) test. It compares all possible pairs of levels of the independent variable and tests each for significance in a way that controls what's referred to as the experiment-wide error rate. Note that all information is given twice; that is, Texas is compared to Florida and then Florida is compared to Texas. It can get confusing.

Multiple Comparisons
Consider starting your own business
LSD

                                    Mean Difference                             95% Conf. Interval
(I) location     (J) location      (I-J)              Std. Error   Sig.   Lower Bound   Upper Bound
Nevada           Florida                *              .351         .000
                 Texas                  *              .350         .001
Florida          Nevada                 *              .351         .000   .68
                 Texas             .21                 .301         .483                 .81
Texas            Nevada                 *              .350         .001   .47
                 Florida                               .301         .483                 .38

*. The mean difference is significant at the .05 level.

The hypothesis test for each pairwise comparison is the same:

H0: There is no difference between the two states being compared.
Ha: There is a difference between the two states being compared.
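SPSS computes these LSD comparisons from the pooled ANOVA error term rather than from each pair's own variances. Below is a minimal sketch of that logic using the same hypothetical groups as above; it is an illustration of the idea, not SPSS's exact implementation:

```python
# A sketch of LSD-style pairwise comparisons: each pair of groups is
# tested with a t statistic that uses the pooled ANOVA error term (MSE)
# rather than only the two groups' own variances. Hypothetical data.
from itertools import combinations
import numpy as np
from scipy.stats import t

groups = {
    "Nevada":  [2, 1, 2, 3, 2],
    "Florida": [4, 3, 4, 5, 3],
    "Texas":   [4, 4, 3, 5, 4],
}

# Pooled mean square error from the one-way ANOVA (error df = N - k).
n_total = sum(len(g) for g in groups.values())
k = len(groups)
sse = sum(((np.array(g) - np.mean(g)) ** 2).sum() for g in groups.values())
mse = sse / (n_total - k)

for (a, xa), (b, xb) in combinations(groups.items(), 2):
    diff = np.mean(xa) - np.mean(xb)
    se = np.sqrt(mse * (1 / len(xa) + 1 / len(xb)))
    t_stat = diff / se
    p = 2 * t.sf(abs(t_stat), df=n_total - k)  # two-tailed p-value
    print(f"{a} vs {b}: diff = {diff:.2f}, SE = {se:.3f}, p = {p:.3f}")
```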

There are three possible comparisons:

Nevada with Florida. The p-value is .000, meaning reject the null and conclude the states differ.
Nevada with Texas. The p-value is .001, meaning reject the null and conclude the states differ.
Florida with Texas. The p-value is .483, meaning do not reject the null; conclude the states don't differ.

Thus, respondents in Texas and Florida don't significantly differ in their interest in starting a business. However, these two states significantly differ from Nevada. Check the descriptive statistics and you'll see that the mean values show how: there's significantly less interest among Nevada respondents in starting a business than among respondents from Florida and Texas, who are equally interested.

