




                           CHAPTER  7

                   PARAMETRIC HYPOTHESIS TESTS



  A wide variety of commonly used parametric hypothesis tests
are available for use right on screen.  They provide quick and
easy solutions to parametric tests that are often tedious to
perform by hand or with a calculator.  The computations are
rapid and accurate.  All you need to do is enter a few items of
information and you will quickly have the answers you need.

  None of the parametric hypothesis tests will send  the  results
to  your printer.  However, all of your results are saved for you 
as you work along.  Then, when you exit  the  SPPC  you  will  be
given  the  option of reviewing your work on screen, printing it,
or storing it in a disk file of your choice.


                    ONE-SAMPLE TEST OF MEANS

  When you choose the option to conduct a one-sample test of
means, you will be conducting a t-test of the hypothesis that a
sample mean differs from 0 or from a known or presumed
population mean.  The program will ask you to enter the sample
size, the sample mean, the standard deviation computed from the
sample and corrected for bias, and the population mean.  If you
enter a population mean of 0 you will be testing the hypothesis
that the sample mean departs from zero.  If you enter any other
population mean, Mu, you will be testing the hypothesis that
Diff = Mn - Mu = 0.  You may enter raw or summary data at the
keyboard and you may conduct your hypothesis tests at the 90%,
95%, or 99% confidence level.

  The  following  are  sample outputs from the one-sample test of 
means where Mean is the sample mean, Mu is the  population  mean,
and SEM is the standard error of the mean.

      N = 7             SEM = 5.37568     
   Mean = 27.57143        t = 5.12892     
     Mu = 0.00000        p <= 0.00268     
      99% Confidence Interval
     8.76192 <= Mn <= 46.38094    

      N = 189           SEM = 0.80013     
   Mean = 35.00000        t = 43.74277    
     Mu = 0.00000        p <= 0.00000     
      99% Confidence Interval
    32.93886 <= Mn <= 37.06114    

      N = 300           SEM = 1.21244     
   Mean = 47.00000        t = -0.82479    
     Mu = 48.00000       p <= 0.20475     
      95% Confidence Interval
   44.62363 <= Diff <= 49.37637   
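
  The arithmetic behind these outputs can be sketched in a few
lines of Python.  This is an illustrative sketch only (the
function name is ours, not SPPC's); the standard deviation for
the first output is recovered from its printed SEM.

```python
import math

def one_sample_t(n, mean, sd, mu=0.0):
    """One-sample t-test from summary data.  sd is the
    bias-corrected (n - 1 denominator) sample standard
    deviation, as SPPC requests."""
    sem = sd / math.sqrt(n)      # standard error of the mean
    t = (mean - mu) / sem        # t with n - 1 degrees of freedom
    return sem, t

# First sample output above: SD recovered as 5.37568 * sqrt(7).
sem, t = one_sample_t(7, 27.57143, 5.37568 * math.sqrt(7))
```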


           TWO-SAMPLE TEST OF MEANS: DEPENDENT SAMPLES

  The  two-sample test of means for dependent samples enables you 
to compare two means that are obtained from  the  same  subjects.
It  is  often called the "direct difference t-test of means".  If 
you enter raw scores, the program will  compute  and  report  the
correlation  for  you.   If you enter summary data, you must know 
the correlation between the two variables as well as  the  sample
means,  standard  deviations  and  the  sample size.  The program
produces the difference between the two means, the standard error 
of that difference, the t-test and  its  associated  probability,
and one of three confidence intervals that you specify.

  The following are samples of outputs for the two-sample test of
means using dependent samples.

      GROUP I                   GROUP II
 N      = 34                   N      = 34         
 Mean x = 87.00000             Mean x = 95.00000   
 SDx    = 23.00000             SDx    = 28.00000   
 SEDiff = 5.07323                r = 0.34000 
    t   = -1.57691                p  <= 0.12070    
 Mean 1 - Mean 2  = -8.00000   
      99% Confidence Interval
   -21.95138 <= Diff <= 5.95138     
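
  The SEDiff and t printed above can be reproduced from the
summary data with the usual direct-difference formulas.  The
following sketch (function name ours) shows the computation:

```python
import math

def dependent_t(n, m1, sd1, m2, sd2, r):
    """Direct-difference t-test of means from summary data.
    r is the correlation between the two sets of scores."""
    # SD of the difference scores, then its standard error.
    sd_diff = math.sqrt(sd1**2 + sd2**2 - 2 * r * sd1 * sd2)
    se_diff = sd_diff / math.sqrt(n)
    t = (m1 - m2) / se_diff      # df = n - 1
    return se_diff, t

# Reproduce the sample output above.
se, t = dependent_t(34, 87.0, 23.0, 95.0, 28.0, 0.34)
```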


          TWO-SAMPLE TEST OF MEANS: INDEPENDENT SAMPLES

  The two-sample test of means using independent samples is the
good old "t-test for mean differences".  You may enter raw or
summary data from the keyboard, and the following are sample
outputs from this procedure.

      GROUP I                   GROUP II
 N      = 89                   N      = 114        
 Mean x = 54.00000             Mean x = 61.00000   
 SDx    = 28.00000             SDx    = 31.00000   
 SEDiff = 4.20442    
    t   = -1.66492                p  <= 0.04796    
 Mean 1 - Mean 2  = -7.00000   
      95% Confidence Interval
   -15.24066 <= Diff <= 1.24066     
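
  The output above follows from the pooled-variance formulas, as
this sketch (function name ours) shows:

```python
import math

def independent_t(n1, m1, sd1, n2, m2, sd2):
    """Pooled-variance t-test for two independent samples."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (m1 - m2) / se_diff
    return se_diff, t

# Reproduce the sample output above.
se, t = independent_t(89, 54.0, 28.0, 114, 61.0, 31.0)
```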


                    TEST PEARSON CORRELATIONS

  There  are six tests of correlations that have identical inputs 
and  outputs.   These  are  the  test  of  a  Pearson   r,   Phi,
Point-Biserial,   Spearman's   Rho,   partial  correlations,  and 
semi-partial correlations.  For each of  these  tests,  you  need
only enter the sample size and the value of the correlation.  The
program  does  the rest and provides you with an F-ratio, degrees 
of freedom for hypothesis (dfh), degrees  of  freedom  for  error
(dfe),   the   associated  probability  for  the  F-ratio  and  a
confidence interval for your correlation.

    N = 18        dfh =     1
    r = 0.46000   dfe =    16
    F =  4.29427   p <= 0.05229 
      95% Confidence Interval
    -0.05003 <=  r  <= 0.77973     
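
  The F-ratio in this output, and in the Phi, point-biserial,
Spearman rho, partial, and semi-partial tests that follow, comes
from the standard conversion of r to F with 1 and N - 2 degrees
of freedom.  A sketch (function name ours):

```python
def correlation_f(n, r):
    """F-ratio for testing Ho: rho = 0.  The same formula
    serves for Phi, point-biserial, Spearman rho, partial,
    and semi-partial correlations."""
    dfh, dfe = 1, n - 2
    f = (r**2 / dfh) / ((1 - r**2) / dfe)   # equals t squared
    return f, dfh, dfe

# Reproduce the sample output above.
f, dfh, dfe = correlation_f(18, 0.46)
```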


                      TEST PHI COEFFICIENT

  See the above explanation for "Test Pearson Correlation".   The
inputs  and  outputs  are identical.  The only difference is that
you will enter your Phi coefficient instead of the Pearson r. 

    N = 67        dfh =     1
    r = 0.18000   dfe =    65
    F =  2.17652   p <= 0.14120 
      95% Confidence Interval
    -0.06791 <=  r  <= 0.40698     


                 TEST POINT-BISERIAL CORRELATION

  See the above explanation for "Test Pearson Correlation".   The
inputs  and  outputs  are identical.  The only difference is that 
you will enter your Point-Biserial  correlation  instead  of  the
Pearson r. 

    N = 219       dfh =     1
    r = 0.17000   dfe =   217
    F =  6.45793   p <= 0.00552 
      95% Confidence Interval
     0.03829 <=  r  <= 0.29591     


                        TEST SPEARMAN RHO

  See the above explanation for "Test Pearson Correlation".   The
inputs  and  outputs  are identical.  The only difference is that
you will enter your Spearman Rho instead of the Pearson r. 

    N = 189       dfh =     1
    r = 0.67000   dfe =   187
    F = 152.32136   p <= 0.00000 
      95% Confidence Interval
     0.58302 <=  r  <= 0.74179     


                    TEST PARTIAL CORRELATION

  See the above explanation for "Test Pearson Correlation".   The
inputs  and  outputs  are identical.  The only difference is that
you will enter your partial correlation instead of the Pearson r. 

    N = 118       dfh =     1
    r = 0.23000   dfe =   116
    F =  6.47915   p <= 0.01178 
      95% Confidence Interval
     0.04951 <=  r  <= 0.39594     


                  TEST SEMI-PARTIAL CORRELATION

  See  the above explanation for "Test Pearson Correlation".  The 
inputs and outputs are identical.  The only  difference  is  that
you  will  enter  your  semi-partial  correlation  instead of the
Pearson r. 

    N = 56        dfh =     1
    r = 0.54000   dfe =    54
    F = 22.22812   p <= 0.00009 
      95% Confidence Interval
     0.31801 <=  r  <= 0.70586     


                   TEST MULTIPLE CORRELATIONS

  The test of a multiple correlation is used to test the null
hypothesis, Ho: R = 0.  In order to use it, you must enter the
sample size, the multiple correlation, R, and the number of
independent variables, IVs, used to obtain R.  The program will
produce the F-ratio, degrees of freedom for hypothesis (dfh),
degrees of freedom for error (dfe), the probability associated
with the F-ratio, and the confidence interval that you specify.
The  program  then produces the same information for the shrunken 
multiple correlation, and the following are sample  outputs  from
the program.

    N = 137       dfh =     3
    r = 0.43000   dfe =   133
    F = 10.05672   p <= 0.00004 
    IV's = 3           
      95% Confidence Interval
     0.27989 <=  r  <= 0.55959     

      SHRUNKEN CORRELATION

    N = 137       dfh =     3
    r = 0.40806   dfe =   133
    F = 10.05672   p <= 0.00004 
    IV's = 3           
      95% Confidence Interval
     0.25518 <=  r  <= 0.54104     
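
  The F-ratio and the shrunken (adjusted) correlation in these
outputs follow from the standard formulas, sketched below
(function name ours):

```python
import math

def multiple_r_f(n, big_r, k):
    """F-ratio for Ho: R = 0 with k independent variables,
    plus the shrunken (adjusted) multiple correlation."""
    dfh, dfe = k, n - k - 1
    r2 = big_r**2
    f = (r2 / dfh) / ((1 - r2) / dfe)
    # Adjusted R^2; assumed non-negative for this input.
    shrunken = math.sqrt(1 - (1 - r2) * (n - 1) / dfe)
    return f, shrunken

# Reproduce the sample output above.
f, shrunken = multiple_r_f(137, 0.43, 3)
```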


                       TEST CHANGE IN R^2

  The ability to test a change in a squared multiple correlation, 
R^2, can be extremely important.  In order to do that,  you  will
need  to  enter  the  degrees of freedom for hypothesis, dfh, the 
degrees of freedom for error, dfe, the change  in  R^2  that  you
wish  to  test,  and  the overall R^2 for the multiple regression 
model.  In the first example shown below, the value of  dfh  =  1
for  the  test of an increment in R^2 due to the addition of only
one variable.  In the second example, the change in R^2 was based
on the addition of three variables so the value of dfh = 3.

     dfh        = 1       
     dfe        = 218     
     R^2 Change = 0.14000 
     R^2 Total  = 0.47000 
     F-ratio    = 57.5849     
     p         <= 0.00000 


     dfh        = 3       
     dfe        = 321     
     R^2 Change = 0.26000 
     R^2 Total  = 0.37000 
     F-ratio    = 44.1587     
     p         <= 0.00006 
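
  Both F-ratios above follow from the standard increment test,
sketched here (function name ours):

```python
def r2_change_f(dfh, dfe, r2_change, r2_total):
    """F-ratio for testing an increment in R^2, using the
    error term from the full regression model."""
    return (r2_change / dfh) / ((1 - r2_total) / dfe)

# Reproduce the two sample outputs above.
f1 = r2_change_f(1, 218, 0.14, 0.47)
f2 = r2_change_f(3, 321, 0.26, 0.37)
```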


                 ONE-SAMPLE TEST OF PROPORTIONS

  The one-sample test of proportions is much like the  one-sample
test  of  means.   If  you enter a population proportion of 0 you 
will be testing the hypothesis that  your  sample  proportion  is
equal  to  0.   If  you  enter any other value for the population 
proportion, you will be testing the hypothesis that Diff =  SP-PP
=  0.   These  features  are  illustrated by the following sample
outputs. 

      SP = 0.14000          PP = 0.00000         
       N = 89              SEP = 0.03678         
       t = 2.00000          p <= 0.00051         
                                              
     95% Confidence Interval for Proportion      
                                              
           0.00000 <= SP <= 0.28000            

                                                  
      SP = 0.23000          PP = 0.27000         
       N = 121             SEP = 0.03826         
       t = 1.98000          p <= 0.29819         
                                              
     95% Confidence Interval for Proportion      
                                              
           0.19000 <= Diff <= 0.27000          
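
  The printed SEP values match the usual large-sample standard
error of a proportion.  A sketch (function name ours; the test
statistic shown is the ordinary normal-approximation z):

```python
import math

def one_sample_prop(n, sp, pp=0.0):
    """Standard error and z for a one-sample test of a
    proportion (large-sample normal approximation)."""
    sep = math.sqrt(sp * (1 - sp) / n)
    z = (sp - pp) / sep
    return sep, z

# SEP values from the two sample outputs above.
sep1, _ = one_sample_prop(89, 0.14)
sep2, _ = one_sample_prop(121, 0.23, pp=0.27)
```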


        TWO-SAMPLE TEST OF PROPORTIONS: DEPENDENT SAMPLES

  The  two-sample  test  of  proportions  for  dependent  samples 
enables you to test the hypothesis, Ho: p1 = p2.  You  need  only
enter  the two proportions and the sample size in order to obtain
the output illustrated below.

          GROUP I                   GROUP II                 
                                                         
     Prop   = 0.42000              Prop   = 0.38000          
     N      = 134                  N      = 134              

     Difference = 0.04000                                    
     t = -0.51769    p <= 0.30234                            
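
  The output does not show SPPC's exact formula.  One common
large-sample approximation for dependent proportions that uses
only the two proportions and N is sketched below (function name
ours); it agrees closely with the printed |t| (0.5182 versus
0.51769), though the small discrepancy suggests SPPC may apply a
slightly different variant.

```python
import math

def dependent_props(n, p1, p2):
    """Approximate z-test for two dependent proportions
    using only the marginal proportions and N."""
    diff = p1 - p2
    se = math.sqrt((p1 + p2 - diff**2) / n)
    return diff / se

z = dependent_props(134, 0.42, 0.38)
```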


       TWO-SAMPLE TEST OF PROPORTIONS: INDEPENDENT SAMPLES

  The two-sample test of proportions for independent samples
enables you to test the hypothesis, Ho: p1 = p2.  You need only
enter the two proportions and the two sample sizes in order to
obtain the output illustrated below.

          GROUP I                   GROUP II                 

     Prop   = 0.38000              Prop   = 0.31000          
     N      = 89                   N      = 112              
     Std Dev = 0.05025            Std Dev = 0.04479          
                                                         
     Difference     = 0.07000    SEDiff = 0.06732            
     t = 1.03989     p <= 0.14920                            
                                                         
      95% Confidence Interval for Difference             
                                                         
            -0.06194 <= Diff <= 0.20194                  
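
  The output above follows from the pooled-proportion z-test, as
this sketch (function name ours) shows:

```python
import math

def independent_props(n1, p1, n2, p2):
    """z-test for two independent proportions, using the
    pooled proportion for both standard errors."""
    pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
    sd1 = math.sqrt(pooled * (1 - pooled) / n1)
    sd2 = math.sqrt(pooled * (1 - pooled) / n2)
    se_diff = math.sqrt(sd1**2 + sd2**2)
    z = (p1 - p2) / se_diff
    return sd1, sd2, se_diff, z

# Reproduce the sample output above.
sd1, sd2, se, z = independent_props(89, 0.38, 112, 0.31)
```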


             ONE-SAMPLE TEST OF A STANDARD DEVIATION

  This test is analogous to the one-sample test of means except
that it is conducted for standard deviations.

      N = 67           Mean = 35       
     SD = 17.00000      PSD = 21.00000 
      t = -2.72373     SESD = 1.46858     
     p <= 0.00816     
         95% Confidence Interval
      -6.93715 <= Diff <= -1.06285   
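
  The printed SESD and t follow from the large-sample standard
error of a standard deviation, SD / sqrt(2N).  A sketch
(function name ours):

```python
import math

def one_sample_sd(n, sd, psd):
    """One-sample test of a standard deviation against a
    population value PSD (large-sample approximation)."""
    sesd = sd / math.sqrt(2 * n)   # SE of a standard deviation
    t = (sd - psd) / sesd
    return sesd, t

# Reproduce the sample output above.
sesd, t = one_sample_sd(67, 17.0, 21.0)
```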


         TWO-SAMPLE TEST OF VARIANCES: DEPENDENT SAMPLES

  This test is analogous to the two-sample test of means using
dependent samples except that the test is applied to variances.

          GROUP I                   GROUP II
     N      = 36                   N      = 36         
     Var X  = 127.00000            Var X  = 138.00000  
     SD  X  = 11.26943             SD  X  = 11.74734   
     Difference = Var1 - Var2  = -11.00000   
     SEDiff = 43.72152                t    = -0.25159    
        r   = 0.27000                p   <= 0.79825    
              95% Confidence Interval
          -100.27934 <= Diff <= 78.27934    


        TWO-SAMPLE TEST OF VARIANCES: INDEPENDENT SAMPLES

  This test is analogous to the two-sample test of means using
independent samples except that the test is applied to variances.

          GROUP I                   GROUP II
     N      = 168                  N      = 236        
     Var x  = 489.00000            Var x  = 512.00000  
         F  = 1.04703               p     <= 0.18867     
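
  The F in this output is simply the larger variance over the
smaller, with the matching degrees of freedom.  A sketch
(function name ours):

```python
def variance_ratio_f(n1, var1, n2, var2):
    """F-test for two independent variances: larger variance
    over smaller, returning F and its degrees of freedom."""
    if var1 >= var2:
        return var1 / var2, n1 - 1, n2 - 1
    return var2 / var1, n2 - 1, n1 - 1

# Reproduce the sample output above.
f, dfn, dfd = variance_ratio_f(168, 489.0, 236, 512.0)
```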


            TEST TWO CORRELATIONS: DEPENDENT SAMPLES

  Often one has a correlation between X and Y and between X and
Z from the same sample.  In such cases it is often desirable to
test the hypothesis that the two correlations are equal, i.e.
that r(xy) = r(xz).  This procedure enables you to do that.  You
need only enter the sample size and the two correlations.
However, you must also enter the correlation, r(yz), between the
variables Y and Z.  Remember, the test is on the hypothesis, Ho:
r(xy) = r(xz).  The following is a sample of the output from
this procedure.

 N      = 79             r(xy)  = 0.47000    
 r(xz)  = 0.52000        r(yz)  = 0.23000    
 Diff   = r(xy) - r(xz)     = -0.05000   
 z      = -0.44074           p <= 0.66481    
      95% Confidence Interval
    -2.46105 <= Diff <= 1.57223     


           TEST TWO CORRELATIONS: INDEPENDENT SAMPLES

  Often  one has two correlations for the same variables but from
independent samples.  In such cases it is often desirable to test 
the  hypothesis  that  the  two  correlations  are  equal.   This
procedure enables you to do that.  You need only enter the sample
size  and  the  correlation for each of the two groups or samples
and the following is a sample of the output from this procedure.

          GROUP I               GROUP II
        N   = 89                  N   = 131        
        r   = 0.46000             r   = 0.37000    
        Z   = 0.49731             Z   = 0.38842    
 Difference = 0.10889    
        t   = 0.78096             p  <= 0.21741    
              95% Confidence Interval
            -0.16439 <= Diff <= 0.38217     
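
  The Z values and test statistic above come from Fisher's
r-to-z transformation, as this sketch (function name ours)
shows:

```python
import math

def two_independent_rs(n1, r1, n2, r2):
    """Fisher z-test for Ho: rho1 = rho2 in two independent
    samples."""
    z1 = math.atanh(r1)          # Fisher's r-to-z transform
    z2 = math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    return z1, z2, z

# Reproduce the sample output above.
z1, z2, z = two_independent_rs(89, 0.46, 131, 0.37)
```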


                         OMNIBUS R TEST

  The Omnibus R Test enables you to enter an entire correlation
matrix from the keyboard and then test the hypothesis that there
are no significant correlations within the entire matrix.  In
other words, one is never justified in analyzing a correlation
matrix using, for example, multivariate procedures unless one
can first reject the null hypothesis, Ho: R = 0.  The Omnibus R
Test provides such a test, and the following is a sample of the
output from this procedure for a 4x4 correlation matrix.

 N    = 367             df   = 6          
 Chi-Square  = 571.82834   
          p <= 0.00000    
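
  The manual does not show the matrix behind this output or the
exact computation.  The standard omnibus test of a correlation
matrix is Bartlett's chi-square test of sphericity, whose
degrees of freedom, p(p-1)/2, match the printed df = 6 for a 4x4
matrix.  A sketch using a purely hypothetical matrix (function
names and the matrix are ours):

```python
import math

def det(m):
    """Determinant via Gaussian elimination with partial
    pivoting; adequate for a small correlation matrix."""
    m = [row[:] for row in m]
    n = len(m)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= factor * m[i][c]
    return d

def omnibus_r_chi2(n, corr):
    """Bartlett's test of sphericity for Ho: no correlations
    in the matrix (R is an identity matrix)."""
    p = len(corr)
    df = p * (p - 1) // 2
    chi2 = -((n - 1) - (2 * p + 5) / 6) * math.log(det(corr))
    return chi2, df

# Hypothetical 4x4 correlation matrix (the one behind the
# output above is not shown in the manual).
R = [[1.00, 0.50, 0.40, 0.30],
     [0.50, 1.00, 0.35, 0.25],
     [0.40, 0.35, 1.00, 0.20],
     [0.30, 0.25, 0.20, 1.00]]
chi2, df = omnibus_r_chi2(367, R)
```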

