Significance testing is fundamental in identifying whether a relationship exists between two or more variables in psychology research. It works by estimating the probability that the effect shown in the data arose by chance rather than from a real connection.

The ‘p’ value in significance testing indicates the probability of obtaining the observed effect (or a larger one) if chance alone were at work. When the p value is small, it suggests the effect is unlikely to have been caused by chance, pointing to a real connection between the variables and giving the conclusions we draw from the data higher validity.

The most commonly agreed threshold in significance testing is a p value of 0.05. If the p value is less than 5% (p < 0.05), we identify the result as statistically significant; if it is more than 5% (p > 0.05), we identify it as statistically non-significant. However, it is worth knowing that the threshold can vary depending on how strictly the experimenter wishes to define significance. In some cases, experimenters may still consider p < 0.1 statistically significant.
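As a rough illustration of the idea above (not part of the original post), the comparison of a p value against the 5% threshold can be sketched with a simple permutation test in plain Python. The data and the function name here are made up purely for the example; the test asks how often shuffled (chance-only) groupings produce a difference in means at least as large as the one observed.

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=42):
    """Estimate a two-sided p-value for the difference in group means
    by shuffling the pooled scores and counting how often chance alone
    produces a difference at least as extreme as the observed one."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical scores for two conditions (invented data for illustration).
control = [4, 5, 6, 5, 4, 5, 6, 4]
treatment = [7, 8, 6, 9, 7, 8, 7, 8]

p = permutation_p_value(control, treatment)
significant = p < 0.05  # the conventional 5% threshold discussed above
```

With these clearly separated groups the estimated p value falls well under 0.05, so the result would be called statistically significant at the conventional threshold.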

Significance testing is very important in research as it helps indicate whether the data are reliable and whether it is appropriate to draw conclusions from them.


Significance testing is extremely important, I agree. Without it we would never know whether our results are due to chance. It is even more important in the world of medication: if a drug is thought to work but no statistical test has been used, the apparent effectiveness of the drug may be entirely due to chance. In these circumstances the p value must be much lower, so that doctors can be confident the drug really works and is effective at doing the job it is meant to do.

Significance testing is indeed a clear method for determining the nature of a set of results, and their indicated usefulness within the field of Psychology. It is regarded as a universal system that has been adopted to give a uniform assessment of the contributions of chance, and whether the results can be deemed as ‘true’.

However, it is also a system which is incredibly flawed.

Multiple levels of significance exist, although the most commonly used is the 5% (0.05) level. Misuse of the significance level by researchers is an ever-present threat; for example, changing the agreed level of significance to manipulate the statistical significance of their research. After all, publication bias dictates that research with a statistically significant result is much more likely to be published than research with non-significant results (see the file drawer problem). Similarly, statistical significance does not indicate the practical significance of the results. There is also the problem of outliers, specifically whether they are included, which would undoubtedly influence the statistical significance of the research.

Rigorous control of the experimental design can also influence the statistical significance. Such constraints minimise the degree to which outliers or extraneous variables can distort the significance test. Generally, the larger the sample size, the less impact distant outliers have on the statistical significance; conversely, in a small sample a single outlier can mask the ‘true’ significance of the research.
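The point about sample size can be made concrete with a tiny, deterministic sketch (the numbers are invented for illustration): a single extreme outlier drags the mean of a small sample far more than the mean of a large one.

```python
# How much does one extreme outlier shift the sample mean?
# Invented example: scores clustered at 10, plus one outlier of 100.
def mean(xs):
    return sum(xs) / len(xs)

small = [10.0] * 9 + [100.0]    # n = 10, one outlier
large = [10.0] * 99 + [100.0]   # n = 100, same single outlier

shift_small = mean(small) - 10.0   # shift of the mean in the small sample
shift_large = mean(large) - 10.0   # shift of the mean in the large sample
```

Here the outlier shifts the small sample's mean by 9.0 but the large sample's mean by only 0.9, which is why outliers have far more leverage over significance tests run on small samples.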

Finally, due to the stringent nature of the measure, a result can be classed as statistically non-significant even if it falls minutely over the 0.05 limit (if that is the limit being adhered to). Of course, the 0.05 cut-off is simply not appropriate in certain tests, where the contributions of chance are negligible. Additionally, as you mention, it is not uncommon for researchers to interpret the significance of their research as they see fit, regardless of the results of significance testing.

To outline the matter: much of this comes down to the integrity of the tester, their purpose, and whether their goal is to achieve a preferred outcome. It is also worth asking whether the tester's own opinions, observational skills, and training should themselves be examined. Statistics can end up being viewed not in general or practical terms but in terms of whom they favour, leaving the result to chance, or inclined towards chance.