STATISTICAL GUIDELINES
JOURNAL OF VETERINARY INTERNAL MEDICINE

Drawing appropriate conclusions from experimental and observational studies depends on many factors, including correct experimental design and statistical analysis of the data. Authors should ensure that the statistical analyses they perform are appropriate for the hypotheses being tested and the data being examined. A number of excellent textbooks are available as resources for study design, data presentation, statistical analysis, and the reporting of the results of these analyses in the biological sciences.

Study design: Designing the study is the most critical step in any investigation, and the design of clinical investigations (observational studies) tends to be a much more complicated process than the design of classical experiments. Poor study design cannot be corrected through statistical analysis of the data, and flawed study design is the most common reason for final rejection of papers by journal editors. Investigators should make a regular practice of consulting appropriate references on study design and statistical analysis when planning a new investigation. Consultation with individuals who have a comprehensive understanding of the strengths and weaknesses of the various study designs, of statistical analyses, and of the biological relationships under examination is essential during the design phase of the study. Epidemiologists with a strong interest in quantitative analysis and biostatisticians frequently fill this role. A priori hypotheses (those formulated before the data are collected) should be stated and clearly differentiated from those formulated after the data have been collected. Bias in sample estimates will be reduced by formal random assignment of animals to groups and by masked (blinded) evaluation of each outcome; the potential impact on study validity of any failure to adhere to these principles should be addressed by the author. Estimates of the sample sizes required to achieve appropriate study power should be undertaken during study design, not after the statistical analysis has been completed (an illustrative sample size calculation follows the numbered list below).

Reporting statistical methods and results: A comprehensive presentation of guidelines for reporting statistical methods and results may be obtained from publications dealing directly with these issues. What follows is a list of points for consideration by authors submitting to the journal, based on issues that are commonly raised during the review process.

1. Statistical software, and complex or unusual statistical methods beyond t-tests, correlation, chi-square, stratified analysis, analysis of variance, regression, and survival analysis, should be referenced.
2. Wherever possible, the data and main findings of the study should be presented in the form of informative, carefully designed, and labeled tables and graphs.
3. If the number of observations is small, all of the data should be presented.
4. Appropriate descriptive statistics should be provided for each variable under consideration. Data for continuous variables should be summarized with the number of observations, a measure of central tendency (mean or median, not both), and a measure of variability (standard deviation, range, or interpercentile ranges such as deciles or quartiles), as appropriate for the data. The number of animals in each group and category should be provided for categorical variables. Cut-points used to create categorical variables from continuous data need to be explained and justified. (An example summary appears after this list.)
5. The standard error of the mean (SEM) is a hypothetical quantity, not a descriptive statistic, and should not be represented as such. Standard error bars should not be used in the graphical presentation of means; this practice is misleading because it suggests that the data are less variable than they are.
6. The number of significant figures in a value provides an indication of the reliability and precision of the measurement. Care should be taken that reported values do not imply unreasonable precision through the use of excessive significant figures. For instance, reporting serum creatinine concentration to 3 decimal places (1.175 mg/dl) implies greater precision than is the case. Similarly, reporting white blood cell counts to 5 significant figures (11,575 cells/ul) suggests greater precision than is achieved or warranted. Reporting of most values should be limited to 3 significant figures. A source of detailed instructions for reporting significant figures is provided in the suggested reading material listed below. (A rounding and P value sketch follows this list.)
7. Calculation of percentages to summarize small samples (fewer than 20 observations) is uninformative and should be avoided.
8. When percentages are reported, both the numerator and the denominator should be given.
9. The 95% confidence limits should be provided to indicate the precision of each estimate. (An example combining points 8 and 9 appears after this list.)
10. All P values should be reported to two significant digits (e.g., P=0.032), whether or not the value is statistically significant. Values less than P=0.001 should be reported as P<0.001. The practice of reporting P values only as less than or greater than an arbitrary threshold (e.g., P<0.05) should be avoided.
11. Simple models should be reported as a formula (e.g., the formula for the regression line).
12. When multiple hypotheses are tested, such as in the analysis of prospective or retrospective epidemiological studies, report the final and any competing models in a table that includes the number of observations, coefficients, standard errors, confidence limits, test statistics, odds ratios (for stratified analysis or logistic regression), and P values.
13. Clinical investigators frequently test for differences between groups (e.g., treatment and control groups) with respect to a list of characteristics (e.g., a panel of hematological and biochemical tests) or risk factors. If this is done without an a priori hypothesis, using an approach such as multiple t-tests, the problem of multiple testing is encountered. A variety of strategies are available for dealing with this problem, but it is preferable that the problem and the strategy be identified during the design phase of the study. Examples of appropriate strategies include the use of the Bonferroni correction and more conservative (smaller) critical P values. (A brief illustration follows this list.)
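
As an illustration of a priori sample size estimation, the following is a minimal sketch (in Python) for comparing two group means using the common normal approximation; the effect size, alpha, and power shown are assumptions chosen purely for illustration, and dedicated software or a biostatistician should be consulted for an actual study.

    # Approximate animals per group for a two-sided, two-sample comparison of means.
    # Illustrative only: effect size, alpha, and power are assumed values.
    from math import ceil
    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        z = NormalDist()                      # standard normal distribution
        z_alpha = z.inv_cdf(1 - alpha / 2)    # critical value for the two-sided test
        z_beta = z.inv_cdf(power)             # quantile corresponding to the desired power
        return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    # To detect a standardized difference of 0.8 SD with 80% power at alpha = 0.05:
    print(n_per_group(0.8))   # about 25 animals per group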
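
As an example of points 4 and 5, the short sketch below (with invented creatinine values) reports n, a measure of central tendency, and a measure of variability, and shows why the SEM understates the spread of the data.

    # Descriptive summary of a continuous variable; values are hypothetical.
    from statistics import mean, median, stdev, quantiles
    from math import sqrt

    creatinine = [1.1, 1.3, 0.9, 1.6, 1.2, 1.0, 1.4, 1.8, 1.1, 1.3]  # mg/dl

    n = len(creatinine)
    sd = stdev(creatinine)                  # standard deviation: describes the data
    sem = sd / sqrt(n)                      # standard error of the mean: describes the estimate, not the data
    q1, _, q3 = quantiles(creatinine, n=4)  # quartiles, an alternative for skewed data

    print(f"n = {n}, mean = {mean(creatinine):.2f}, SD = {sd:.2f} mg/dl")
    print(f"median = {median(creatinine):.2f}, IQR = {q1:.2f}-{q3:.2f} mg/dl")
    print(f"SEM = {sem:.2f}  (not a descriptive statistic; do not plot as error bars)")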
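
Points 6 and 10 amount to simple rounding rules. A sketch of both follows, using Python's general-purpose number formatting; the helper names and example values are our own.

    # Limit reported values to a sensible number of significant figures (point 6)
    # and report P values to two significant digits (point 10).
    def sig_figs(x, figures=3):
        return f"{x:.{figures}g}"

    def format_p(p):
        return "P<0.001" if p < 0.001 else f"P={p:.2g}"

    print(sig_figs(1.175), "mg/dl")     # -> 1.18 mg/dl, not 1.175
    print(sig_figs(11575), "cells/ul")  # -> 1.16e+04, i.e. about 11,600 cells/ul
    print(format_p(0.0321))             # -> P=0.032
    print(format_p(0.00004))            # -> P<0.001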
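
Points 8 and 9 can be combined in a single statement: report the numerator, the denominator, the percentage, and its 95% confidence limits. The counts below are hypothetical, and the interval uses the simple normal (Wald) approximation, one of several acceptable methods.

    # Percentage with numerator, denominator, and 95% confidence limits.
    from math import sqrt
    from statistics import NormalDist

    events, total = 14, 38                  # e.g. 14 of 38 dogs responded (invented counts)
    p = events / total
    z = NormalDist().inv_cdf(0.975)         # ~1.96 for a 95% interval
    half_width = z * sqrt(p * (1 - p) / total)

    print(f"{events}/{total} ({100 * p:.0f}%; "
          f"95% CI {100 * (p - half_width):.0f}%-{100 * (p + half_width):.0f}%)")
    # -> 14/38 (37%; 95% CI 22%-52%)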
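
For point 13, the Bonferroni correction is the simplest of the strategies mentioned: the familywise alpha is divided by the number of planned comparisons. The panel size below is illustrative.

    # Bonferroni-adjusted critical P value for multiple comparisons.
    alpha = 0.05
    comparisons = 12   # e.g. a 12-analyte biochemistry panel compared between two groups
    critical_p = alpha / comparisons
    print(f"Each of the {comparisons} tests must reach P < {critical_p:.4f} "
          "to be declared significant at a familywise alpha of 0.05")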

Statistical assumptions: Most statistical tests make some assumptions about the data. It is the author's responsibility to verify these assumptions and to provide a statement to the effect that this has been done. For example, the assumptions of parametric tests include an approximately normal distribution of the outcome variable (Y) within each level of each factor (X), equal variance of Y within each factor or explanatory variable (X), and independence of each value of Y from all other values of Y. Violation of any of these assumptions may require rank, logarithmic, square root, arcsine, reciprocal, or other transformation of the data, or the use of statistical techniques that accommodate certain violations, including nonparametric methods and repeated measures analysis. (A brief sketch illustrating such checks follows the suggested reading list below.)

SUGGESTED READING

Both science and art are involved in the development of an optimal study design, data presentation, and statistical analysis. There are many useful publications in these areas. The short list of references below should provide some guidance to novice and experienced investigators alike.

1. Douglas G. Altman. Practical Statistics for Medical Research. Chapman & Hall, New York, NY, 1991.
2. Ian Dohoo, Wayne Martin, and Henrik Stryhn. Veterinary Epidemiologic Research. University of Prince Edward Island, 2003.
3. Thomas A. Lang and Michelle Secic. How to Report Statistics in Medicine: Annotated Guidelines for Authors, Editors, and Reviewers. American College of Physicians, Philadelphia, PA, 1997.
4. Geoffrey R. Norman and David L. Streiner. Biostatistics: The Bare Essentials. 2nd Edition. B.C. Decker Inc., Hamilton, Ontario, 2000.
5. J. M. Campbell. Laboratory Mathematics: Medical and Biological Applications. Mosby, St. Louis, 1990. pp 24-28.
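
As noted under Statistical assumptions above, the following is a minimal sketch of verifying parametric assumptions before a two-group comparison; it assumes SciPy is available, and the data and the particular tests chosen (Shapiro-Wilk for normality, Levene for equal variances, Mann-Whitney U as the rank-based fallback) are illustrative rather than prescribed.

    # Check assumptions, then choose a parametric or nonparametric comparison.
    # Data are hypothetical outcome values for two groups of 10 animals.
    from scipy import stats

    treated = [2.1, 2.8, 3.5, 2.2, 4.9, 2.6, 3.1, 2.4, 5.8, 2.9]
    control = [1.4, 1.9, 1.2, 2.3, 1.6, 1.1, 1.8, 2.9, 1.5, 1.7]

    _, p_norm_t = stats.shapiro(treated)        # approximate normality within each group
    _, p_norm_c = stats.shapiro(control)
    _, p_var = stats.levene(treated, control)   # equal variance between groups

    if min(p_norm_t, p_norm_c) > 0.05:
        # Assumptions look reasonable: t-test (Welch's form if variances differ)
        _, p = stats.ttest_ind(treated, control, equal_var=p_var > 0.05)
    else:
        # Normality doubtful: transform the data (log, square root, rank, ...)
        # or, as here, use a nonparametric test
        _, p = stats.mannwhitneyu(treated, control, alternative="two-sided")

    print(f"P={p:.2g}")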
