The Method Section, Part I: Enhancing the Credibility of Quantitative Research Studies
For many years, I served as the managing editor of an educational research journal. In that role, I read every accepted manuscript before the issue went to print. While the revise and resubmit process corrected most significant problems with the manuscripts, we invariably had queries for authors to address during the final manuscript preparation process. The method section was one part of the manuscript where we frequently raised questions.
While you will sometimes hear that the purpose of the method section is to allow others to replicate the study, replication is not a goal of most social science and educational research. Instead, the method section should give readers enough insight into your methodological choices to judge whether those choices were appropriate for the research problem you sought to address and to assess the value or reasonableness of any findings and conclusions derived from that method. It should be a relatively straightforward description of how you conducted your study. More importantly, that description should help establish the credibility of your research.
In this first of two posts on the method section, I describe six focal points for enhancing the credibility of a quantitative research study. But first, a disclaimer: my focus here is on writing the method section. I assume you have already made the decisions about study design, sampling, and data analysis, among others. As such, it is beyond this post’s scope to delve into the nuances of these different choices.
Research Design
There isn’t typically a subsection headed “research design,” though authors might provide a brief overview of the design in the method section’s opening paragraph(s). If the design is not stated explicitly, readers should be able to infer it easily from your discussion of sampling, data collection, and analysis. You may also want to suggest why this is the most appropriate design choice given the research problem or practical constraints, such as using a self-report survey when direct observation of behaviors is difficult or impossible.
In most educational and social science research, designs will likely be either quasi-experimental or ex post facto. The extent to which the researcher can control for threats to internal validity and demonstrate the generalizability of the results contributes to the strength of the quasi-experimental design and the likelihood of publication. With ex post facto designs, researchers improve the likelihood of publication by moving beyond simple exploratory or descriptive analyses to testing hypotheses and controlling for alternative explanations of the findings. Again, the rationale and appropriateness of the design choices should be clear to readers.
Context
If you have designed a treatment or intervention, the method section should include a description of the intervention. Similarly, you may want to include a description of the site where data collection took place if the setting has some bearing on the findings or their generalizability to other contexts.
Variables
Depending on your research design, you may describe several different types of variables, including independent, dependent, and control variables. In experimental and quasi-experimental designs, independent variables are factors or conditions that the researcher manipulates or varies in some way to understand whether or how they affect other factors or conditions, known as dependent variables, response variables, or outcome variables.
In ex post facto designs, independent variables, sometimes called assigned or attribute variables, cannot be manipulated but might be used to group research subjects. Rather than establishing a cause-and-effect relationship between the independent and dependent variables, an ex post facto design can only suggest whether the independent and dependent variables correlate with one another.
In some cases, researchers may anticipate that attribute variables (e.g., race/ethnicity, socioeconomic background, prior educational experiences) or situational variables will affect the relationship between the independent and dependent variables. To limit the impact of extraneous variables, researchers will either attempt to hold them constant throughout the study or control for them statistically.
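To make the idea of statistical control concrete, here is a minimal sketch in Python using statsmodels. The data file and variable names (outcome, treatment, ses, prior_score) are hypothetical, and entering covariates in a regression model is only one of several ways to control for extraneous variables.

```python
# Minimal sketch of statistical control via regression covariates.
# The data file and variable names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical data file

# Including ses and prior_score as covariates "controls for" them statistically,
# so the treatment coefficient reflects its association with the outcome
# after accounting for those extraneous variables.
model = smf.ols("outcome ~ treatment + ses + prior_score", data=df).fit()
print(model.summary())
```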
In addition to identifying each variable of interest, the method section should explain how those variables were operationally defined and measured. Such operational definitions frequently involve the use of carefully selected or designed instruments.
Instruments or Measurement
When using an instrument to measure a variable, you’ll want to comment on its reliability and validity by reporting your own analysis of the instrument or citing published reports. “Reliability is generally defined as the consistency of the measurement instrument. This implies that the testing instrument would produce the same or a very similar result every time it is used” (Newman et al., 2011, p. 205). If the measure involves observations or assigning scores based on a rubric, you will want to describe procedures for training raters and offer a measure of inter-rater reliability.
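If you report your own reliability analysis, a sketch like the following illustrates two commonly reported statistics: Cronbach’s alpha for internal consistency and Cohen’s kappa for inter-rater agreement. The item scores and rater codes here are hypothetical, and your own analysis should follow the conventions of your field.

```python
# Sketch: two common reliability statistics, computed on hypothetical data.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# One column per survey item, one row per respondent (hypothetical scores)
items = pd.DataFrame({
    "item1": [4, 5, 3, 4, 2],
    "item2": [5, 5, 3, 4, 1],
    "item3": [4, 4, 2, 5, 2],
})

# Cronbach's alpha: internal-consistency reliability of a multi-item scale
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1).sum()
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)

# Cohen's kappa: inter-rater reliability for two raters scoring the same rubric
rater_a = [1, 2, 2, 3, 1, 3]
rater_b = [1, 2, 3, 3, 1, 3]
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"Cronbach's alpha = {alpha:.2f}, Cohen's kappa = {kappa:.2f}")
```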
Validity suggests that the instrument measures what it purports to measure, but it can be more challenging to establish than reliability. You can address one or more aspects of validity when describing measurement choices, including how well a measure aligns with existing theory (i.e., construct validity), covers the content it intends to measure (i.e., content validity), or corresponds with other valid measures of the same concept (i.e., criterion validity).
Sample
The discussion of the sample’s characteristics and how you selected the sample undergirds the validity of your research outcomes and your ability to generalize beyond the specific study. You’ll want to report both intended and actual sample sizes. For survey research, this might be reported as a response rate; in that case, you may wish to comment on how that rate compares to similar data collection efforts. If you used criteria for including or excluding participants from the sample or for assigning them to different treatment or control groups, be sure to describe those procedures.
This section should also address recruitment strategies or compensation offered to participants, as well as compliance with institutional review board policies and relevant ethical standards. Other things to include are relevant demographic characteristics of the sample as a whole and of any treatment or control groups. If you plan to report many characteristics, you may choose to present this information in table form.
Data Analysis
The method section should also provide insight into the strategies you used to analyze the data. In addition to describing the specific statistical analyses you performed, the data analysis subsection should discuss any data diagnostics or manipulation you completed before running statistical tests. This might include the strategies you used to clean or code the data and how you handled missing data.
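As a concrete illustration, here is a minimal pandas sketch of the kind of diagnostic work this subsection might report. The file name, column names, and the choice of listwise deletion are hypothetical; imputation or other missing-data strategies may be more appropriate for a given study.

```python
# Sketch of pre-analysis data diagnostics; file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# Proportion of missing values per variable, to report and to guide handling decisions
missing_rates = df.isna().mean().sort_values(ascending=False)
print(missing_rates)

# One simple strategy (listwise deletion) among several; imputation is another option
analytic_sample = df.dropna(subset=["pretest", "posttest", "group"])
print(f"Retained {len(analytic_sample)} of {len(df)} cases after listwise deletion")
```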
Many of the topics I have reviewed here, from study design and sampling procedures to operational definitions, instrumentation, and data analysis, can be considered “legitimation techniques” (Newman et al., 2011). Remember: the primary focus of the method section is convincing the reader that your design choices are reasonable and that you have made solid efforts to control for threats to internal and external validity. Your attention to these techniques will vary by study, so consider which are most important to establishing the legitimacy of any particular study. If space is limited (and when isn’t it when you are submitting for publication?), give the most in-depth treatment to the most important legitimation techniques.
In the second part of this series, I take up considerations for using your method section as a way to establish the trustworthiness of qualitative research findings. Both posts draw on my online course, Writing About Your Research Methods. The four-module course addresses strategies for writing the method section for quantitative, qualitative, and mixed-methods studies. The accompanying workbook includes templates, writing prompts, and resources to support learning and skill development.
Reference
Newman, I., Newman, D., & Newman, C. (2011). Writing research articles using mixed methods: Methodological considerations to help you get published. In T. S. Rocco, T. Hatcher, & Associates, The handbook of scholarly writing and publishing (pp. 191-208). Jossey-Bass.