A Preliminary Draft for a PDF Document / Starting Point for a Multimedia Web Module
K P Mohanan, IISER-Pune
mohanan@iiserpune.ac.in
i. How faithful to the idea of constructivism are the 10th grade science textbooks used in government schools in India?
ii. How well do secondary school examination question designers and members of school boards understand the implications of constructivism for syllabi, textbooks, and examinations?
iii. Are the practices of 10th grade math teachers in CBSE schools in harmony with constructivist pedagogy?
iv. What features would distinguish a constructivist science textbook of high quality from one of medium quality, and from a non-constructivist science textbook of high quality?
v. How do we distinguish mathematics examination questions of a constructivist curriculum from those of a non-constructivist curriculum for 10th grade?
In general, it is a good idea to restrict ourselves to a single research question: as the number of questions increases, so does the danger of losing coherence and focus. If you have two or more research questions, it is worth trying to formulate them as subsidiaries of an overarching question at a higher level, to avoid the objection that your questions are just a random list of items, not tightly connected or adequately integrated. A shopping list of research questions from which a few can be easily dropped, or to which a few can be added, such that neither the deletion nor the addition substantially affects the quality of the proposal, can be taken as an indication that the proposal is of poor quality.

When we ask a research question, it is important that we have a rough idea of the kinds of answers we are likely to find. For instance, questions (i) and (ii) expect answers like "very well", "moderately well", or "not well". To judge the degree of faithfulness to constructivism, or of understanding of the idea, we need to have thought through the criteria on the basis of which we can make these judgments. What are the strands of constructivism? Which of them do you think should be present in science textbooks? What indicators would legitimize assigning one textbook to the category "very well" and another to below the threshold? Consideration of such issues is an integral component of formulating a good research question.

Question (iii) can have only two answers:

Yes, the practices of 10th grade math teachers in CBSE schools are in harmony with constructivist pedagogy.
No, the practices of 10th grade math teachers in CBSE schools are not in harmony with constructivist pedagogy.

What kind of evidence would support the first answer, and what kind would support the second? To answer these questions, we must obviously have clear answers to other questions: What is constructivist pedagogy? What are the indicators of constructivist pedagogy in mathematics?
Unlike questions (i)-(iii), the grounds for answers to questions (iv) and (v) are not empirical. That is, we do not look for answers to (iv) and (v) by observing, counting, measuring, gathering information, and so on. Rather, we answer them on the basis of conceptual clarification, definitions, axioms, and shared values. It is important in research to distinguish between empirical questions (like (i)-(iii)) and conceptual questions (like (iv)-(v)). To grasp this distinction, compare (iv)-(v) with (vi)-(vii):

vi. What are the differences between what educationists judge as constructivist science textbooks of high quality, and what they judge as constructivist science textbooks of medium quality?
vii. What are the differences between what educationists judge as mathematics exam questions of a constructivist curriculum, and what they judge as those of a non-constructivist curriculum for 10th grade?
In contrast to (iv)-(v), questions (vi)-(vii) are empirical. To look for answers to (vi), for instance, we identify a representative sample of educationists, give them samples of a range of science textbooks, ask them to pick what they judge to be constructivist textbooks, and then ask them to group the textbooks into high quality and medium quality ones. Once these two samples are identified, we proceed to look for differences between them. Similar remarks apply to (vii).

We would not use such evidence-based research for (iv)-(v). For these questions, we clarify the concept of constructivism, define it, and identify its components. For question (iv), we identify which of these components can be embodied in textbooks, as opposed to classroom pedagogy and assessment. And for (v), we identify the manifestation of these components in the design of exam questions. Only after having done this can we answer questions (iv)-(v).

3.2 Literature Review: Placing the research question in the context of existing knowledge

A research question, we said earlier, is an articulation of what we don't know against the backdrop of what we do know. This means that when we pose a research question, we should also specify the knowledge that already exists in the literature. This is the function of what is called a literature survey/review. A literature survey is not a list of "X says this, and Y says that." Nor is it a documentation of research abstracts that can be cut and pasted from a Google search, or copied from the literature surveys of existing dissertations on the topic. And yet, more often than not, what is presented under the title "literature review" is merely a list of everything that the researcher has read on the subject, without clarifying what we know (from the available literature) and what we need to find out. Locating the niche for the proposed research in the existing body of knowledge is crucial in both a research proposal and a research thesis/report.
To see what a good literature survey ought to accomplish, consider the following research question:

viii. What is the impact of inquiry-based pedagogy on student learning?

Given our definition, the above sentence does not constitute a good research question until we specify what we already know about the impact of inquiry-based pedagogy on student learning, on the basis of what the existing literature has established as knowledge, and then articulate what it is that we do not yet know. Once we clearly spell out the we-know-this-but-we-don't-know-that, chances are that the above question will have to be reformulated to focus attention on what we don't know.

Suppose you formulate a research question on which some research already exists, but you don't clearly indicate how your proposed research seeks to go beyond what is already known. Now, if the selection committee is aware of the literature, it is almost guaranteed that your proposal will be rejected. There might be a slim chance that the committee is not aware of the existing literature (after all, no one has read everything relevant on any given topic), but it would be unwise to bet on it (and unethical as well).

3.3 Null Hypotheses and Pilot Studies

Question (iii) (Are the practices of 10th grade math teachers in CBSE schools in harmony with constructivist pedagogy?) is a yes-no question. The expected answers, as stated earlier, are:

The practices of 10th grade math teachers in CBSE schools are in harmony with constructivist pedagogy.
or
The practices of 10th grade math teachers in CBSE schools are not in harmony with constructivist pedagogy.
These are two competing hypotheses: the answer is either yes (true) or no (false). Such questions require us to choose between two logically contradictory hypotheses. This is the basic idea of what is popularly presented in textbooks on research methodology as the hypothesis-testing model of research. To illustrate this point, consider another question:

ix. Is there a correlation between students' height and their ability to solve mathematical problems?

The answer to this question involves a choice between (a) and (b):

a. There is no correlation between students' height and their ability to solve mathematical problems.
b. There is a correlation between students' height and their ability to solve mathematical problems.

Now, suppose we had no evidence to support (a), and no evidence to support (b) either. In such a situation, do we suspend our judgment, accepting neither of them? Or do we take one of them as true? If we choose one, which one do we choose? A brief reflection tells us that as long as there is no evidence against it, we accept (a). This is called a null hypothesis. A null hypothesis is the conclusion that we adopt in the absence of evidence either way.

The idea of the null hypothesis is central to experimental science. The null hypothesis in science is the counterpart of the principle of innocent-until-proven-guilty in criminal law. If there is no evidence to show that the accused is guilty, our legal system demands that we judge the accused to be innocent.

The counterpart of innocent-until-proven-guilty does not exist in mathematics. In mathematical inquiry, given a conjecture, if we can neither prove that it is true nor prove that it is false, we suspend judgment. Thus, Goldbach's conjecture (which says that every even integer greater than two can be expressed as the sum of two prime numbers) has not been proved to be true.
Its negation (Not every even integer greater than two can be expressed as the sum of two prime numbers) has not been proved either. In mathematics, there is no null hypothesis, so neither the conjecture nor its negation can be judged to be true. We need to suspend judgment.

In science, when we pursue a research question, it is crucial that we think carefully about the possible answers, identify the null hypothesis, and ask ourselves whether our research is likely to merely confirm the null hypothesis. If all that the evidence we have gathered points to is that the null hypothesis is correct, our work is of very little value. How do we avoid this danger? What if we spend a whole year or even more gathering relevant data, only to discover that it provides evidence only for the null hypothesis?

This is where it is crucial to conduct a pilot study before submitting a research proposal, so that we have reasonable confidence that large-scale data would not end up merely supporting the null hypothesis. A pilot study is a quick, informal study to test the feasibility of the research, and to figure out the kinds of answers it is likely to lead to. Pilot studies are valuable in any research that requires gathering large amounts of data. If the actual study requires a sample size of 10,000 students, for instance, a pilot study with, say, 30 students would be useful. In many forms of educational research, a research proposal would be much stronger if it is submitted on the basis of a reasonably successful pilot study. For some research questions, a pilot study may even be obligatory. [For a discussion of pilot studies in experimental research, see the Wikipedia entry at http://en.wikipedia.org/wiki/Pilot_experiment ]
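To make question (ix) and the null hypothesis concrete: a pilot study might yield height and test-score measurements for a handful of students. The sketch below (in Python, with invented illustrative numbers, not real data) computes the sample Pearson correlation coefficient for such a pilot sample:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical pilot data: heights in cm, math test scores out of 100.
heights = [150, 160, 170, 180]
scores = [70, 72, 68, 74]

print(round(pearson_r(heights, scores), 2))  # → 0.4
```

Whether such a coefficient is statistically distinguishable from zero depends on the sample size; with only four students, a value like 0.4 gives no ground for rejecting the null hypothesis (a). Discovering this early, before committing to a 10,000-student study, is precisely the kind of warning a pilot study provides.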
III. Feasibility: The proposed methodology (or cluster of methodologies) must be feasible to implement within the context in which the research will be conducted.

A methodology that is appropriate for investigating a research question on the causes of cancer needs to be based on data; but data-based methodology is not appropriate for proving a mathematical conjecture. The grounds for mathematical proofs are previously proved results and/or axioms and definitions. Likewise, investigating the question "What is high quality education?" calls for the methodology of conceptual inquiry, while investigating the question "Is the education offered by the government schools in Kerala of high quality?" calls for the methodology of scientific inquiry. Similarly, in order to probe students' conception of excellence in teaching, one may conduct a survey on the traits that students associate with excellence in teaching. But for the research question "What is excellence in teaching?" the survey method is entirely inappropriate. Investigating a correlational hypothesis may not need experimental strategies, but investigating a causal hypothesis typically calls for experimentation, resorting to statistical techniques such as regression only where experiments are not feasible. (For a discussion of the appropriateness of methodology for a research question, see http://wiki.nus.edu.sg/display/aki/2.3.+Examples+of+Methodology )

Giving a detailed account of ALL the methodological considerations relevant for the methodological component of educational research is beyond the scope of a short document such as this one. It might be useful to go to the library and/or the internet to gain a reasonable understanding of the following concepts:

Quantitative vs. qualitative research; empirical, interpretive, and conceptual/philosophical research;
Statistical research: sampling, biased sample, representativeness of the sample; variability, variable, confounding variable; operationalization, reliability, validity; statistical tests, etc.;
Experimental research: correlation vs. cause, independent and dependent variables, controlling for confounding variables, etc.;
Data elicitation from humans: surveys, interviews, focus groups, etc.;
Action research as experimentation in natural as opposed to laboratory settings (as in field experiments and case studies involving researcher intervention);
Fieldwork/ethnography: non-invasive strategies of data elicitation from humans (and the difficulties of generalizing on the basis of fieldwork and action research; using meta-analysis as a solution);
Triangulation as convergence of evidence from independent sources.

Rigour of methodology is a matter of anticipating potential flaws and weaknesses, and doing the best we can to minimize them.
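One of the concepts listed above, the distinction between correlation and cause, lends itself to a small illustration. In the sketch below (Python, with invented numbers), a hypothetical confounding variable, age, drives both height and vocabulary size; the two then correlate perfectly even though neither causes the other. This is why causal hypotheses call for experiments, or at least for controlling for confounding variables:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical confounder: age (in years) drives both variables.
ages = [6, 7, 8, 9, 10]
heights = [110 + 6 * a for a in ages]    # height grows with age
vocab = [500 + 300 * a for a in ages]    # vocabulary also grows with age

# Height and vocabulary correlate perfectly, yet neither causes the other:
print(round(pearson_r(heights, vocab), 2))  # → 1.0
```

A survey that measured only height and vocabulary would report a strong correlation; only a design that controls for age (by experiment or by statistical adjustment) could reveal that the causal link runs through the confounder.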
No matter how hard we try, there will always be some flaws and weaknesses; what is important is thinking through the problems and convincing the selection committee that we have done the best we can. An oft-repeated question from selection committee members is: Is it feasible to execute the proposed research within the timeframe indicated? Candidates' answers often reveal that they have not given adequate thought to this issue, or consulted experienced researchers on it. Committee members also often express concern about the feasibility of implementing the research: gathering sufficient data, or data that would tell us anything about the research question. To address such issues, the best solution is a pilot study, to find out for yourself whether you can actually do what you propose to do. You will then be in a much better position to respond to questions and objections from the committee.

A good research proposal is not merely an expression of the intention to do research on a topic: there are several steps from the desire to do research to submitting a research proposal.
Desire to do research → Research Topic(s) → Research Question(s) → RESEARCH PROPOSAL → RESEARCH → DISSERTATION/REPORT/ARTICLE
As the above diagram suggests, the function of the pilot study in research is: to make sure that the research question(s) we have come up with can be investigated within our constraints of time and resources; to come up with preliminary results; to ensure that these results are not trivial; and to fine-tune and further clarify the research question(s) if necessary.

6. Modes of Justifying Potential Conclusions

Unlike the situation in a census, research is not a matter of gathering and documenting a body of information (= data points). Research may of course be based on information, but it has to begin with a research question and lead to a conclusion. Thus, if we do a survey on how many Indians can read and understand newspaper articles, which language they read in, what they read, when they read, and how much time they spend on reading, it does not become research unless such data collection is aligned to an overarching research question, and becomes potential evidence for a conclusion that can be defended.

The process of defending a claim or a conclusion is called justification in the academic world. Examples of justification include: a mathematical proof of a conjecture; a prosecutor's proof in the criminal court to show that the accused is guilty; experimental evidence to show that a proposed hypothesis is true; and reasons to show that it is necessary to buy the expensive equipment that a unit in an organization has requested.

Justification has the following structure: it has a claim/conclusion to be defended before a skeptical jury of peers; the grounds put forward to support the claim/conclusion, to be scrutinized and evaluated by the jury; the background assumptions shared by the jury and hence not explicitly stated; and the reasoning from the grounds (and background assumptions) to the claim/conclusion.
(For details, see http://wiki.nus.edu.sg/pages/viewpage.action?pageId=31227935)

For a skeptical jury to accept the justification as legitimate, and hence accept the proposal, the jury must: accept as correct the grounds put forward for the claim; share the background assumptions; and accept as legitimate the mode of reasoning used, as well as the steps of the reasoning.

A committee that evaluates a research proposal (as opposed to one that evaluates a research thesis) does not expect the proposal to either present a conclusion or justify it. But it does expect the proposal to provide some indication of careful reflection on the potential conclusions, and on the kinds of considerations in the grounds offered that would support one conclusion and refute another. In particular, it is crucial that we think carefully about the kinds of conclusions that are likely to arise, and the kinds of objections that might be raised. This means that we must be able to respond to questions like these: Given conclusion C, what kinds of counterevidence/counterexamples would refute it? What kinds of alternatives would challenge the conclusions we are proposing? Given such challenges, do we respond to them, or do we modify our conclusions?
Indications of having thought through such issues transform a research proposal of medium quality into one of high quality.
8. Funding Issues
(I am not competent to discuss funding issues.)
--------------------------------------------------------------------------------
You can find a wide array of suggestions on writing research proposals at the following sites; the first one is my favourite:
1. How to Write a Research Proposal (at http://ocw.mit.edu/courses/biology/7-16-experimental-molecular-biology-biotechnology-ii-spring-2005/scientific-comm/lec03_resch_prop.pdf)
2. How to Write a Research Proposal (at http://www.daaddelhi.org/imperia/md/content/newdelhi/guidelineresearchproposal.pdf)
3. Apparently, I Have to Write a Research Proposal. What Do I Need to Do? (at https://www.uq.edu.au/student-services/phdwriting/phfaq01.html)
4. Writing a Research Proposal (at http://www.library.illinois.edu/learn/research/proposal.html)
5. How to Write a Research Proposal (at http://www.tu-ilmenau.de/fileadmin/media/vsbs/Research_Skills/Folien/howToWriteAResearchProposalDAAD.pdf)
6. Guidelines on Writing a Research Proposal (at http://www2.hawaii.edu/~matt/proposal.html)
7. How to Write a Research Proposal (at http://www.studygs.net/proposal.htm)