stays on the right path during the study. Finally, by defining the population, the researcher
identifies the group that the results will apply to at the conclusion of the study. The researcher
has identified the population of the study as children ages 10 to 12 years. This narrower population
makes the study more manageable in terms of time and resources.
Step 6: Develop the Instrumentation Plan
The plan for the study is referred to as the instrumentation plan. The instrumentation plan serves
as the road map for the entire study, specifying who will participate in the study; how, when, and
where data will be collected; and the content of the program. In the obesity study, the researcher
has decided to have the children participate in a walking program for six months. The group of
participants is called the sample, which is a smaller group selected from the population specified
for the study. The study cannot possibly include every 10 to 12-year-old child in the community,
so a smaller group is used to represent the population. The researcher develops the plan for the
walking program, indicating what data will be collected, when and how the data will be
collected, who will collect the data, and how the data will be analyzed. The instrumentation plan
specifies all the steps that must be completed for the study. This ensures that the researcher has
carefully thought through all these decisions and that she provides a step-by-step plan to be
followed in the study.
Step 7: Collect Data
Once the instrumentation plan is completed, the actual study begins with the collection of data.
The collection of data is a critical step in providing the information needed to answer the
research question. Every study includes the collection of some type of data, whether it is from
the literature or from subjects, to answer the research question. Data can be collected in the
form of words on a survey, with a questionnaire, through observations, or from the literature. In
the obesity study, the researcher will be collecting data on the defined variables: weight,
percentage of body fat, cholesterol levels, and the number of days the person walked a total of
10,000 steps during the class. The researcher collects these data at the first session and at the last
session of the program. These two sets of data are necessary to determine the effect of the
walking program on weight, body fat, and cholesterol level. Once the data are collected on the
variables, the researcher is ready to move to the final step of the process, which is the data
analysis.
Step 8: Analyze the Data
All the time, effort, and resources dedicated to steps 1 through 7 of the research process
culminate in this final step. The researcher finally has data to analyze so that the research
question can be answered. In the instrumentation plan, the researcher specified how the data will
be analyzed. The researcher now analyzes the data according to the plan. The results of this
analysis are then reviewed and summarized in a manner directly related to the research
questions. In the obesity study, the researcher compares the measurements of weight, percentage
of body fat, and cholesterol that were taken at the first meeting of the subjects to the
measurements of the same variables at the final program session. These two sets of data will be
analyzed to determine if there was a difference between the first measurement and the second
measurement for each individual in the program. Then, the data will be analyzed to determine if
the differences are statistically significant. If the differences are statistically significant, the study
validates the theory that was the focus of the study. The results of the study also provide valuable
information about one strategy to combat childhood obesity in the community.
As you have probably concluded, conducting studies using the eight steps of the scientific
research process requires you to dedicate time and effort to the planning process. A researcher
cannot conduct a study using the scientific research process when time is limited or the study is
done at the last minute. Researchers who do this conduct studies that result in either false
conclusions or conclusions that are not of any value to the organization.
Q2a) Questionnaire: A questionnaire is a research instrument consisting of a series
of questions and other prompts for the purpose of gathering information from respondents.
Although they are often designed for statistical analysis of the responses, this is not always the
case. Questionnaires have advantages over some other types of surveys in that they are cheap, do
not require as much effort from the questioner as verbal or telephone surveys, and often have
standardized answers that make it simple to compile data. A distinction can be made between
questionnaires with questions that measure separate variables, and questionnaires with questions
that are aggregated into either a scale or index. Questionnaires within the former category are
commonly part of surveys, whereas questionnaires in the latter category are commonly part of
tests.
Questionnaires with questions that measure separate variables could for instance include
questions on:
Questionnaires with questions that are aggregated into either a scale or index, include for
instance questions that measure:
Q2b) Observation: Observation is a way of gathering data by watching behavior, events, or noting
physical characteristics in their natural setting. Observations can be overt (everyone knows they
are being observed) or covert (no one knows they are being observed and the observer is
concealed). The benefit of covert observation is that people are more likely to behave naturally if
they do not know they are being observed. However, you will typically need to conduct overt
observations because of ethical problems related to concealing your observation. Observations
can also be either direct or indirect. Direct observation is when you watch interactions,
processes, or behaviors as they occur; for example, observing a teacher teaching a lesson from a
written curriculum to determine whether they are delivering it with fidelity. Indirect observations
are when you watch the results of interactions, processes, or behaviors; for example, measuring
the amount of plate waste left by students in a school cafeteria to determine whether a new food
is acceptable to them.
The observation method is used in the following conditions:
Trying to understand ongoing process or situation: Through observation a process can be
monitored or a situation can be evaluated as it occurs.
Gathering data on individual behaviors or interactions between people: Observation allows
researchers to watch people's behaviors and interactions directly, or to watch for the results of behaviors or
interactions.
Physical setting: Seeing the place or environment where something takes place can help increase
understanding of the event, activity, or situation.
Evaluating: For example, it can be used to observe whether a classroom or training facility is
conducive to learning.
When data collection from individuals is not a realistic option: If respondents are unwilling or
unable to provide data through questionnaires or interviews, observation is a method that
requires little from the individuals from whom data need to be collected.
Q3 TYPES OF SAMPLING DESIGN
The sampling design can be broadly grouped on two bases, viz., representation and element
selection. Representation refers to the selection of members on a probability basis or by other means.
Element selection refers to the manner in which the elements are selected individually and
directly from the population. If each element is drawn individually from the population at large,
it is an unrestricted sample. Restricted sampling is where additional controls are imposed, in
other words it covers all other forms of sampling. The classification of sampling design on the
basis of representation and element selection is shown below:
Element          Representation Basis
Selection        Probability             Non-probability
--------------   ---------------------   ----------------
Unrestricted     Simple random           Convenience
Restricted       Complex random:         Purposive:
                   Systematic              Judgment
                   Stratified              Quota
                   Cluster               Snowball
                   Double
PROBABILITY SAMPLING
Probability sampling is where each sampling unit in the defined target population has a known
nonzero probability of being selected in the sample. The actual probability of selection for each
sampling unit may or may not be equal depending on the type of probability sampling design
used. Specific rules for selecting members from the operational population are made to ensure
unbiased selection of the sampling units and proper sample representation of the defined target
population. The results obtained by using probability-sampling designs can be generalized to the
target population within a specified margin of error. The different types of probability sampling
designs are discussed below:
Unrestricted or Simple Random Sampling
In the unrestricted probability sampling design every element in the population has a known,
equal nonzero chance of being selected as a subject. For example, if 10 employees (n = 10) are to
be selected from 30 employees (N = 30), the researcher can write the name of each employee on
a piece of paper and select them on a random basis. Each employee will have an equal known
probability of selection for the sample. The same is expressed in terms of the following formula:

Probability of selection = Size of sample / Size of population
Each employee would have a 10/30 or .333 chance of being randomly selected in a drawn
sample. When the defined target population consists of a larger number of sampling units, a more
sophisticated method can be used to randomly draw the necessary sample. A table of random
numbers can be used for this purpose. The table of random numbers contains a list of randomly
generated numbers. The numbers can be randomly generated through the computer programs
also. Using the random numbers the sample can be selected.
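The draw described above can be sketched in a few lines of Python; the employee roster and random seed here are purely illustrative:

```python
import random

# Hypothetical roster of 30 employees (names are illustrative).
population = [f"Employee-{i:02d}" for i in range(1, 31)]  # N = 30
n = 10                                                    # desired sample size

random.seed(42)                        # fixed seed so the draw is reproducible
sample = random.sample(population, n)  # each element has an equal chance

# Probability of selection = size of sample / size of population
prob = n / len(population)
print(f"Selected {len(sample)} of {len(population)}; "
      f"selection probability = {prob:.3f}")
```

`random.sample` draws without replacement, so every employee has the same 10/30 chance of appearing in the sample.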
Restricted or Complex Probability Sampling
As an alternative to the simple random sampling design, several complex probability sampling
designs can be used which are more viable and effective. Efficiency is improved because more
information can be obtained for a given sample size using some of the complex probability
sampling procedures than the simple random sampling design. The five most common complex
probability sampling designs viz., systematic sampling, stratified random sampling, cluster
sampling, area sampling and double sampling are discussed below:
Systematic random sampling
The systematic random sampling design is similar to simple random sampling but requires that
the defined target population should be ordered in some way. It involves drawing every kth
element in the population, starting with a randomly chosen element between 1 and k. In other
words, individual sampling units are selected according to their position using a skip interval. The
skip interval is determined by dividing the population size by the sample size. For example, if the
researcher wants a sample of 100 to be drawn from a defined target population of 1000, the skip
interval would be 10 (1000/100). Once the skip interval is calculated, the researcher would
randomly select a starting point and take every 10th element until the entire target population has
been proceeded through. The steps to be followed in a systematic sampling method are enumerated below:
Total number of elements in the population should be identified
The skip interval is to be calculated (k = total population size divided by size of the
desired sample)
The random start should be identified
A sample can be drawn by choosing every kth entry
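The steps above can be sketched as follows; the population of 1000 units and the seed are illustrative:

```python
import random

def systematic_sample(population, n):
    """Draw a systematic sample of size n using a skip interval k = N // n."""
    k = len(population) // n       # skip interval, e.g. 1000 / 100 = 10
    start = random.randrange(k)    # random start within the first interval
    return [population[i] for i in range(start, len(population), k)][:n]

random.seed(7)
population = list(range(1, 1001))  # defined target population of 1000 units
sample = systematic_sample(population, 100)
print(len(sample), sample[:3])
```

Note that this assumes the population list is already ordered in some meaningful way, as the method requires.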
Stratified random sampling
Stratification leads to segmenting the population into smaller, more homogeneous sets of
elements. In order to ensure that the sample maintains the required precision in terms of
representing the total population, representative samples must be drawn from each of the smaller
population groups.
There are three reasons why a researcher chooses a stratified random sample:
To increase the sample's statistical efficiency
To provide adequate data for analyzing various subpopulations
To enable different research methods and procedures to be used in different strata.
Drawing a stratified random sample involves the following steps:
Divide the population into homogeneous subgroups (strata)
Draw a random sample from each stratum
Combine the samples from each stratum into a single sample of the target population.
There are two common methods for deriving samples from the strata viz., proportionate and
disproportionate. In proportionate stratified sampling, each stratum is properly represented so
the sample drawn from it is proportionate to the stratum's share of the total population. The
larger strata are sampled more because they make up a larger percentage of the target population.
This approach is more popular than other stratified sampling procedures due to the following
reasons:
It has higher statistical efficiency than the simple random sample
It is much easier to carry out than other stratifying methods
It provides a self-weighting sample, i.e., the population mean or proportion can be
estimated simply by calculating the mean or proportion of all sample cases.
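A minimal sketch of proportionate stratified sampling, assuming three hypothetical strata whose shares of the sample happen to divide evenly (in general the rounded stratum allocations may need adjusting to hit the overall sample size):

```python
import random

# Hypothetical strata: department -> list of employee ids (sizes 50, 30, 20).
strata = {
    "production": list(range(50)),
    "sales":      list(range(50, 80)),
    "admin":      list(range(80, 100)),
}
total_n = 10                          # overall sample size
N = sum(len(v) for v in strata.values())

random.seed(1)
sample = []
for name, members in strata.items():
    n_h = round(total_n * len(members) / N)   # stratum's share of the sample
    sample.extend(random.sample(members, n_h))

print(len(sample))   # 5 + 3 + 2 = 10
```

Each stratum contributes in proportion to its share of the population, which is what makes the sample self-weighting.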
In disproportionate stratified sampling, the sample size selected from each stratum is
independent of that stratum's proportion of the total defined target population. This approach is
used when stratification of the target population produces sample sizes that contradict their
relative importance to the study.
An alternative to the disproportionate stratified method is optimal allocation. In this method,
consideration is given to the relative size of the stratum as well as the variability within the
stratum to determine the necessary sample size of each stratum. The logic underlying the optimal
allocation is that the greater the homogeneity of the prospective sampling units within a
particular stratum, the fewer the units that would have to be selected to estimate the true
population parameter accurately for that subgroup. This method is also opted for in situations
where it is easier, simpler and less expensive to collect data from one or more strata than from
others.
Cluster Sampling
Cluster sampling is a probability sampling method in which the sampling units are divided into
mutually exclusive and collectively exhaustive subpopulations called clusters. Each cluster is
assumed to be representative of the heterogeneity of the target population. Groups of
elements that have heterogeneity among the members within each group are chosen for
study in cluster sampling. Several groups with intragroup heterogeneity and intergroup
homogeneity are found. A random sampling of the clusters or groups is done and information is
gathered from each of the members in the randomly chosen clusters. Cluster sampling thus offers
more heterogeneity within groups and more homogeneity among the groups.
Single Stage and Multistage Cluster Sampling
In single-stage cluster sampling, the population is divided into convenient clusters and the
required number of clusters is randomly chosen as sample subjects. Each element in each of the
randomly chosen clusters is investigated in the study. Cluster sampling can also be done in several
stages, which is known as multistage cluster sampling. For example, to study the banking
behaviour of customers in a national survey, cluster sampling can be used to select the urban,
semi-urban and rural geographical locations of the study. At the next stage, particular areas in
each of the locations would be chosen. At the third stage, the banks within each area would be
chosen. Thus multistage sampling involves a probability sampling of the primary sampling
units; from each of the primary units, a probability sampling of the secondary sampling units is
drawn; a third level of probability sampling is done from each of these secondary units, and so
on until the final stage of breakdown for the sample units is arrived at, where every member of
the unit will be a sample.
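Single-stage cluster sampling can be sketched as follows; the city blocks and households are invented for illustration:

```python
import random

# Hypothetical clusters: city blocks, each listing its resident households.
clusters = {f"block-{b}": [f"block-{b}/house-{h}" for h in range(1, 6)]
            for b in range(1, 11)}    # 10 clusters of 5 households each

random.seed(3)
chosen = random.sample(list(clusters), 3)   # randomly pick 3 whole clusters
# Every element in each chosen cluster is included in the study.
sample = [house for block in chosen for house in clusters[block]]
print(len(sample))   # 3 clusters x 5 households = 15
```

Note the contrast with stratified sampling: here whole subgroups are drawn at random and then censused, rather than drawing elements at random from every subgroup.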
Area Sampling
Area sampling is a form of cluster sampling in which the clusters are formed by geographic
designations: for example, state, district, city, or town. Any geographic unit with identifiable
boundaries can be used. Area sampling is less expensive than most other probability designs
and is not dependent on a population frame.
A city map showing blocks of the city would be adequate information to allow a researcher to
take a sample of the blocks and obtain data from the residents therein.
Stratified random sampling vs. cluster sampling
Cluster sampling differs from stratified sampling in the following ways:
In stratified sampling the population is divided into a few subgroups, each with many
elements in it and the subgroups are selected according to some criterion that is related to
the variables under the study. In cluster sampling the population is divided into many
subgroups each with a few elements in it. The subgroups are selected according to some
criterion of ease or availability in data collection.
Stratified sampling should secure homogeneity within the subgroups and heterogeneity
between subgroups. Cluster sampling tries to secure heterogeneity within subgroups and
homogeneity between subgroups.
The elements are chosen randomly within each subgroup in stratified sampling. In cluster
sampling the subgroups are randomly chosen and each and every element of the subgroup
is studied in-depth.
Double sampling
This is also called sequential or multiphase sampling. Double sampling is opted for when further
information is needed from a subset of the group from which some information has already been
collected for the same study. It is called double sampling because initially a sample is used in
the study to collect some preliminary information of interest, and later a subsample of this
primary sample is used to examine the matter in more detail. The process includes collecting data
from a sample using a previously defined technique. Based on this information, a sub sample is
selected for further study. It is more convenient and economical to collect some information by
sampling and then use this information as the basis for selecting a sub sample for further study.
NON-PROBABILITY SAMPLING
In non-probability sampling method, the elements in the population do not have any probabilities
attached to being chosen as sample subjects. This means that the findings of the study cannot be
generalized to the population. However at times the researcher may be less concerned about
generalizability and the purpose may be just to obtain some preliminary information in a quick
and inexpensive way. Sometimes, when the population size is unknown, non-probability
sampling would be the only way to obtain data. Some non-probability sampling techniques may
be more dependable than others and could often lead to important information with regard to the
population. The non-probability sampling designs are discussed below:
Convenience sampling
Non-probability samples that are unrestricted are called convenience samples. Convenience
sampling refers to the collection of information from members of the population who are
conveniently available to provide it. Researchers or field workers have the freedom to choose
whomever they find as samples; hence the name convenience. It is mostly used during the
exploratory phase of a research project and is the best way of getting some basic information
quickly and efficiently. The assumption is that the target population is homogeneous and the
individuals selected as samples are similar to the overall defined target population with regard to
the characteristics being studied. However in reality there is no way to accurately assess the
representativeness of the sample. Due to the self-selection and voluntary nature of participation
in the data collection process, the researcher should give due consideration to non-response error.
Advantages and disadvantages
Convenience sampling allows a large number of respondents to be interviewed in a relatively short
time. This is one of the main reasons for using convenience sampling in the early stages of
research. However, the major drawback is that the use of convenience samples in the
development phases of constructs and scale measurements can have a serious negative impact on
the overall reliability and validity of those measures and instruments used to collect raw data.
Another major drawback is that the raw data and results are not generalizable to the defined
target population with any measure of precision. It is not possible to measure the
representativeness of the sample, because sampling error estimates cannot be accurately
determined.
Purposive Sampling
A non-probability sample that conforms to certain criteria is called a purposive sample. There
are two major types of purposive sampling, viz., judgment sampling and quota sampling.
Judgment sampling
Judgment sampling is a non-probability sampling method in which participants are selected
according to an experienced individual's belief that they will meet the requirements of the study.
The researcher selects sample members who conform to some criterion. It is appropriate in the
early stages of an exploratory study and involves the choice of subjects who are most
advantageously placed or in the best position to provide the information required. This is used
when a limited number or category of people have the information that is being sought. The
underlying assumption is that the opinions of a group of perceived experts on the topic of
interest are representative of the entire target population.
Quota sampling
The quota sampling method involves the selection of prospective participants according to
prespecified quotas regarding demographic characteristics (gender, age, education,
income, occupation, etc.), specific attitudes (satisfied, neutral, dissatisfied), or specific behaviours
(regular, occasional, or rare user of a product). The purpose of quota sampling is to provide an
assurance that prespecified subgroups of the defined target population are represented on
pertinent sampling factors that are determined by the researcher. It ensures that certain groups are
adequately represented in the study through the assignment of quotas.
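A minimal sketch of quota filling, assuming a hypothetical respondent stream and quotas on a single demographic factor; note that within each quota cell the selection is not random, only convenient:

```python
# Hypothetical stream of respondents with a gender attribute.
respondents = [{"id": i, "gender": "F" if i % 3 else "M"} for i in range(1, 31)]

quotas = {"M": 4, "F": 6}             # prespecified quota per subgroup
filled = {"M": [], "F": []}

# Take respondents in the (non-random) order they become available,
# stopping for a subgroup once its quota is met.
for r in respondents:
    g = r["gender"]
    if len(filled[g]) < quotas[g]:
        filled[g].append(r)

sample = filled["M"] + filled["F"]
print(len(sample))   # 4 + 6 = 10
```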
Snowball Sampling
Snowball sampling is a non-probability sampling method in which an initial set of respondents is
chosen who help the researcher identify additional respondents to be included in the study.
This method of sampling is also called referral sampling because one respondent refers other
potential respondents. Snowball sampling is typically used in research situations where the
defined target population is very small and unique and compiling a complete list of sampling
units is a nearly impossible task. While the traditional probability and other non-probability
sampling methods would normally require an extreme search effort to qualify a sufficient
number of prospective respondents, the snowball method yields better results at a much
lower cost. The researcher has to identify and interview one qualified respondent and then solicit
that respondent's help in identifying others with similar characteristics.
Q4a) Survey Errors: Errors may occur at any stage during the collection and processing of
survey data, whether it is a census or a sample survey. There are two main sources of survey
error: Sampling error (errors associated directly with the sample design and estimation methods
used) and non-sampling error (a blanket term used to cover all other errors). Non-sampling errors
are usually sub-divided as follows:
Coverage errors, which are mainly associated with the sampling frame, such as missing
units, inclusion of units not in the population of interest, and duplication.
Response errors, which are caused by problems related to the way questions were
phrased, the order in which the questions were asked, or respondents' reporting errors
(also referred to as measurement error if possible errors made by the interviewer are
included in this category).
Non-response errors, which are due to respondents either not providing information or
providing incorrect information. Non-response increases the likelihood of bias in the
survey estimates. It also reduces the effective sample size, thereby increasing the
observed sampling error. However, the risk of bias when non-response rates are high is
generally more dangerous than the reduction in sample size per se.
Data capture errors, which are due to coding or data entry problems.
Edit and imputation ("E&I") errors, which can be introduced during attempts to find and
correct all the other non-sampling errors.
All of these sources may contribute to either, or both, of the two types of survey error. These are
bias, or systematic error, and variance, or random error.
Sampling error is not an error in the sense of a mistake having been made in conducting the
survey. Rather it indicates the degree of uncertainty about the 'true' value based on information
obtained from the number of people that were surveyed.
It is reasonably straightforward for knowledgeable, experienced survey-taking organizations to
control sampling error through the use of suitable sampling methods and to estimate its impact
using information from the sample design and the achieved sample. Any statement about
sampling errors, namely variance, standard error, margin of sampling error or coefficient of
variation, can only be made if the survey data come from a probability sample.
The non-sampling errors, especially potential biases, are the most difficult to detect, to control
and to measure, and require careful planning, training and testing.
Q4b) Reliability: Internal consistency reliability is used to assess the reliability of a summated
scale where several items are summed to form a total score. In internal consistency reliability
estimation a single measurement instrument is administered to a group of people on one occasion
to estimate reliability. In effect the reliability of the instrument is judged by estimating how well
the items that reflect the same construct yield similar results. We are looking at how consistent
the results are for different items for the same construct within the measure.
There are a wide variety of internal consistency measures that can be used:
i. Average Inter-Item Correlation
The average inter-item correlation uses all of the items on the instrument that are designed to
measure the same construct. The correlation between each pair of items is computed first. For
example, if we have six items we will have 15 different item pairings (i.e., 15 correlations). The
average inter item correlation is simply the average or mean of all these correlations.
ii. Average Item-Total Correlation
This approach also uses the inter-item correlations. In addition, a total score for the six items is
computed and used as a seventh variable in the analysis. The figure shows the six item-to-total
correlations at the bottom of the correlation matrix.
iii. Split-Half Reliability
In split-half reliability, all items that purport to measure the same construct are randomly
divided into two sets. The entire instrument is administered to a sample of people, the total score
for each randomly divided half is calculated, and the two half scores are correlated.
iv. Cronbach's Alpha (α)
Cronbach's alpha is mathematically equivalent to the average of all possible split-half
estimates, and it increases with the number of items and the average inter-item correlation.
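Measures (i) and (iv) above can be computed with the standard library alone; the four items and five respondents here are illustrative data, and population variances are used throughout:

```python
from itertools import combinations
from statistics import pvariance

# Hypothetical scores: 4 items answered by 5 respondents on a 1-5 scale.
items = [
    [4, 5, 3, 4, 5],   # item 1, one score per respondent
    [3, 4, 3, 4, 4],   # item 2
    [4, 4, 2, 5, 5],   # item 3
    [5, 5, 3, 4, 4],   # item 4
]

def corr(x, y):
    """Pearson correlation between two score lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# (i) Average inter-item correlation: mean over all item pairs (4 items -> 6 pairs).
pairs = list(combinations(items, 2))
avg_r = sum(corr(x, y) for x, y in pairs) / len(pairs)

# (iv) Cronbach's alpha from item variances and total-score variance.
k = len(items)
totals = [sum(scores) for scores in zip(*items)]   # each respondent's summated score
alpha = (k / (k - 1)) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))
print(round(avg_r, 3), round(alpha, 3))   # 0.598 0.831
```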
Content validity is a subjective but systematic evaluation of how well the content of a
scale represents the measurement task at hand.
Construct validity addresses the question of what construct or characteristic the scale is,
in fact, measuring. Construct validity includes convergent, discriminant, and
nomological validity.
Convergent validity is the extent to which the scale correlates positively with other
measures of the same construct.
Discriminant validity is the extent to which a measure does not correlate with other
constructs from which it is supposed to differ.
Nomological validity is the extent to which the scale correlates in theoretically predicted
ways with measures of different but related constructs.
Q5 The amount of data that can be collected and assembled in a business research study can be
astronomical. Data organization and reduction are two very important aspects of data analysis
that are seldom highlighted. Yet, these steps are crucial to the ability to make sense out of data
and to make cogent and insightful data interpretations. An impressive array of methods
for data organization and data reduction is available.
A business researcher may tabulate data or compile frequency distributions. Means or averages
and other measures of dispersion are common ways of analyzing data for which frequency
distributions are available. Very often, advanced statistics and decision models are used to
maximize the information that can be extracted from research data. The following section
provides a brief description of several commonly used statistical tools, decision support models,
and optimization routines.
Quantitative Market Research Decision Support Tools
Statistical Methods
Multiple Regression - This statistical procedure is used to estimate the equation with the
best fit for explaining how the value of a dependent variable changes as the values of a
number of independent variables shift. A simple market research example is the estimation of
the best fit for advertising by looking at how sales revenue (the dependent variable) changes
in relation to expenditures on advertising, placement of ads, and timing of ads.
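The procedure can be sketched by fitting the normal equations by hand; the data below are invented so that the true coefficients (intercept 2, ad spend 3, placements 0.5) are known, and a real study would use a statistics package rather than this arithmetic:

```python
# Hypothetical data: sales revenue explained by ad spend and number of placements.
# y = b0 + b1*x1 + b2*x2 fitted by ordinary least squares (normal equations).

def solve(A, b):
    """Gauss-Jordan elimination for a small linear system Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Illustrative observations (intercept term, ad spend, placements) -> revenue,
# generated so that revenue = 2 + 3*spend + 0.5*placements exactly.
X = [(1, 1.0, 2.0), (1, 2.0, 1.0), (1, 3.0, 4.0), (1, 4.0, 3.0), (1, 5.0, 5.0)]
y = [2 + 3 * x1 + 0.5 * x2 for (_, x1, x2) in X]

# Normal equations: (X'X) beta = X'y
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
beta = solve(XtX, Xty)
print([round(b, 3) for b in beta])   # recovers [2.0, 3.0, 0.5]
```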
Factor Analysis - This statistical method is used to determine which are the strongest
underlying dimensions of a larger set of variables that are inter-correlated. Where many
variables are correlated, factor analysis identifies which relations are strongest. Using factor
analysis, a market researcher who wants to know what combination of variables or factors is
most appealing to a particular type of consumer can reduce the data down to the few variables
that are most appealing to consumers.
Conditions for a Factor Analysis Exercise:
It requires metric data. This means that the data should be either interval or ratio scale
in nature.
The size of the sample respondents should be at least four to five times more than the
number of variables.
Initial set of variables should be highly correlated. A correlation matrix of the variables
could be computed and tested for its statistical significance. The test is carried out
using Bartlett's test of sphericity, which takes the determinant of the correlation matrix
into consideration.
The Kaiser-Meyer-Olkin (KMO) measure is computed before performing factor analysis; it
takes a value between 0 and 1. The KMO statistic compares the magnitudes of observed
correlation coefficients with the magnitudes of partial correlation coefficients.
Conditions for Discriminant Analysis:
The dependent variable should be categorical, with its categories used to form groups.
Partitioning quantitative variables is only justifiable if there are easily
identifiable gaps at the points of division;
for instance, three groups taking three available levels of amounts of housing loan;
the groups or categories should be defined before collecting the data;
the attribute(s) used to separate the groups should discriminate quite clearly between
the groups so that group or category overlap is clearly non-existent or minimal;
group sizes of the dependent should not be grossly different and should be at least five
times the number of independent variables.
Discriminant Analysis is used when:
the dependent variable is categorical, with the predictor IVs at interval level, such as age,
income, attitudes, perceptions, and years of education, although dummy variables can be
used as predictors as in multiple regression (logistic regression IVs can be of any level of
measurement);
there are more than two DV categories, unlike logistic regression, which is limited to a
dichotomous dependent variable.
Cluster Analysis - This statistical procedure is used to separate objects into a specific
number of groups that are mutually exclusive but also relatively homogeneous in
constitution. The process is similar to what occurs in market segmentation, where the market
researcher is interested in the similarities that facilitate grouping consumers into segments and
is also interested in the attributes that make the market segments distinct. For example, you may
need to identify people with similar patterns of past purchases so that you can tailor your
marketing strategies.
The objective of cluster analysis is to identify groups of objects that are very similar with regard
to their price consciousness and brand loyalty and to assign them to clusters. After having decided
on the clustering variables (brand loyalty and price consciousness), we need to decide on the
clustering procedure to form our groups of objects. This step is crucial for the analysis, as
different procedures require different decisions prior to analysis.
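To make the clustering step concrete, here is a minimal k-means sketch; the shopper data and all variable names are invented for illustration and are not from the text (a production analysis would typically use a library implementation):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-centre assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical shoppers described by (price consciousness, brand loyalty)
rng = np.random.default_rng(2)
bargain_hunters = rng.normal([8, 2], 0.5, size=(50, 2))
loyalists       = rng.normal([2, 8], 0.5, size=(50, 2))
X = np.vstack([bargain_hunters, loyalists])
labels, centers = kmeans(X, k=2)
```

With two clearly separated segments like these, the procedure recovers the grouping exactly; choosing k and the distance measure are the decisions the clustering procedure forces on the analyst.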
Conjoint Analysis - This statistical method is used to unpack the preferences of consumers with
regard to different marketing offers. Two dimensions are of interest to the market researcher
in conjoint analysis: (1) the inferred utility functions of each attribute, and (2) the relative
importance of the preferred attributes to the consumers. For example, a computer may be
described in terms of attributes such as processor type, hard disk size, and amount of memory.
Each of these attributes is broken down into levels; for instance, the levels of the memory
attribute might be 1GB, 2GB, 3GB, and 4GB.
These attributes and levels can be used to define different products or product profiles. The first
stage in conjoint analysis is to create a set of product profiles which customers or respondents are
then asked to compare and choose from. Obviously, the number of potential profiles increases
rapidly for every new attribute added, so there are techniques to simplify both the number of
profiles to be tested and the way in which preferences are discovered. Different types of conjoint
analysis (e.g. choice-based conjoint, full-profile conjoint, or adaptive conjoint analysis) take
different approaches to balancing the number of attributes against the amount of data that needs
to be collected.
By analysing which items are chosen or preferred from the product profiles offered to the
customer, it is possible to work out statistically what is driving preference among the
attributes and levels shown and, more importantly, to derive an implicit numerical valuation for each
attribute and level, known as utilities or part-worths, along with importance scores.
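As an illustration of how part-worths can be recovered, the sketch below fits a dummy-coded linear model to one hypothetical respondent's ratings; the attribute levels and all numbers are invented, and a real study would use many respondents and a designed set of profiles:

```python
import numpy as np

# Hypothetical ratings-based conjoint: 3 memory levels x 2 disk sizes.
# Profiles are dummy-coded against a base level (1GB memory, small disk).
# Columns: intercept, mem=2GB, mem=4GB, disk=large
profiles = np.array([
    [1, 0, 0, 0],   # 1GB, small disk
    [1, 1, 0, 0],   # 2GB, small disk
    [1, 0, 1, 0],   # 4GB, small disk
    [1, 0, 0, 1],   # 1GB, large disk
    [1, 1, 0, 1],   # 2GB, large disk
    [1, 0, 1, 1],   # 4GB, large disk
], dtype=float)

# One respondent's preference ratings for the six profiles
ratings = np.array([2.0, 4.0, 7.0, 3.0, 5.0, 8.0])

# Least-squares fit: the coefficients are the part-worth utilities
coefs, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
base, pw_2gb, pw_4gb, pw_large = coefs

# Importance of an attribute = range of its part-worths
importance_memory = max(0.0, pw_2gb, pw_4gb) - min(0.0, pw_2gb, pw_4gb)
importance_disk = abs(pw_large)
```

Here memory spans a much wider utility range than disk size, so memory would receive the larger importance score for this respondent.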
The result is a detailed picture of how customers make decisions and a set of data that can be
used to build market models. These models can predict market share under new market conditions and
test the impact of product or service changes on the market, showing where and how you can gain the
greatest improvements over your competitors. Scaling techniques commonly used to collect such preference data include:
- Pairwise comparison scale - a respondent is presented with two items at a time and asked to select one (example: Do you prefer Pepsi or Coke?). This is an ordinal-level technique when a measurement model is not applied.
- Constant sum scale - a respondent is given a constant sum of money, scrip, credits, or points and asked to allocate these to various items (example: If you had 100 yen to spend on food products, how much would you spend on product A, on product B, on product C, etc.?). This is an ordinal-level technique.
- Q-Sort scale - Up to 140 items are sorted into groups based on a rank-order procedure.
- Guttman scale - This is a procedure to determine whether a set of items can be rank-ordered on a unidimensional scale. It utilizes the intensity structure among several indicators of a given variable. Statements are listed in order of importance. The rating is scaled by summing all responses until the first negative response in the list. The Guttman scale is related to Rasch measurement; specifically, Rasch models bring the Guttman approach within a probabilistic framework.
- Continuous rating scale (also called the graphic rating scale) - respondents rate items by placing a mark on a line. The line is usually labeled at each end. There is sometimes a series of numbers, called scale points (say, from zero to 100), under the line. Scoring and codification are difficult.
- Semantic differential scale - respondents are asked to rate an item on various attributes using a seven-point scale. Each attribute requires a scale with bipolar terminal labels.
- Stapel scale - This is a unipolar ten-point rating scale. It ranges from +5 to -5 and has no neutral zero point.
- Thurstone scale - This is a scaling technique that incorporates the intensity structure among indicators.
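The Guttman scoring rule described above ("sum all responses until the first negative response") can be sketched in a few lines; the function name is illustrative:

```python
def guttman_score(responses):
    """Score a Guttman-scaled item set.

    `responses` is an ordered list of booleans (easiest item first,
    hardest last). The score is the run of positive answers up to,
    but not including, the first negative response.
    """
    score = 0
    for answered_yes in responses:
        if not answered_yes:
            break
        score += 1
    return score

# A respondent who endorses the first two statements only scores 2,
# even if a later, harder statement happens to be endorsed.
score = guttman_score([True, True, False, True])
```

A perfectly scalable item set produces no such "error" patterns; real data usually contain some, which is why Rasch models recast the idea probabilistically.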
The following table classifies the various simple data types, associated distributions,
permissible operations, etc. Regardless of the logically possible values, all of these data types
are generally coded using real numbers, because the theory of random variables often
explicitly assumes that they are represented as real numbers.
Data type: binary
Possible values: 0, 1 (arbitrary labels)
Example usage: binary outcome ("yes/no", "true/false", "success/failure", etc.)
Level of measurement: nominal scale
Distribution: Bernoulli
Scale of relative differences: incomparable
Permissible statistics: mode, Chi-squared
Regression analysis: logistic, probit

Data type: categorical
Possible values: 1, 2, ..., K (arbitrary labels)
Example usage: categorical outcome (specific blood type, political party, word, etc.)
Level of measurement: nominal scale
Distribution: categorical
Scale of relative differences: incomparable
Permissible statistics: mode, Chi-squared
Regression analysis: multinomial logit, multinomial probit

Data type: ordinal
Possible values: integer or real number (arbitrary scale)
Example usage: relative score, significant only for creating a ranking
Level of measurement: ordinal scale
Distribution: categorical??
Scale of relative differences: relative comparison
Regression analysis: ordinal regression (ordered logit, ordered probit)

Data type: binomial
Possible values: 0, 1, ..., N
Example usage: number of successes (e.g. yes votes) out of N possible
Level of measurement: interval scale??
Distribution: binomial, beta-binomial, etc.
Scale of relative differences: additive??
Permissible statistics: mean, median, mode, standard deviation, correlation
Regression analysis: binomial regression (logistic, probit)

Data type: count
Possible values: nonnegative integers (0, 1, ...)
Example usage: number of items (telephone calls, people, molecules, births, deaths, etc.) in a given interval/area/volume
Level of measurement: ratio scale
Distribution: Poisson, negative binomial, etc.
Scale of relative differences: multiplicative
Permissible statistics: all statistics permitted for interval scales plus the following: geometric mean, harmonic mean, coefficient of variation
Regression analysis: Poisson regression, negative binomial regression

Data type: real-valued additive
Possible values: real number
Example usage: temperature, relative distance, location parameter, etc.
Level of measurement: interval scale
Distribution: normal, etc.
Scale of relative differences: additive
Permissible statistics: mean, median, mode, standard deviation, correlation
Regression analysis: standard linear regression

Data type: real-valued multiplicative
Possible values: positive real number
Example usage: price, income, size, scale parameter, etc. (especially when varying over a large scale)
Level of measurement: ratio scale
Distribution: log-normal, gamma, exponential, etc. (usually a skewed distribution)
Scale of relative differences: multiplicative
Permissible statistics: all statistics permitted for interval scales plus the following: geometric mean, harmonic mean, coefficient of variation
Regression analysis: generalized linear model with logarithmic link
Research ethicists everywhere today are challenged by issues that reflect global concerns in
other domains, such as the conduct of research in developing countries, the limits of
research involving genetic material and the protection of privacy in light of advances in
technology and Internet capabilities.
Q7b) A proper research report includes the following sections, submitted in the order listed,
each section to start on a new page. Some journals request a summary to be placed at the end of
the discussion. Some techniques articles include an appendix with equations, formulas,
calculations, etc. Some journals deviate from the format, such as by combining results and
discussion, or combining everything but the title, abstract, and literature as is done in the
journal Science. Reports will adhere to the following standard format:
Title
Abstract
Introduction
Experimental Details or Theoretical Analysis
Results
Discussion
Conclusions and Summary
References
Introduction
The introduction should give a clear statement of the problem, what has been done before (with proper literature citations), and the objectives of the
current project. A clear relationship between the current project and the scope and limitations of
earlier work should be made so that the reasons for the project and the approach used will be
understood.
Experimental Details or Theoretical Analysis
This section should describe what was actually done. It is a succinct exposition of the laboratory
notebook, describing procedures, techniques, instrumentation, special precautions, and so on. It
should be sufficiently detailed that other experienced researchers would be able to repeat the
work and obtain comparable results.
In theoretical reports, this section would include sufficient theoretical or mathematical analysis to
enable derivations and numerical results to be checked. Computer programs from the public
domain should be cited. New computer programs should be described in outline form.
If the experimental section is lengthy and detailed, as in synthetic work, it can be placed at the
end of the report or as an appendix so that it does not interrupt the conceptual flow of the report.
Its placement will depend on the nature of the project and the discretion of the writer.
Results
In this section, relevant data, observations, and findings are summarized. Tabulation of data,
equations, charts, and figures can be used effectively to present results clearly and concisely.
Schemes to show reaction sequences may be used here or elsewhere in the report.
Discussion
The crux of the report is the analysis and interpretation of the results. What do the results mean?
How do they relate to the objectives of the project? To what extent have they resolved the
problem? Because the "Results" and "Discussion" sections are interrelated, they can often be
combined as one section.
Conclusions and Summary
A separate section outlining the main conclusions of the project is appropriate if conclusions
have not already been stated in the "Discussion" section. Directions for future work are also
suitably expressed here.
A lengthy report, or one in which the findings are complex, usually benefits from a paragraph
summarizing the main features of the report: the objectives, the findings, and the conclusions.
The last paragraph of text in manuscripts prepared for publication is customarily dedicated to
acknowledgments. However, there is no rule about this, and research reports or senior theses
frequently place acknowledgments following the title page.
Citing References
Literature references should be collated at the end of the report and cited in one of the formats
described in The ACS Style Guide or standard journals. Do not mix formats. All references
should be checked against the original literature. Never cite a reference that you have not read
yourself. Double-check all journal years, volumes, issues, and inclusive page numbers to ensure the
accuracy of your citations.