Exploratory Study
Exploratory studies are undertaken to better comprehend the nature of a problem when very few studies have been conducted in that area.
For example, with the recent development of the internet and the busy lifestyles of people in the West, many individuals are showing interest in accessing the internet.
Descriptive Study:
A descriptive study is undertaken in order to ascertain and describe the characteristics of the variables of interest in a situation.
For instance, a study of a class in terms of the percentage of members in their senior and junior years, gender composition, age groupings, number of semesters until graduation, and number of business courses taken can only be considered descriptive in nature.
4. Help make certain simple decisions (such as how many and what type of individuals should be transferred from one department to another).
Example: A bank manager wants to have a profile of the individuals who have had loan payments outstanding for six months or more. It would include details of their average age, earnings, type of occupation, full-time/part-time employment status, and the like.
This information might help him ask for further information or make an immediate decision on the types of individuals to whom he would not extend loans in future.
When the researcher wants to delineate the cause of one or more problems, the study is called a Causal Study.
When the researcher is interested in outlining the important variables that are associated with the problem, it is called a Correlational Study.
A correlational question:
A correlational hypothesis:
Organizational research can be done in the natural environment where work proceeds
normally (i.e., in non-contrived setting) or in artificial, contrived settings.
Studies to establish cause-and-effect relationships in the same natural environment in which employees normally function are called field experiments.
Example: comparing employees who have been given recognition with employees who have not.
Example: Select all new employees with the same scores on the entry test, provide one group with training and the other with none, and ensure that neither group is exposed to any senior employee who could guide them.
The unit of analysis refers to the level of aggregation of the data collected during the
subsequent data analysis stages.
Individuals: If the problem statement focuses on how to raise the motivational levels of
employees in general, then we are interested in individual employees in the organization and
would like to find out what we can do to raise their motivation.
Here the unit of analysis is the individual (e.g., managers' perceptions of the factors that influence the success of a project).
Dyads: If the researcher is interested in studying two-person interactions, then several two-person groups, known as dyads, become the unit of analysis.
For example, husband-wife pairs in families (are they satisfied with the education provided by the school?) and mentor-mentee pairs (perceptions of the benefit of mentoring).
Groups: If the problem statement is related to group effectiveness, then obviously the unit of analysis would be at the group level.
In such cases the unit of analysis is the group (e.g., the use of IT by different departments).
Organizations: If we compare different departments in the organization, then the data analysis
will be done at the departmental level - that is, the individuals in the department will be treated
as one unit and comparison made treating the department as the unit of analysis.
Cultures: If we want to study cultural differences among nations, we will have to collect data from different countries and study the underlying patterns of culture in each country; here the unit of analysis is the culture.
Cross-Sectional Studies
A study can be done in which data are gathered just once, perhaps over a period of days
or weeks or months, in order to answer a research question. Such studies are called
one-shot or cross-sectional studies.
(data collected from project managers on their psychological well-being between October and December)
Longitudinal Studies
In some cases, the researcher might want to study people or phenomena at more than
one point in time in order to answer the research question. For example, the researcher
might want to study employees' behavior before and after a change in top management, to learn the effects of the change.
Studies in which data on the dependent variable are gathered at two or more points in time to answer the research question are called longitudinal studies (e.g., a city's use of electricity in summer and then in winter).
Types of Scales: Four types of scales are used in research, each with specific applications and
properties. The scales are
Nominal Scale: The nominal scale is simply a count of the objects belonging to different categories.
Ordinal Scale: The ordinal scale positions objects in some order (for example, it indicates that pineapples are juicier than apples and that oranges are juicier still).
Interval Scale: It gives us information on the extent (level) to which one is juicier than the other: how much better is the pineapple than the apple, and the orange than the pineapple? Is the pineapple only marginally better than the apple?
Ratio Scale: It is the most comprehensive scale and has all the characteristics of the other scales.
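The four scale types can be illustrated with a short sketch using the fruit example above; all of the data here are invented for illustration.

```python
# Hypothetical data illustrating the four measurement scales.

# Nominal: categories only -- we can count objects per category, not order them.
fruit_counts = {"apple": 12, "pineapple": 7, "orange": 9}

# Ordinal: rank order is meaningful, but the gaps between ranks are not.
juiciness_rank = ["apple", "pineapple", "orange"]   # least juicy to juiciest

# Interval: equal intervals (e.g. a 1-7 rating), but no true zero point.
juiciness_rating = {"apple": 3, "pineapple": 5, "orange": 7}

# Ratio: a true zero exists, so ratios are meaningful (e.g. juice yield in ml).
juice_ml = {"apple": 40, "pineapple": 110, "orange": 80}

# Only ratio data supports statements like "the orange yields twice as much":
print(juice_ml["orange"] / juice_ml["apple"])  # 2.0
```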
Simple Category: This scale is also called a dichotomous scale. It offers two mutually
exclusive response choices. In the example shown in the slide, the response choices are yes
and no, but they could be other response choices too such as agree and disagree.
When there are multiple options for the rater but only one answer is sought, the multiple-choice, single-response scale is appropriate. The "other" response may be omitted when exhaustiveness of categories is not critical or there is no possibility of another response. This scale produces nominal data.
Multiple-Choice, Multiple Response Scale
This scale is a variation of the last and is called a checklist. It allows the rater to select one or several alternatives. The cumulative feature of this scale can be beneficial when a complete picture of the participant's choices is desired, but it may also present a problem for reporting when research sponsors expect the responses to sum to 100 percent. This scale generates nominal data.
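In practice, checklist responses are usually coded as one 0/1 indicator column per option, which is why the rows need not sum to a single choice. A minimal sketch with hypothetical media options:

```python
# Hypothetical multiple-response (checklist) question: "Which media do you use?"
options = ["newspaper", "TV", "radio", "internet"]
responses = [
    {"TV", "internet"},
    {"newspaper"},
    {"TV", "radio", "internet"},
]

# One binary indicator per option -- nominal data; a row may contain several 1s.
coded = [[1 if opt in chosen else 0 for opt in options] for chosen in responses]
print(coded)  # [[0, 1, 0, 1], [1, 0, 0, 0], [0, 1, 1, 1]]
```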
Likert scale
The Likert scale was developed by Rensis Likert and is the most frequently used variation of the
summated rating scale. Summated rating scales consist of statements that express either a
favorable or unfavorable attitude toward the object of interest. The participant is asked to agree
or disagree with each statement. Each response is given a numerical score to reflect its degree
of attitudinal favorableness, and the scores may be summed to measure the participant's overall
attitude. Likert scales may use 5, 7, or 9 scale points. They are quick and easy to construct. The
scale produces interval data.
Originally, creating a Likert scale involved a procedure known as item analysis. Item analysis
assesses each item based on how well it discriminates between those people whose total score
is high and those whose total score is low. It involves calculating the mean scores for each scale
item among the low scorers and the high scorers. The mean scores for the high-score and lowscore groups are then tested for statistical significance by computing t values. After finding the t
values for each statement, they are rank-ordered, and those statements with the highest t
values are selected. Researchers have found that a larger number of items for each attitude
object improves the reliability of the scale.
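The item-analysis procedure above can be sketched as follows. The responses and the `t_value` helper are hypothetical, and the t statistic is computed in the Welch (unequal-variance) form; this is a sketch of the idea, not Likert's exact procedure.

```python
import statistics

def t_value(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical 5-point Likert responses: rows = respondents, columns = items.
responses = [
    [5, 4, 3], [4, 5, 2], [5, 5, 4],   # these respondents score high overall
    [2, 1, 3], [1, 2, 2], [2, 1, 4],   # these score low overall
]
totals = [sum(r) for r in responses]
order = sorted(range(len(responses)), key=lambda i: totals[i])
half = len(responses) // 2
low, high = order[:half], order[half:]   # low-score and high-score groups

# Items that discriminate between high and low scorers get large t values
# and would be retained; items with t near zero would be dropped.
for item in range(len(responses[0])):
    hi = [responses[i][item] for i in high]
    lo = [responses[i][item] for i in low]
    print(f"item {item}: t = {t_value(hi, lo):.2f}")
```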
The semantic differential scale
The semantic differential scale measures the psychological meanings of an attitude object using
bipolar adjectives. Researchers use this scale for studies of brand and institutional image. The
method consists of a set of bipolar rating scales, usually with 7 points, by which one or more
participants rate one or more concepts on each scale item. The scale is based on the
proposition that an object can have several dimensions of connotative meaning. The meanings
are located in multidimensional property space, called semantic space. It is efficient and easy
for securing attitudes from a large sample. Attitudes may be measured in both direction and
intensity. The total set of responses provides a comprehensive picture of the meaning of an
object and a measure of the person doing the rating. It is standardized and produces interval
data.
Numerical scales have equal intervals that separate their numeric scale points. The verbal
anchors serve as the labels for the extreme points. Numerical scales are often 5-point scales
but may have 7 or 10 points. The participants write a number from the scale next to each item. It
produces either ordinal or interval data.
The Stapel scale is used as an alternative to the semantic differential, especially when it is
difficult to find bipolar adjectives that match the investigative question. In the example, there are
three attributes of corporate image. The scale is composed of the word identifying the image
dimension and a set of 10 response categories for each of the three attributes. Stapel scales
produce interval data.
The constant-sum scale helps researchers to discover proportions. The participant allocates
points to more than one attribute or property indicant, such that they total a constant sum,
usually 100 or 10. Participant precision and patience suffer when too many stimuli are
proportioned and summed. A participant's ability to add may also be taxed. Its advantage is its
compatibility with percent and the fact that alternatives that are perceived to be equal can be so
scored. This scale produces interval data.
The graphic rating scale was originally created to enable researchers to discern fine
differences. Theoretically, an infinite number of ratings is possible if participants are
sophisticated enough to differentiate and record them. They are instructed to mark their
response at any point along a continuum. Usually, the score is a measure of length from either
endpoint. The results are treated as interval data. The difficulty is in coding and analysis. Other
graphic rating scales use pictures, icons, or other visuals to communicate with the rater and
represent a variety of data types. Graphic scales are often used with children.
Probability Sampling
Systematic Sampling: Technique in which an initial starting point is selected by a random
process, after which every nth number on the list is selected to constitute part of the sample
Sampling interval (SI) = population list size (N) divided by a pre-determined sample size
(n)
For systematic sampling to work best, the list should be random in nature and not have some underlying systematic pattern. E.g., an office directory in which the senior manager's and middle managers' names are listed first in each department can create a systematic problem.
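The interval formula can be turned into a short sketch; the function and the employee list here are hypothetical.

```python
import random

def systematic_sample(population, n, seed=None):
    """Every k-th element after a random start, where k = N // n (the sampling interval)."""
    k = len(population) // n
    rng = random.Random(seed)
    start = rng.randrange(k)          # random starting point within the first interval
    return [population[start + i * k] for i in range(n)]

# Hypothetical list of 100 employees; SI = 100 / 10 = 10.
employees = [f"emp{i:03d}" for i in range(100)]
sample = systematic_sample(employees, n=10, seed=42)
print(sample[:3])  # three of the ten selected employees
```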
Stratified Sampling: Technique in which simple random subsamples are drawn from within different strata that share some common characteristic. Within each stratum the elements are homogeneous, and among the strata they are heterogeneous. Example: The student body of CIIT is divided into two groups (management science, engineering), and from each group students are selected for the sample using simple random sampling, whereby the size of the sample for each group is determined by that group's overall strength.
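A minimal sketch of proportionate stratified sampling for the example above; the strata names and sizes are invented for illustration.

```python
import random

def proportionate_stratified_sample(strata, total_n, seed=None):
    """Simple random sample within each stratum, sized by the stratum's share of the population."""
    rng = random.Random(seed)
    N = sum(len(members) for members in strata.values())
    return {name: rng.sample(members, round(total_n * len(members) / N))
            for name, members in strata.items()}

# Hypothetical student body split into the two strata from the CIIT example.
strata = {
    "management science": [f"ms{i}" for i in range(60)],
    "engineering": [f"en{i}" for i in range(40)],
}
s = proportionate_stratified_sample(strata, total_n=10, seed=0)
print({k: len(v) for k, v in s.items()})  # {'management science': 6, 'engineering': 4}
```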
Cluster Sampling: Technique in which the target population is first divided into clusters. Then, a
random sample of clusters is drawn and for each selected cluster either all the elements or a
sample of elements are included in the sample. Cluster samples offer more heterogeneity within
groups and more homogeneity among groups
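A minimal sketch of the cluster design in which every element of each randomly selected cluster is included; all names here are hypothetical.

```python
import random

def cluster_sample(clusters, n_clusters, seed=None):
    """Randomly select whole clusters, then include every element of each selected cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    return [element for name in chosen for element in clusters[name]]

# Hypothetical city blocks (clusters), five residents each.
blocks = {f"block{b}": [f"resident{b}_{r}" for r in range(5)] for b in range(20)}
sample = cluster_sample(blocks, n_clusters=3, seed=7)
print(len(sample))  # 3 clusters x 5 residents = 15
```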
Area sampling: Specific type of cluster sampling in which clusters consist of geographic
areas such as counties, city blocks, or particular boundaries within a locality. Area
sampling is less expensive than most other sampling designs and does not depend on a
sampling frame. The key motivation in cluster sampling is cost reduction. Example: If you
wanted to survey the residents of the city, you would get a city map, take a sample of
city blocks and select respondents within each city block.
Double sampling: A sampling design in which a sample is initially used in a study to collect some preliminary information of interest, and later a subsample of this primary sample is used to examine the matter in more detail. Example: A structured interview might indicate that a subgroup of respondents has more insight into the problems of the organization. These respondents might be interviewed again and asked additional questions.
Non-Probability Sampling
Convenience Sampling: The major advantages of convenience sampling are that it is quick, convenient, and economical; a major disadvantage is that the sample may not be representative.
Convenience sampling is best used for the purpose of exploratory research and supplemented
subsequently with probability sampling.
Judgment (purposive) Sampling: Sampling technique in which the business researcher
selects the sample based on judgment about some appropriate characteristic of the sample
members. Example: Selection of certain students who are active in the university activities to
inquire about the sports and recreation facilities at the university.
In a structured interview, each candidate is asked similar questions in a predetermined format.
Emphasis tends to be on your past experience and the assets you can bring to the company. Typically, the interviewer records your answers, which are potentially scored on a standard grid.
Unstructured interviews are much more casual and unrehearsed. They depend on free flowing
conversation which tends to focus on your personal qualities as they relate to the work.
Questions about skills and strengths can be asked and should be answered as formally as in a
structured interview.
1. Unstructured interviews are more flexible, as questions can be adapted and changed depending on the respondent's answers. The interview can deviate from the interview schedule.
2. Unstructured interviews generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words, and helps the researcher develop a real sense of a person's understanding of a situation.
3. They also have increased validity because they give the interviewer the opportunity to probe for a deeper understanding, ask for clarification, and allow the interviewee to steer the direction of the interview.
Limitations
1. It can be time consuming to conduct an unstructured interview and analyze the qualitative
data (using methods such as thematic analysis).
2. Employing and training interviewers is expensive, and not as cheap as collecting data via
questionnaires. For example, certain skills may be needed by the interviewer. These include
the ability to establish rapport & knowing when to probe.
Therefore, internal validity refers to how well a piece of research allows you to choose among
alternate explanations of something. A research study with high internal validity lets you choose
one explanation over another with a lot of confidence, because it avoids (many possible)
confounds.
External validity refers to how well data and theories from one setting apply to another. This
question is usually asked about laboratory research: Does it apply in the everyday "real" world
outside the lab? The figure at the right summarizes external and internal validity and the relation between the two: the green ellipse represents internal validity, and the blue rounded rectangle around it represents external validity.
Theoretical Framework
A theoretical framework is a collection of interrelated concepts, like a theory but not necessarily
so well worked-out. A theoretical framework guides your research, determining what things you
will measure, and what statistical relationships you will look for.
Theoretical frameworks are obviously critical in deductive, theory-testing sorts of studies
(see Kinds of Research for more information). In those kinds of studies, the theoretical
framework must be very specific and well-thought out.
Surprisingly, theoretical frameworks are also important in exploratory studies, where you really
don't know much about what is going on, and are trying to learn more. There are two reasons
why theoretical frameworks are important here. First, no matter how little you think you know
about a topic, and how unbiased you think you are, it is impossible for a human being not to
have preconceived notions, even if they are of a very general nature. For example, some people
fundamentally believe that people are basically lazy and untrustworthy, and that you have to keep your wits about you to avoid being conned. These fundamental beliefs about human nature affect how you look at things when doing personnel research. In this sense, you are always being guided
by a theoretical framework, but you don't know it. Not knowing what your real framework is can
be a problem. The framework tends to guide what you notice in an organization, and what you
don't notice. In other words, you don't even notice things that don't fit your framework! We can
never completely get around this problem, but we can reduce the problem considerably by
simply making our implicit framework explicit. Once it is explicit, we can deliberately consider
other frameworks, and try to see the organizational situation through different lenses.
Kinds of Personnel Research
There are many kinds of personnel research. Three dimensions are particularly important in
classifying types of research:
Applied vs Basic research. Applied research is research designed to solve a particular problem
in a particular circumstance, such as determining the cause of low morale in a given department
of an organization. Basic research is designed to understand the underlying principles behind
human behavior. For example, you might try to understand what motivates people to work hard
at their jobs. This distinction is discussed in more detail in another handout.
Exploratory vs Confirmatory. Exploratory research is research into the unknown. It is used when you are investigating something but don't really understand it, or are not completely sure what you are looking for. It's sort of like a journalist whose curiosity is piqued by something and who just starts looking into it without really knowing what they're looking for.
Confirmatory research is where you have a pretty good idea what's going on. That is, you have
a theory (or several theories), and the objective of the research is to find out if the theory is
supported by the facts.
Quantitative vs Qualitative. Quantitative studies measure variables with some precision using
numeric scales. For example, you might measure a person's height and weight. Or you might
construct a survey in which you measure how much respondents like President Clinton, using a
1 to 10 scale. Qualitative studies are based on direct observation of behavior, or on transcripts
of unstructured interviews with informants. For example, you might talk to ten female executives about the decision-making process behind their choice to have children or not, and if so, when. You might interview them for several hours, tape-recording the whole thing, and then
transcribe the recordings to written text, and then analyze the text.
As a general rule (but there are many exceptions), confirmatory studies tend to be quantitative,
while exploratory studies tend to be qualitative.
Census. A census is a study that obtains data from every member of a population. In
most studies, a census is not practical, because of the cost and/or time required.
Sample survey. A sample survey is a study that obtains data from a subset of a
population, in order to estimate population attributes.
A clock is valid if it measures 'true' time, and reliable if it does so consistently.
We would regard it as an invalid tool if it showed the wrong time, yet if it was sometimes slow
and sometimes fast we would call it unreliable.
A clock that is always 10 minutes fast has high reliability yet poor validity.
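The clock analogy can be simulated numerically: systematic bias shows up in the mean error (poor validity), while scatter shows up in the standard deviation of the errors (poor reliability). A sketch with simulated readings:

```python
import random
import statistics

rng = random.Random(0)

# Clock A: always 10 minutes fast -- highly reliable (consistent) but not valid.
clock_a = [10.0 for _ in range(100)]          # error in minutes on each reading

# Clock B: unbiased on average but erratic -- more valid on average, unreliable.
clock_b = [rng.gauss(0, 10) for _ in range(100)]

# Mean error measures (in)validity; spread of the errors measures (un)reliability.
print(statistics.mean(clock_a), statistics.stdev(clock_a))   # 10.0 0.0
print(round(statistics.mean(clock_b), 1), round(statistics.stdev(clock_b), 1))
```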
Measure of Dispersion
Dispersion is the variability that exists in a set of observations.
Two sets of data might have the same mean, but the dispersion could be different.
The range
The range refers to the extreme values in a set of observations.
54, 50, 35, 67, 50
(35, 67)
The variance
The variance is calculated by subtracting the mean from each of the observations in the data set, squaring each difference, and dividing the sum of these squares by the number of observations.
The standard deviation
The standard deviation, another measure of dispersion for interval- and ratio-scaled data, offers an index of the spread of a distribution or the variability in the data.
It is a very commonly used measure of dispersion and is simply the square root of the variance.
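The three measures can be computed directly from the observations in the range example; a short worked sketch:

```python
import statistics

data = [54, 50, 35, 67, 50]        # the observations from the range example above

# Range: the extreme values and the distance between them.
lo, hi = min(data), max(data)
print((lo, hi), hi - lo)           # (35, 67) 32

# Variance (population form): mean squared deviation from the mean.
mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)

# Standard deviation: the square root of the variance.
std_dev = variance ** 0.5

# The stdlib agrees with the manual calculation:
assert abs(variance - statistics.pvariance(data)) < 1e-9
print(round(mean, 2), round(variance, 2), round(std_dev, 2))  # 51.2 104.56 10.23
```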
Primary Data
Personal Interview
Focus Groups
Panels
Delphi Technique
Self-Administered Surveys
Secondary Data
Company Archives
Government Publications
Industry Analysis
Primary Data Collection Methods
Focus Group
Panels
Observation
Focus groups usually consist of 8 to 10 members, with a moderator leading the discussion for about 2 hours on a particular topic, concept, or product.
E.g., a discussion on computers and computing, working mothers, social networking, etc.
Focus groups are less expensive than other methods and are usually used for exploratory information; the findings cannot be generalized.
Panels:
Similar to a focus group, but meets more than once, in order to study a change or intervention over a period of time.
When the product is modified, the response of the panel can be observed.
Observation measures:
Methods through which primary data are collected without involving people.
E.g., the wear and tear of books, of a section of an office, or of the seating area of a railway station, which indicates popularity, frequency of use, etc.
E.g., the number of cans in the dust bin and their brands, or the number of motorcycles vs. cars parked in the university parking lot.
Interviewing:
If a large set of respondents is needed, then more than one interviewer is used; hence the interviewers need to be trained so that biases, voice inflections, and differences in wording are avoided.
Structured and Unstructured
Unstructured:
e.g., Tell me something about your unit and department, and perhaps even the organization as a whole, in terms of work, employees, and whatever else you think is important.
Compared to other departments, what are the strengths and weaknesses of your department?
Encourage the respondent to reflect on both the positive and negative aspects.
Through unstructured interviewing, the different major areas might be exposed. From these, the researcher can pick some areas as focus variables that need further probing.
The researcher can then devise a more focused approach and develop a more structured interview emphasizing particular issues.
Structured:
Know at the outset what information is needed, focusing on factors relevant to the problem.
The focus is on the factors that surfaced during the unstructured interview.
E.g., during the previous unstructured interview it was identified that the department needs improvement.
Now you can focus on questions that address how to improve the department, i.e., the factors that can improve it.
This can be done face to face, over the telephone, or through computers via the internet.
The results could highlight the important factors influencing the issues.
This information is qualitative in nature and could then be empirically tested and verified using other methods, such as questionnaires.
Guideline for Interviews
Listen carefully
Physical setting
Telephone interviews:
Adv: wider reach in a short time; it is sometimes easier to discuss personal information over the phone.
Dis: the interview can be terminated without warning, cannot be prolonged, and non-verbal cues are lost.
Closed vs. Open Questions
Easy.
Pre-testing is a must.