
Business research is an organized and deliberate process through which an organization learns new knowledge and improves its performance.

Exploratory Study
Exploratory studies are undertaken to better comprehend the nature of
the problem, since very few studies might have been conducted in that
area.

Extensive interviews with many people might have to be undertaken to get a handle on the situation and to understand the phenomenon.

After a better understanding is obtained, more rigorous research can proceed.

Some qualitative studies (as opposed to quantitative data gathered through questionnaires, etc.), where data are collected through observation or interviews, are exploratory in nature.

When the data reveal some pattern regarding the phenomena of interest, theories are developed and hypotheses formulated for subsequent testing.

Example: What is the role of virtual markets for e-commerce? (asked in 2005)

With the recent development of the internet and the busy lifestyle of people in the West, many individuals are showing interest in accessing the internet.

Descriptive Study:
A descriptive study is undertaken in order to ascertain and be able to describe the
characteristics of the variables of interest in a situation.

For instance, a study of a class in terms of the percentage of members who are in their senior and junior years, gender composition, age groupings, number of semesters until graduation, and number of business courses taken can only be considered descriptive in nature.

Descriptive studies that present data in a meaningful form help to:

1. Understand the characteristics of a group in a given situation.

2. Think systematically about aspects in a given situation.

3. Offer ideas for further probing and research.

4. Help make certain simple decisions (such as how many and what type of individuals should be transferred from one department to another).

Example: A bank manager wants to have a profile of the individuals who have loan
payments outstanding for six months or more. It would include details of their average
age, earnings, type of occupation they are in, full time/part time employment status, and the
like.

This information might help him request further information or make an immediate decision about the types of individuals to whom he would not extend loans in the future.

Types of Investigation: Causal versus Correlational

When the researcher wants to define the cause of one or more problems, the study is called a causal study.

When the researcher is interested in outlining the important variables that are associated with the problem, it is called a correlational study.

Example:

A causal study question: Does smoking cause cancer?

A correlational study question: Are smoking and chewing tobacco related to cancer?

A causal study hypothesis: Smoking causes cancer.

A correlational study hypothesis: Smoking and cancer are related; chewing tobacco and cancer are related.

Contrived and Non-contrived

Organizational research can be done in the natural environment where work proceeds
normally (i.e., in a non-contrived setting) or in artificial, contrived settings.

Correlational studies are invariably conducted in non-contrived settings, whereas rigorous causal studies are done in contrived lab settings.

Correlational studies done in organizations are called field studies (e.g., factors influencing employee turnover in a call center).

Studies to establish cause and effect relationships using the same natural environment
in which employees normally function are called field experiments

Example: comparing employees who have been given recognition with employees who have not been given recognition.

Cause-and-effect studies conducted in a contrived environment, in which extraneous factors are controlled, are termed lab experiments.

Example: Select all new employees with the same scores on the entry test, provide one group training and the other no training, while controlling that they are not exposed to any senior employee who could guide them.

Unit of Analysis: Individuals, Dyads, Groups, Organizations, Cultures

The unit of analysis refers to the level of aggregation of the data collected during the
subsequent data analysis stages.

Individuals: If the problem statement focuses on how to raise the motivational levels of
employees in general, then we are interested in individual employees in the organization and
would like to find out what we can do to raise their motivation.

Here the unit of analysis is the individual (e.g., managers' perceptions of the factors that influence the success of a project).

Dyads: If the researcher is interested in studying two-person interactions, then several two-person groups, known as dyads, become the unit of analysis.

For example, analyses of husband-wife pairs in families (are they satisfied with the education provided by the school?) and mentor-mentee pairs (perceptions of the benefit of mentoring).

Groups: If the problem statement is related to group effectiveness, then obviously the unit of analysis would be the group.

For example, if we wish to study group decision-making patterns, we would probably examine such aspects as group size, group structure, cohesiveness, and the like, in trying to explain the variance in group decision making. In such cases the unit of analysis is the group (e.g., use of IT by different departments).

Organizations: If we compare different departments in the organization, then the data analysis
will be done at the departmental level - that is, the individuals in the department will be treated
as one unit and comparison made treating the department as the unit of analysis.

(Conservation-of-energy initiatives by public and private organizations)

Cultures: If we want to study cultural differences among nations, we will have to collect data from different countries and study the underlying patterns of culture in each country. Here the unit of analysis is the culture.

(Moral values of Eastern vs Western cultures)

Cross-Sectional Studies

A study can be done in which data are gathered just once, perhaps over a period of days
or weeks or months, in order to answer a research question. Such studies are called
one-shot or cross-sectional studies.

(data collected from project managers on their psychological well-being between October and December)

Longitudinal Studies

In some cases, the researcher might want to study people or phenomena at more than
one point in time in order to answer the research question. For example, the researcher
might want to study employees' behavior before and after a change in the top
management, to learn the effects of change.

When data on the dependent variable are gathered at two or more points in time to answer the research question, the study is called a longitudinal study (e.g., a city's use of electricity in summer and then in winter).

Types of Scales: Four types of scales are used in research, each with specific applications and
properties. The scales are

Nominal, Ordinal, Interval, Ratio

Nominal Scale: The nominal scale is simply a count of the objects belonging to different categories.
Ordinal Scale: The ordinal scale positions objects in some order (for example, it indicates that pineapples are juicier than apples and oranges are juicier still).
Interval Scale: An interval scale tells us to what extent (level) one object differs from another: how much better the pineapple is than the apple, and the orange than the pineapple. Is the pineapple only marginally better than the apple?
Ratio Scale: The ratio scale is the most comprehensive scale; it has all the characteristics of the other scales, plus a meaningful zero point.
Simple Category: This scale is also called a dichotomous scale. It offers two mutually
exclusive response choices. In the example shown in the slide, the response choices are yes
and no, but they could be other response choices too such as agree and disagree.
When there are multiple options for the rater but only one answer is sought, the multiple-choice, single-response scale is appropriate. The "other" response may be omitted when exhaustiveness of categories is not critical or there is no possibility of another response. This scale produces nominal data.
Multiple-Choice, Multiple Response Scale
This scale is a variation of the last and is called a checklist. It allows the rater to select one or
several alternatives. The cumulative feature of this scale can be beneficial when a complete
picture of the participant's choices is desired, but it may also present a problem for reporting
when research sponsors expect the responses to sum to 100 percent. This scale generates
nominal data.
Likert scale
The Likert scale was developed by Rensis Likert and is the most frequently used variation of the
summated rating scale. Summated rating scales consist of statements that express either a
favorable or unfavorable attitude toward the object of interest. The participant is asked to agree
or disagree with each statement. Each response is given a numerical score to reflect its degree
of attitudinal favorableness, and the scores may be summed to measure the participant's overall
attitude. Likert scales may use 5, 7, or 9 scale points. They are quick and easy to construct. The
scale produces interval data.
Originally, creating a Likert scale involved a procedure known as item analysis. Item analysis
assesses each item based on how well it discriminates between those people whose total score
is high and those whose total score is low. It involves calculating the mean scores for each scale
item among the low scorers and the high scorers. The mean scores for the high-score and low-score groups are then tested for statistical significance by computing t values. After the t values for each statement are found, the statements are rank-ordered, and those with the highest t values are selected. Researchers have found that a larger number of items for each attitude object improves the reliability of the scale.
The semantic differential scale
The semantic differential scale measures the psychological meanings of an attitude object using
bipolar adjectives. Researchers use this scale for studies of brand and institutional image. The
method consists of a set of bipolar rating scales, usually with 7 points, by which one or more
participants rate one or more concepts on each scale item. The scale is based on the
proposition that an object can have several dimensions of connotative meaning. The meanings
are located in multidimensional property space, called semantic space. It is efficient and easy
for securing attitudes from a large sample. Attitudes may be measured in both direction and
intensity. The total set of responses provides a comprehensive picture of the meaning of an
object and a measure of the person doing the rating. It is standardized and produces interval
data.
Numerical scales have equal intervals that separate their numeric scale points. The verbal
anchors serve as the labels for the extreme points. Numerical scales are often 5-point scales
but may have 7 or 10 points. The participants write a number from the scale next to each item. It
produces either ordinal or interval data.
The Stapel scale is used as an alternative to the semantic differential, especially when it is
difficult to find bipolar adjectives that match the investigative question. In the example, there are
three attributes of corporate image. The scale is composed of the word identifying the image
dimension and a set of 10 response categories for each of the three attributes. Stapel scales
produce interval data.
The constant-sum scale helps researchers to discover proportions. The participant allocates
points to more than one attribute or property indicant, such that they total a constant sum,
usually 100 or 10. Participant precision and patience suffer when too many stimuli are
proportioned and summed. A participant's ability to add may also be taxed. Its advantage is its
compatibility with percent and the fact that alternatives that are perceived to be equal can be so
scored. This scale produces interval data.
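A constant-sum response is only usable if the allocation actually adds to the required total; a minimal check (the attribute names are hypothetical):

```python
def check_constant_sum(allocation, total=100):
    """A response is usable only if every value is non-negative and the
    points sum to the required constant (usually 100 or 10)."""
    values = allocation.values()
    return all(v >= 0 for v in values) and sum(values) == total

# A participant splits 100 points across three (hypothetical) attributes
# of corporate image:
points = {"quality": 50, "innovation": 30, "service": 20}
print(check_constant_sum(points))              # True
print(check_constant_sum({"a": 60, "b": 50}))  # False: sums to 110
```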
The graphic rating scale was originally created to enable researchers to discern fine
differences. Theoretically, an infinite number of ratings is possible if participants are
sophisticated enough to differentiate and record them. They are instructed to mark their
response at any point along a continuum. Usually, the score is a measure of length from either
endpoint. The results are treated as interval data. The difficulty is in coding and analysis. Other
graphic rating scales use pictures, icons, or other visuals to communicate with the rater and
represent a variety of data types. Graphic scales are often used with children.
Probability Sampling
Systematic Sampling: Technique in which an initial starting point is selected by a random
process, after which every nth number on the list is selected to constitute part of the sample

Sampling interval (SI) = population list size (N) divided by a pre-determined sample size
(n)

For systematic sampling to work best, the list should be random in nature and not have some underlying systematic pattern (e.g., an office directory in which the senior manager's and middle managers' names are listed first in each department can create a systematic problem).
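The interval formula can be turned into a short sketch (assuming the population list has at least n elements and is in random order):

```python
import random

def systematic_sample(population, n):
    """Random starting point, then every k-th element, where the
    sampling interval k = N // n (assumes len(population) >= n)."""
    k = len(population) // n
    start = random.randrange(k)   # random start within the first interval
    return [population[start + i * k] for i in range(n)]

random.seed(0)  # for a reproducible illustration
print(systematic_sample(list(range(1, 101)), 10))  # 10 elements, 10 apart
```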
Stratified Sampling: Technique in which simple random subsamples are drawn from within different strata that share some common characteristic. Within each group the members are homogeneous, and across groups they are heterogeneous. Example: The student body of CIIT is divided into two groups (management sciences, engineering), and from each group students are selected for the sample using simple random sampling, whereby the size of the subsample for each group is determined by that group's overall strength.
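A minimal sketch of proportionate stratified sampling (the strata dictionary and the proportional-allocation rule are illustrative assumptions; rounding means the subsample sizes may not sum to exactly n):

```python
import random

def stratified_sample(strata, n):
    """Proportionate stratified sampling: a simple random subsample from
    each stratum, sized by that stratum's share of the population."""
    total = sum(len(members) for members in strata.values())
    return {name: random.sample(members, round(n * len(members) / total))
            for name, members in strata.items()}
```

With 600 management-science and 400 engineering students and n = 50, the strata contribute 30 and 20 students respectively, matching each group's overall strength.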
Cluster Sampling: Technique in which the target population is first divided into clusters. Then, a
random sample of clusters is drawn and for each selected cluster either all the elements or a
sample of elements are included in the sample. Cluster samples offer more heterogeneity within
groups and more homogeneity among groups
Area Sampling: A specific type of cluster sampling in which clusters consist of geographic areas such as counties, city blocks, or particular boundaries within a locality. Area sampling is less expensive than most other sampling designs and does not depend on a sampling frame. The key motivation for cluster sampling is cost reduction. Example: To survey the residents of a city, you would get a city map, take a sample of city blocks, and select respondents within each selected block.
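One-stage cluster sampling, in which every element of each selected cluster is included, can be sketched as follows (the city-block data are hypothetical):

```python
import random

def cluster_sample(clusters, n_clusters):
    """One-stage cluster sampling: randomly draw whole clusters
    (e.g. city blocks) and include every element in each chosen one."""
    chosen = random.sample(list(clusters), n_clusters)
    return [element for name in chosen for element in clusters[name]]

# Hypothetical map: 10 city blocks, 5 households each
city = {f"block-{i}": [f"household-{i}-{j}" for j in range(5)]
        for i in range(10)}
```

Only the selected blocks need to be visited, which is where the cost saving comes from.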
Double Sampling: A sampling design in which a sample is initially used to collect some preliminary information of interest, and later a subsample of this primary sample is used to examine the matter in more detail. Example: A structured interview might indicate that a subgroup of respondents has more insight into the problems of the organization. These respondents might be interviewed again and asked additional questions.
Non-Probability Sampling
Convenience Sampling: The major advantages of convenience sampling are that it is quick, convenient, and economical; a major disadvantage is that the sample may not be representative.
Convenience sampling is best used for the purpose of exploratory research and supplemented
subsequently with probability sampling.
Judgment (purposive) Sampling: Sampling technique in which the business researcher
selects the sample based on judgment about some appropriate characteristic of the sample
members. Example: Selection of certain students who are active in the university activities to
inquire about the sports and recreation facilities at the university.
In a structured interview, each candidate is asked similar questions in a predetermined format.
Emphasis tends to be on your past experience and the assets you can bring to the company. Typically,
the interviewer records your answers, which are potentially scored on a standard grid.
Unstructured interviews are much more casual and unrehearsed. They depend on free flowing
conversation which tends to focus on your personal qualities as they relate to the work.
Questions about skills and strengths can be asked and should be answered as formally as in a
structured interview.

Unstructured interviews may be so by design of the interviewer, or may be so due to the spontaneity of the event: you might find yourself in an unstructured interview after being introduced to a potential employer by a friend, or while dropping off a resume in person at a location where you wish to work.
Structured Interview
This is also known as a formal interview (like a job interview).
The questions are asked in a set / standardized order and the interviewer will not deviate from
the interview schedule or probe beyond the answers received (so they are not flexible).
These are based on structured, closed-ended questions.
Strengths
1. Structured interviews are easy to replicate, as a fixed set of closed questions is used which is easy to quantify. This means it is easy to test for reliability.
2. Structured interviews are fairly quick to conduct which means that many interviews can
take place within a short amount of time. This means a large sample can be obtained
resulting in the findings being representative and having the ability to be generalized to a
large population.
Limitations
1. Structured interviews are not flexible. This means new questions cannot be asked
impromptu (i.e. during the interview) as an interview schedule must be followed.
2. The answers from structured interviews lack detail, as only closed questions are asked, which generate quantitative data. This means a researcher won't know why a person behaves in a certain way.
Unstructured Interview
These are sometimes referred to as discovery interviews and are more like a guided conversation than a strict structured interview. They are sometimes called informal interviews.
An interview schedule might not be used, and even if one is used, it will contain open-ended questions that can be asked in any order. Some questions might be added or missed as the interview progresses.
Strengths

1. Unstructured interviews are more flexible, as questions can be adapted and changed depending on the respondent's answers. The interview can deviate from the interview schedule.
2. Unstructured interviews generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words, and helps the researcher develop a real sense of a person's understanding of a situation.
3. They also have increased validity, because the interviewer has the opportunity to probe for a deeper understanding, ask for clarification, and allow the interviewee to steer the direction of the interview.
Limitations
1. It can be time consuming to conduct an unstructured interview and analyze the qualitative
data (using methods such as thematic analysis).
2. Employing and training interviewers is expensive, and not as cheap as collecting data via questionnaires. For example, certain skills may be needed by the interviewer, such as the ability to establish rapport and knowing when to probe.

COMMON PROBLEMS OF DATA COLLECTION
1. Irrelevant or duplicate data collected
2. Pertinent data omitted
3. Erroneous or misinterpreted data collected
4. Too little data acquired from client
5. Database format causes disorganized health status profile
6. Poor documentation from staff
7. MDs' handwriting
8. Conflicting data
9. Language barrier
10. Insufficient time
11. Lack of equipment
Internal and external validity
When we conduct experiments, our goal is to demonstrate cause and effect relationships
between the independent and dependent variables. We often try to do it in a way that enables
us to make statements about people at large. How well we can do this is referred to as study
generalisability. A study that readily allows its findings to generalize to the population at large
has high external validity. The degree to which we succeed in eliminating confounding variables within the study itself is referred to as internal validity. External and internal validity are
not all-or-none, black-and-white, present-or-absent dimensions of an experimental design.
Validity varies along a continuum from low to high.
One major source of confounding arises from non-random patterns in the membership of participants in the study, or within groups in the study. This can affect internal and external validity in a variety of ways, none of which are necessarily predictable. It is often only after doing a great deal of work that we discover that some glitch in our procedures or some oversight has rendered our results uninterpretable.
Internal validity refers to how well an experiment is done, especially whether it
avoids confounding (more than one possible independent variable [cause] acting at the same
time). The less chance for confounding in a study, the higher its internal validity is.

Therefore, internal validity refers to how well a piece of research allows you to choose among
alternate explanations of something. A research study with high internal validity lets you choose
one explanation over another with a lot of confidence, because it avoids (many possible)
confounds.
External validity refers to how well data and theories from one setting apply to another. This
question is usually asked about laboratory research: does it apply in the everyday "real" world outside the lab? The relation between the two can be pictured as nested regions, with internal validity (an inner green ellipse) sitting inside external validity (a surrounding blue rounded rectangle).
Theoretical Framework
A theoretical framework is a collection of interrelated concepts, like a theory but not necessarily
so well worked-out. A theoretical framework guides your research, determining what things you
will measure, and what statistical relationships you will look for.
Theoretical frameworks are obviously critical in deductive, theory-testing sorts of studies
(see Kinds of Research for more information). In those kinds of studies, the theoretical
framework must be very specific and well-thought out.
Surprisingly, theoretical frameworks are also important in exploratory studies, where you really
don't know much about what is going on, and are trying to learn more. There are two reasons
why theoretical frameworks are important here. First, no matter how little you think you know
about a topic, and how unbiased you think you are, it is impossible for a human being not to
have preconceived notions, even if they are of a very general nature. For example, some people
fundamentally believe that people are basically lazy and untrustworthy, and that you have to keep your wits about you to avoid being conned. These fundamental beliefs about human nature affect how you look at things when doing personnel research. In this sense, you are always being guided
by a theoretical framework, but you don't know it. Not knowing what your real framework is can
be a problem. The framework tends to guide what you notice in an organization, and what you
don't notice. In other words, you don't even notice things that don't fit your framework! We can
never completely get around this problem, but we can reduce the problem considerably by
simply making our implicit framework explicit. Once it is explicit, we can deliberately consider
other frameworks, and try to see the organizational situation through different lenses.
Kinds of Personnel Research

There are many kinds of personnel research. Three dimensions are particularly important in
classifying types of research:
Applied vs Basic research. Applied research is research designed to solve a particular problem
in a particular circumstance, such as determining the cause of low morale in a given department
of an organization. Basic research is designed to understand the underlying principles behind
human behavior. For example, you might try to understand what motivates people to work hard
at their jobs. This distinction is discussed in more detail in another handout.

Exploratory vs Confirmatory. Exploratory research is research into the unknown. It is used when
you are investigating something but really don't understand it all, or are not completely sure
what you are looking for. It's sort of like a journalist whose curiosity is piqued by something and who just starts looking into it without really knowing what they're looking for.
Confirmatory research is where you have a pretty good idea what's going on. That is, you have
a theory (or several theories), and the objective of the research is to find out if the theory is
supported by the facts.
Quantitative vs Qualitative. Quantitative studies measure variables with some precision using
numeric scales. For example, you might measure a person's height and weight. Or you might
construct a survey in which you measure how much respondents like President Clinton, using a
1 to 10 scale. Qualitative studies are based on direct observation of behavior, or on transcripts
of unstructured interviews with informants. For example, you might talk to ten female executives about the decision-making process behind their choice to have children or not, and if so, when.
when. You might interview them for several hours, tape-recording the whole thing, and then
transcribe the recordings to written text, and then analyze the text.
As a general rule (but there are many exceptions), confirmatory studies tend to be quantitative,
while exploratory studies tend to be qualitative.

Data Collection Methods


To derive conclusions from data, we need to know how the data were collected; that is, we need
to know the method(s) of data collection.
Methods of Data Collection
For this tutorial, we will cover four methods of data collection.

Census. A census is a study that obtains data from every member of a population. In
most studies, a census is not practical, because of the cost and/or time required.

Sample survey. A sample survey is a study that obtains data from a subset of a
population, in order to estimate population attributes.

Experiment. An experiment is a controlled study in which the researcher attempts to understand cause-and-effect relationships. The study is "controlled" in the sense that the
researcher controls (1) how subjects are assigned to groups and (2) which treatments
each group receives.
In the analysis phase, the researcher compares group scores on some dependent
variable. Based on the analysis, the researcher draws a conclusion about whether the
treatment (independent variable) had a causal effect on the dependent variable.

Observational study. Like experiments, observational studies attempt to understand cause-and-effect relationships. However, unlike experiments, the researcher is not able
to control (1) how subjects are assigned to groups and/or (2) which treatments each
group receives.

Sampling is the process of selecting a sufficient number of the right elements from the population. The major steps in sampling are:
1. Defining the population
2. Determine the sampling frame
3. Determine the sampling design
4. Determine the appropriate sample size
5. Execute the sampling process
The Sampling Process
Defining the population
Sampling begins with precisely defining the target population. The target population must be defined in terms of elements, geographical boundaries, and time. Example: a target population may be all faculty members in the Department of Management Sciences in the VCOMSATS network, all housewives in Islamabad, or all pre-college students in Rawalpindi.
The target group should be clearly defined where possible; for example, do all pre-college students include only primary and secondary students, or also students in other specialized educational institutions?
Determining the sampling frame
The sampling frame is a (physical) representation of all the elements in the population from which the sample is drawn; it is also termed a list.
Often, the list does not include the entire population. This discrepancy is a common source of error associated with the selection of the sample (sampling frame error).
Information relating to sampling frames can be obtained from commercial organizations
Example: Student telephone directory (for the student population), the list of companies on the
stock exchange, the directory of medical doctors and specialists, the yellow pages (for
businesses)
Determining the sample design
Two major types of sampling
Probability sampling: the elements in the population have some known, nonzero chance or probability of being selected as sample subjects.
Non-probability sampling: the elements do not have a known or predetermined chance of being selected as subjects.
Factors affecting sampling design
The relevant target population of focus to the study
The parameters we are interested in investigating
The kind of sampling frame available
The costs and time attached to the sample design and the collection of data
Determining the sample size
The decision about how large the sample size should be can be a very difficult one. The factors affecting the sampling decision are:
The research objective
The extent of precision desired (the confidence interval)

The acceptance risk in predicting that level of precision(confidence level)


The amount of variability in the population itself
The cost and time constraints
In some cases, the size of population itself
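For estimating a population mean, the precision, confidence level, and variability factors combine in a standard sample-size formula, n = (z * sigma / e)^2. The formula is not stated in these notes but follows from the definition of a confidence interval; the sketch below assumes a known population standard deviation:

```python
import math

def sample_size(sigma, e, z=1.96):
    """n = (z * sigma / e)^2, rounded up: sigma is the population standard
    deviation (variability), e the desired precision (half-width of the
    confidence interval), and z the standard score for the confidence
    level (1.96 for 95%)."""
    return math.ceil((z * sigma / e) ** 2)

print(sample_size(sigma=10, e=2))  # 97: estimate the mean to within +/- 2
```

Doubling the variability (sigma) or halving the allowable error (e) quadruples the required sample size, which is why these two factors dominate the decision.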
Executing the sampling process
In this final stage of the sampling process, the decisions with respect to the target population, the sampling frame, the sampling technique, and the sample size have to be implemented. Example:
A young researcher was investigating the antecedents of salesperson performance.
To examine his hypotheses, data were collected from chief sales executives in Pakistan (the target population) via a mail questionnaire.
The sample was initially drawn from the published business register (the sampling
frame), but supplemented with respondent recommendations and other additions, in a
judgment sampling methodology.
The questionnaires were subsequently distributed to sales executives of 450 companies
(the sample size).

Difference between Validity and Reliability


Reliability is, roughly, whether you could replicate an experiment and get comparable results either because an individual's responses are consistent (for example, their reaction times in a
test are consistent when the test is carried out again), or the general overall results are
consistent (for example, the average score on a test is the same or similar when carried out
again on a comparable group).
Validity is whether the construct you are using really measures what you are using it to
measure. For example, if you devised a test to measure people's self-esteem, does it really
measure self-esteem, or something similar such as extraversion?
Examples to clarify the difference between reliability and validity
(1) This shows how reliability and validity correspond to random and systematic errors. The
precise and unbiased shots correspond to a valid and reliable tool. Imprecise and biased shots
correspond to a tool that is neither valid nor reliable.

(2) A clock is valid if it measures 'true' time, and reliable if it does so consistently.
We would regard it as an invalid tool if it showed the wrong time, yet if it was sometimes slow
and sometimes fast we would call it unreliable.
A clock that is always 10 minutes fast has high reliability yet poor validity.
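The clock example can be put in numbers (the readings are hypothetical; the spread of repeated readings stands in for reliability, and the distance of the readings from the truth for validity):

```python
from statistics import mean, pstdev

true_time = 720                       # the 'true' time, in minutes (12:00)
fast_clock = [730, 730, 730, 730]     # always 10 minutes fast
erratic_clock = [715, 728, 706, 731]  # sometimes slow, sometimes fast

# Reliability ~ consistency: the spread of repeated readings.
# Validity ~ accuracy: how far the readings sit from the true value.
print(pstdev(fast_clock))             # 0.0  -> perfectly reliable
print(mean(fast_clock) - true_time)   # 10.0 -> consistently wrong, so not valid
print(pstdev(erratic_clock))          # large spread -> unreliable
```

The fast clock has zero spread but a constant bias (high reliability, poor validity); the erratic clock may even be right on average, but no single reading can be trusted.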

Measures of Dispersion
Dispersion is the variability that exists in a set of observations.
Two sets of data might have the same mean, but the dispersion could be different.
The range
The range is the difference between the extreme values in a set of observations. For the observations 54, 50, 35, 67, 50, the extremes are 35 and 67, so the range is 67 - 35 = 32.
The variance
The variance is calculated by subtracting the mean from each of the observations in the data set, squaring these differences, summing the squares, and dividing the total by the number of observations.
The standard deviation
The standard deviation, another measure of dispersion for interval- and ratio-scaled data,
offers an index of the spread of a distribution or the variability in the data.
It is a very commonly used measure of dispersion, and is simply the square root of the variance.
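The three measures above can be sketched with Python's standard `statistics` module, applied to the five observations from the range example. Since the definition above divides by the number of observations, the population variance (`pvariance`) is the matching function:

```python
# Range, variance, and standard deviation for the observations in the text.
import statistics

data = [54, 50, 35, 67, 50]

low, high = min(data), max(data)   # extreme values: (35, 67)
data_range = high - low            # 67 - 35 = 32

mean = statistics.mean(data)       # 51.2
# Population variance: average squared deviation from the mean
# (dividing by n, the number of observations, as in the definition above).
variance = statistics.pvariance(data)   # 104.56
std_dev = statistics.pstdev(data)       # square root of the variance, about 10.23

print(f"extremes = ({low}, {high}), range = {data_range}")
print(f"mean = {mean}, variance = {variance:.2f}, std dev = {std_dev:.2f}")
```

Note that two data sets can share the same mean while these dispersion values differ, which is why the mean alone does not summarize a distribution.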

Primary Data

Primary Data = information obtained exclusively for the current research

Personal Interview

Focus Groups

Panels

Delphi Technique

Telephone Interview (computer-assisted telephone interviewing and computer-administered
telephone surveys)

Self-Administered Surveys
Secondary Data

Company Archives
Government Publications
Industry Analysis
Primary Data Collection Methods

Focus Group

Panels

Interviews (face to face, telephone, electronic media)

Questionnaires (personally, mail, electronic)

Observation

Other (projective tests)


Focus Group:

A focus group usually consists of 8 to 10 members, with a moderator leading the discussion
for about 2 hours on a particular topic, concept, or product.

Members are chosen on the basis of their expertise in the topic.

E.g., discussions on computers and computing, working mothers, social networking, etc.

Focus groups are relatively inexpensive and usually done for exploratory information; the findings cannot be generalized.
Panels:

Similar to a focus group, but a panel meets more than once, so that changes or the effects of
interventions can be studied over a period of time.

Members are randomly chosen.

E.g., if the effect of an advertisement for a certain brand needs to be assessed quickly, panel
members could be exposed to the advertisement and their intention to purchase assessed.

When the product is modified, the response of the panel can be observed again.

Observation measures:

Methods through which primary data are collected without directly involving people.

E.g., the wear and tear of books, a section of an office, or the seating area of a railway station,
which indicate popularity, frequency of use, etc.

E.g., the number of cans in the dust bin and their brands, or the number of motorcycles vs.
cars parked in the university parking lot.

Interviewing:

Collect data from the respondent on an issue of interest.

Usually administered at the exploratory stage of the research.

If a large set of respondents is needed, more than one interviewer is used; the interviewers
must then be trained so that biases, voice inflections, and differences in wording are
avoided.
Structured and Unstructured
Unstructured:

No planned sequence of questions, help in exploring preliminary issues.

E.g., "Tell me something about your unit and department, and perhaps even the
organization as a whole, in terms of work, employees, and whatever else you think is
important."

"Compared to other departments, what are the strengths and weaknesses of your
department?"

If they identify a difference, you can ask:

How can you improve the situation ?

This encourages the respondent to reflect on both the positive and negative aspects of the situation.

Try to be pleasant, and watch for signs that the respondent is uncomfortable.

Unstructured interviews may expose the different major areas; from these the
researcher can pick some as focus variables that need further probing.

The researcher can then devise a more focused approach and develop a more structured
interview emphasizing particular issues.
Structured:

The researcher knows at the outset what information is needed, focusing on factors relevant to the
problem.

The focus is on the factors that have surfaced during the unstructured interview.

E.g: During the previous unstructured interview it was identified that the department
needs improvement.

Now you can focus on questions that address how to improve the department, i.e.,
the factors that can improve it.

This can be done face to face, over the telephone, or through computers via the
internet.

The same specific questions are asked of all respondents.

The information collected is tabulated and then the data is analyzed.

The result could highlight the important factors influencing the issues.

This information is qualitative in nature; it can then be empirically tested and
verified using other methods, such as questionnaires.
Guideline for Interviews

Listen carefully

Motivate the respondents

Take notes appropriately

Build proper trust and rapport with the interviewee

Clarify complex issues

Choose an appropriate physical setting

Explain the reasons for the research and the criteria for selection


Face to Face

Adv: clarifying doubts, repeating, rephrasing, getting nonverbal cues

Dis: vast resources required, cost, lack of anonymity


Telephone:

Adv: wider reach in a short time; it is sometimes easier to discuss personal information over the
phone

Dis: can be terminated without warning; a prolonged interview is not possible; nonverbal
cues are missed
Closed vs. Open Questions

Closed questions are easy to administer.

The cost of coding is reduced.

They allow quicker, standardized interviews.

However, closed questions can be answered without thinking.

Pre-testing is a must.

They limit the richness of the data.
