
TYPES OF DIVERSIFICATION

1-RELATED DIVERSIFICATION
Related diversification takes place when a business expands its activities into product lines that are similar to those it currently offers. For example, a manufacturer of computers might begin making calculators as a form of related diversification of its existing business.

CONCENTRIC DIVERSIFICATION
Concentric diversification occurs when a firm adds related products or markets. The goal of such
diversification is to achieve strategic fit. Strategic fit allows an organization to achieve synergy.

2-UNRELATED DIVERSIFICATION
Unrelated diversification is a form of diversification in which the business adds new, unrelated product lines and penetrates new markets. For example, a shoe producer might enter the business of clothing manufacturing.

CONGLOMERATE DIVERSIFICATION
Conglomerate diversification is a growth strategy that involves adding new products or services that are significantly different from the organization's present products or services.
Conglomerate diversification occurs when the firm diversifies into an area (or areas) totally unrelated to the organization's current business. Synergy may result through the application of management expertise or
financial resources, but the primary purpose of conglomerate diversification is improved profitability of
the acquiring firm. Little, if any, concern is given to achieving marketing or production synergy with
conglomerate diversification.

Errors Managers Make on Performance Appraisals


Since we are all human, it is common for managers to make errors when assessing employee behavior
and writing performance appraisal documents. These errors are reflective of our unconscious biases
toward the employee.
So what are these rater errors?

Halo effect
1-Problem: When a manager rates an employee high on all items because of one characteristic that he or
she likes.
Example: If a worker has few absences and the supervisor has a good relationship with that employee, the supervisor might give the employee a high rating in all other areas of work as well. Sometimes this happens because of the emotional attachment that grows out of the good relationship.
Solution: Train raters to recognize the problem and to distinguish the person from the performance he or she delivers. OR
2-Some HR executives, who consider the horns effect to be part of the halo effect, define the halo effect as occurring when a rater's overall positive or negative impression of an individual employee leads to rating him or her the same across all rating dimensions.
This is when a manager really likes or dislikes an employee and allows those personal feelings to influence the performance ratings.
The first definition, however, is the more accurate one.

Horns effect
Problem: This is the opposite of the halo effect. The horns effect occurs when a manager rates an employee low on all items because of one characteristic that he or she dislikes.
Example: If a worker performs well but loves telling jokes during breaks, and the supervisor hates jokes, the supervisor might give the employee a lower rating in all other areas of work because they lack that connection. Sometimes it happens when they do not have a close relationship and the manager simply dislikes the person.
Solution: The same as for the halo effect: train raters to recognize the problem and to distinguish the person from the performance he or she delivers.

Contrast
Problem: The tendency to rate people relative to other people rather than against the individual performance each one is delivering.
Example: At school, suppose you sit where all the chatty students are. You are silent, but you do not pay attention or do your homework because you are drawing. When the teacher gets angry with the group, you may be excluded from the blame just because you are quiet, not because you are actually performing well. Compared with the group you are not chatty, but neither are you doing the work expected of you. The rater, however, only notices that your behavior is less disruptive than the others', so you are rated higher.
Solution: The rating should reflect performance against the task requirements, not a comparison with other people's behavior.

Central Tendency
Problem: When the manager evaluates every employee within a narrow, average range, dismissing the real differences in the performance that employees have delivered.
Example: A professor who grades every student close to the class average, giving strong and weak students nearly the same grade regardless of their individual work.

Leniency
Problem: Ratings of all employees cluster at the high end of the scale.
Example: A professor who grades generously, giving nearly every student a high mark regardless of individual performance.

Strictness
Problem: When a manager uses only the lower part of the scale to rate employees.
Example: A professor who grades harshly, giving nearly every student a low mark regardless of individual performance.
Solution: Focus on the individual performance of every employee, regardless of the average results.

Similar-to-Me / Different-from-Me
Problem: Raters are sometimes influenced by the characteristics that people display. Depending on whether those characteristics are similar to or different from the rater's own, employees are evaluated differently.
Example: A manager with an advanced degree might give subordinates with advanced degrees a higher appraisal than those with only bachelor's degrees.
Solution: Focus on the performance the employee is delivering, regardless of the characteristics you have in common.

Recency effects
Problem: When the manager rates an employee higher based only on the most recent performance, which happens to have been quite good, rather than on the whole appraisal period.
Example: A professor who bases the course grade only on the student's performance in the last week.
Solution: To avoid this, the manager should document both positive and negative incidents throughout the entire appraisal period.

Primacy effects
Problem: When the evaluator gives more weight to the information he or she received first.
Example: A simple illustration: in a TV quiz, when contestants have to remember a list of items, they tend to recall only the first ones. The same applies to remembering human performance.
Solution: When the manager has to make a decision, it is better not to rely on what he or she happens to remember, but on actual events that have been documented and recorded.

Rater Bias
Problem: When the manager rates according to his or her own values and prejudices, which distort the rating. Such distortions can arise from ethnicity, gender, age, religion, appearance, and so on.
Example: A manager treats an employee differently because he or she believes the employee is homosexual.
Solution: If the appraisal is then reviewed by higher-level managers, this kind of bias can be corrected, because they are expected to be more impartial.

Sampling
Problem: When the rater evaluates an employee's performance relying on only a small percentage of the work actually done.
Example: An employee has to produce 100 reports. The manager takes five of them to check how the work has been done and finds mistakes in those five. The manager therefore appraises the employee's work as "poor," without taking into account the other 95 reports, which were done correctly but which the manager has not seen.
Solution: Review the entire track record of the performance, not just a small part of it.
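The arithmetic behind this error can be sketched with a quick simulation (all numbers below are hypothetical): checking only 5 of 100 reports can produce an error-rate estimate far from the true one.

```python
# Sketch of the sampling error: a manager checks only 5 of 100 reports.
# All numbers are hypothetical.
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# 95 correct reports and 5 flawed ones, in random order.
reports = ["correct"] * 95 + ["flawed"] * 5
random.shuffle(reports)

sample = random.sample(reports, 5)  # the manager's tiny sample
sample_error_rate = sample.count("flawed") / len(sample)
true_error_rate = reports.count("flawed") / len(reports)

print(f"true error rate:    {true_error_rate:.0%}")
print(f"sampled error rate: {sample_error_rate:.0%}")
```

Depending on which five reports happen to be drawn, the sampled rate can be anywhere from 0% to 100%, which is why the solution above recommends reviewing the whole track record.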

Varying standards
Problem: When a manager appraises (evaluates) his or her employees using different standards and expectations for employees who are performing similar jobs.
Example: A professor does not grade all students' exams by the same standards; sometimes the grade depends on the affection the professor feels toward particular students, which leads to higher or lower marks.
Solution: The rater must use the same standards and weights for every employee, and should be able to give coherent arguments to explain any difference. That makes it easier to tell whether a difference reflects genuinely better performance or a distorted perception on the manager's part.

DIFFERENCE BETWEEN RELIABILITY AND VALIDITY


RELIABILITY
Reliability refers to consistency of measurement being taken.
Reliability is the extent to which an experiment, test, or any measuring procedure shows the same result
on repeated trials. Without the agreement of independent observers able to replicate research procedures,
or the ability to use research tools and procedures that produce consistent measurements, researchers
would be unable to satisfactorily draw conclusions, formulate theories, or make claims about the
generalizability of their research. For researchers, five key types of reliability are:
1. Equivalency: related to the co-occurrence of two items
2. Stability: related to consistency over time
3. Internal: related to the instruments
4. Inter-rater: related to consistency across different examiners
5. Intra-rater: related to the consistency of the same examiner over time

Test-retest reliability:
The degree to which the results are consistent over time.
Test-retest reliability indicates the repeatability of test scores with the passage of time. This estimate also
reflects the stability of the characteristic or construct being measured by the test.
Some constructs are more stable than others. For example, an individual's reading ability is more stable
over a particular period of time than that individual's anxiety level. Therefore, you would expect a higher
test-retest reliability coefficient on a reading test than you would on a test that measures anxiety. For
constructs that are expected to vary over time, an acceptable test-retest reliability coefficient may be
lower than is suggested in Table 1.
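As a sketch, the test-retest coefficient is simply the correlation between the scores from the two administrations. All scores below are hypothetical:

```python
# Sketch: test-retest reliability as the Pearson correlation between
# two administrations of the same test (hypothetical scores).

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Five examinees tested twice, one month apart (hypothetical data).
first_administration = [72, 85, 90, 65, 78]
second_administration = [70, 88, 91, 63, 80]

coefficient = pearson(first_administration, second_administration)
print(f"test-retest reliability: {coefficient:.2f}")  # high: scores are stable
```

A coefficient near 1 indicates the stable repeatability described above; a construct such as anxiety would be expected to yield a noticeably lower value.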

Alternate or parallel form reliability


It indicates how consistent test scores are likely to be if a person takes two or more forms of a test.
A high parallel form reliability coefficient indicates that the different forms of the test are very similar
which means that it makes virtually no difference which version of the test a person takes. On the other
hand, a low parallel form reliability coefficient suggests that the different forms are probably not
comparable; they may be measuring different things and therefore cannot be used interchangeably.
Inter-rater reliability
It indicates how consistent test scores are likely to be if the test is scored by two or more raters.
On some tests, raters evaluate responses to questions and determine the score. Differences in judgments
among raters are likely to produce variations in test scores. A high inter-rater reliability coefficient
indicates that the judgment process is stable and the resulting scores are reliable.
Inter-rater reliability coefficients are typically lower than other types of reliability estimates. However, it
is possible to obtain higher levels of inter-rater reliabilities if raters are appropriately trained.
Intra-rater reliability
Intra-rater reliability is a type of reliability assessment in which the same assessment is completed by the
same rater on two or more occasions. These different ratings are then compared, generally by means of
correlation. Since the same individual is completing both assessments, the rater's subsequent ratings may be contaminated by knowledge of earlier ratings.
Internal consistency reliability
It indicates the extent to which items on a test measure the same thing.
A high internal consistency reliability coefficient for a test indicates that the items on the test are very
similar to each other in content (homogeneous). It is important to note that the length of a test can affect
internal consistency reliability. For example, a very lengthy test can spuriously inflate the reliability
coefficient.
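One widely used internal-consistency coefficient is Cronbach's alpha (the text above does not name a specific coefficient, so this choice, and the item scores below, are illustrative):

```python
# Sketch: internal consistency via Cronbach's alpha (hypothetical data).

def variance(values):
    """Population variance of a list of numbers."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# Each row: one examinee's scores on a four-item test.
scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
]

k = len(scores[0])  # number of items
item_variances = [variance([row[i] for row in scores]) for i in range(k)]
total_variance = variance([sum(row) for row in scores])

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")  # close to 1: items are homogeneous
```

A value near 1 suggests the items are measuring the same thing; as the section notes, a very long test can inflate this coefficient.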
VALIDITY
The term validity refers to whether or not a test measures what it intends to measure, or whether what is being assessed corresponds to actual performance on the job.

On a test with high validity the items will be closely linked to the test's intended focus. For many
certification and licensure tests this means that the items will be highly related to a specific job or
occupation. If a test has poor validity then it does not measure the job-related content and competencies it
ought to.
There are several ways to estimate the validity of a test, including content validity, construct validity,
criterion-related validity (concurrent & predictive) and face validity.
1. Content Validity: related to objectives and their sampling.
Content-related validation requires a demonstration that the content of the test represents important job-related behaviors. In other words, test items should be relevant to, and directly measure, important requirements and qualifications for the job.
Content validity refers to the connections between the test items and the subject-related tasks. The test
should evaluate only the content related to the field of study in a manner sufficiently representative,
relevant, and comprehensible.
2. Construct Validity: referring to the theory underlying the target.
Construct-related validation requires a demonstration that the test measures the construct or characteristic
it claims to measure, and that this characteristic is important to successful performance on the job.
It implies using the construct correctly (concepts, ideas, notions). Construct validity seeks agreement
between a theoretical concept and a specific measuring device or procedure. For example, a test of
intelligence nowadays must include measures of multiple intelligences, rather than just logical-mathematical and linguistic ability measures.
3. Criterion Validity: related to concrete criteria in the real world. It can be concurrent or
predictive.
Criterion-related validation requires demonstration of a correlation or other statistical relationship
between test performance and job performance. In other words, individuals who score high on the test
tend to perform better on the job than those who score low on the test. If the criterion is obtained at the
same time the test is given, it is called concurrent validity; if the criterion is obtained at a later time, it is
called predictive validity.
Also referred to as instrumental validity, it states that the criteria should be clearly defined by the teacher in advance. It has to take into account other teachers' criteria to be standardized, and it also needs to
demonstrate the accuracy of a measure or procedure compared to another measure or procedure which has
already been demonstrated to be valid.
4. Concurrent Validity: correlating high with another measure already validated.
Concurrent validity is a statistical method using correlation, rather than a logical method. Examinees who
are known to be either masters or non-masters on the content measured by the test are identified before
the test is administered. Once the tests have been scored, the relationship between the examinees' status as
either masters or non-masters and their performance (i.e., pass or fail) is estimated based on the test. This
type of validity provides evidence that the test is classifying examinees correctly. The stronger the
correlation is, the greater the concurrent validity of the test is.
5. Predictive Validity: Capable of anticipating some later measure.
This is another statistical approach to validity that estimates the relationship of test scores to an
examinee's future performance as a master or non-master. Predictive validity considers the question,
"How well does the test predict examinees' future status as masters or non-masters?" For this type of
validity, the correlation that is computed is based on the test results and the examinees' later performance.
This type of validity is especially useful for test purposes such as selection or admissions.
6. Face Validity: related to the test overall appearance.
Like content validity, face validity is determined by a review of the items and not through the use of
statistical analyses. Unlike content validity, face validity is not investigated through formal procedures.
Instead, anyone who looks over the test, including examinees, may develop an informal opinion as to
whether or not the test is measuring what it is supposed to measure. While it is clearly of some value to
have the test appear to be valid, face validity alone is insufficient for establishing that the test is
measuring what it claims to measure.
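The concurrent-validity estimate described earlier can be sketched as follows; real studies report a correlation coefficient, but a simple classification-agreement rate on hypothetical data shows the idea:

```python
# Sketch: concurrent validity as agreement between known master/non-master
# status and the test's pass/fail outcome (hypothetical data; 1 = master
# or pass, 0 = non-master or fail).

known_status = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]  # identified before testing
test_outcome = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]  # pass/fail on the test

matches = sum(k == t for k, t in zip(known_status, test_outcome))
agreement = matches / len(known_status)
print(f"classification agreement: {agreement:.0%}")  # → 80%
```

The higher the agreement (or, in practice, the correlation), the greater the concurrent validity of the test.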
PRACTICALITY
It refers to the economy of time, effort and money in testing. In other words, a test should be:
- Easy to design
- Easy to administer
- Easy to mark
- Easy to interpret (the results)

BACKWASH
What is the impact of the test on the teaching/learning process?
Backwash effect (also known as washback) is the influence of testing on teaching and learning. It is also the potential impact that the form and content of a test may have on learners' conception of what is being assessed (language proficiency) and what it involves. Therefore, test designers, deliverers, and raters have a
particular responsibility, considering that the testing process may have a substantial impact, either positive
or negative.

Exempt vs Non-exempt Employees


Definition of non-exempt employee
Most employees are entitled to overtime pay under the Fair Labor Standards Act. They are called non-exempt employees.
Definition of exempt employee
The Fair Labor Standards Act contains dozens of exemptions under which specific categories of
employers and employees are exempted from overtime requirements. The most common exemptions are
the white-collar exemptions for administrative, executive, and professional employees, computer
professionals, and outside sales employees.

Extranet
An extranet is a private network that uses Internet technology and the public telecommunication system to
securely share part of a business's information or operations with suppliers, vendors, partners, customers,
or other businesses.

Rational decision making


Rational decision making is a multi-step process for making choices between alternatives. The process of
rational decision making favors logic, objectivity, and analysis over subjectivity and insight. The word
"rational" in this context does not mean sane or clear-headed as it does in the colloquial sense.
A method for systematically selecting among possible choices that is based on reason and facts. In a
rational decision making process, a business manager will often employ a series of analytical steps
to review relevant facts, observations and possible outcomes before choosing a particular course of
action.

Unitarism refers to 'unity of purpose' and, therefore, requires that people share the same aims and
objectives.

Utilitarianism refers to the best outcome for the majority of people.

Expectancy theory
Expectancy theory (or expectancy theory of motivation) proposes an individual will behave or act in a
certain way because they are motivated to select a specific behavior over other behaviors due to what they
expect the result of that selected behavior will be.

Contingency theory
A contingency theory is an organizational theory that claims that there is no best way to organize a
corporation, to lead a company, or to make decisions. Instead, the optimal course of action is contingent
(dependent) upon the internal and external situation.

Ansoff Matrix
Ansoff's product/market growth matrix suggests that a business' attempts to grow depend on whether it
markets new or existing products in new or existing markets. The output from the Ansoff product/market
matrix is a series of suggested growth strategies which set the direction for the business strategy.

Difference between vertical integration and horizontal integration


A horizontal integration consists of companies that acquire a similar company in the same industry,
while a vertical integration consists of companies that acquire a company that operates either before or
after the acquiring company in the production process.
When a company wishes to grow through a horizontal integration, it is seeking to increase its size,
diversify its product or service, achieve economies of scale, reduce competition, or gain access to new
customers or markets. To do this, one company acquires another company of similar size and operations,
in the same industry. Two great examples of a horizontal integration are the acquisition of Pixar by
Disney or the acquisition of Instagram by Facebook.
When a company wishes to grow through a vertical integration, it is seeking to strengthen its supply
chain, reduce its production costs, capture upstream or downstream profits, or access
downstream distribution channels. To do this, one company acquires another company that is either
before or after it in the supply chain process. A great example of a vertical integration is when Verizon
and AT&T opened their own retail locations through acquisition.
When it comes to a vertical integration, a company can either integrate forward in a forward
integration or backward in a backward integration. A backward integration occurs when a company
decides to own another company that makes an input product to the acquiring company's product. An
example of this is if a car manufacturer acquires a tire manufacturing company. A forward integration
occurs when a company decides to take control of the post-production process. An example of this is if
the same car manufacturer acquires an automotive dealership. The vertical integration examples above
with Verizon and AT&T are also forward integrations.
Backward integration is a form of vertical integration that involves the purchase of, or merger with,
suppliers up the supply chain. Companies pursue backward integration when it is expected to result in
improved efficiency and cost savings. For example, this type of integration might cut transportation costs,
improve profit margins and make the firm more competitive.
By way of contrast, forward integration is a type of vertical integration that involves the purchase or control of distributors. An example of forward integration is a bakery that sells its goods directly to consumers at local farmers markets, or owns a chain of retail stores through which it can sell its goods. If the bakery did not own a wheat farm, a wheat processor or a retail outlet, it would not be vertically integrated at all.

Job Analysis
Job Analysis is a primary tool to collect job-related data.

Job Description and Job Specification

1-A job description is a list of the general tasks, functions, and responsibilities of a position. It often also states to whom the position reports.
2-A job specification is a written statement of the minimum qualifications and traits that a person needs in
order to perform the duties and undertake the responsibilities of a particular position. Specifications are
developed as part of the job analysis process.

Organization Structure
These structural elements are manifested in organizations in two generic organizational forms:
mechanistic or organic.

Mechanistic                                    | Organic
Work tasks are specialized into separate parts | Employees contribute to the common task
Tasks are rigidly defined (high formalization) | Tasks are adjusted and redefined through teamwork
Strict hierarchy of authority                  | Less hierarchy; workers have greater responsibility
Centralized decision-making and control        | Decision-making and control are located anywhere in the organization
Vertical communication                         | Horizontal communication

Whether an organization is better served with an organic or a mechanistic structure depends on its
environment, its size, and its technology.
