1-RELATED DIVERSIFICATION
A process that takes place when a business expands its activities into product lines that are similar to
those it currently offers. For example, a manufacturer of computers might begin making calculators as a
form of related diversification of its existing business.
CONCENTRIC DIVERSIFICATION
Concentric diversification occurs when a firm adds related products or markets. The goal of such
diversification is to achieve strategic fit. Strategic fit allows an organization to achieve synergy.
2-UNRELATED DIVERSIFICATION
Unrelated diversification is a form of diversification in which the business adds new, unrelated product
lines and penetrates new markets. For example, a shoe producer entering the business of clothing
manufacturing.
CONGLOMERATE DIVERSIFICATION
Conglomerate diversification is a growth strategy that involves adding new products or services that are
significantly different from the organization's present products or services.
Conglomerate diversification occurs when the firm diversifies into an area or areas totally unrelated to the
organization's current business. Synergy may result through the application of management expertise or
financial resources, but the primary purpose of conglomerate diversification is improved profitability of
the acquiring firm. Little, if any, concern is given to achieving marketing or production synergy with
conglomerate diversification.
Halo effect
1-Problem: The manager rates an employee high on all items because of one characteristic that he or
she likes.
Example: If a worker has few absences and the supervisor has a good relationship with that employee, the
supervisor might give the employee a high rating in all other areas of work as well. Sometimes this
happens because of an emotional attachment based on their good relationship.
Solution: Train raters to recognize the problem and to separate the person from the performance. OR
2-Some HR executives, who consider the horns-effect error to be part of the halo effect, define the halo
effect as occurring when a rater's overall positive or negative impression of an individual employee leads
to rating him or her the same across all rating dimensions.
This is when a manager really likes or dislikes an employee and allows those personal feelings to
influence the employee's performance ratings.
However, the first definition is the more accurate one.
Horns effect
Problem: The opposite of the halo effect: the horns effect occurs when a manager rates an
employee low on all items because of one characteristic that he or she dislikes.
Example: If a worker performs well but loves telling jokes during breaks, and the supervisor hates jokes,
the supervisor might give the employee a lower rating in all other areas of work because they lack that
connection. Sometimes it happens when they do not have a close relationship and the manager simply
does not like the person.
Solution: The same as for the halo effect: train raters to recognize the problem and to separate the person
from the performance.
Contrast
Problem: The tendency to rate people relative to other people rather than against their own individual
performance.
Example: At school, suppose you sit among the chatty students but stay silent; you do not pay attention
or do your homework, because you are drawing. When the teacher gets angry with the group, you may be
excluded from the blame for their bad behavior just because you are silent, not because you are actually
performing well. Compared with the group you are not that chatty, but neither are you doing the required
work. The rater, however, only gets the impression that your behavior is not as bad as the others', so you
will be rated higher.
Solution: The rating should reflect performance against the task requirements, not a comparison with
other people's attitude.
Central Tendency
Problem: The manager evaluates every employee within a narrow, average range, dismissing the real
differences in the employees' performance.
Example: A professor grades relative to the class average rather than to individual work: if the class
average is quite high, the professor evaluates everyone more highly; if the average is lower, he or she
appraises everyone lower.
Leniency
Problem: The ratings of all employees fall at the high end of the scale.
Example: The professor tends to grade everyone generously, regardless of individual performance.
Strictness
Problem: The manager uses only the lower part of the scale to rate employees.
Example: The professor tends to grade everyone low, regardless of individual performance.
Solution: Focus on the individual performance of every employee, regardless of the average results.
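These three distributional errors can be flagged by looking at the mean and spread of one rater's scores. A minimal Python sketch; the function name and the cut-off thresholds are illustrative assumptions, not standard values:

```python
from statistics import mean, stdev

def flag_rating_pattern(ratings, scale_min=1, scale_max=5):
    """Heuristically flag central tendency, leniency, or strictness
    in one rater's scores. Thresholds are illustrative assumptions."""
    mid = (scale_min + scale_max) / 2
    m, s = mean(ratings), stdev(ratings)
    if s < 0.5:           # almost no spread -> central tendency
        return "central tendency"
    if m > mid + 1:       # scores clustered at the high end -> leniency
        return "leniency"
    if m < mid - 1:       # scores clustered at the low end -> strictness
        return "strictness"
    return "no obvious pattern"

print(flag_rating_pattern([3, 3, 3, 3, 3, 3]))  # central tendency
print(flag_rating_pattern([5, 5, 4, 5, 5, 4]))  # leniency
print(flag_rating_pattern([1, 2, 1, 1, 2, 1]))  # strictness
```

In practice such flags are only a starting point for a conversation with the rater, not proof of an error.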
Similar-to-Me / Different-from-Me
Problem: Raters are sometimes influenced by characteristics the ratees display. Depending on whether
those characteristics are similar to or different from the rater's own, employees are evaluated differently.
Example: A manager with an advanced degree might give subordinates with advanced degrees a higher
appraisal than those with only bachelor's degrees.
Solution: Focus on the performance the employee delivers, regardless of the characteristics you have in
common.
Recency effects
Problem: The manager rates the employee higher based only on the most recent performance, which
happened to be quite good.
Example: A professor gives the course grade based only on the student's performance in the last week.
Solution: To avoid this, the manager should document both positive and negative aspects throughout the
whole appraisal period.
Primacy effects
Problem: The evaluator gives more weight to the information he or she received first.
Example: A simple illustration: in a TV quiz, when contestants have to remember a list of items, they tend
to remember only the first ones. The same applies to remembering human performance.
Solution: When the manager has to make a decision, it is better not to rely on what he or she remembers,
but on real actions that have happened and been recorded.
Rater Bias
Problem: The manager rates according to his or her own values and prejudices, which distort the rating.
Such distortions may be based on ethnic group, gender, age, religion, or appearance.
Example: It sometimes happens that a manager treats someone differently because he or she thinks the
employee is homosexual.
Solution: If the appraisal is then reviewed by higher-level managers, this kind of error can be corrected,
because they are expected to be more impartial.
Sampling
Problem: The rater evaluates an employee's performance relying on only a small percentage of the
work done.
Example: An employee has to write 100 reports. The manager takes five of them to check how the work
has been done and finds mistakes in those five. The manager therefore appraises the employee's work as
"poor", without taking into account the other 95 reports, which the manager has not seen and which were
done correctly.
Solution: Follow the entire track record of the performance, not just a small part of it.
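The risk in the example above can be quantified with the hypergeometric distribution: if only 5 of 100 reports are flawed, a random 5-report spot check is extremely unlikely to consist entirely of flawed work, yet very likely to miss the flaws altogether. A short Python sketch; the numbers mirror the example, and the function names are illustrative:

```python
from math import comb

def prob_all_flawed(total=100, flawed=5, sample=5):
    """Probability that a random spot check consists entirely of
    flawed reports (hypergeometric, as in the example above)."""
    return comb(flawed, sample) / comb(total, sample)

def prob_no_flawed(total=100, flawed=5, sample=5):
    """Probability that the spot check misses every flawed report."""
    return comb(total - flawed, sample) / comb(total, sample)

print(f"{prob_all_flawed():.2e}")  # seeing only flawed work is very unlikely
print(f"{prob_no_flawed():.3f}")   # missing every flaw is quite likely
```

Either way, a five-report sample gives a badly distorted picture of 100 reports, which is exactly the sampling error described.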
Varying standards
Problem: When appraising his or her employees, the manager uses different standards and expectations
for employees who are performing similar jobs.
Example: A professor does not grade all students' exams by the same standards; the grade sometimes
depends on the affection the professor feels toward particular students, which leads to higher or lower
grades.
Solution: The rater must use the same standards and weights for every employee, and should be able to
give coherent arguments to explain any difference. That makes it easier to tell whether a rating differs
because the employee has performed well, or because the manager's perception is distorted.
Test-retest reliability:
The degree to which the results are consistent over time.
Test-retest reliability indicates the repeatability of test scores with the passage of time. This estimate also
reflects the stability of the characteristic or construct being measured by the test.
Some constructs are more stable than others. For example, an individual's reading ability is more stable
over a particular period of time than that individual's anxiety level. Therefore, you would expect a higher
test-retest reliability coefficient on a reading test than you would on a test that measures anxiety. For
constructs that are expected to vary over time, an acceptable test-retest reliability coefficient may be
lower than is suggested in Table 1.
On a test with high validity, the items will be closely linked to the test's intended focus. For many
certification and licensure tests this means that the items will be highly related to a specific job or
occupation. If a test has poor validity, then it does not measure the job-related content and competencies
it ought to.
There are several ways to estimate the validity of a test, including content validity, construct validity,
criterion-related validity (concurrent & predictive) and face validity.
1. Content Validity: related to objectives and their sampling.
Content-related validation requires a demonstration that the content of the test represents important
job-related behaviors. In other words, test items should be relevant to, and directly measure, important
requirements and qualifications for the job.
Content validity refers to the connections between the test items and the subject-related tasks. The test
should evaluate only the content related to the field of study in a manner sufficiently representative,
relevant, and comprehensible.
2. Construct Validity: referring to the theory underlying the target.
Construct-related validation requires a demonstration that the test measures the construct or characteristic
it claims to measure, and that this characteristic is important to successful performance on the job.
It implies using the construct correctly (concepts, ideas, notions). Construct validity seeks agreement
between a theoretical concept and a specific measuring device or procedure. For example, a test of
intelligence nowadays must include measures of multiple intelligences, rather than just logical-mathematical
and linguistic ability measures.
3. Criterion Validity: related to concrete criteria in the real world. It can be concurrent or
predictive.
Criterion-related validation requires demonstration of a correlation or other statistical relationship
between test performance and job performance. In other words, individuals who score high on the test
tend to perform better on the job than those who score low on the test. If the criterion is obtained at the
same time the test is given, it is called concurrent validity; if the criterion is obtained at a later time, it is
called predictive validity.
Also referred to as instrumental validity, it states that the criteria should be clearly defined by the teacher
in advance. It has to take into account other teachers' criteria to be standardized, and it also needs to
demonstrate the accuracy of a measure or procedure compared to another measure or procedure which has
already been demonstrated to be valid.
4. Concurrent Validity: correlating high with another measure already validated.
Concurrent validity is a statistical method using correlation, rather than a logical method. Examinees who
are known to be either masters or non-masters on the content measured by the test are identified before
the test is administered. Once the tests have been scored, the relationship between the examinees' status as
either masters or non-masters and their performance (i.e., pass or fail) is estimated based on the test. This
type of validity provides evidence that the test is classifying examinees correctly. The stronger the
correlation is, the greater the concurrent validity of the test is.
5. Predictive Validity: Capable of anticipating some later measure.
This is another statistical approach to validity that estimates the relationship of test scores to an
examinee's future performance as a master or non-master. Predictive validity considers the question,
"How well does the test predict examinees' future status as masters or non-masters?" For this type of
validity, the correlation that is computed is based on the test results and the examinees' later performance.
This type of validity is especially useful for test purposes such as selection or admissions.
6. Face Validity: related to the test overall appearance.
Like content validity, face validity is determined by a review of the items and not through the use of
statistical analyses. Unlike content validity, face validity is not investigated through formal procedures.
Instead, anyone who looks over the test, including examinees, may develop an informal opinion as to
whether or not the test is measuring what it is supposed to measure. While it is clearly of some value to
have the test appear to be valid, face validity alone is insufficient for establishing that the test is
measuring what it claims to measure.
PRACTICALITY
It refers to the economy of time, effort and money in testing. In other words, a test should be:
Easy to design
Easy to administer
Easy to mark
Easy to interpret (the results)
BACKWASH
What is the impact of the test on the teaching/learning process?
The backwash effect (also known as washback) is the influence of testing on teaching and learning. It is
also the potential impact that the form and content of a test may have on learners' conception of what is
being assessed (language proficiency) and what it involves. Therefore, test designers, deliverers, and
raters have a particular responsibility, considering that the testing process may have a substantial impact,
either positive or negative.
Extranet
An extranet is a private network that uses Internet technology and the public telecommunication system to
securely share part of a business's information or operations with suppliers, vendors, partners, customers,
or other businesses.
Unitarism refers to 'unity of purpose' and, therefore, requires that people share the same aims and
objectives.
Expectancy theory
Expectancy theory (or expectancy theory of motivation) proposes an individual will behave or act in a
certain way because they are motivated to select a specific behavior over other behaviors due to what they
expect the result of that selected behavior will be.
Contingency theory
A contingency theory is an organizational theory that claims that there is no best way to organize a
corporation, to lead a company, or to make decisions. Instead, the optimal course of action is contingent
(dependent) upon the internal and external situation.
Ansoff Matrix
Ansoff's product/market growth matrix suggests that a business' attempts to grow depend on whether it
markets new or existing products in new or existing markets. The output from the Ansoff product/market
matrix is a series of suggested growth strategies which set the direction for the business strategy.
Job Analysis
Job Analysis is a primary tool to collect job-related data.
1-A job description is a list of the general tasks, functions, and responsibilities of a position. It often also
includes to whom the position reports.
2-A job specification is a written statement of the minimum qualifications and traits that a person needs in
order to perform the duties and undertake the responsibilities of a particular position. Specifications are
developed as part of the job analysis process.
Organization Structure
These structural elements are manifested in organizations in two generic organizational forms:
mechanistic or organic.
Mechanistic: centralized decision-making and control; vertical communications.
Organic: decision-making and control can be located anywhere in the organization; horizontal
communication.
Whether an organization is better served with an organic or a mechanistic structure depends on its
environment, its size, and its technology.