
Question. Why is the term linear used in the name linear programming?

Ans. Linear programming is a quantitative analysis technique for optimizing an objective function subject to a
set of constraints. As the name implies, both the objective function and the constraints must be linear for the
linear programming technique to be used.
Question. Describe the steps followed in formulating a linear programming problem.
Ans.

Step 1. It is always good to place all the information in a table, if it is not already given in that form, to better
understand the problem.
Step 2. List the constraints.
Step 3. Decide what the decision variables are going to be. Decision variables tell us how much of a
quantity we should produce or use. They are usually quantities such as the number of gallons of milk.
Step 4. Determine the objective function, e.g. Profit (P) = Sale price - overhead and labour costs - raw
material costs.
Step 5. Develop equations for the numerical-valued constraints (i.e. constraints with ≤ or ≥ signs).
Step 6. Write out the full LP problem, including the subject to (s.t.) statement and the desired signs
of the decision variables (a short code sketch of these steps follows).
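A minimal sketch of these steps in Python, using hypothetical product-mix numbers (the profit coefficients, the resource limits, and the use of scipy.optimize.linprog are illustrative assumptions, not part of the original text):

# Hypothetical formulation: maximize P = 40*x1 + 30*x2 (profit per unit of two products)
# subject to  2*x1 + 1*x2 <= 100   (labour hours)
#             1*x1 + 1*x2 <= 80    (raw material)
#             x1, x2 >= 0
from scipy.optimize import linprog

# linprog minimizes, so negate the profit coefficients to maximize.
c = [-40, -30]
A_ub = [[2, 1],
        [1, 1]]
b_ub = [100, 80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("x1, x2 =", res.x)          # optimal production quantities
print("max profit =", -res.fun)   # undo the sign flip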

Question. In the graphical analysis of a linear programming model, what occurs when the slope of the objective
function is the same as the slope of one of the constraint equations?
Ans. A situation in which more than one optimal solution is possible. If the slope of the objective
function is the same as the slope of one of the sides of the feasible region, the line representing the objective
function may coincide with that side, leading to multiple optimal solutions.
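A small hypothetical illustration (the numbers are assumed for the example, not taken from the text): maximize Z = 2x1 + 2x2 subject to x1 + x2 ≤ 4, x1 ≤ 3, x2 ≤ 3, x1, x2 ≥ 0. The objective function is parallel to the constraint x1 + x2 ≤ 4, so the corner points (1, 3) and (3, 1), and every point on the edge between them, give the same optimal value Z = 8.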
Question. Summarize the steps for solving a linear programming model graphically?
Ans. The graphical method for solving linear programming problems in two unknowns is as follows.
1. Graph the feasible region.
2. Compute the coordinates of the corner points.
3. Substitute the coordinates of the corner points into the objective function to see which gives the
optimal value. This point gives the solution to the linear programming problem (a short code sketch of this
step appears after these steps).
4. If the feasible region is not bounded, this method can be misleading: optimal solutions always exist
when the feasible region is bounded, but may or may not exist when the feasible region is unbounded.
5. If the feasible region is unbounded, the objective function is being minimized, and its coefficients are
non-negative, then a solution exists, and this method yields it.
To determine whether a solution exists in the general unbounded case:
(a) Bound the feasible region by adding a vertical line to the right of the rightmost corner point and a
horizontal line above the highest corner point.
(b) Calculate the coordinates of the new corner points you obtain.
(c) Find the corner point that gives the optimal value of the objective function.
(d) If this optimal value occurs at a point of the original (unbounded) region, then the LP problem has a
solution at that point. If not, then the LP problem has no optimal solution.
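As a sketch of the corner-point step, assuming the same hypothetical product-mix problem as above (maximize Z = 40x1 + 30x2 subject to 2x1 + x2 ≤ 100, x1 + x2 ≤ 80, x1, x2 ≥ 0), one can evaluate the objective function at each corner and keep the best:

# Corner points read off the graph (intersections of constraint boundaries).
corners = [(0, 0), (50, 0), (20, 60), (0, 80)]

def Z(x1, x2):
    # hypothetical objective function
    return 40 * x1 + 30 * x2

best = max(corners, key=lambda p: Z(*p))
print("optimal corner:", best, "Z =", Z(*best))   # (20, 60) with Z = 2600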

Question. What are the advantages and limitations of linear programming methods?
Ans. Advantages of Linear Programming:
The main advantage of linear programming is its simplicity and ease of understanding.
Linear programming makes the best use of the available resources.
It can be used to solve many diverse combination problems.
It helps in the re-evaluation process: linear programming helps in adapting to changing conditions of the process or system.
Linear programming is adaptive and flexible in analysing problems.
It leads to better-quality decisions.
Disadvantages of Linear Programming:
Linear programming works only with variables that are linear (both the objective function and the constraints must be linear).
The model is static; it does not consider change and evolution of the variables over time.
Non-linear functions cannot be solved by it.
Problems with more than two decision variables cannot be solved by the graphical method.
Question. What constitutes the feasible solution area on the graph of a linear programming model?
Ans. The feasible solution space is the set of all points that satisfy all the constraints. (Note that the x1 and x2 axes
form the non-negativity constraints.) The shaded area on the graph is the feasible solution space for the problem. The next
step is to determine which point in the feasible solution space will produce the optimal value of the objective
function. This determination is made using the objective function. The feasible region is a convex polytope,
which is a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear
inequality. The objective function is a real-valued affine function defined on this polyhedron. A linear
programming algorithm finds a point in the polyhedron where this function has the smallest (or largest)
value, if such a point exists.
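A brief sketch of the half-space view (the constraint matrix re-uses the hypothetical numbers from the earlier example, with the non-negativity conditions written as -x1 ≤ 0 and -x2 ≤ 0; numpy is an assumed tool):

import numpy as np

# Feasible region as the intersection of half-spaces A @ x <= b.
A = np.array([[2, 1], [1, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([100, 80, 0, 0], dtype=float)

def is_feasible(x):
    # A point lies in the feasible region iff it satisfies every inequality.
    return np.all(A @ x <= b)

print(is_feasible(np.array([20, 60])))   # True  (a corner point)
print(is_feasible(np.array([60, 60])))   # False (violates both resource limits)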
Question. How is the optimal solution point identified on the graph of a linear programming model?
Ans. Procedure for finding the optimal solution using the objective function approach:
1. Graph the constraints.
2. Identify the feasible solution space.
3. Set the objective function equal to some amount that is divisible by each of the objective function
coefficients. This will yield integer values for the x1 and x2 intercepts and simplify plotting the line. Often,
the product of the two objective function coefficients provides a satisfactory line. Ideally, the line will cross
the feasible solution space close to the optimal point, and it will not be necessary to slide a straight edge
because the optimal solution can be readily identified visually.
4. After identifying the optimal point, determine which two constraints intersect there. Solve their equations
simultaneously to obtain the values of the decision variables at the optimum.
5. Substitute the values obtained in the previous step into the objective function to determine the value of the
objective function at the optimum.
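A sketch of steps 4 and 5, assuming the two hypothetical binding constraints from the earlier example (numpy is an assumed tool):

import numpy as np

# Step 4: the optimal point lies where the two binding constraints cross.
# Solve 2*x1 + x2 = 100 and x1 + x2 = 80 simultaneously.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
b = np.array([100.0, 80.0])
x1, x2 = np.linalg.solve(A, b)        # x1 = 20, x2 = 60

# Step 5: substitute the values into the objective function.
print("optimum:", (x1, x2), "Z =", 40 * x1 + 30 * x2)   # Z = 2600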
Question. What are the two phases in the Two-Phase method of minimization?
Ans. First Phase:
(a) All the terms on the R.H.S. should be non-negative. If some are negative, they must be made positive as explained
earlier.
(b) Express the constraints in standard form.
(c) Add artificial variables to equality constraints and (≥) type constraints.

(d) Form a new objective function W, which consists of the sum of all the artificial variables:
W = A1 + A2 + ... + Am
The function W is known as the infeasibility form.
(e) The function W is to be minimized subject to the constraints of the original problem, and the optimum basic feasible
solution is obtained.
Any of the following three cases may arise:

(i) Min. W > 0 and at least one artificial variable appears in the column of basic variables at a positive level.
In such a case, no feasible solution exists for the original L.P.P. and the procedure is stopped.

(ii) Min. W = 0 and at least one artificial variable appears in the column of basic variables at zero level.
In such a case, the optimum basic feasible solution to the infeasibility form may or may not be a
basic feasible solution to the given (original) L.P.P. To obtain a basic feasible solution, we continue
phase I and try to drive all the artificial variables out of the basis, and then proceed to phase II.

(iii) Min. W = 0 and no artificial variable appears in the column of basic variables in the current solution. In
such a case, a basic feasible solution to the original L.P.P. has been found. Proceed to phase II.

Second Phase:

Use the optimum basic feasible solution of phase I as a starting solution for the original L.P.P. Using the
simplex method, iterate until an optimal basic feasible solution for it is obtained. It may be
noted that the new objective function W is always of the minimization type, regardless of whether the
given (original) L.P.P. is of the maximization or minimization type.
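A minimal Phase I sketch, assuming a hypothetical problem with the constraints x1 + x2 ≥ 4 and x1 + 3x2 = 6, and using scipy.optimize.linprog purely to minimize the infeasibility form W (the solver and the numbers are illustrative assumptions):

from scipy.optimize import linprog

# Phase I set-up:
#   x1 + x2 >= 4   ->  x1 + x2 - s1 + A1 = 4   (surplus s1, artificial A1)
#   x1 + 3*x2 = 6  ->  x1 + 3*x2      + A2 = 6 (artificial A2)
# Variables ordered as [x1, x2, s1, A1, A2]; minimize W = A1 + A2.
c = [0, 0, 0, 1, 1]
A_eq = [[1, 1, -1, 1, 0],
        [1, 3,  0, 0, 1]]
b_eq = [4, 6]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print("min W =", res.fun)   # 0 (up to rounding): a basic feasible solution exists, go to Phase II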

Question. Distinguish between Infeasibility and Unboundedness?


Ans. Infeasibility: It has already been stated that a solution is called feasible if it satisfies all the constraints
and the non-negativity conditions. Sometimes the constraints may be inconsistent, so that there
is no feasible solution to the problem. Such a situation is called infeasibility. In a graphical solution to an LPP,
infeasibility is evident when there is no feasible region in which all the constraints can be satisfied simultaneously.
Unboundedness: For a maximization type of linear programming problem, unboundedness occurs when there is
no constraint limiting the solution, so that one or more of the decision variables can be increased indefinitely without
violating any of the restrictions (constraints).
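As an illustration of how the two situations show up in practice, here is a sketch using scipy.optimize.linprog on two tiny hypothetical problems (the constraints are invented for the example; the status codes are those reported by this solver):

from scipy.optimize import linprog

# Infeasible: x1 <= 1 and x1 >= 3 cannot both hold.
infeasible = linprog(c=[1], A_ub=[[1], [-1]], b_ub=[1, -3], bounds=[(0, None)])
print(infeasible.status, infeasible.message)   # status 2 is expected (infeasible)

# Unbounded: maximize x1 (written as minimizing -x1) with only x1 >= 1 as a limit.
unbounded = linprog(c=[-1], A_ub=[[-1]], b_ub=[-1], bounds=[(0, None)])
print(unbounded.status, unbounded.message)     # status 3 is expected (unbounded)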
Question. No need to introduce artificial variables in maximization. Why?
Ans. To get a starting basic feasible solution, add non-negative variables to the left-hand side of each of the
equations corresponding to constraints of the (≥) and (=) types. These variables are called artificial
variables. Thus we change the constraints to get a basic solution; this violates the corresponding constraints and
is only for the starting purpose. But in the final solution (if it exists), if the artificial variables
become non-basic (their values become zero), then we come back to the original constraints. This
method of driving the artificial variables out of the basis is called the Big M technique. This result is
achieved by assigning a very large (big) per-unit penalty to these variables in the objective function. Such a
penalty is -M for maximization problems and +M for minimization problems, the value
of M being strictly positive and very large.
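A minimal Big M sketch for a hypothetical minimization problem (the numbers and the choice of M = 10^6 are assumptions for illustration; scipy.optimize.linprog is used only to carry out the arithmetic):

from scipy.optimize import linprog

# Minimize 2*x1 + 3*x2 subject to x1 + x2 >= 4, x1, x2 >= 0.
# With a surplus s1 and an artificial A1:  x1 + x2 - s1 + A1 = 4.
# The artificial gets a +M penalty because this is a minimization problem
# (it would get -M in a maximization problem).
M = 1e6                      # the "very large" penalty
c = [2, 3, 0, M]             # coefficients for [x1, x2, s1, A1]
A_eq = [[1, 1, -1, 1]]
b_eq = [4]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.x)     # the artificial A1 is driven to 0; x1 = 4, x2 = 0, cost = 8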
