Linear Programming: Theory and Applications

Catherine Lewis May 11, 2008

Contents

1 Introduction to Linear Programming
  1.1 What is a linear program?
  1.2 Assumptions
  1.3 Manipulating a Linear Programming Problem
  1.4 The Linear Algebra of Linear Programming
  1.5 Convex Sets and Directions

2 Examples from Bazaraa et al. and Winston
  2.1 Examples
  2.2 Discussion

3 The Theory Behind Linear Programming
  3.1 Definitions
  3.2 The General Representation Theorem

4 An Outline of the Proof

5 Examples With Convex Sets and Extreme Points From Bazaraa et al.

6 Tools for Solving Linear Programs
  6.1 Important Precursors to the Simplex Method

7 The Simplex Method In Practice

8 What if there is no initial basis in the Simplex tableau?
  8.1 The Two-Phase Method
  8.2 The Big-M Method

9 Cycling
  9.1 The Lexicographic Method
  9.2 Bland's Rule
  9.3 Theorem from [2]
  9.4 Which Rule to Use?

10 Sensitivity Analysis
  10.1 An Example
    10.1.1 Sensitivity Analysis for a cost coefficient
    10.1.2 Sensitivity Analysis for a right-hand-side value

11 Case Study: Busing Children to School
  11.1 The Problem
  11.2 The Solution
    11.2.1 Variables
    11.2.2 Objective Function
    11.2.3 Constraints
  11.3 The Complete Program
  11.4 Road Construction and Portables
    11.4.1 Construction
    11.4.2 Portable Classrooms
  11.5 Keeping Neighborhoods Together
  11.6 The Case Revisited
    11.6.1 Shadow Prices
    11.6.2 The New Result

12 Conclusion

1 Introduction to Linear Programming

Linear programming was developed during World War II, when a system with which to maximize the efficiency of resources was of utmost importance. New war-related projects demanded attention and spread resources thin. "Programming" was a military term that referred to activities such as planning schedules efficiently or deploying men optimally. George Dantzig, a member of the U.S. Air Force, developed the Simplex method of optimization in 1947 in order to provide an efficient algorithm for solving programming problems that had linear structures. Since then, experts from a variety of fields, especially mathematics and economics, have developed the theory behind "linear programming" and explored its applications [1].

This paper will cover the main concepts in linear programming, including examples when appropriate. First, in Section 1 we will explore simple properties, basic definitions and theories of linear programs. In order to illustrate some applications of linear programming, we will explain simplified "real-world" examples in Section 2. Section 3 presents more definitions, concluding with the statement of the General Representation Theorem (GRT). In Section 4, we explore an outline of the proof of the GRT and in Section 5 we work through a few examples related to the GRT.

After learning the theory behind linear programs, we will focus on methods of solving them. Section 6 introduces concepts necessary for the Simplex algorithm, which we explain in Section 7. In Section 8, we explore the Simplex method further and learn how to proceed when there is no initial basis in the Simplex tableau. Next, Section 9 discusses cycling in Simplex tableaux and ways to counter this phenomenon. We present an overview of sensitivity analysis in Section 10. Finally, we put all of these concepts together in an extensive case study in Section 11.

1.1 What is a linear program?

We can reduce the structure that characterizes linear programming problems (perhaps after several manipulations) to the following form:

Minimize      c1x1 + c2x2 + · · · + cnxn = z

Subject to    a11x1 + a12x2 + · · · + a1nxn = b1
              a21x1 + a22x2 + · · · + a2nxn = b2
                  ...
              am1x1 + am2x2 + · · · + amnxn = bm

              x1, x2, . . . , xn ≥ 0.

In linear programming, z, the expression being optimized, is called the objective function. The variables x1, x2, . . . , xn are called decision variables, and their values are subject to m + 1 constraints (every line ending with a bi, plus the nonnegativity constraint). A point (x1, x2, . . . , xn) satisfying all the constraints is called a feasible point, and the set of all such points is called the feasible region. The solution of the linear program must be a point in the feasible region, or else not all the constraints would be satisfied.
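As a concrete illustration of the standard form above, the short sketch below sets up a tiny equality-constrained program and solves it numerically with SciPy's linprog routine (assuming Python with NumPy and SciPy installed). The particular cost vector and constraint data are invented for illustration and do not come from the text.

import numpy as np
from scipy.optimize import linprog

# Standard form: minimize c^T x subject to Ax = b and x >= 0.
# Illustrative data (not from the text):
#   minimize    2x1 + 3x2 + x3
#   subject to   x1 +  x2 + x3 = 4
#                x1 + 2x2      = 3
c = np.array([2.0, 3.0, 1.0])
A_eq = np.array([[1.0, 1.0, 1.0],
                 [1.0, 2.0, 0.0]])
b_eq = np.array([4.0, 3.0])

# bounds=(0, None) encodes the nonnegativity constraints x1, ..., xn >= 0.
result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("x =", result.x, " z =", result.fun)

In general, any program written in the form above can be handed to a solver this way: the objective is the vector of cost coefficients, the equality constraints supply the matrix of aij values and the right-hand-side vector of bi values, and nonnegativity is expressed through the variable bounds.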

The following example from Chapter 3 of Winston [3] illustrates that geometrically interpreting the feasible region is a useful tool for solving linear programming problems with two decision variables. The linear program is:

Minimize      4x1 + x2 = z

Subject to    3x1 + x2 ≥ 10
              x1 + x2 ≥ 5
              x1 ≥ 3
              x1, x2 ≥ 0.

We plotted the system of inequalities as the shaded region in Figure 1. Since all of the constraints are "greater than or equal to" constraints, the shaded region above all three lines is the feasible region. The solution to this linear program must lie within the shaded region.
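A picture similar to Figure 1 can be produced with a short plotting script. The sketch below (assuming Python with NumPy and Matplotlib) shades the region satisfying all three constraints and overlays a few dotted isocost lines; the axis limits are chosen arbitrarily for display.

import numpy as np
import matplotlib.pyplot as plt

x1 = np.linspace(0, 6, 400)

# Boundary lines of the three ">=" constraints, solved for x2.
plt.plot(x1, 10 - 3 * x1, label="3x1 + x2 = 10")
plt.plot(x1, 5 - x1, label="x1 + x2 = 5")
plt.axvline(3, color="gray", label="x1 = 3")

# The feasible region lies above both lines, above the x1-axis,
# and to the right of x1 = 3.
lower = np.maximum.reduce([10 - 3 * x1, 5 - x1, np.zeros_like(x1)])
mask = x1 >= 3
plt.fill_between(x1[mask], lower[mask], 10, alpha=0.3)

# Dotted isocost lines 4x1 + x2 = z for a few values of z.
for z in (10, 14, 18):
    plt.plot(x1, z - 4 * x1, linestyle="dotted")

plt.xlim(0, 6)
plt.ylim(0, 10)
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.show()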

Recall that the solution is a point (x1, x2) such that the value of z is the smallest it can be, while the point still lies in the feasible region. Since z = 4x1 + x2, plotting the line x1 = (z - x2)/4 for various values of z results in isocost lines, which all have the same slope. Along each of these lines, the value of z is constant. In Figure 1, the dotted lines represent isocost lines for different values of z. Since isocost lines are parallel to each other, the thick dotted isocost line for which z = 14 is clearly the line that intersects the feasible region at the smallest possible value for z. Therefore, z = 14 is the smallest possible value of z given the constraints.
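The graphical answer can also be checked numerically. The sketch below (again assuming Python with SciPy) passes the same program to linprog; because linprog expects constraints of the form A_ub x <= b_ub, each "greater than or equal to" constraint is multiplied by -1 first.

from scipy.optimize import linprog

# Minimize 4x1 + x2 subject to
#   3x1 + x2 >= 10,   x1 + x2 >= 5,   x1 >= 3,   x1, x2 >= 0.
# linprog uses A_ub x <= b_ub, so each ">=" row is negated.
c = [4, 1]
A_ub = [[-3, -1],   # -3x1 - x2 <= -10
        [-1, -1],   #  -x1 - x2 <= -5
        [-1,  0]]   #  -x1      <= -3
b_ub = [-10, -5, -3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print("x =", res.x, " z =", res.fun)

The solver returns x1 = 3 and x2 = 2 with an objective value of 14, in agreement with the value read off the thick dotted isocost line.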
