Analysis of Objective Functions and Constraints in Linear Programming
Introduction
In this lesson, we will analyze the characteristics of objective functions and constraints in linear and integer linear programming problems. We will examine how linear functions represent costs and how economies of scale and fixed costs can be modeled using additional variables. We will also discuss the impact of introducing integer variables on the computational complexity of the problems. The main objective is to understand how to address situations where the relationships between variables are not linear and how to transform non-linear functions into linear forms for use in linear programming.
Linear Programming (LP): Optimization problems where the objective function and constraints are linear.
Integer Linear Programming (ILP): LP problems where some or all variables are restricted to integer values.
Economies of Scale: Cost advantages that enterprises obtain due to their scale of operation, with cost per unit of output decreasing with increasing scale.
Fixed Costs: Costs that do not change with the level of production.
Computational Complexity: The amount of resources (time and space) required to solve a problem.
Linear Objective Functions
In a linear programming problem, the objective function is a weighted sum of the decision variables. Each variable \(x_i\) has an associated cost \(c_i\) and contributes to the total cost in proportion to its value. The total cost \(C\) is given by: \[C = \sum_{i=1}^{n} c_i x_i\] This model is reasonable in many situations, such as purchasing goods where the total cost is the sum of the costs of individual goods multiplied by the quantities purchased. For example, if we buy apples, bananas, and oranges, the total cost is the sum of the cost of each fruit multiplied by the quantity purchased.
Let’s say we want to buy:
\(x_1\) apples at a cost of \(c_1\) each
\(x_2\) bananas at a cost of \(c_2\) each
\(x_3\) oranges at a cost of \(c_3\) each
The total cost \(C\) would be: \[C = c_1 x_1 + c_2 x_2 + c_3 x_3\] If we double the number of bananas, the cost associated with bananas doubles.
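As a minimal illustration of this weighted sum (the unit costs and quantities below are made-up numbers), the total cost in code is just a sum of unit costs times quantities:

    # Total cost as a weighted sum of quantities (illustrative data).
    costs = {"apples": 0.50, "bananas": 0.30, "oranges": 0.60}   # unit costs c_i (assumed)
    quantities = {"apples": 4, "bananas": 6, "oranges": 3}       # quantities x_i (assumed)

    total = sum(costs[item] * quantities[item] for item in costs)
    print(total)  # 0.5*4 + 0.3*6 + 0.6*3 = 5.6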
Limitations of the Linear Model
However, the linear model is not always realistic. For instance, in the presence of quantity discounts, the unit cost of a good might decrease as the quantity purchased increases. This introduces an economy of scale that a linear function cannot capture.
Consider purchasing \(n\) types of goods, where \(x_i\) is the quantity of good \(i\) and \(c_i\) is its unit cost. If the seller offers a discount for larger quantities, the total cost might not be a linear function of \(x_i\). For example, if we buy more than 50 shirts, the price per shirt might be lower than if we buy only 10 shirts.
Another limitation arises when the cost of a variable depends on the values of other variables. In such cases, the variables interact non-linearly, and the cost cannot be expressed as a simple linear function.
In physical systems, particles often interact with each other. Altering the value of one variable might affect the entire system in a non-linear way. For instance, changing the energy of one particle might influence the energy of all other particles, making the total energy a non-linear function of individual particle energies.
Non-Linear Objective Functions
In many real-world scenarios, variables interact with each other non-linearly. The cost associated with a variable can depend not only on its value but also on the values of other variables.
Consider a production process where the cost of producing a certain quantity of a product depends not only on the quantity itself but also on the quantities of other products being produced simultaneously. In this case, the variables representing the quantities of different products interact, and the total cost is a non-linear function of these variables.
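As a toy illustration (the cost function and coefficients below are invented for this example, not taken from the lesson), an interaction term makes the marginal cost of one product depend on how much of the other is produced:

    # Hypothetical cost with an interaction term between two products.
    def interacting_cost(x1, x2, c1=2.0, c2=3.0, c12=0.1):
        return c1 * x1 + c2 * x2 + c12 * x1 * x2   # the x1*x2 term couples the variables

    # The cost of one extra unit of product 1 depends on the level of product 2.
    print(interacting_cost(11, 5) - interacting_cost(10, 5))    # 2.5
    print(interacting_cost(11, 50) - interacting_cost(10, 50))  # 7.0

Such a function cannot be written as a weighted sum with fixed coefficients, which is what a linear objective requires.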
Non-Linear Programming
Various forms of non-linear programming exist, such as quadratic programming; they are more complex and will not be covered in this lesson. It is important to note that not all real-world problems can be accurately modeled with linear functions.
While we focus on linear programming in this course, it is essential to acknowledge the existence of non-linear programming techniques. These methods can handle more complex relationships between variables but are generally more challenging to solve.
When dealing with non-linear problems, a linear model yields only an approximation. To represent the problem exactly, we should use the true non-linear objective function; however, for simplicity, or because of the limitations of the available tools, we often resort to a linear approximation.
Fixed Costs and Production Thresholds
Another aspect to consider is the presence of fixed costs. In some cases, the cost of producing a good grows in proportion to the quantity produced only above a certain threshold; below that threshold, only a fixed start-up cost is paid.
Modeling Fixed Costs
Suppose there is a fixed cost \(V\) to start production and a threshold \(\tau\) up to which no cost beyond \(V\) is incurred. The cost function \(f(x)\) for producing \(x\) units of a good, say nails, can be represented as follows: \[f(x) = \begin{cases} 0 & \text{if } x = 0 \\ V & \text{if } 0 < x \leq \tau \\ V + c(x - \tau) & \text{if } x > \tau \end{cases}\] where \(c\) is the unit cost of production above the threshold \(\tau\).
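As a quick sanity check of this definition, here is a minimal Python sketch; the values of \(V\), \(c\), and \(\tau\) are illustrative assumptions:

    # Fixed-cost production: 0 if nothing is produced, V up to the
    # threshold tau, then V plus c per unit above tau.
    def fixed_cost(x, V=100.0, c=2.0, tau=30.0):
        if x == 0:
            return 0.0
        if x <= tau:
            return V
        return V + c * (x - tau)

    print(fixed_cost(0), fixed_cost(10), fixed_cost(80))  # 0.0 100.0 200.0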
Linearizing the Cost Function
To model this function in a linear programming problem, we can introduce a variable \(y\) that represents the quantity produced in excess of the threshold \(\tau\) and a binary variable \(z\) that indicates whether production is started or not.
Let \(y\) represent the excess production above the threshold \(\tau\). We can define \(y\) as \(y = \max(0, x - \tau)\).
Let \(z\) be a binary variable such that:
\(z = 1\) if \(x > 0\) (production is started)
\(z = 0\) if \(x = 0\) (nothing is produced)
The cost function can then be rewritten as: \[f(x) = Vz + cy\] with \(z\) and \(y\) taking the values defined above. If \(z = 0\), then \(x = 0\), \(y = 0\), and \(f(x) = 0\). If \(z = 1\), production is started: \(f(x) = V\) when \(x \leq \tau\) (since \(y = 0\)) and \(f(x) = V + c(x - \tau)\) when \(x > \tau\), exactly as in the original definition.
To enforce the correct relationship between \(x\), \(y\), and \(z\), we can introduce the following constraints:
\[\begin{aligned} y &\geq 0 \label{eq:y_non_negative}\\ y &\geq x - \tau \label{eq:y_excess}\\ y &\leq Mz \label{eq:y_upper_bound}\\ x &\leq Mz \label{eq:x_z_link} \end{aligned}\]
where \(M\) is a large constant (an upper bound on the quantity that can ever be produced).
Constraint [eq:y_non_negative] ensures that \(y\) is non-negative.
Constraint [eq:y_excess] ensures that \(y\) is at least the excess production above \(\tau\).
Constraint [eq:y_upper_bound] forces \(y\) to be 0 if \(z=0\). If \(z=1\), \(y\) can be at most \(M\).
Constraint [eq:x_z_link] links \(x\) to \(z\): \(x\) can be positive only if \(z=1\), so the fixed cost \(V\) is paid as soon as production starts.
Because the objective is minimized, the solver drives \(y\) down to \(\max(0, x - \tau)\) and sets \(z = 0\) whenever \(x = 0\), so no further constraints are needed.
Variables:
\(x\): Quantity produced
\(y\): Excess production above threshold \(\tau\) (\(y = \max(0, x - \tau)\))
\(z\): Binary variable indicating if production is started (\(z = 1\) if \(x > 0\), \(z = 0\) otherwise)
Objective Function: \[f(x) = Vz + cy\] Constraints: \[\begin{aligned} y &\geq 0 \\ y &\geq x - \tau \\ y &\leq Mz \\ x &\leq Mz \end{aligned}\] where \(M\) is a large constant.
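For concreteness, here is a minimal sketch of this model using the open-source PuLP library; the numerical values and the demand constraint are illustrative assumptions, not part of the lesson:

    # Minimal MILP sketch of the fixed-cost model (illustrative data).
    from pulp import LpProblem, LpMinimize, LpVariable, value

    V, c, tau, M = 100.0, 2.0, 30.0, 1000.0    # fixed cost, unit cost, threshold, big-M
    demand = 80.0                              # assumed external requirement on x

    prob = LpProblem("fixed_cost_production", LpMinimize)
    x = LpVariable("x", lowBound=0)            # quantity produced
    y = LpVariable("y", lowBound=0)            # excess production above tau
    z = LpVariable("z", cat="Binary")          # 1 if production is started

    prob += V * z + c * y                      # objective: f = Vz + cy
    prob += y >= x - tau                       # y is at least the excess over tau
    prob += y <= M * z                         # y can be positive only if z = 1
    prob += x <= M * z                         # producing anything forces z = 1
    prob += x >= demand                        # must meet the assumed demand

    prob.solve()
    print(value(x), value(y), value(z), value(prob.objective))  # 80.0 50.0 1.0 200.0

With these numbers the solver chooses \(x = 80\), \(y = 50\), \(z = 1\), for a total cost of 200, which matches \(f(80)\) computed directly from the piecewise definition.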
Economies of Scale
Economies of scale occur when the unit cost of production decreases as the quantity produced increases. This can be due to various factors, such as the distribution of fixed costs over a larger number of units produced.
Piecewise Linear Function
A cost function that takes into account economies of scale can be represented as a piecewise linear function. For example, the unit cost might be \(c_1\) for the first \(N_1\) units, \(c_2\) for the next \(N_2 - N_1\) units, and so on, with \(c_1 > c_2 > \dots\).
The function \(f(x)\) can be represented as: \[f(x) = \begin{cases} 0 & \text{if } x = 0 \\ K_1 + c_1 x & \text{if } 0 < x \leq N_1 \\ K_2 + c_2 x & \text{if } N_1 < x \leq N_2 \\ K_3 + c_3 x & \text{if } N_2 < x \leq N_3 \\ \vdots \\ K_n + c_n x & \text{if } N_{n-1} < x \leq N_n \end{cases}\]
Where:
\(K_i\) represents the fixed-cost term for interval \(i\). It is chosen so that consecutive pieces join at the breakpoints, i.e. \(K_1 = 0\) and \(K_i = K_{i-1} + (c_{i-1} - c_i) N_{i-1}\), which unrolls to: \[K_i = \sum_{j=1}^{i-1} (c_j - c_{j+1}) N_j\] (a small numerical check of these formulas follows this list).
\(c_i\) is the unit cost for interval \(i\), with \(c_1 > c_2 > \dots > c_n\).
\(N_i\) is the upper bound of interval \(i\), with the convention \(N_0 = 0\).
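As a small numerical check of these definitions, the following Python sketch evaluates \(f(x)\) by accumulating the \(K_i\) with the recursion above; the breakpoints and unit costs are illustrative assumptions:

    # Evaluate the piecewise cost f(x) for breakpoints N = [N_1, ..., N_n]
    # and decreasing unit costs c = [c_1, ..., c_n]; N_0 = 0 implicitly.
    def piecewise_cost(x, N, c):
        if x == 0:
            return 0.0
        K = 0.0                                # K_1 = 0
        for i in range(len(N)):
            if x <= N[i]:
                return K + c[i] * x            # f(x) = K_i + c_i x on this interval
            if i + 1 < len(N):
                K += (c[i] - c[i + 1]) * N[i]  # recursion K_i = K_{i-1} + (c_{i-1} - c_i) N_{i-1}
        raise ValueError("x exceeds the last breakpoint N_n")

    # 50 units at 3.0, then 50 units at 2.5, then 20 units at 2.0 -> 315.0
    print(piecewise_cost(120, N=[50, 100, 200], c=[3.0, 2.5, 2.0]))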
Modeling with Binary Variables
To linearize a piecewise linear cost function, we can introduce binary variables that indicate in which interval the variable \(x\) lies. For example, we can define a variable \(z_i\) that equals 1 if \(x\) is in interval \(i\) and 0 otherwise.
\(z_i = 1\) if \(N_{i-1} < x \leq N_i\)
\(z_i = 0\) otherwise
We also introduce continuous variables \(x_i\), one per interval, each acting as a copy of \(x\) that is active only when \(x\) lies in interval \(i\).
\(x_i = x\) if \(N_{i-1} < x \leq N_i\) (i.e., if \(z_i = 1\))
\(x_i = 0\) otherwise
Exactly one \(x_i\) is allowed to be non-zero, and it carries the whole value of \(x\).
The cost function can then be expressed as a weighted sum of the binary variables \(z_i\) and the interval copies \(x_i\):
\[f(x) = \sum_{i=1}^n (K_i z_i + c_i x_i)\]
With the following constraints: \[\begin{aligned} x &= \sum_{i=1}^n x_i \label{eq:x_sum}\\ \sum_{i=1}^n z_i &= 1 \label{eq:z_sum}\\ N_{i-1} z_i &\leq x_i \leq N_i z_i, \quad \forall i \in \{1, \dots, n\} \label{eq:x_i_bounds} \end{aligned}\]
Constraint [eq:x_sum] ensures that \(x\) equals the sum of the \(x_i\); since at most one \(x_i\) can be non-zero, \(x\) is simply the active copy.
Constraint [eq:z_sum] ensures that exactly one \(z_i\) equals 1, meaning \(x\) falls into exactly one interval.
Constraint [eq:x_i_bounds] ensures that \(x_i\) is within the correct bounds for each interval. If \(z_i = 0\), then \(x_i = 0\). If \(z_i = 1\), then \(N_{i-1} \leq x_i \leq N_i\). (A breakpoint \(x = N_{i-1}\) can be assigned to either adjacent interval; with the \(K_i\) defined above, the cost is the same in both cases.)
Variables:
\(x\): Total quantity produced
\(x_i\): Copy of \(x\) that is active in interval \(i\) (\(x_i = x\) if \(N_{i-1} < x \leq N_i\), \(x_i = 0\) otherwise)
\(z_i\): Binary variable indicating if \(x\) is in interval \(i\) (\(z_i = 1\) if \(N_{i-1} < x \leq N_i\), \(z_i = 0\) otherwise)
Objective Function: \[f(x) = \sum_{i=1}^n (K_i z_i + c_i x_i)\] Constraints: \[\begin{aligned} x &= \sum_{i=1}^n x_i \\ \sum_{i=1}^n z_i &= 1 \\ N_{i-1} z_i &\leq x_i \leq N_i z_i, \quad \forall i \in \{1, \dots, n\} \end{aligned}\]
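As before, here is a minimal PuLP sketch of this formulation; the breakpoints, unit costs, and demand are illustrative assumptions, with \(N_0 = 0\) by convention:

    # Minimal MILP sketch of the piecewise (economies-of-scale) model.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

    N = [0, 50, 100, 200]                          # N_0, N_1, N_2, N_3 (assumed)
    c = [3.0, 2.5, 2.0]                            # unit costs c_1 > c_2 > c_3 (assumed)
    K = [0.0]                                      # K_1 = 0
    for i in range(1, len(c)):
        K.append(K[-1] + (c[i - 1] - c[i]) * N[i])  # K_i = K_{i-1} + (c_{i-1} - c_i) N_{i-1}

    prob = LpProblem("economies_of_scale", LpMinimize)
    x = LpVariable("x", lowBound=0)
    xi = [LpVariable(f"x_{i + 1}", lowBound=0) for i in range(len(c))]
    zi = [LpVariable(f"z_{i + 1}", cat="Binary") for i in range(len(c))]

    prob += lpSum(K[i] * zi[i] + c[i] * xi[i] for i in range(len(c)))  # objective
    prob += x == lpSum(xi)                         # x is the single active copy
    prob += lpSum(zi) == 1                         # exactly one interval is selected
    for i in range(len(c)):
        prob += xi[i] >= N[i] * zi[i]              # N_{i-1} z_i <= x_i
        prob += xi[i] <= N[i + 1] * zi[i]          # x_i <= N_i z_i
    prob += x >= 120                               # assumed demand

    prob.solve()
    print(value(x), value(prob.objective))         # 120.0 and 315.0 with these data

With a required quantity of 120, only the third interval is feasible and the optimal cost is 315, the same value as the piecewise_cost check above.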
Computational Complexity
The introduction of integer or binary variables to linearize non-linear functions increases the computational complexity of the problem. Problems with continuous variables are generally solvable in polynomial time, while those with integer variables can be NP-hard.
Continuous Variables: Linear programming problems with only continuous variables can be solved efficiently using algorithms like the simplex method or interior-point methods. These algorithms typically have polynomial time complexity, meaning the time required to solve the problem grows polynomially with the size of the input.
Integer Variables: When integer or binary variables are introduced, the problem becomes significantly harder to solve. Integer linear programming (ILP) is NP-hard in general, meaning there is no known algorithm that solves every instance in polynomial time; in the worst case, the time required can grow exponentially with the size of the input.
Therefore, while linearizing non-linear functions using binary variables allows us to use linear programming techniques, it comes at the cost of increased computational complexity. This trade-off between model accuracy and computational feasibility is a crucial consideration in optimization problems.
When dealing with large-scale problems, the presence of integer variables can make the problem intractable, meaning it might be impossible to find an optimal solution within a reasonable amount of time.
In such cases, we might need to resort to approximation algorithms or heuristics that can find good solutions quickly, even if they are not guaranteed to be optimal.
It is essential to carefully consider the trade-off between the accuracy of the model and the computational resources available when deciding whether to introduce integer variables.
Conclusion
In this lesson, we examined different types of objective functions, both linear and non-linear, and discussed how to model fixed costs and economies of scale. We saw that it is possible to transform non-linear functions into linear forms by introducing additional variables, particularly binary variables. However, the introduction of integer variables increases the computational complexity of the problem.
Linear objective functions are a weighted sum of the decision variables.
Economies of scale and fixed costs introduce non-linearities into cost functions.
Non-linear functions can be linearized by introducing binary variables.
The introduction of integer variables increases the computational complexity of the problem, often making it NP-hard.
There is a trade-off between model accuracy (using non-linear functions or integer variables) and computational feasibility (solving the problem in a reasonable amount of time).
Questions for the next lesson:
How can we determine if a linear programming problem is bounded or unbounded?
What are the methods for solving linear programming problems with integer variables?
How can we evaluate the efficiency of a linear programming algorithm?
Further reading:
Quadratic programming
Solution methods for integer linear programming problems (e.g., branch and bound, cutting plane methods)
Sensitivity analysis in linear programming
Approximation algorithms and heuristics for NP-hard problems