A system of equations in the variables \(x_1, x_2, \dots, x_n\) is called homogeneous if all the constant terms are zero—that is, if each equation of the system has the form \[a_1x_1 + a_2x_2 + \dots + a_nx_n = 0 \nonumber \] Clearly \(x_1 = 0, x_2 = 0, \dots, x_n = 0\) is a solution to such a system; it is called the trivial solution. Any solution in which at least one variable has a nonzero value is called a nontrivial solution. Our chief goal in this section is to give a useful condition for a homogeneous system to have nontrivial solutions. The following example is instructive.
Show that the following homogeneous system has nontrivial solutions. \[\begin{aligned} x_1 - x_2 + 2x_3 - x_4 &= 0 \\ 2x_1 + 2x_2 \phantom{{}+2x_3} + x_4 &= 0 \\ 3x_1 + x_2 + 2x_3 - x_4 &= 0 \end{aligned} \nonumber \]
The reduction of the augmented matrix to reduced row-echelon form is outlined below. \[\left[ \begin{array}{rrrr|r} 1 & -1 & 2 & -1 & 0 \\ 2 & 2 & 0 & 1 & 0 \\ 3 & 1 & 2 & -1 & 0 \end{array} \right] \rightarrow \left[ \begin{array}{rrrr|r} 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array} \right] \nonumber \] The leading variables are \(x_1\), \(x_2\), and \(x_4\), so \(x_3\) is assigned as a parameter, say \(x_3 = t\). Then the general solution is \(x_1 = -t\), \(x_2 = t\), \(x_3 = t\), \(x_4 = 0\). Hence, taking \(t = 1\) (say), we get the nontrivial solution \(x_1 = -1\), \(x_2 = 1\), \(x_3 = 1\), \(x_4 = 0\).
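A reduction like this is easy to check with a computer algebra system. The following is a minimal Python sketch using SymPy, assuming the coefficient matrix written above; since the system is homogeneous, the column of zero constants can be omitted.

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous system above (the constants are all
# zero, so the augmented column adds no information and is left out).
A = Matrix([[1, -1, 2, -1],
            [2,  2, 0,  1],
            [3,  1, 2, -1]])

R, pivots = A.rref()
print(R)                 # the reduced row-echelon form shown above
print(pivots)            # (0, 1, 3): x1, x2, x4 are leading, x3 is a parameter

# nullspace() returns one solution per parameter; any nonzero multiple of it
# is a nontrivial solution (here, t = 1 gives x = (-1, 1, 1, 0)).
print(A.nullspace())     # [Matrix([[-1], [1], [1], [0]])]
```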
The existence of a nontrivial solution in Example \(\PageIndex{1}\) is ensured by the presence of a parameter in the solution. This is due to the fact that there is a nonleading variable (\(x_3\) in this case). But there must be a nonleading variable here because there are four variables and only three equations (and hence at most three leading variables). This discussion generalizes to a proof of the following fundamental theorem.
If a homogeneous system of linear equations has more variables than equations, then it has a nontrivial solution (in fact, infinitely many).
Proof. Suppose there are \(m\) equations in \(n\) variables where \(n > m\), and let \(R\) denote the reduced row-echelon form of the augmented matrix. If there are \(r\) leading variables, there are \(n - r\) nonleading variables, and so \(n - r\) parameters. Hence, it suffices to show that \(r < n\). But \(r \leq m\) because \(R\) has \(r\) leading 1s and \(m\) rows, and \(m < n\) by hypothesis. So \(r \leq m < n\), which gives \(r < n\).

Note that the converse of Theorem \(\PageIndex{1}\) is not true: if a homogeneous system has nontrivial solutions, it need not have more variables than equations (the system \(x_1 + x_2 = 0\), \(2x_1 + 2x_2 = 0\) has nontrivial solutions but \(m = 2 = n\)).

Theorem \(\PageIndex{1}\) is very useful in applications. The next example provides an illustration from geometry.
We call the graph of an equation \(ax^2 + bxy + cy^2 + dx + ey + f = 0\) a conic if the numbers \(a\), \(b\), and \(c\) are not all zero. Show that there is at least one conic through any five points in the plane that are not all on a line.
Let the coordinates of the five points be \((p_1, q_1)\), \((p_2, q_2)\), \((p_3, q_3)\), \((p_4, q_4)\), and \((p_5, q_5)\). The graph of \(ax^2 + bxy + cy^2 + dx + ey + f = 0\) passes through \((p_i, q_i)\) if \[ap_i^2 + bp_iq_i + cq_i^2 + dp_i + eq_i + f = 0 \nonumber \] This gives five equations, one for each \(i\), linear in the six variables \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\). Hence, there is a nontrivial solution by Theorem \(\PageIndex{1}\). If \(a = b = c = 0\), the five points all lie on the line with equation \(dx + ey + f = 0\), contrary to assumption. Hence, one of \(a\), \(b\), \(c\) is nonzero.
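The argument is constructive, so it can be carried out by machine: form the \(5 \times 6\) coefficient matrix of the five equations and take any nonzero vector in its null space. The following is a minimal Python (SymPy) sketch; the five points are made up purely for illustration and are not collinear, so the resulting coefficients have one of \(a\), \(b\), \(c\) nonzero.

```python
from sympy import Matrix, symbols

# Five illustrative points, not all on one line (made-up data).
points = [(0, 0), (1, 0), (0, 1), (1, 2), (2, 1)]

# One equation a*p^2 + b*p*q + c*q^2 + d*p + e*q + f = 0 for each point:
# a homogeneous system of 5 equations in the 6 unknowns a, b, c, d, e, f.
A = Matrix([[p**2, p*q, q**2, p, q, 1] for p, q in points])

# Six variables but only five equations, so a nontrivial solution exists;
# take any nonzero vector in the null space as the conic's coefficients.
a, b, c, d, e, f = A.nullspace()[0]

x, y = symbols("x y")
conic = a*x**2 + b*x*y + c*y**2 + d*x + e*y + f
print(conic)                                           # the conic's equation
print([conic.subs({x: p, y: q}) for p, q in points])   # [0, 0, 0, 0, 0]
```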
As for rows, two columns are regarded as equal if they have the same number of entries and corresponding entries are the same. Let \(\mathbf{x}\) and \(\mathbf{y}\) be columns with the same number of entries. As for elementary row operations, their sum \(\mathbf{x} + \mathbf{y}\) is obtained by adding corresponding entries and, if \(k\) is a number, the scalar product \(k\mathbf{x}\) is defined by multiplying each entry of \(\mathbf{x}\) by \(k\). More precisely: \[\mbox{if } \mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right] \mbox{ and } \mathbf{y} = \left[ \begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array} \right] \mbox{ then } \mathbf{x} + \mathbf{y} = \left[ \begin{array}{c} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_n + y_n \end{array} \right] \mbox{ and } k\mathbf{x} = \left[ \begin{array}{c} kx_1 \\ kx_2 \\ \vdots \\ kx_n \end{array} \right]. \nonumber \] A sum of scalar multiples of several columns is called a linear combination of these columns. For example, \(s\mathbf{x} + t\mathbf{y}\) is a linear combination of \(\mathbf{x}\) and \(\mathbf{y}\) for any choice of numbers \(s\) and \(t\).
If \(\mathbf{x} = \left[ \begin{array}{r} 3 \\ -1 \\ 2 \end{array} \right]\) and \(\mathbf{y} = \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right]\), determine whether \(\mathbf{v} = \left[ \begin{array}{r} 2 \\ -1 \\ 1 \end{array} \right]\) and \(\mathbf{w} = \left[ \begin{array}{r} 1 \\ 1 \\ 1 \end{array} \right]\) are linear combinations of \(\mathbf{x}\) and \(\mathbf{y}\).

For \(\mathbf{v}\), we must determine whether numbers \(s\) and \(t\) exist such that \(\mathbf{v} = s\mathbf{x} + t\mathbf{y}\), that is, whether \[\left[ \begin{array}{r} 2 \\ -1 \\ 1 \end{array} \right] = s \left[ \begin{array}{r} 3 \\ -1 \\ 2 \end{array} \right] + t \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 3s + t \\ -s \\ 2s + t \end{array} \right] \nonumber \] Equating corresponding entries gives a system of linear equations \(3s + t = 2\), \(-s = -1\), and \(2s + t = 1\) for \(s\) and \(t\). Gaussian elimination gives \(s = 1\) and \(t = -1\), so \(\mathbf{v} = \mathbf{x} - \mathbf{y}\) is indeed a linear combination of \(\mathbf{x}\) and \(\mathbf{y}\).

For \(\mathbf{w}\), we look in the same way for \(s\) and \(t\) such that \(\mathbf{w} = s\mathbf{x} + t\mathbf{y}\); equating entries now gives \(3s + t = 1\), \(-s = 1\), and \(2s + t = 1\). The second equation forces \(s = -1\), so the first gives \(t = 4\); but then \(2s + t = 2 \neq 1\), so this system has no solution. Hence \(\mathbf{w}\) is not a linear combination of \(\mathbf{x}\) and \(\mathbf{y}\).
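Deciding whether a column is a linear combination of given columns amounts to solving a small linear system, so the check is easy to automate. Below is a minimal Python (SymPy) sketch using the vectors of this example; the helper name as_combination is introduced here only for illustration.

```python
from sympy import Matrix

x = Matrix([3, -1, 2])
y = Matrix([1, 0, 1])

def as_combination(v):
    """Row-reduce the augmented matrix [x y | v] to solve s*x + t*y = v."""
    R, pivots = Matrix.hstack(x, y, v).rref()
    if 2 in pivots:              # a leading 1 in the last column: no solution
        return None
    return R[0, 2], R[1, 2]      # s and t (x and y are independent here)

print(as_combination(Matrix([2, -1, 1])))   # (1, -1), so v = x - y
print(as_combination(Matrix([1, 1, 1])))    # None: w is not a linear combination
```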
Our interest in linear combinations comes from the fact that they provide one of the best ways to describe the general solution of a homogeneous system of linear equations. When solving such a system with \(n\) variables \(x_1, x_2, \dots, x_n\), write the variables as a column matrix: \(\mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right]\). The trivial solution is denoted \(\mathbf{0}\). As an illustration, the general solution in Example \(\PageIndex{1}\) is \(x_1 = -t\), \(x_2 = t\), \(x_3 = t\), and \(x_4 = 0\), with \(t\) a parameter, and we would now express this by saying that the general solution is \(\mathbf{x} = \left[ \begin{array}{r} -t \\ t \\ t \\ 0 \end{array} \right]\), where \(t\) is arbitrary.
Now let \(\mathbf{x}\) and \(\mathbf{y}\) be two solutions to a homogeneous system with \(n\) variables. Then any linear combination \(s\mathbf{x} + t\mathbf{y}\) of these solutions turns out to be again a solution to the system. More generally, any linear combination of solutions to a homogeneous system is again a solution. In fact, suppose that a typical equation in the system is \(a_1x_1 + a_2x_2 + \dots + a_nx_n = 0\), and that \[\mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right] \quad \mbox{and} \quad \mathbf{y} = \left[ \begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array} \right] \nonumber \] are both solutions, so that \(a_1x_1 + a_2x_2 + \dots + a_nx_n = 0\) and \(a_1y_1 + a_2y_2 + \dots + a_ny_n = 0\). Then \(s\mathbf{x} + t\mathbf{y}\) satisfies the same equation because \[a_1(sx_1 + ty_1) + \dots + a_n(sx_n + ty_n) = s(a_1x_1 + \dots + a_nx_n) + t(a_1y_1 + \dots + a_ny_n) = s \cdot 0 + t \cdot 0 = 0 \nonumber \] A similar argument works for linear combinations of more than two solutions. The remarkable thing is that every solution to a homogeneous system is a linear combination of certain particular solutions and, in fact, these particular solutions are easily computed using the gaussian algorithm, as the next example illustrates.
Solve the homogeneous system with coefficient matrix \[A = \left[ \begin{array}{rrrr} 1 & -2 & 3 & -2 \\ -3 & 6 & 1 & 0 \\ -2 & 4 & 4 & -2 \end{array} \right] \nonumber \]
The reduction of the augmented matrix to reduced form is \[\left[ \begin{array}{rrrr|r} 1 & -2 & 3 & -2 & 0 \\ -3 & 6 & 1 & 0 & 0 \\ -2 & 4 & 4 & -2 & 0 \end{array} \right] \rightarrow \left[ \begin{array}{rrrr|r} 1 & -2 & 0 & -\frac{1}{5} & 0 \\ 0 & 0 & 1 & -\frac{3}{5} & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array} \right] \nonumber \] so the solutions are \(x_1 = 2s + \frac{1}{5}t\), \(x_2 = s\), \(x_3 = \frac{3}{5}t\), and \(x_4 = t\), where \(s\) and \(t\) are parameters. In matrix form, the general solution is \[\mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array} \right] = \left[ \begin{array}{c} 2s + \frac{1}{5}t \\ s \\ \frac{3}{5}t \\ t \end{array} \right] = s \left[ \begin{array}{c} 2 \\ 1 \\ 0 \\ 0 \end{array} \right] + t \left[ \begin{array}{c} \frac{1}{5} \\ 0 \\ \frac{3}{5} \\ 1 \end{array} \right] \nonumber \] Hence \(\mathbf{x}_1 = \left[ \begin{array}{c} 2 \\ 1 \\ 0 \\ 0 \end{array} \right]\) and \(\mathbf{x}_2 = \left[ \begin{array}{c} \frac{1}{5} \\ 0 \\ \frac{3}{5} \\ 1 \end{array} \right]\) are particular solutions, and every solution is the linear combination \(\mathbf{x} = s\mathbf{x}_1 + t\mathbf{x}_2\) of these two.
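The observation above, that any linear combination of solutions is again a solution, can be verified directly for this system. A minimal Python (SymPy) sketch, assuming the coefficient matrix \(A\) of this example:

```python
from sympy import Matrix, symbols

s, t = symbols("s t")

# Coefficient matrix from the example above.
A = Matrix([[ 1, -2, 3, -2],
            [-3,  6, 1,  0],
            [-2,  4, 4, -2]])

x1, x2 = A.nullspace()    # the two particular solutions found above
combo = s*x1 + t*x2       # a general linear combination of them

# A*(s*x1 + t*x2) = s*(A*x1) + t*(A*x2) = 0 for every s and t,
# so every such combination is again a solution of Ax = 0.
print(A * combo)          # Matrix([[0], [0], [0]])
```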
The gaussian algorithm systematically produces solutions to any homogeneous linear system, called basic solutions, one for every parameter.
Moreover, the algorithm gives a routine way to express every solution as a linear combination of basic solutions as in Example \(\PageIndex{4}\), where the general solution \(\mathbf{x}\) becomes \[\mathbf{x} = s \left[ \begin{array}{c} 2 \\ 1 \\ 0 \\ 0 \end{array} \right] + t \left[ \begin{array}{c} \frac{1}{5} \\ 0 \\ \frac{3}{5} \\ 1 \end{array} \right] = s \left[ \begin{array}{c} 2 \\ 1 \\ 0 \\ 0 \end{array} \right] + \frac{1}{5}t \left[ \begin{array}{c} 1 \\ 0 \\ 3 \\ 5 \end{array} \right] \nonumber \] Hence, by introducing a new parameter \(r = t/5\), we can multiply the original basic solution \(\mathbf{x}_2\) by \(5\) and so eliminate fractions. For this reason:
Convention:
Any nonzero scalar multiple of a basic solution will still be called a basic solution.
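As a computational aside, SymPy's nullspace method returns exactly one basic solution per parameter, and the convention just stated lets us rescale them to clear fractions. A minimal sketch, again assuming the coefficient matrix of Example \(\PageIndex{4}\):

```python
from functools import reduce
from sympy import Matrix, lcm

# Coefficient matrix from Example 4 above.
A = Matrix([[ 1, -2, 3, -2],
            [-3,  6, 1,  0],
            [-2,  4, 4, -2]])

basic = A.nullspace()   # one basic solution per parameter
print(basic)            # [Matrix([[2], [1], [0], [0]]), Matrix([[1/5], [0], [3/5], [1]])]

# Any nonzero scalar multiple of a basic solution is still a basic solution,
# so multiplying by the lcm of the denominators clears the fractions.
cleared = [v * reduce(lcm, [entry.q for entry in v]) for v in basic]
print(cleared)          # [Matrix([[2], [1], [0], [0]]), Matrix([[1], [0], [3], [5]])]
```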
In the same way, the gaussian algorithm produces basic solutions to every homogeneous system, one for each parameter (there are no basic solutions if the system has only the trivial solution). Moreover, every solution is given by the algorithm as a linear combination of these basic solutions (as in Example \(\PageIndex{4}\)). If \(A\) has rank \(r\), Theorem 1.2.2 shows that there are exactly \(n - r\) parameters, and so \(n - r\) basic solutions. This proves:

Let \(A\) be an \(m \times n\) matrix of rank \(r\), and consider the homogeneous system in \(n\) variables with \(A\) as coefficient matrix. Then: (1) the system has exactly \(n - r\) basic solutions, one for each parameter; and (2) every solution is a linear combination of these basic solutions.
Find basic solutions of the homogeneous system with coefficient matrix \(A\), and express every solution as a linear combination of the basic solutions, where
\[A = \left[ \begin{array}{rrrrr} 1 & -3 & 0 & 2 & 2 \\ -2 & 6 & 1 & 2 & -5 \\ 3 & -9 & -1 & 0 & 7 \\ -3 & 9 & 2 & 6 & -8 \end{array} \right] \nonumber \]
The reduction of the augmented matrix to reduced row-echelon form is
\[\left[ \begin{array}{rrrrr|r} 1 & -3 & 0 & 2 & 2 & 0 \\ -2 & 6 & 1 & 2 & -5 & 0 \\ 3 & -9 & -1 & 0 & 7 & 0 \\ -3 & 9 & 2 & 6 & -8 & 0 \end{array} \right] \rightarrow \left[ \begin{array}{rrrrr|r} 1 & -3 & 0 & 2 & 2 & 0 \\ 0 & 0 & 1 & 6 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right] \nonumber \]
so the general solution is \(x_1 = 3r - 2s - 2t\), \(x_2 = r\), \(x_3 = -6s + t\), \(x_4 = s\), and \(x_5 = t\) where \(r\), \(s\), and \(t\) are parameters. In matrix form this is
\[\mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{array} \right] = \left[ \begin{array}{c} 3r - 2s - 2t \\ r \\ -6s + t \\ s \\ t \end{array} \right] = r \left[ \begin{array}{r} 3 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right] + s \left[ \begin{array}{r} -2 \\ 0 \\ -6 \\ 1 \\ 0 \end{array} \right] + t \left[ \begin{array}{r} -2 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right] \nonumber \]
Hence basic solutions are
\[\mathbf{x}_1 = \left[ \begin{array}{r} 3 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right], \ \mathbf{x}_2 = \left[ \begin{array}{r} -2 \\ 0 \\ -6 \\ 1 \\ 0 \end{array} \right], \ \mathbf{x}_3 = \left[ \begin{array}{r} -2 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right] \nonumber \]
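The whole computation can be confirmed with a computer algebra system: \(A\) has rank \(2\), so there are \(5 - 2 = 3\) basic solutions, and they agree with the columns displayed above. A minimal Python (SymPy) sketch:

```python
from sympy import Matrix

A = Matrix([[ 1, -3,  0, 2,  2],
            [-2,  6,  1, 2, -5],
            [ 3, -9, -1, 0,  7],
            [-3,  9,  2, 6, -8]])

print(A.rank())          # 2, so there are 5 - 2 = 3 basic solutions
basic = A.nullspace()    # one basic solution per parameter
for v in basic:
    print(v.T)           # [3, 1, 0, 0, 0], [-2, 0, -6, 1, 0], [-2, 0, 1, 0, 1]
    assert A * v == Matrix([0, 0, 0, 0])   # each really solves Ax = 0
```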
This page titled 1.3: Homogeneous Equations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform.