A system of linear equations can be solved using Gaussian elimination. For example, I have three equations:

\(
\begin{matrix}
2u&+v&+w&=5\\
4u&-6v& &=-2 \\
-2u&+7v&+2w&=9
\end{matrix}\)

The problem is to find the unknown values of \(u\), \(v\) and \(w\) that satisfy all three equations.

Gaussian Elimination

I start by subtracting multiples of the first equation from the other equations.

The coefficient \(2\) is the first pivot. At each step, elimination finds the right multiplier \(l\) by dividing the coefficient below the pivot by the pivot, so that the corresponding variable is eliminated from that equation. To eliminate \(u\) from \(4u-6v=-2\), I must subtract \(2\) times the first equation, \(2u+v+w=5\), from it; the multiplier is \(l=4/2=2\). Similarly, to eliminate \(u\) from \(-2u+7v+2w=9\), the multiplier is \(l=-2/2=-1\).
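
Written out for the second equation, the subtraction with \(l=2\) is:

\(
(4u - 6v) - 2(2u + v + w) = -2 - 2(5)
\;\;\Rightarrow\;\;
-8v - 2w = -12
\)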

So, the operations

1. Subtract \(2\) times equation 1 from equation 2.
2. Subtract \(-1\) times equation 1 from equation 3.

result in:

\(
\begin{matrix}
2u&+v&+w&=5\\
&-8v&-2w &=-12 \\
&+8v&+3w&=14
\end{matrix}\)

The pivot for the second stage of elimination is \(-8\). I now ignore the first equation. To eliminate \(v\), a multiple of the second equation will be subtracted from the third equation; the multiplier is \(l=8/(-8)=-1\).

3. Subtract \(l=-1\) times equation 2 from equation 3.

\(
\begin{matrix}
2u&+v&+w&=5\\
&-8v&-2w &=-12 \\
& &+1w&=2
\end{matrix}\)

Backward Substitution

The elimination process is now complete in the forward direction, and I am left with a triangular system. The last equation gives \(w=2\). Substituting into the second equation gives \(v=1\), and the first equation then gives \(u=1\). This process is called backward substitution.
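
Spelled out, the back-substitution steps are:

\(
\begin{matrix}
w = 2\\
-8v - 2(2) = -12 \;\Rightarrow\; v = 1\\
2u + 1 + 2 = 5 \;\Rightarrow\; u = 1
\end{matrix}\)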

One good way to record the forward elimination steps is to include the right-hand side as an extra column of the coefficient matrix. The result is called the augmented matrix.

\(\left[
\begin{array}{rrr|r}
2 & 1 & 1& 5\\
4 &-6 & 0& -2 \\
-2& 7 & 2& 9
\end{array}\right]\)

Performing \((E_{2}-2E_{1})\rightarrow{E_{2}}\) and \((E_{3}-(-1)E_{1})\rightarrow{E_{3}}\),

\(\left[
\begin{array}{rrr|r}
2 & 1 & 1& 5\\
0 &-8 &-2& -12 \\
0 & 8 & 3& 14
\end{array}\right]\)

Performing \((E_{3}-(-1)E_{2})\rightarrow{E_{3}}\),

\(\left[
\begin{array}{rrr|r}
2 & 1 & 1& 5\\
0 &-8 &-2& -12 \\
0 & 0 & 1& 2
\end{array}\right]\)

The breakdown of elimination

Under what circumstances would the elimination method break down? Something must go wrong in the singular case and something might go wrong in the non-singular case.

With a full set of \(n\) pivots, there is only one solution. The system is non-singular and it is solved by forward elimination and back-substitution. But, if a zero appears in the pivot position, elimination has to stop – either temporarily or permanently. The system might or might not be singular.

If the first coefficient, in the upper left corner, is zero, the elimination of \(u\) from the other equations is impossible. The same is true at every intermediate stage. Note that a zero can appear in a pivot position even if the original coefficient in that place was not zero. Roughly speaking, we do not know whether a zero will appear until we try, by actually going through the elimination process.

In many cases this problem can be cured and elimination can proceed; such a system counts as non-singular, and only the algorithm needs repair. In other cases a breakdown is unavoidable. Those incurable systems are singular: they have no solution or else infinitely many, and a full set of pivots cannot be found.

Example. Non-singular (cured by exchanging equations 2 and 3).

\(\left[
\begin{array}{rrr|r}
1 & 1 & 1& 5\\
2 & 2 & 5& -12 \\
4 & 6 & 8& 2
\end{array}\right]\)

Performing \((E_{2}-(2)E_{1})\rightarrow{E_{2}}\) and \((E_{3}-(4)E_{1})\rightarrow{E_{3}}\),

\(\left[
\begin{array}{rrr|r}
1 & 1 & 1& 5\\
0 & 0 & 3& -22 \\
0 & 2 & 4& -18
\end{array}\right]\)

Exchanging equations 2 and 3,

\(\left[
\begin{array}{rrr|r}
1 & 1 & 1& 5\\
0 & 2 & 4& -18\\
0 & 0 & 3& -22
\end{array}\right]\)

The system is now triangular and back-substitution will solve it.
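
For completeness, back-substitution on this triangular system gives fractional values:

\(
\begin{matrix}
3w = -22 \;\Rightarrow\; w = -\tfrac{22}{3}\\
2v + 4w = -18 \;\Rightarrow\; v = \tfrac{17}{3}\\
u + v + w = 5 \;\Rightarrow\; u = \tfrac{20}{3}
\end{matrix}\)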

Example. Singular (incurable).

\(\left[
\begin{array}{rrr|r}
1 & 1 & 1& 5\\
2 & 2 & 5& -12 \\
4 & 4 & 8& 2
\end{array}\right]
\rightarrow
\left[
\begin{array}{rrr|r}
1 & 1 & 1& 5\\
0 & 0 & 3& -22 \\
0 & 0 & 4& -18
\end{array}\right]
\)

No exchange of equations can produce a nonzero pivot in the second column, and the last two equations, \(3w=-22\) and \(4w=-18\), contradict each other, so the system has no solution.

Implementation of Gaussian Elimination with backward substitution in C++

This is a basic implementation of Gaussian elimination and backward substitution in C++. The matrix \(a\) is the \(n \times (n+1)\) augmented matrix, stored as an array of \(n\) row pointers.

/* Forward elimination on the n x (n+1) augmented matrix a.
   If a zero pivot is found, rows are exchanged; if no nonzero
   pivot exists below it, isSingular is set and elimination stops. */
void gaussElimination(float ** a, int n, bool &isSingular){
	int i,j,k,p;
	float l,pivot;
	float * temp;

	isSingular = false;

	for (i=0;i<n;i++){
		pivot = a[i][i];

		if (pivot == 0){
			/* Look below row i for a nonzero entry in column i. */
			isSingular = true;
			for(p=i+1;p<n;p++){
				if (a[p][i]!=0){
					isSingular = false;
					break;
				}
			}

			/* Exchange row i with row p */
			if(!isSingular){
				temp = *(a+i);
				*(a+i) = *(a+p);
				*(a+p) = temp;
			}
		}

		if(isSingular)
			return;

		pivot = a[i][i];

		for(j=i+1;j<n;j++){

			/* Multiplier l: the entry below the pivot divided by the pivot. */
			l = (a[j][i]/pivot);

			/* Subtract l times row i from row j,
			   including the right-hand-side column k = n. */
			for(k=i;k<=n;k++){
				a[j][k]=a[j][k] - (l * a[i][k]);
			}
		}
	}
}

/* Back-substitution on the triangular system left in a.
   Column n holds the right-hand side; the solution is stored in x. */
void backwardSubstitution(float ** a, float * x, int n){

	for (int i=n-1;i>=0;i--){
		/* Start from the right-hand side of equation i. */
		x[i] = a[i][n];
		/* Subtract the terms with the already-computed unknowns. */
		for(int j=i+1;j<n;j++){
			x[i]= x[i]- a[i][j]*x[j];
		}
		/* Divide by the pivot. */
		x[i]=x[i]/a[i][i];
	}
}
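
To try the routines out, a minimal driver along these lines can be used (this test program is only a sketch and is not part of the routines above). It builds the \(3\times 4\) augmented matrix of the first example and should print \(u=1\), \(v=1\), \(w=2\).

#include <cstdio>

int main(){
	const int n = 3;

	/* Augmented matrix for 2u+v+w=5, 4u-6v=-2, -2u+7v+2w=9. */
	float row0[] = { 2,  1, 1,  5};
	float row1[] = { 4, -6, 0, -2};
	float row2[] = {-2,  7, 2,  9};
	float *a[n] = {row0, row1, row2};

	float x[n];
	bool isSingular = false;

	gaussElimination(a, n, isSingular);
	if (isSingular){
		printf("The system is singular.\n");
		return 1;
	}

	backwardSubstitution(a, x, n);
	printf("u = %g, v = %g, w = %g\n", x[0], x[1], x[2]);
	return 0;
}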