In the last post, we met the strong notion of a Galois extension – we saw **5** different definitions of it! Before I move on to the **fundamental theorem of Galois theory**, I want to present **Artin's lemma** – a strong lemma that will turn out to be pretty handy in the proof of the fundamental theorem. Ok, let's begin:

## The lemma

Suppose that G is a finite group of automorphisms of some field K. Then:

[K:K^G]\leq|G|

This lemma states that if you pick any such group G of automorphisms of K and map it to its fixed field K^G, then the degree of K over the fixed field will be smaller than or equal to the order of the group.

For example, if G is a group of size 4, then the degree of K as a vector space over K^G is at most 4!
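To make this concrete, here is a minimal Python sketch (my own toy example, not part of the lemma) using K = Q(√2), where a pair (a, b) stands for a + b√2, and G = {id, σ} with σ sending √2 to −√2. Here |G| = 2 and [K : K^G] = [Q(√2) : Q] = 2, so the bound holds with equality:

```python
from fractions import Fraction

# Toy model of K = Q(sqrt(2)): a pair (a, b) stands for a + b*sqrt(2), with a, b in Q.
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a + b*sqrt(2))(c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)
    return (x[0]*y[0] + 2*x[1]*y[1], x[0]*y[1] + x[1]*y[0])

def sigma(x):
    # the non-trivial automorphism of K: sqrt(2) -> -sqrt(2)
    return (x[0], -x[1])

F = Fraction
x, y = (F(1), F(3)), (F(2), F(-1))

# sigma respects the field operations, so it really is an automorphism:
assert sigma(add(x, y)) == add(sigma(x), sigma(y))
assert sigma(mul(x, y)) == mul(sigma(x), sigma(y))

# G = {id, sigma} has order 2; its fixed field K^G is Q (the pairs with b = 0),
# and [K : K^G] = 2 = |G|, so here the bound [K : K^G] <= |G| is an equality.
assert sigma((F(5), F(0))) == (F(5), F(0))   # rationals are fixed by sigma
assert sigma((F(0), F(1))) != (F(0), F(1))   # sqrt(2) is not
```

This pair representation and the helper names are just illustration choices; any faithful model of Q(√2) would do.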

You can think about this lemma as telling you that:

*“If you pick a big group, it can be mapped to a small field”*

(K has higher degrees over smaller fields…)

How can we prove such a statement? Well… basically, we want to show that the degree of the extension is **bounded**. But what is the degree of the extension? It is just the number of elements in a basis of the vector space. So we need to show that a basis of K as a vector space over K^G has no more than |G| elements in it.

But this is a problem that anyone who has studied linear algebra (should) know how to approach:

We pick more than |G| elements in the vector space, and show that they are linearly dependent. So, let's do it:

## The proof

Suppose that |G| = n and let v_1, v_2, …, v_m be some arbitrary elements of K, where m > n. We want to show that those elements are linearly dependent over K^G. That is, we **want** to find α_1, …, α_m in K^G that are not all zero such that:

\sum_{i=1}^m\alpha_iv_i=\alpha_1v_1+\alpha_2v_2+\dots+\alpha_mv_m=0

The trick here is to not only look at this equation, but to create a **system** of equations; since m > n, a homogeneous system with more variables than equations will **have** non-trivial solutions (over K). How can we create such a system? Easy – just use the automorphisms in G. Ok, so let's create n equations, one for each automorphism σ_j in G. The equations will be:

\sum_{i=1}^m\alpha_i\cdot \sigma_j(v_i)=0 \ \ \ , \ \ \ \alpha_i\in K

[One thing that bothered me when I first saw this proof was – "why can't we just look for the α_i-s in K^G to begin with?" Well, the point is that the α_i-s are **not** given coefficients, they are the **variables**, and our goal is to prove that they are in fact in K^G.]

Since the identity automorphism is in G, the original equation is one of the equations in this system.

The **variables** in the equations are α_1, α_2, …, α_m. Let's write this system in matrix form:

\left(\begin{array}{cccc} \sigma_{1}(v_{1}) & \sigma_{1}(v_{2}) & \cdots & \sigma_{1}(v_{m})\\ \sigma_{2}(v_{1}) & \ddots & & \sigma_{2}(v_{m})\\ \vdots & & \ddots & \vdots\\ \sigma_{n}(v_{1}) & \sigma_{n}(v_{2}) & \cdots & \sigma_{n}(v_{m}) \end{array}\right)\left(\begin{array}{c} \alpha_{1}\\ \alpha_{2}\\ \vdots\\ \alpha_{m} \end{array}\right)=\left(\begin{array}{c} 0\\ 0\\ \vdots\\ 0 \end{array}\right)
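As a sanity check, here is a small Python sketch of this system in the toy case K = Q(√2), G = {id, σ} (so n = 2), with the m = 3 elements v_1 = 1, v_2 = √2, v_3 = 1 + √2 – the elements and the candidate solution α = (−1, −1, 1) are my own choices for illustration:

```python
from fractions import Fraction

# K = Q(sqrt(2)) modeled as pairs (a, b) meaning a + b*sqrt(2).
def add(x, y): return (x[0] + y[0], x[1] + y[1])
def mul(x, y): return (x[0]*y[0] + 2*x[1]*y[1], x[0]*y[1] + x[1]*y[0])
def sigma(x):  return (x[0], -x[1])   # sqrt(2) -> -sqrt(2)

F = Fraction
G = [lambda x: x, sigma]                              # n = |G| = 2 automorphisms
v = [(F(1), F(0)), (F(0), F(1)), (F(1), F(1))]        # v1 = 1, v2 = sqrt(2), v3 = 1 + sqrt(2); m = 3 > n
alpha = [(F(-1), F(0)), (F(-1), F(0)), (F(1), F(0))]  # candidate solution: -v1 - v2 + v3 = 0

# Check every equation sum_i alpha_i * sigma_j(v_i) = 0, one per automorphism:
for auto in G:
    total = (F(0), F(0))
    for a, vi in zip(alpha, v):
        total = add(total, mul(a, auto(vi)))
    assert total == (F(0), F(0))
# Here the alpha_i happen to lie in Q = K^G already, exhibiting the
# linear dependence of v1, v2, v3 over the fixed field.
```

In this tiny example the solution already sits in K^G; in general the proof below is what forces a solution into the fixed field.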

Now, let x = (x_1, …, x_m) be a **non-zero** solution to this system in which the number of elements that satisfy x_i ≠ 0 is **minimal** (among all the non-zero solutions).

Note that the x_i-s are all in K; my goal is to prove that, after a little rearrangement, those x_i-s are in fact elements of K^G. This will yield a non-trivial linear combination over K^G, and that's exactly what we're looking for.

Let's rearrange the x_i-s such that the first entry is **not** 0 (there is no problem with that – we just rename the v_i-s, which amounts to swapping columns of the matrix). We can also divide all the entries by x_1 to get a new solution – (1, x_2/x_1, …, x_m/x_1). This solution satisfies the same property as the previous one (minimality with respect to the number of non-zero elements).
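Continuing the toy example K = Q(√2) (again, my own illustration), the normalization step looks like this: take a solution whose entries lie in K but not in K^G, and divide by the first entry. In this tiny case the normalized entries already land in the fixed field – in general, that is exactly what the rest of the proof establishes:

```python
from fractions import Fraction

# K = Q(sqrt(2)) modeled as pairs (a, b) meaning a + b*sqrt(2).
def mul(x, y): return (x[0]*y[0] + 2*x[1]*y[1], x[0]*y[1] + x[1]*y[0])
def sigma(x):  return (x[0], -x[1])   # sqrt(2) -> -sqrt(2)

def inv(x):
    # 1/(a + b*sqrt(2)) = (a - b*sqrt(2)) / (a^2 - 2b^2); the denominator
    # is non-zero for every non-zero x, since sqrt(2) is irrational.
    d = x[0]*x[0] - 2*x[1]*x[1]
    return (x[0]/d, -x[1]/d)

F = Fraction
# A solution with entries in K but NOT in K^G: sqrt(2) * (-1, -1, 1)
x = [(F(0), F(-1)), (F(0), F(-1)), (F(0), F(1))]
normalized = [mul(inv(x[0]), xi) for xi in x]

assert normalized[0] == (F(1), F(0))              # the first entry becomes 1
# Every normalized entry is fixed by sigma, i.e. lies in K^G = Q:
assert all(sigma(xi) == xi for xi in normalized)
```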

Now, we can pick some σ in G. I claim that the vector:

(\sigma(x_1),\sigma(x_2),\sigma(x_3),\dots,\sigma(x_m))

is **also** a solution to the system. Why? For every j:

\sum_{i=1}^m\sigma(x_i)\cdot \sigma_j(v_i)=\sigma(\sum_{i=1}^mx_i\cdot \sigma^{-1}\sigma_j(v_i))

However, σ^{-1}σ_j is also an automorphism in G, so σ^{-1}σ_j = σ_l for some l. Thus:

\sum_{i=1}^m\sigma(x_i)\cdot \sigma_j(v_i)=\sigma(\sum_{i=1}^mx_i\cdot \sigma^{-1}\sigma_j(v_i))=\sigma(\overbrace{\sum_{i=1}^mx_i\cdot \sigma_l(v_i)}^0)=\sigma(0)=0
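This step can also be checked numerically in the toy example (again K = Q(√2) and G = {id, σ}, my own setup): applying σ entrywise to a solution yields another solution, because σ merely permutes the equations of the system:

```python
from fractions import Fraction

# K = Q(sqrt(2)) modeled as pairs (a, b) meaning a + b*sqrt(2).
def add(x, y): return (x[0] + y[0], x[1] + y[1])
def mul(x, y): return (x[0]*y[0] + 2*x[1]*y[1], x[0]*y[1] + x[1]*y[0])
def sigma(x):  return (x[0], -x[1])   # sqrt(2) -> -sqrt(2)

F = Fraction
G = [lambda x: x, sigma]
v = [(F(1), F(0)), (F(0), F(1)), (F(1), F(1))]    # v1 = 1, v2 = sqrt(2), v3 = 1 + sqrt(2)

def is_solution(x):
    # does x satisfy sum_i x_i * sigma_j(v_i) = 0 for every sigma_j in G?
    for auto in G:
        total = (F(0), F(0))
        for xi, vi in zip(x, v):
            total = add(total, mul(xi, auto(vi)))
        if total != (F(0), F(0)):
            return False
    return True

x = [(F(0), F(-1)), (F(0), F(-1)), (F(0), F(1))]  # sqrt(2) * (-1, -1, 1)
assert is_solution(x)
assert is_solution([sigma(xi) for xi in x])       # sigma entrywise: again a solution
```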

Moreover, since x_1 = 1, we must have σ(x_1) = σ(1) = 1. If we subtract the solution (x_1, …, x_m) from it, we will get a **new solution** (why? the system is homogeneous, so the difference of two solutions is again a solution):

(\sigma(x_1),\sigma(x_2),\sigma(x_3),\dots,\sigma(x_m))-(x_1,x_2,x_3,\dots,x_m)

=(\overbrace{\sigma(x_1)-x_1}^{1-1=0},\sigma(x_2)-x_2,\sigma(x_3)-x_3,\dots,\sigma(x_m)-x_m)

=(0,\sigma(x_2)-x_2,\sigma(x_3)-x_3,\dots,\sigma(x_m)-x_m)

But this new solution has **fewer** non-zero elements than x, so the only way this is possible is if the new solution is the **zero vector**:

(0,\sigma(x_2)-x_2,\sigma(x_3)-x_3,\dots,\sigma(x_m)-x_m)=(0,\dots,0)

That is, σ(x_i) = x_i for every i. And since σ was an arbitrary element of G, this is true for every σ in G. In other words: x_i is in K^G for every i.

So as it turns out, this solution is in fact a non-trivial linear combination, over K^G, of arbitrary elements of K – and that's exactly what we wanted. We can now conclude that the degree of the extension must be smaller than or equal to the order of the group:

[K:K^G]\leq |G|

## Summary

Even though this post was pretty short and only discussed one simple **lemma**, the proof turned out to be pretty long – but still a really nice one. It combines tools from **basic linear algebra** with arguments related to automorphisms and Galois theory. I find this proof really elegant!

In the next post, we are going to prove **the fundamental theorem of Galois theory** – this will give us a full understanding of the relation between field extensions and groups. After proving this theorem, we are going to see some great results, and have some fun with our brand new tools!