Ring of polynomial functions

In mathematics, the ring of polynomial functions on a vector space V over an infinite field k gives a coordinate-free analog of a polynomial ring. It is denoted by k[V]. If V has finite dimension and is viewed as an algebraic variety, then k[V] is precisely the coordinate ring of V.

The explicit definition of the ring can be given as follows. If k[t_1, \dots, t_n] is a polynomial ring, then we can view the t_i as coordinate functions on k^n; i.e., t_i(x) = x_i when x = (x_1, \dots, x_n). This suggests the following: given a vector space V, let k[V] be the subring of the ring of all functions V \to k generated by the dual space V^*. If we fix a basis for V and write t_i for its dual basis, then k[V] consists of polynomials in the t_i; it is a polynomial ring.
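For instance, here is a minimal sketch (Python with the sympy library; the names f and evaluate are purely illustrative) that realizes the t_i as coordinate functions on k^3, with k the rationals, and evaluates a polynomial function built from them:

import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')   # the dual basis, viewed as coordinate functions

# An element of k[V]: a polynomial in the coordinate functions.
f = 2*t1**2*t3 - t2 + 5

def evaluate(poly, x):
    # t_i(x) = x_i: substitute the coordinates of x into the t_i
    return poly.subs({t1: x[0], t2: x[1], t3: x[2]})

print(evaluate(f, (1, 0, 2)))   # 2*1**2*2 - 0 + 5 = 9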

In applications, one also defines k[V] when V is defined over some subfield of k (e.g., k is the complex field and V is a real vector space). The same definition still applies.

Symmetric multilinear maps

Let k be an infinite field of characteristic zero (or at least of very large characteristic) and V a finite-dimensional vector space.

Let S^q(V) denote the vector space of multilinear functionals \textstyle \lambda: \prod_1^q V \to k that are symmetric, i.e., \lambda(v_1, \dots, v_q) is unchanged under any permutation of the v_i.

Any λ in S^q(V) gives rise to a homogeneous polynomial function f of degree q: we simply let f(v) = \lambda(v, \dots, v). To see that f is a polynomial function, choose a basis e_i, \, 1 \le i \le n, of V and let t_i denote its dual basis. Then

\lambda(v_1, \dots, v_q) = \sum_{i_1, \dots, i_q = 1}^n \lambda(e_{i_1}, \dots, e_{i_q}) t_{i_1}(v_1) \cdots t_{i_q}(v_q),

and setting v_1 = \cdots = v_q = v shows that f is a polynomial in the t_i.

Thus, there is a well-defined linear map:

\phi: S^q(V) \to k[V]_q, \, \phi(\lambda)(v) = \lambda(v, \cdots, v).
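As a minimal sketch of φ in the case q = 2, n = 2 (Python with sympy; the particular matrix and the names are illustrative): a symmetric bilinear functional λ is determined by the symmetric matrix of its values λ(e_i, e_j), and restricting to the diagonal yields the homogeneous quadratic polynomial φ(λ):

import sympy as sp

# λ(e_i, e_j) as a symmetric matrix (the symmetry of λ is assumed)
lam = sp.Matrix([[1, 3],
                 [3, 2]])

x1, x2 = sp.symbols('x1 x2')
v = sp.Matrix([x1, x2])

f = sp.expand((v.T * lam * v)[0])   # φ(λ)(v) = λ(v, v)
print(f)                            # x1**2 + 6*x1*x2 + 2*x2**2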

We show it is an isomorphism. Choosing a basis as before, any homogeneous polynomial function f of degree q can be written as:

f = \sum_{i_1, \dots, i_q = 1}^n a_{i_1 \cdots i_q} t_{i_1} \cdots t_{i_q}

where a_{i_1 \cdots i_q} are symmetric in i_1, \dots, i_q. Let

\psi(f)(v_1, \dots, v_q) = \sum_{i_1, \cdots, i_q = 1}^n a_{i_1 \cdots i_q} t_{i_1}(v_1) \cdots t_{i_q}(v_q).

Clearly, φ ∘ ψ is the identity; in particular, φ is surjective. To see φ is injective, suppose φ(λ) = 0. Consider

\phi(\lambda)(t_1 v_1 + \cdots + t_q v_q) = \lambda(t_1 v_1 + \cdots + t_q v_q, \dots, t_1 v_1 + \cdots + t_q v_q),

which is zero for all scalars t_1, \dots, t_q (here the t_i are scalars, not the coordinate functions above). The coefficient of t_1 t_2 \cdots t_q in the above expression is q! times \lambda(v_1, \dots, v_q); it follows that \lambda = 0.

Note: φ is independent of the choice of basis; so the above proof shows that ψ is also independent of the choice of basis, a fact that is not a priori obvious.

Example: A symmetric bilinear functional gives rise to a quadratic form in a unique way, and every quadratic form arises in this way.
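For q = 2 (and characteristic not 2), the coefficient formula for ψ reduces to the classical polarization identity ψ(f)(v, w) = (f(v + w) - f(v) - f(w))/2. A minimal sketch continuing the previous example (Python with sympy; the names are illustrative) recovers the symmetric bilinear functional from the quadratic form:

import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

def f(v):
    # the quadratic form x1**2 + 6*x1*x2 + 2*x2**2 from the previous sketch
    return v[0]**2 + 6*v[0]*v[1] + 2*v[1]**2

v = (x1, x2)
w = (y1, y2)

# ψ(f)(v, w) = (f(v + w) - f(v) - f(w)) / 2
bilinear = sp.expand((f((x1 + y1, x2 + y2)) - f(v) - f(w)) / 2)
print(bilinear)   # x1*y1 + 3*x1*y2 + 3*x2*y1 + 2*x2*y2

The output is λ(v, w) for the matrix λ of the previous sketch, illustrating that ψ inverts φ.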

Taylor series expansion

Locally, the partial derivatives of a smooth function can be obtained from its Taylor series expansion and, conversely, the function can be recovered from the series expansion. This fact continues to hold for polynomial functions on a vector space. If f is in k[V], then, for x, y in V, we write

f(x + y) = \sum_{n=0}^{\infty} g_n(x, y)

where the g_n(x, y) are homogeneous of degree n in y and only finitely many of them are nonzero. We then let

(P_y f)(x) = g_1(x, y),

resulting in the linear endomorphism P_y of k[V]. It is called the polarization operator. We then have, as promised:

Theorem — For each f in k[V] and x, y in V,

f(x + y) = \sum_{n=0}^{\infty} {1 \over n!} P_y^n f(x).

Proof: We first note that (P_y f)(x) is the coefficient of t in f(x + t y); in other words, since g_0(x, y) = g_0(x, 0) = f(x),

P_y f (x) = \left . {d \over dt} \right |_{t=0} f(x + ty)

where the right-hand side is, by definition,

\left . {f(x+ty) - f(x) \over t} \right |_{t=0};

this makes sense since f(x+ty) - f(x), as a polynomial in t, is divisible by t.

The theorem follows from this. For example, for n = 2, we have:

P_y^2 f (x) = \left . {\partial \over \partial t_1} \right |_{t_1=0} P_y f(x + t_1 y) = \left . {\partial \over \partial t_1} \right |_{t_1=0} \left . {\partial \over \partial t_2} \right |_{t_2=0} f(x + (t_1 + t_2) y) = 2! g_2(x, y).

The general case is similar. \square
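Both the operator P_y and the theorem can be checked symbolically. Here is a minimal sketch (Python with sympy; the sample f and the name P are illustrative) that implements P_y as (d/dt)|_{t=0} f(x + ty) on V = k^2 and verifies the expansion for a cubic f:

import sympy as sp

x1, x2, y1, y2, t = sp.symbols('x1 x2 y1 y2 t')

f = x1**3 + x1*x2   # a sample polynomial function on k^2

def P(g):
    # the polarization operator P_y: differentiate g(x + t*y) in t at t = 0
    shifted = g.subs({x1: x1 + t*y1, x2: x2 + t*y2}, simultaneous=True)
    return sp.diff(shifted, t).subs(t, 0)

# Check the theorem: f(x + y) = sum_n P_y^n f(x) / n!
lhs = sp.expand(f.subs({x1: x1 + y1, x2: x2 + y2}, simultaneous=True))

rhs, term = f, f
for n in range(1, 4):   # f has degree 3, so higher terms vanish
    term = P(term)
    rhs = rhs + term / sp.factorial(n)

print(sp.expand(lhs - rhs))   # 0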

Operator product algebra

When the polynomials are valued not in a field k but in some algebra, one may define additional structure. Thus, for example, one may consider the ring of functions over GL(n,m) instead of over k = GL(1,m). In this case, one may impose an additional axiom.

The operator product algebra is an associative algebra of the form

A^i(x)B^j(y) = \sum_k f^{ij}_k (x,y,z) C^k(z)

The structure constants f^{ij}_k (x,y,z) are required to be single-valued functions rather than sections of some vector bundle. The fields (or operators) A^i(x) are required to span the ring of functions. In practical calculations, it is usually required that the sums be analytic within some radius of convergence, typically of radius |x-y|. Thus, the ring of functions can be taken to be the ring of polynomial functions.

The above can be considered to be an additional requirement imposed on the ring; it is sometimes called the bootstrap. In physics, a special case of the operator product algebra is known as the operator product expansion.
