The essential theoretical core is the following:
Given an S-sorted signature <S, Σ> and an S-indexed set X = <X_{s} | s∈S> with X_{s} ∩ Σ_{"",s} = ∅ for all s∈S, we can form a new S-sorted signature <S, Σ(X)> with

Σ(X)_{"",s} = Σ_{"",s} ∪ X_{s}
Σ(X)_{w,s} = Σ_{w,s}, for all w ≠ "".

Let T(X) be the initial <S, Σ(X)>-algebra. We can view T(X) as an <S, Σ>-algebra, T[X], by simply not viewing the elements of X as new constant operators but keeping everything else the same; that is, we take

T[X]_{s} = T(X)_{s} for all s∈S

and

σ_{T[X]} = σ_{T(X)} for all σ∈Σ.

Definition: The <S, Σ>-algebra T[X] is called "the" <S, Σ>-algebra freely generated by X. The <S, Σ(X)>-terms that make up the carriers of T[X] are called derived operators.
end
The <S, Σ>-algebra freely generated by X has the following important property.
Theorem: Let A be any <S, Σ>-algebra. Then for any S-indexed family of mappings, θ = <θ_{s}:X_{s} → A_{s} | s∈S>, there is a unique homomorphism θ^{#}:T[X] → A that extends θ, in the sense that if we let η_{X} be the S-indexed family of inclusion mappings η_{X} = <η_{X,s}:X_{s} → T[X]_{s} | s∈S>, then θ^{#}_{s} ∘ η_{X,s} = θ_{s} for all s∈S (i.e., for each x∈X_{s} ⊆ T[X]_{s}, θ^{#}_{s}(x) = θ_{s}(x)).
Proof: Given θ, extend A to an <S, Σ(X)>-algebra, A+, where for each s∈S and x∈X_{s},

x_{A+} = θ_{s}(x).

Then, since T(X) is the initial <S, Σ(X)>-algebra, there exists a unique homomorphism h:T(X) → A+. But since T[X] is just T(X) with the elements of X no longer treated as constant operators, it follows that U(h) also gives us an <S, Σ>-homomorphism θ^{#}:T[X] → A with the property that θ^{#}_{s} ∘ η_{X,s} = θ_{s} for all s∈S.

The homomorphism is unique, for if ψ:T[X] → A with ψ_{s} ∘ η_{X,s} = θ_{s} for all s∈S, then ψ would give us a homomorphism h′:T(X) → A+, which implies h′ = h by the initiality of T(X); but then U(ψ) = U(h) = U(θ^{#}), so ψ = θ^{#}.
end
The statement of the above theorem can be put a little more neatly if, for any <S, Σ>-algebra A, we define U(A) to be the corresponding S-indexed set of its carriers, and for any homomorphism h:A → B we define U(h) to be the S-indexed mapping U(h) = <h_{s}:A_{s} → B_{s} | s∈S> used to define the homomorphism h. Given this additional notation we can state the theorem as follows:
Let X be an S-indexed set of variables and let η_{X} be the S-indexed family of inclusion mappings η_{X} = <η_{X,s}:X_{s} → T[X]_{s} | s∈S>. Then for any <S, Σ>-algebra, A, and any S-indexed map θ:X → U(A), there is a unique homomorphism θ^{#}:T[X] → A such that U(θ^{#}) ∘ η_{X} = θ. This equality can be expressed by the following commuting diagram.

We now turn to the applications of the above result:
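In LaTeX (using the tikz-cd package) the commuting triangle U(θ^{#}) ∘ η_{X} = θ can be drawn as follows; this is a standard rendering supplied here, not the text's own figure:

```latex
% Requires \usepackage{tikz-cd}
\begin{tikzcd}
X \arrow[r, "\eta_X"] \arrow[dr, "\theta"'] & U(T[X]) \arrow[d, "U(\theta^{\#})"] \\
 & U(A)
\end{tikzcd}
```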
Given a signature <S, Σ>, an S-sorted set of variables X, and an <S, Σ>-algebra A, the above result provides a precise meaning for what it means to interpret a derived operator as a function.
Definition: Given a derived operator t (i.e., an <S, Σ(X)>-term t), then for any <S, Σ>-algebra A, we define the derived operation, t_{A}, to be the function such that, for any assignment, h, of values in A to the elements of X (i.e., any S-indexed family of mappings h = <h_{s}:X_{s} → A_{s} | s∈S>), t_{A}(h) = h^{#}(t).
end
Example: Recall the signature sigP in programming form:

name: sigP
sorts: {int}
operators:
zero: --> int
pred: int --> int
succ: int --> int
add: int.int --> int
mult: int.int --> int
End of signature sigP.

There is only one sort in this signature, so let X be the 1-sorted set X = {x1, x2, x3}. Then a typical <1, sigP(X)>-term, t, would be the following:

t = "add(pred(x2( )), succ(succ(x1( ))))".
Note that we are treating the variables (the elements of X) as <1, sigP(X)>-terms and are thus writing "x1( )" rather than just "x1".
Now say we want to interpret this term in the algebra alg_A where
To convince yourself that everything is on the up-and-up, try some variations:
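The definition of alg_A is not reproduced above, so as an illustrative assumption take alg_A to be the integers with the usual meanings of sigP's operators. Under that assumption, interpreting t as h^{#}(t) and trying variations might look like:

```python
# Evaluate t = add(pred(x2()), succ(succ(x1()))) assuming alg_A is the
# integers with the usual interpretations of sigP's operators.
# (This interpretation and the tuple encoding of terms are assumptions.)

def eval_term(t, h):
    """h^#(t): structural recursion, with h assigning values to variables."""
    ops = {"zero": lambda: 0, "pred": lambda n: n - 1,
           "succ": lambda n: n + 1, "add": lambda m, n: m + n,
           "mult": lambda m, n: m * n}
    if t[0] == "var":
        return h[t[1]]
    return ops[t[0]](*(eval_term(a, h) for a in t[1:]))

t = ("add", ("pred", ("var", "x2")), ("succ", ("succ", ("var", "x1"))))
print(eval_term(t, {"x1": 3, "x2": 5, "x3": 0}))  # pred(5) + succ(succ(3)) = 4 + 5 = 9
```

Varying the assignment h changes the result; varying the term changes the derived operation itself.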
Definition: Given an <S, Σ(X)>-term t and an assignment, θ, of <S, Σ(Y)>-terms to the elements of X, that is, an S-indexed mapping θ:X → U(T[Y]), we define the result of substituting θ(x) for x in t, for all x∈X, to be the <S, Σ(Y)>-term θ^{#}(t). [This is a direct application of the above theorem, but with θ in place of h.]
end
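For one sort, the substitution θ^{#}(t) is the same structural recursion with the term algebra T[Y] as the target, so "evaluating" an operator simply rebuilds it. A sketch (the tuple encoding and names are assumptions):

```python
# Substitution as theta^#: the target algebra is the term algebra T[Y],
# so operator applications are rebuilt rather than computed.
# Terms are ("var", name) or ("op_name", subterm, ...); an assignment
# theta is a dict from variable names to terms.

def substitute(theta, t):
    """Replace each variable x in t by the term theta[x]."""
    if t[0] == "var":
        return theta[t[1]]
    return (t[0],) + tuple(substitute(theta, a) for a in t[1:])

t = ("add", ("var", "x1"), ("var", "x2"))
theta = {"x1": ("succ", ("var", "y1")), "x2": ("zero",)}
print(substitute(theta, t))  # ('add', ('succ', ('var', 'y1')), ('zero',))
```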
Clearly we can follow one substitution with another, which is to say, we can compose substitutions. The above framework leads us to the following formal definition of composition.
Definition: Given an <S, Σ(X)>-term t, an assignment, θ, of <S, Σ(Y)>-terms to the elements of X, and an assignment, ψ, of <S, Σ(Z)>-terms to the elements of Y, we define their composite, denoted ψ • θ, to be the assignment U(ψ^{#}) ∘ θ:X → U(T[Z]), where "∘" denotes the composition of S-indexed mappings.
end
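Under the same hypothetical encoding (assignments as dicts from variable names to terms), the composite ψ • θ can be sketched as: apply θ, then substitute via ψ^{#} in each resulting term.

```python
# Composite assignment psi . theta = U(psi^#) o theta, sketched for one sort.
# (Encoding and names are illustrative assumptions.)

def substitute(theta, t):
    """theta^# on terms: replace each variable by its assigned term."""
    if t[0] == "var":
        return theta[t[1]]
    return (t[0],) + tuple(substitute(theta, a) for a in t[1:])

def compose(psi, theta):
    """(psi . theta)(x) = psi^#(theta(x)) for each variable x."""
    return {x: substitute(psi, tx) for x, tx in theta.items()}

theta = {"x1": ("add", ("var", "y1"), ("var", "y2"))}    # X -> U(T[Y])
psi   = {"y1": ("zero",), "y2": ("succ", ("var", "z1"))}  # Y -> U(T[Z])
print(compose(psi, theta)["x1"])  # ('add', ('zero',), ('succ', ('var', 'z1')))
```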
Note: We could use just one symbol for both the composition of assignments and the composition of S-indexed mappings and write ψ ∘ θ = U(ψ^{#}) ∘ θ, since the two uses can be distinguished by context. Such overloading is commonly used in mathematical treatments for the sake of brevity, but it comes at a price.
Proposition: The above composition is associative; that is, given an assignment, θ, of <S, Σ(Y)>-terms to the elements of X, an assignment, ψ, of <S, Σ(Z)>-terms to the elements of Y, and an assignment, φ, of <S, Σ(W)>-terms to the elements of Z, then

φ • (ψ • θ) = (φ • ψ) • θ.
Proof: Consider the following diagram:

First note that

U(φ^{#}) ∘ U(ψ^{#}) ∘ η_{Y} = U(φ^{#}) ∘ ψ = φ • ψ = U((φ • ψ)^{#}) ∘ η_{Y},

which, by the uniqueness part of the theorem, implies that

φ^{#} ∘ ψ^{#} = (φ • ψ)^{#}.

But then

φ • (ψ • θ)
= U(φ^{#}) ∘ (U(ψ^{#}) ∘ θ)
= (U(φ^{#}) ∘ U(ψ^{#})) ∘ θ
= U(φ^{#} ∘ ψ^{#}) ∘ θ
= U((φ • ψ)^{#}) ∘ θ
= (φ • ψ) • θ.
end
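The associativity can be spot-checked on sample assignments under the earlier hypothetical tuple encoding (a sanity check on one example, not a proof):

```python
# Check phi.(psi.theta) == (phi.psi).theta on sample assignments.
# Terms are ("var", name) or ("op_name", subterm, ...); names are illustrative.

def substitute(theta, t):
    """theta^# on terms: replace each variable by its assigned term."""
    if t[0] == "var":
        return theta[t[1]]
    return (t[0],) + tuple(substitute(theta, a) for a in t[1:])

def compose(psi, theta):
    """(psi . theta)(x) = psi^#(theta(x))."""
    return {x: substitute(psi, tx) for x, tx in theta.items()}

theta = {"x1": ("add", ("var", "y1"), ("var", "y2"))}   # X -> U(T[Y])
psi   = {"y1": ("succ", ("var", "z1")), "y2": ("var", "z1")}  # Y -> U(T[Z])
phi   = {"z1": ("mult", ("zero",), ("var", "w1"))}      # Z -> U(T[W])

assert compose(phi, compose(psi, theta)) == compose(compose(phi, psi), theta)
print("associativity holds on this sample")
```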