Saturday, July 26, 2014

series for 1/(1-x) and 1/(1-x^2)

Let's compute ${1 \over 1-x}$ by long division:
 
 1 + 0x          | 1 - x
 1 -  x          | 1 + x + x² + x³ + ...
     x + 0x²
     x -  x²
         x² + 0x³
         x² -  x³
             ...
Ok, you see the pattern: $$\begin{equation}  \label{eq:1} {1\over 1-x} = 1 + x + x^2 + x^3 + \ldots \end{equation}$$ Similarly, we can prove that $$\begin{equation}\label{eq:2}{1\over 1+x} = 1-x+x^2-x^3+x^4-x^5+\ldots\end{equation}$$ What if we add the two series together? Yes, $$\begin{equation}\label{eq:3} 2(1+x^2+x^4+x^6+\ldots) = {1\over 1-x} + {1\over 1+x} = {2 \over 1-x^2}.\end{equation}$$ The latter can also be obtained by substituting $a$ for $x^2$: $2(1+x^2+x^4+x^6+\ldots) = 2(1+a+a^2+a^3+\ldots) = {2\over 1-a} = {2\over 1-x^2}$.
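As a quick numeric sanity check (a sketch of mine, not part of the original derivation; the helper name `geometric_partial_sum` is hypothetical), the partial sums of $1+x+x^2+\ldots$ converge to $1\over 1-x$ for $|x|<1$, and adding the series for $1\over 1-x$ and $1\over 1+x$ indeed gives $2\over 1-x^2$:

```python
def geometric_partial_sum(x, n_terms):
    """Sum of 1 + x + x^2 + ... + x^(n_terms - 1)."""
    return sum(x**k for k in range(n_terms))

x, n = 0.3, 50
print(geometric_partial_sum(x, n), 1 / (1 - x))    # both ~1.428571, eq. (1)
print(geometric_partial_sum(-x, n), 1 / (1 + x))   # both ~0.769231, eq. (2)
print(geometric_partial_sum(x, n) + geometric_partial_sum(-x, n),
      2 / (1 - x**2))                              # both ~2.197802, eq. (3)
```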

In the same vein, we can compute $$\begin{matrix} \Biggl({1\over 1-x}\Biggr)^2 = {1\over 1-x}\cdot {1\over 1-x} = (1+x+x^2+x^3+\ldots)(1+x+x^2+x^3+\ldots) = \\ (1+x+x^2+x^3+\ldots) +\\ (x+x^2+x^3+x^4+\ldots) +\\ (x^2+x^3+x^4+x^5+\ldots) + \ldots = \\ 1 + 2x + 3x^2 + \ldots = \sum_{n=0}^\infty{{d\over dx}(x^n)} = \sum_{n=1}^\infty{nx^{n-1}} \end{matrix}$$ Similarly, we can prove that $(1-a)\sum_{i=0}^{n-1}{a^i} = 1 + a + a^2 + \ldots + a ^ {n-1}- a - a^2 - a^3 - \ldots - a^n = 1-a^n$, or ${1 - a^n \over 1-a} = \sum_{i=0}^{n-1}{a^i}.$
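A minimal sketch of the same squaring, done as a convolution of coefficient lists (my own illustration; the variable names are arbitrary): multiplying the series for $1\over 1-x$ by itself term by term gives the coefficients $1,2,3,\ldots$, and the finite geometric sum formula checks out numerically.

```python
N = 8
ones = [1] * N                       # coefficients of 1 + x + x^2 + ...
square = [sum(ones[i] * ones[k - i] for i in range(k + 1)) for k in range(N)]
print(square)                        # [1, 2, 3, 4, 5, 6, 7, 8]

# finite geometric sum: (1 - a^n)/(1 - a) == 1 + a + ... + a^(n-1)
a, n = 0.7, 10
print((1 - a**n) / (1 - a), sum(a**i for i in range(n)))   # both ~3.239
```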

Note that in $1+x+x^2+x^3+\ldots$ every coefficient is 1: the sequence stays constant at 1. That is, \eqref{eq:1} corresponds to a linear system whose state holds a constant value. What is that system?

It is a feedback system described by $$y = k(F_b\cdot z^{-1}y + x) = {kx \over 1 - z^{-1}kF_b}.$$ Here, $z^{-1}$ is a rightward shift, $R[a_0, a_1, a_2,\ldots] = z^{-1}[a_0, a_1, a_2,\ldots] = [0,a_0,a_1,a_2,\ldots]$, and $H = {k\over 1-z^{-1}kF_b}$ is the transfer function. Well, the specification is redundant: I can simply set $k=1$. We have already seen this system for the feedback amplifier $F_b=3$ and constant input $x=2$. Here is the generalized matrix form for $x=1$ and arbitrary $F_b$:

$$\begin{bmatrix} y_n \\ 1\end{bmatrix} = \begin{bmatrix} F_b & 1 \\0 & 1\end{bmatrix}^n \, \begin{bmatrix} y_0 \\ 1\end{bmatrix} = S \Lambda^n S^{-1} Y_0 = \begin{bmatrix} 1 & 1\\0 & 1-F_b\end{bmatrix} \, \begin{bmatrix} F_b^n & 0\\0 & 1^n\end{bmatrix} \, \begin{bmatrix} 1 & {-1\over 1-F_b}\\0 & {1\over 1-F_b}\end{bmatrix}\, \begin{bmatrix} y_0 \\ 1\end{bmatrix}$$ You see how $y$ receives a constant input from the other state variable, which maintains itself at 1. The eigendecomposition fails when $y$'s feedback is 1 because of ${1\over 1-F_b}$, which is infinite. Yet, the transfer function-based solution is still valid: for the constant input $x=2$ we get $Y = {X\over 1-z^{-1}} = {2\over (1-z^{-1})^2}$. It simply means that $y_n = y_{n-1} + 2$: the system implements a perfect integrator in this case. It does not exponentiate the accumulated value but adds the inputs received from $x$ together. The diagram is also valid in this case. If we want, we can also draw the system using nodes for the state variables:

[Diagram: node $y$ with self-loop $F_b$, fed with weight 2 by a constant node 1 that has self-loop 1.]

Again, the input to $y$ is 2, coming from the node that holds the constant 1.
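A minimal simulation sketch of the state-space form above (my own; the names Fb, w, steps are assumptions chosen to match the post's notation): the state $[y, c]$ evolves by the matrix $\begin{bmatrix} F_b & w \\ 0 & 1\end{bmatrix}$, so $y_n = F_b y_{n-1} + w c$ while $c$ stays constant. With $F_b = 1$ the system is the perfect integrator; otherwise it grows geometrically.

```python
def simulate(Fb, w, y0, c=1.0, steps=6):
    y = y0
    out = [y]
    for _ in range(steps):
        y = Fb * y + w * c        # one application of the 2x2 update matrix
        out.append(y)
    return out

print(simulate(Fb=3.0, w=2.0, y0=0.0))   # the earlier amplifier example, x = 2
print(simulate(Fb=1.0, w=2.0, y0=0.0))   # Fb = 1: integrator, adds 2 each step
```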

The state varying over time $t$ can be represented as a vector $y_1, y_2, \cdots, y_t, \cdots$. The coefficients of a polynomial can be used to represent such vectors. For instance, $1+2y+1y^2 + 5y^3$ represents the vector $1,2,1,5$. Now, series \eqref{eq:1} represents a variable stuck at state $1,1,1,1,1,\cdots$. This can be represented by a diagram:

[Diagram: a single node holding 1 with a self-loop of weight 1.]

You see, it was in state 1 and the next state is $1\cdot 1=1$, and so is the next. It evolves by staying constant. That is the sequence ${1\over 1-y} = 1+y+y^2+y^3 + \ldots$. Do you remember that ${1\over 1+y}$ was alternating, $1,-1,+1,-1,\ldots$? That is because of the feedback $-1$ in the denominator!

[Diagram: a single node holding 1 with a self-loop of weight -1.]
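Both single-node diagrams amount to one feedback update per step; here is a small sketch of mine (the helper name `feedback_sequence` is hypothetical) that generates the coefficient sequence of ${1\over 1-fy}$ from a state started at 1 with self-loop $f$:

```python
def feedback_sequence(f, n):
    state, seq = 1, []
    for _ in range(n):
        seq.append(state)
        state = f * state            # next state = feedback * current state
    return seq

print(feedback_sequence(1, 6))       # [1, 1, 1, 1, 1, 1]     -> 1/(1-y)
print(feedback_sequence(-1, 6))      # [1, -1, 1, -1, 1, -1]  -> 1/(1+y)
```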

When we add the two together,

[Diagram: the two accumulators, with self-loops 1 and -1, both feeding a summation node.] We read the output from the summation node (it does not feed anything back and thus is not a state variable). You see that at the first time step both accumulators are 1, so the output is 2. On the next step, the accumulators are in counterphase, $-1$ and $1$, so they cancel out at $y^1$. In the next step, they are in phase again, and the output 2 is repeated. The sequence is $[2,0,2,0,\ldots]$. The corresponding matrix for the sequence $[2,0,2,0,\ldots]$, $$ y_n = a_n + b_n = \begin{bmatrix} 1 & 1 \end{bmatrix} \, \begin{bmatrix} a_n \\ b_n\end{bmatrix} = \begin{bmatrix} 1 & 1 \end{bmatrix} \, \begin{bmatrix} 1 & 0 \\0 & -1\end{bmatrix} \begin{bmatrix} a_{n-1} \\ b_{n-1}\end{bmatrix} = \begin{bmatrix} 1 & 1 \end{bmatrix} \, \begin{bmatrix} 1 & 0 \\0 & -1\end{bmatrix}^n \begin{bmatrix} a_{0} \\ b_{0}\end{bmatrix}, $$ is diagonal right away.
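And a short sketch (mine, under the same setup) showing the summation of the two accumulators reproducing $[2,0,2,0,\ldots]$; the update is exactly the diagonal matrix $\mathrm{diag}(1,-1)$ applied to the state vector:

```python
a, b = 1, 1                           # initial states of the two accumulators
for n in range(6):
    print(n, a + b)                   # summation-node output: 2, 0, 2, 0, ...
    a, b = 1 * a, -1 * b              # diag(1, -1) applied to [a, b]
```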

The interesting thing is that the transfer-function approach assumes zero initial values, whereas the Laplace-transform way of solving differential equations does use the initial values.

This post was aided by Finite State Machine Designer
