Hello Fred,

Comments interspersed below:

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:eomdnUxuDP4BCnWiRVn-hA@centurytel.net...
>
> <r_obert@REMOVE_THIS.hotmail.com> wrote in message
> news:cd4huv4lcd6t0i1biojbn0dlgjlu0jncfq@4ax.com...
> > Excellent synopsis. Thanks.
> >
> > Clay,
>
> Succinct too!

Sometimes I'm too lazy to write a lot, so my way is to be terse.

> It took me a while to figure out why PM bothered to go to the x^n form of
> the polynomial, because the cosine nx series is a perfectly good basis set.
> It seems the barycentric form of Lagrange interpolation - which they use to
> find the next approximant / design - needs to use that x^n form. Otherwise,
> they could have used the more direct form of cosine(nx) and generated the
> filter coefficients directly. I also don't understand why they used the -
> what was it, 1/x? - weighting (I don't mean filter design criteria
> weighting).

The weighting shows up with the type II, III, and IV filters. Basically you
have a sum of trig functions, and part of the change to the polynomial form
involves factoring out a trig function. Since the poly sans the factored-out
trig function is what is "best fitted", some compensation is needed for the
factored-out trig function. Examples:

Even Sym - Even Length (type II):
  H(F) = cos(pi*F) * sum( b_k * cos(2*pi*k*F) ),  k in 0,1,2,...,n-1

Odd Sym - Odd Length (type III):
  H(F) = sin(2*pi*F) * sum( b_k * cos(2*pi*k*F) ),  k in 0,1,2,...,n-1

Odd Sym - Even Length (type IV):
  H(F) = sin(pi*F) * sum( b_k * cos(2*pi*k*F) ),  k in 0,1,2,...,n-1

In all of the above cases, the cosine poly is what is "Remezed" - the
factored-out term acts as a weight function.

> Anyway, I've always viewed the PM implementation with Remez exchange as
> the "engine" as being pretty direct - but probably because I ignored the
> x^n part inside - it was "just a detail".
>
> In my understanding, the Remez exchange algorithm isn't limited to
> polynomials.
> Any sum of basis functions that satisfy the Haar condition
> will do - and there are many. Sets such as: cos(nx), sin(nx)/x, etc....

The Remez algo will find the best-fit (in a min-max way) poly to fit the
data. That's how the algo is developed. However, there is nothing to say
you can't change from one basis to another as long as certain criteria are
adhered to. Of course Remez himself proved that, as you move the abscissal
values about, the deviation factor rho takes on an extreme value when the
Chebyshev criterion is met. His algo as he presented it does assume a
polynomial, but as PM showed, this polynomial can also represent something
else.

> Also, there are many ways to solve for the next approximant:
> 1) Solution of n+1 linear equations in n coefficients and one error term.
> 2) Lagrange interpolation
> 3) Maybe some form of Newton interpolation - which can be more efficient
> in some situations
>
> I think the first one is hard to get to work on a PC with double precision
> when the order gets to be in the 100's.
> The second one may be better in this regard.
> I don't know about the third one.

1) Inverting a large matrix is quite painful, and the numerical issues will
be tremendous.

2) Lagrange interpolation will work, but the barycentric form is both more
efficient and better from a numerical point of view.

3) The Newton method is quite efficient, but as you build a table by adding
in successive points, the accuracy obtained in interpolation depends on
whether you are looking near the earlier-used or the later-used points in
the tableau. This is not very good for high-order interpolation. In
Lagrange interpolation, all of the points are equally represented in the
interpolation process, so the errors are spread out.

4) The barycentric form is good in that it is a two-step interpolation
method, with the results from the 1st step being dependent only on the
abscissal values.
Thus if you want to interpolate at a bunch of points, as is done in the
Remez algo, you do step 1 once, and then the step-2 operation is done for
each point. This actually results in less overall computation. Plus there
is a bunch of literature on the good numerical properties of barycentric
interpolation.

> I'm interested in knowing what method people use to get long filters like
> length 4096, etc. Any suggestions? I want to code up a better set of
> programs.

I've used double precision to design filters with length 2048. It would be
a simple matter to overload the math operators with a high-precision math
library. I have my Remez written in C, but it would be zero effort to
change it over to C++ so it could use high-precision math. In the
double-precision case, many failures stem from underflow [1]. I can't
design filters with 200 dB attenuation. But if I keep the transition
bandwidths narrow enough, I can design very long filters. I thought some
designed the very big filters by windowing methods.

Clay

[1] The key calculations in interpolation involve differences, and when
these throw away most of the digits (from subtracting similarly sized
numbers), that is when the trouble arises.

> Fred