## Bit Serial Arithmetic De-mystified

### Bit Serial Arithmetic De-mystified

What the heck is bit serial arithmetic?

### Bit Serial Arithmetic De-mystified

It, the heck, is when you perform arithmetic serially, one bit at a
time, LSB first, instead of on parallel words.
Got it ? B-I-T--S-E-R-I-A-L

Peter Alfke

> What the heck is bit serial arithmetic?

### Bit Serial Arithmetic De-mystified

Uhh, I suppose you gave this answer simply because it was E.Bob who
asked, but have never dunnit either.  Except of course things like
LFSRs, encryption and error correction algorithms, which are bit serial
to some extent.  Care to give an example?

Cheers,

Herman
http://www.AerospaceSoftware.com

> It, the heck, is when you perform arithmetic serially, one bit at a
> time, LSB first, instead of on parallel words.
> Got it ? B-I-T--S-E-R-I-A-L

> Peter Alfke

> > What the heck is bit serial arithmetic?

### Bit Serial Arithmetic De-mystified

The arithmetic is performed one bit per clock cycle, so that it takes N
clock cycles to perform an N bit wide operation.  For straight
arithmetic operations, it is usually done LSB first, since the carry
propagates from the least significant bit upward.  Occasionally there
are times when an MSB first bit serial operation is desirable, such as
priority encodes, compares, and first-one detection.
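
That MSB-first case can be sketched as a software model (Python here, not FPGA code; the function name is made up for illustration). With the most significant bits arriving first, a serial compare is decided at the first bit position where the operands differ:

```python
# Software model of an MSB-first bit-serial compare: the first
# position where the streams differ decides the result, so no
# further bits need to be examined.

def bit_serial_compare(x_msb_first, y_msb_first):
    """Return -1, 0, or 1 as x < y, x == y, x > y (equal widths)."""
    for a, b in zip(x_msb_first, y_msb_first):
        if a != b:                     # first differing bit decides
            return 1 if a > b else -1
    return 0                           # streams identical
```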

e-bob, I bet you are thinking, well geez, that's pretty inefficient.
True it takes more clock cycles to perform an operation, but you get the
advantage of a much smaller circuit (1/Nth the size in the general case,
and often even smaller).  For example, a bit-serial adder is a 3 input
XOR gate, a majority-3 gate and two flip-flops.  That can be done in a
pair of FPGA logic elements.  Since you don't have wide fanout controls
or long carries, the clock rate can be quite a bit higher than the clock
for more conventional bit-parallel arithmetic.  In many cases, the
time-hardware product for a bit serial design is smaller than the
equivalent bit-parallel design (meaning you get the same performance in
a smaller area).
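
That adder recipe can be sketched as a software model of the hardware (Python, not FPGA code; the helper names are made up for illustration). The whole datapath is the 3-input XOR, the majority-3 gate, and a carry flip-flop clocked once per bit, LSB first:

```python
# Software model of a bit-serial adder: a 3-input XOR for the sum,
# a majority-3 gate for the carry, and a carry flip-flop that holds
# the carry between clock cycles. Bits are processed LSB first.

def majority3(a, b, c):
    return (a & b) | (a & c) | (b & c)

def bit_serial_add(x_bits, y_bits):
    """Add two equal-length LSB-first bit lists, one bit per 'clock'."""
    carry = 0                              # carry flip-flop, cleared at start
    sum_bits = []
    for a, b in zip(x_bits, y_bits):
        sum_bits.append(a ^ b ^ carry)     # 3-input XOR
        carry = majority3(a, b, carry)     # majority-3 gate
    sum_bits.append(carry)                 # final carry becomes the MSB
    return sum_bits

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

For example, `from_bits(bit_serial_add(to_bits(100, 8), to_bits(57, 8)))` gives 157 after eight "clocks".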

Modern FPGAs are capable of clock rates of well over 100 MHz, which is
much higher than the data rates of typical applications slated for FPGAs
in today's market.  If you do a parallel design for a relatively low
data rate, say 10 MHz, you leave a lot of the FPGA's capability on the
table, so you might be passing up an opportunity for a much smaller
device.

> What the heck is bit serial arithmetic?

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950

http://users.ids.net/~randraka

### Bit Serial Arithmetic De-mystified

> What the heck is bit serial arithmetic?

Try to find documentation on the PDP-8/S. That was a bit serial machine.
Some old IC logic books have spec sheets for bit serial adders.

Jerry
--
Engineering is the art of making what you want from things you can get.
-----------------------------------------------------------------------

### Bit Serial Arithmetic De-mystified

> Uhh, I suppose you gave this answer simply because it was E.Bob who
> asked, but have never dunnit either.  Except of course things like
> LFSRs, encryption and error correction algorithms, which are bit serial
> to some extent.  Care to give an example?

> Cheers,

> Herman
> http://www.AerospaceSoftware.com

> > It, the heck, is when you perform arithmetic serially, one bit at a
> > time, LSB first, instead of on parallel words.
> > Got it ? B-I-T--S-E-R-I-A-L

> > Peter Alfke

> > > What the heck is bit serial arithmetic?

A full adder has three inputs -- the two addend bits and a carry in --
and two outputs -- sum and carry. The combinatorial logic consists of
two half adders (each an XOR plus an AND) and an OR for the carries. To
do a parallel addition, you use one of these per bit, and the carry out
from one bit becomes the carry in to the next higher bit. (How carry is
treated for the MSB and LSB is left as an exercise for the student's
imagination.) To do a serial addition, the addends are in shift
registers, and the carry out of the single adder is recycled through a
flip-flop. Usually, the sum is shifted into one of the addend registers.
The longer the word, the more clock cycles an addition takes, but
there's no carry propagation time to worry about or look-ahead logic to
build. (And double-precision arithmetic can take only twice as long,
which isn't usually the case with parallel circuits.) The treatment of
carry for LSB and MSB is the same as for parallel.

The classical software division routine is mostly bit serial (only the
subtracts are parallel), with the quotient shifted into the register
that the dividend is coming out of.
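
A sketch of that routine (Python, with illustrative names): classic restoring division, producing one quotient bit per step, with the quotient bits shifted into the low end of the same register the dividend is shifting out of:

```python
# Restoring division, one quotient bit per iteration. The dividend
# and quotient share one register (rq): each step shifts the top bit
# of the dividend into the remainder and a new quotient bit into the
# bottom. Only the trial subtract is a parallel operation.

def restoring_divide(dividend, divisor, width):
    """Return (quotient, remainder) for width-bit operands, divisor > 0."""
    rq = dividend                    # combined remainder:quotient register
    for _ in range(width):
        rq <<= 1                     # shift: dividend MSB enters remainder
        if (rq >> width) >= divisor:         # trial subtract (parallel step)
            rq -= divisor << width
            rq |= 1                  # quotient bit shifts in at the bottom
    return rq & ((1 << width) - 1), rq >> width
```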

Jerry
--
Engineering is the art of making what you want from things you can get.
-----------------------------------------------------------------------

### Bit Serial Arithmetic De-mystified

> What the heck is bit serial arithmetic?

It is doing a calculation one bit at a time. It's more or less the
same as doing, e.g., a 32-bit addition on an 8-bit CPU using four
8-bit adds (with a carry in and out); it's just done with 1-bit adds.
It can save hardware, but will cost clock cycles needed to do a
calculation.
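
That analogy can be sketched in software (Python; names are illustrative). The same add-and-carry loop works for any chunk width, from 8 bits down to the 1-bit case:

```python
# A 32-bit add done chunk-by-chunk with a carry passed between
# chunks -- four 8-bit adds by default; chunk_bits=1 gives the
# bit-serial case, one bit per iteration.

def chunked_add(x, y, chunk_bits=8, total_bits=32):
    mask = (1 << chunk_bits) - 1
    result, carry = 0, 0
    for i in range(0, total_bits, chunk_bits):
        s = ((x >> i) & mask) + ((y >> i) & mask) + carry
        result |= (s & mask) << i
        carry = s >> chunk_bits      # carry out feeds the next chunk
    return result & ((1 << total_bits) - 1)
```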

-Lasse

Sent via Deja.com http://www.deja.com/

### Bit Serial Arithmetic De-mystified

> > What the heck is bit serial arithmetic?

> Try to find documentation on the PDP-8/S. That was a bit serial machine.
> Some old IC logic books have spec sheets for bit serial adders.

WARNING: WORTHLESS TRIVIA IN THIS POST

From a historical point of view, there are some rather famous machines and
applications that were serial.  The ENIAC, for instance, was a serial
machine, operating one decimal digit at a time.  Each decimal digit
consisted of a ring of 10 flip-flops.  Many of the early aerospace designs
were bit serial, as the need to minimize mass and power were important.

Just like in many Computer Science algorithms, there's a space-time
trade-off.  Of course, the simplified hardware can go really fast with
respect to clock rate.  It's possible to perhaps win on speed, too, with a
serial architecture, as other sections of the FPGA, not used by a parallel
adder, can be dedicated to performing some other useful computational
function in parallel.

> Jerry
> --
> Engineering is the art of making what you want from things you can get.

Excellent quote.

"There's nothing like real data to f' up a great theory."

rk

### Bit Serial Arithmetic De-mystified

...

>        WARNING: WORTHLESS TRIVIA IN THIS POST

> From a historical point of view, there are some rather famous machines and
> applications that were serial.  The ENIAC, for instance, was a serial
> machine, operating one decimal digit at a time.  Each decimal digit
> consisted of a ring of 10 flip-flops.  Many of the early aerospace designs
> were bit serial, as the need to minimize mass and power were important.

> Just like in many Computer Science algorithms, there's a space-time
> trade-off.  Of course, the simplified hardware can go really fast with
> respect to clock rate.  It's possible to perhaps win on speed, too, with a
> serial architecture, as other sections of the FPGA, not used by a parallel
> adder, can be dedicated to performing some other useful computational
> function in parallel.

Worthless trivia are what will make this NG worth following even after
I've learned all the answers. (Ahem!)

I want to add that such serial architectures were well suited to the
mercury delay lines that constituted main memory on ENIAC's successors,
EDVAC and UNIVAC. (Disks are serial too, but we don't use them for main
memory.)

Jerry
--
Engineering is the art of making what you want from things you can get.
-----------------------------------------------------------------------

### Bit Serial Arithmetic De-mystified

> The arithmetic is performed one bit per clock cycle,
> so that it takes N clock cycles to perform an N bit wide operation.
> For straight arithmetic operations,
> it is usually done LSB first, since the carry propagates
> from the least significant bit upward.
> Occasionally there are times
> when an MSB first bit serial operation is desirable,
> such as priority encodes, compares and first-one detection.

> True it takes more clock cycles to perform an operation,
> but you get the advantage of a much smaller circuit
> (1/Nth the size in the general case, and often even smaller).
> For example, a bit-serial adder is a 3 input XOR gate,
> a majority-3 gate and two flip-flops.
> That can be done in a pair of FPGA logic elements.
> Since you don't have wide fanout controls or long carries,
> the clock rate can be quite a bit higher than the clock
> for more conventional bit-parallel arithmetic.
> In many cases, the time-hardware product for a bit serial design
> is smaller than the equivalent bit-parallel design
> (meaning you get the same performance in a smaller area).

> Modern FPGAs are capable of clock rates of well over 100MHz,
> which is much higher than the data rates on typical applications
> slated for FPGAs in today's market.
> If you do a parallel design for a relatively low data rate, say 10 MHz,
> you leave a lot of the FPGA's capability on the table
> so you might be passing up an opportunity for a much smaller device.

Thanks Ray,

I just thought that you might have meant "on-line" arithmetic
when you said "bit-serial" arithmetic.
But, apparently, that is not the case.

### Bit Serial Arithmetic De-mystified

That is cool.  An excellent description...I know because I didn't know
it before, and now I do!  (obviously there are a lot of fine points).

But when I think about combining bit serial arithmetic with something
like S/PDIF (also a serial format easily handled by FPGAs), the
possibilities are endless...

>The arithmetic is performed one bit per clock cycle, so that it takes N
>clock cycles to perform an N bit wide operation.  For straight
>arithmetic operations, it is usually done LSB first, since the carry
>propagates from the least significant bit upward.  Occasionally there
>are times when an MSB first bit serial operation is desirable, such as
>priority encodes, compares and first-one detection.

--
DSP Audio Effects! http://gweep.net/~shifty
.        .       .      .     .    .   .  . ... .  .   .    .     .      .
"La la la laaa laaa laaa                   "      |     Niente

### Bit Serial Arithmetic De-mystified

Now that I understand it a little bit better, what techniques do you
use to handle unsynchronized input streams, e.g. two S/PDIF streams?
And how do you handle incoming streams with varying data rates?

Or is bit serial arithmetic usually only used within the chip, once
the data has been clocked in as parallel words?

again, thanks.

-Noah

>It is doing a calculation one bit at a time. It's more or less the
>same as doing, e.g., a 32-bit addition on an 8-bit CPU using four
>8-bit adds (with a carry in and out); it's just done with 1-bit adds.
>It can save hardware, but will cost clock cycles needed to do a
>calculation.

>-Lasse

>Sent via Deja.com http://www.deja.com/

--
DSP Audio Effects! http://gweep.net/~shifty
.        .       .      .     .    .   .  . ... .  .   .    .     .      .
"La la la laaa laaa laaa                   "      |     Niente

### Bit Serial Arithmetic De-mystified

>  ...

> Thanks Ray,

> I just thought that you might have meant "on-line" arithmetic
> when you said "bit-serial" arithmetic.
> But, apparently, that is not the case.

I'll bite. What's "on-line" arithmetic? You owe me one.

Jerry
--
Engineering is the art of making what you want from things you can get.
-----------------------------------------------------------------------

### Bit Serial Arithmetic De-mystified

> Now that I understand it a little bit better, what techniques do you
> use to handle unsynchronized input streams, e.g. two S/PDIF streams?
> And how do you handle incoming streams with varying data rates?

> Or is bit serial arithmetic usually only used within the chip, once
> the data has been clocked in as parallel words?

> again, thanks.

> -Noah

> >It is doing a calculation one bit at a time. It's more or less the
> >same as doing, e.g., a 32-bit addition on an 8-bit CPU using four
> >8-bit adds (with a carry in and out); it's just done with 1-bit adds.
> >It can save hardware, but will cost clock cycles needed to do a
> >calculation.

> >-Lasse

> >Sent via Deja.com http://www.deja.com/

> --
>                                  DSP Audio Effects! http://gweep.net/~shifty
>      .        .       .      .     .    .   .  . ... .  .   .    .     .      .
> "La la la laaa laaa laaa                   "      |     Niente

That's a hardware question, of course. Depending on what you want to do,
and on the resources you have, you can use parallel or serial data
busses. A rate converter can be a shift register clocked in at one rate
and out at another, a parallel transfer from one shift register to
another (UART to UART for communication rate conversion) or anything
else that works.

Jerry
--
Engineering is the art of making what you want from things you can get.
-----------------------------------------------------------------------

### Bit Serial Arithmetic De-mystified

On-line arithmetic is basically serial arithmetic done most significant
digit first, using redundant signed digits.  It was originally developed
to accelerate addition and subtraction by eliminating the carry
propagation, but has found better application in dividers and
multipliers.  It is kind of messy, and a real pain to convert to and
from binary number systems.
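
A tiny sketch of the signed-digit idea (Python; names are illustrative, and this shows only the representation, not an on-line adder): radix-2 digits drawn from {-1, 0, 1} are redundant, so one value has many encodings, and getting back to plain binary takes a full-width subtract -- part of why the conversion is a pain:

```python
# Redundant radix-2 signed-digit representation, digits in {-1, 0, 1},
# MSB first. Redundancy is what lets signed-digit adders limit carry
# propagation, but it also makes binary conversion non-trivial.

def sd_value(digits):
    """Value of an MSB-first signed-digit list."""
    v = 0
    for d in digits:
        v = 2 * v + d
    return v

def sd_to_binary(digits):
    """Convert by subtracting the negative-digit part from the
    positive-digit part -- a full-width, carry-propagating subtract."""
    pos = sd_value([1 if d == 1 else 0 for d in digits])
    neg = sd_value([1 if d == -1 else 0 for d in digits])
    return pos - neg

# Redundancy: [1, 1] and [1, 0, -1] both encode the value 3.
```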

> >  ...

> > Thanks Ray,

> > I just thought that you might have meant "on-line" arithmetic
> > when you said "bit-serial" arithmetic.
> > But, apparently, that is not the case.

> I'll bite. What's "on-line" arithmetic? You owe me one.

> Jerry
> --
> Engineering is the art of making what you want from things you can get.
> -----------------------------------------------------------------------

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950

http://users.ids.net/~randraka

### Bit Serial Arithmetic De-mystified

I'm working on it. Like a map in a dream, so far it seems to make sense
but be just out of reach. I'll go back over it a few more times.

Thanks for all the nice stuff.

Jerry
--
Engineering is the art of making what you want from things you can get.
-----------------------------------------------------------------------