128 bits? I don't think so...

Post by Christopher B. Browne » Mon, 30 Aug 1999 04:00:00



On Sat, 28 Aug 1999 20:56:19 GMT, Michael V. Ferranti wrote:

>Here I was, minding my own business, and wouldn't you know it?

>>I think this is the last move - 64 bits is big enough for everyone.
>><chuckle>

>Didn't they say that about having 640KB of RAM? <grins>  The next Sony
>Playstation's going to be a 128bit system, so I hear.

Does this merely mean that they're going to have some 128 bit data registers?

Or does this mean that they are going to have a linear 128 bit address space?

The former would be quite useful for doing graphics, as it lets you
stick lots of data into a register and essentially hit several pixels
in parallel.  And while it may make good press, it doesn't mean that
they actually have 128 bit addressing.
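
To make the register-width point concrete, here is a minimal C sketch
(illustrative only - ordinary 64-bit code standing in for the console's
wider registers), treating one word as eight packed 8-bit pixels and
blending them all 50% toward white at once:

    /* One 64-bit word holds eight 8-bit pixels.  Halve every byte,
       then add 128 to each: two plain ALU operations, no per-pixel
       loop.  The mask clears the bit that bleeds across each byte
       boundary during the shift; the adds cannot carry between bytes
       because every byte is at most 0x7F after masking. */
    #include <stdint.h>

    uint64_t blend_toward_white(uint64_t pixels)
    {
        uint64_t halves = (pixels >> 1) & 0x7F7F7F7F7F7F7F7FULL;
        return halves + 0x8080808080808080ULL;
    }

A 128-bit register simply doubles the pixel count per instruction.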

There's very little value in greater than 64 bit addressing unless you
actually plan to address more than 4 billion GB of data.  And that is a
*ferociously* large number; the only folks that could be touching that
kind of bulk would be those doing heavy duty high energy physics.

And even with applications that require those sorts of quantities of
data, access times on Very, Very Very Large Disk Arrays are such that
it is likely that segmented addressing wouldn't hurt performance overly
much...
--
"It has every known bug fix to everything." -- KLH (out of context)


128 bits? I don't think so...

Post by Doug Ma » Tue, 31 Aug 1999 04:00:00


On Sun, 29 Aug 1999 02:50:34 GMT, Christopher B. Browne wrote:


> >>I think this is the last move - 64 bits is big enough for everyone.
> >><chuckle>

> There's very little value in greater than 64 bit addressing unless you
> actually plan to address more than 4 billion GB of data.  And that is a
> *ferociously* large number; the only folks that could be touching that
> kind of bulk would be those doing heavy duty high energy physics.

Oh, there are lots of other applications for huge storage.  Problems
like direct numerical simulation of atmospheric dynamics are far beyond
the capabilities of today's machines.  4 billion GB is only enough
to store a million-cubed 3D array of single-precision real numbers;
many large problems of interest will require more than that.  With
storage capacities much larger than 64 bits can address (and processing speed
to match), all sorts of new applications for computer simulation
will arise.  For example, it would be very useful to simulate all the
biological functions of a human, down to the intracellular level.
64 bits won't cover that.
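
Spelling the arithmetic out: $(10^6)^3 \times 4\,\mathrm{B} = 4 \times 10^{18}\,\mathrm{B} \approx 2^{62}\,\mathrm{B}$, while $2^{64}\,\mathrm{B} \approx 1.8 \times 10^{19}\,\mathrm{B}$; a single such array already fills about a quarter of the 64-bit space.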

> And even with applications that require those sorts of quantities of
> data, access times on Very, Very Very Large Disk Arrays are such that
> it is likely that segmented addressing wouldn't hurt performance overly
> much...

Access times, like CPU cycle times, are much shorter than they used to be.
50 years ago, the access times of today's disks would have been unimaginable.
If speeds double every 18 months (like Moore's law for CPUs),
then we'll gain a factor of 2^64 in less than 100 years.

Doug.


128 bits? I don't think so...

Post by Christopher B. Browne » Tue, 31 Aug 1999 04:00:00



>On Sun, 29 Aug 1999 02:50:34 GMT, Christopher B. Browne wrote:

>> >>I think this is the last move - 64 bits is big enough for everyone.
>> >><chuckle>

>> There's very little value in greater than 64 bit addressing unless you
>> actually plan to address more than 4 billion GB of data.  And that is a
>> *ferociously* large number; the only folks that could be touching that
>> kind of bulk would be those doing heavy duty high energy physics.

>Oh, there are lots of other applications for huge storage.  Problems
>like direct numerical simulation of atmospheric dynamics are far beyond
>the capabilities of today's machines.  4 billion GB is only enough
>to store a million-cubed 3D array of single-precision real numbers;
>many large problems of interest will require more than that.  With
>storage capacities much larger than 64 bits can address (and processing speed
>to match), all sorts of new applications for computer simulation
>will arise.  For example, it would be very useful to simulate all the
>biological functions of a human, down to the intracellular level.
>64 bits won't cover that.

Fair enough.

That still doesn't change the point that consumer video games
are not, at this point, among those applications.

In order for 64 bits to "not be enough," you pretty much need to have
the billions of GB of storage online, which rules out video games being
an example of something requiring >64 bit addressing...

>> And even with applications that require those sorts of quantities of
>> data, access times on Very, Very Very Large Disk Arrays are such that
>> it is likely that segmented addressing wouldn't hurt performance overly
>> much...

>Access times, like CPU cycle times, are much shorter than they used to be.
>50 years ago, the access times of today's disks would have been unimaginable.
>If speeds double every 18 months (like Moore's law for CPUs),
>then we'll gain a factor of 2^64 in less than 100 years.

Based on that 18 month factor, consuming the 32 extra bits between the
inadequacy of 32 bits and the inadequacy of 64 bits would take 32
doublings, or nearly 50 years.

Considering that the inadequacy of 32 bits is only now starting to
seriously bite people in contexts other than the "woo-woo Really,
Really Huge" applications, I'd hazard a guess that 64 bits has a
good few decades of useful life left to it.

Estimates based on exponential growth factors are obviously imprecise;
I would nonetheless suggest that 64 bits still has a *few* useful years
left to it...
--
"Anyone who says you can have a lot of widely dispersed people hack
away on a complicated piece of code and avoid total anarchy has never
managed a software project."  Andrew Tanenbaum, 1992.


128 bits? I don't think so...

Post by David » Tue, 31 Aug 1999 04:00:00


Christopher B. Browne writes:
> Michael V. Ferranti posted:

>> Didn't they say that about having 640KB of RAM? <grins> The next Sony
>> Playstation's going to be a 128bit system, so I hear.

> Does this merely mean that they're going to have some 128 bit data
> registers?

Quite likely.

Alternatively, it may really have two 64-bit or four 32-bit processors
in it.  Video game makers often play silly games to inflate their
numbers.  (For instance, the NeoGeo cartridges called themselves
32M, 64M, and 128M cartridges, never stating that they meant mega-BIT;
a "128M" cartridge holds only 16 megabytes.)  They think it will boost
sales for some stupid reason.

-- David


128 bits? I don't think so...

Post by lars.wir.. » Tue, 31 Aug 1999 04:00:00



> >to store a million-cubed 3D array of single-precision real numbers;
> >many large problems of interest will require more than that.  With
> >storage capacities much larger than 64 bits can address (and processing speed
> >to match), all sorts of new applications for computer simulation
> >will arise.  For example, it would be very useful to simulate all the
> >biological functions of a human, down to the intracellular level.
> >64 bits won't cover that.

> Fair enough.

> That still doesn't change the point that consumer video games
> are not, at this point, among those applications.

> In order for 64 bits to "not be enough," you pretty much need to have
> the billions of GB of storage online, which rules out video games being
> an example of something requiring >64 bit addressing...

> >> And even with applications that require those sorts of quantities of
> >> data, access times on Very, Very Very Large Disk Arrays are such that
> >> it is likely that segmented addressing wouldn't hurt performance overly
> >> much...

> >Access times, like CPU cycle times, are much shorter than they used to be.
> >50 years ago, the access times of today's disks would have been unimaginable.
> >If speeds double every 18 months (like Moore's law for CPUs),
> >then we'll gain a factor of 2^64 in less than 100 years.

> Based on that 18 month factor, consuming the 32 extra bits between the
> inadequacy of 32 bits and the inadequacy of 64 bits would take 32
> doublings, or nearly 50 years.

There are some fundamental problems with Moore's law. While it may
still be valid to some extent in the future, there is a definite density
limit imposed by quantum mechanics. Power consumption also has to be
kept within reasonable figures (I don't have the estimates for memory
sizes and processor speeds at hand, but we shouldn't be _that_ far
from the limit now).

Another problem is that the doubling of speed every 18 months is only
partly due to clock frequency increases; it also comes from increased
complexity. To make full use of this, one has to increase parallelism -
we've seen machines that can add 4-bit numbers in parallel, then 8-bit,
16-bit, 32-bit and 64-bit numbers, and even floating point numbers of
different sizes. The problem is that there is little point in handling
more than 128-bit floating point. The next logical step is to use the
parallelism in a better way, either by SIMD or by on-chip SMP or
something similar.
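
Widening the scalar integer path itself is cheap to synthesize; as a
minimal illustration (not from the original post), here is 128-bit
addition built from two 64-bit halves, roughly what a compiler emits
for a 128-bit add on a 64-bit machine:

    /* 128-bit addition from two 64-bit limbs. */
    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } u128;

    static u128 add128(u128 a, u128 b)
    {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry in if the low half wrapped */
        return r;
    }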

Using the same size for data registers and the address space is
practical for internal use, but it's a waste to assign pins to a lot
of address lines that will never be used.

/Lars


128 bits? I don't think so...

Post by Paul Chandler » Tue, 31 Aug 1999 04:00:00




>On Sat, 28 Aug 1999 20:56:19 GMT, Michael V. Ferranti wrote:

>>Here I was, minding my own business, and wouldn't you know it?

>>>I think this is the last move - 64 bits is big enough for everyone.
>>><chuckle>

>>Didn't they say that about having 640KB of RAM? <grins>  The next Sony
>>Playstation's going to be a 128bit system, so I hear.

>Does this merely mean that they're going to have some 128 bit data registers?

>Or does this mean that they are going to have a linear 128 bit address space?

>The former would be quite useful for doing graphics, as it lets you
>stick lots of data into a register and essentially hit several pixels
>in parallel.  And while it may make good press, it doesn't mean that
>they actually have 128 bit addressing.

>There's very little value in greater than 64 bit addressing unless you
>actually plan to address more than 4 billion GB of data.  And that is a
>*ferociously* large number; the only folks that could be touching that
>kind of bulk would be those doing heavy duty high energy physics.

>And even with applications that require those sorts of quantities of
>data, access times on Very, Very Very Large Disk Arrays are such that
>it is likely that segmented addressing wouldn't hurt performance overly
>much...
>--
>"It has every known bug fix to everything." -- KLH (out of context)


The processor is based on an abandoned MIPS Rxxxxx design.  The chip
already has support for 128-bit memory addressing and floating point
computations.  As is typical of the game systems of the past, there will
be two separate data bus widths to differing chips.  The video bus is a
dual 64-bit path, while the RAM controller is 128-bit.  MIPS
processors are *usually* designed to take a zero performance hit when
using math of higher complexity (i.e. a 32-bit integer multiply
compared to a 128-bit integer multiply).  NOTE: this only applies to integer
math and not the FPU, whereas Intel chips give a performance boost by
dropping back into 16-bit mode.

Sega started this with the Genesis, proclaiming that it was a
"32-bit" unit.  Everyone that isn't a total waste of human *
should know that "bits" is an extremely broad and generic term, and
that it is pointless to "label" any system by its "bits".  A great
example is RAMBUS memory, used in SGI workstations, CRAY supercomputers,
and the N64.  It is by far the fastest RAM technology today (don't even
think about mentioning PC-133, not even close).  It uses a high-frequency
8-bit bus, because that is the optimum.  I have seen numerous posts of
people proclaiming that PC-100 is better than RAMBUS because it has a
wider bus... this is an uneducated and inaccurate statement.

"bits", "megahertz", "megs", "gigs", "polygons", etc... are marketing
buzzwords simply thrown around by those that don't understand them.
Those of us that do, know that they give absolutely no indication of
anything important or measurable to the end-user.  Saying that a
console has a 64-bit processor, is the equivalent of saying that your
Hundai has a 3.045" diameter piston head, which is 0.0025" larger than
a Daihatsu.  What does it mean to anyone, and who cares?


128 bits? I don't think so...

Post by David » Tue, 31 Aug 1999 04:00:00




> There's very little value in greater than 64 bit addressing unless you
> actually plan to address more than 4 billion GB of data.

Not necessarily.

Even if you don't ever come close to using 64 bits of address for
physical memory, there is value in mapping smaller amounts of RAM to a
64- or 128-bit virtual address space.

Large sparse matrix algorithms become far easier to implement if you can
allocate terabytes of virtual memory and only map physical RAM to the
parts of the matrix that are non-zero.
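
A minimal sketch of that trick on a 64-bit Unix (assumptions of this
example, not claims from the post: mmap() with the MAP_ANONYMOUS and
MAP_NORESERVE flags, and a kernel whose overcommit policy accepts a
~4 TB reservation):

    /* Back a "dense" million-by-million float matrix with demand-paged
       virtual memory; only the 4K pages actually touched consume RAM. */
    #include <stdio.h>
    #include <sys/mman.h>

    #define N 1000000UL     /* 10^6 x 10^6 floats = ~4 TB of virtual space */

    int main(void)
    {
        size_t bytes = N * N * sizeof(float);
        float *m = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (m == MAP_FAILED) { perror("mmap"); return 1; }

        m[42 * N + 7] = 3.14f;            /* faults in exactly one page */
        printf("%g\n", m[42 * N + 7]);
        munmap(m, bytes);
        return 0;
    }

Every untouched "zero" element costs nothing but a hole in the page
tables.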

-- David


128 bits? I don't think so...

Post by Christopher B. Browne » Wed, 01 Sep 1999 04:00:00




>> There's very little value in greater than 64 bit addressing unless you
>> actually plan to address more than 4 billion GB of data.

>Not necessarily.

>Even if you don't ever come close to using 64 bits of address for
>physical memory, there is value in mapping smaller amounts of RAM to a
>64- or 128-bit virtual address space.

>Large sparse matrix algorithms become far easier to implement if you can
>allocate terabytes of virtual memory and only map physical RAM to the
>parts of the matrix that are non-zero.

64 bits is enough to represent address spaces on the order of 10^19
bytes.  To be precise, 18446744073709551616, or half that if it is
made signed.

That is a Rather Large Number: roughly 10^7 terabytes.

128 bits squares that, to 340282366920938463463374607431768211456.

Unless we're talking about memory devices about the size of the planet,
128 bits makes for One Sparse Matrix...
--
Een schip op het strand is een baken in zee.  (A ship on the beach is a beacon at sea.)


128 bits? I don't think so...

Post by Spike » Wed, 01 Sep 1999 04:00:00



> In order for 64 bits to "not be enough," you pretty much need to have
> the billions of GB of storage online, which rules out video games being
> an example of something requiring >64 bit addressing...

Depends if it's a Microsoft game...
They might JUST be able to fit Space Invaders into that....

:)

--
-----------------------------------------------------------------------------

|                           |graphical shell for a 16 bit patch to an 8 bit |
|   Andrew Halliwell BSc    |operating system originally  coded for a 4 bit |
|            in             |microprocessor, written by a 2 bit company,that|
|     Computer Science      |       can't stand 1 bit of competition.       |
-----------------------------------------------------------------------------
|GCv3.12 GCS>$ d-(dpu) s+/- a C++ US++ P L/L+ E-- W+ N++ o+ K PS+  w-- M+/++|
|PS+++ PE- Y t+ 5++ X+/X++ R+ tv+ b+ DI+ D+ G e++ h/h+ !r!|  Space for hire |
-----------------------------------------------------------------------------


128 bits? I don't think so...

Post by lars.wir.. » Wed, 01 Sep 1999 04:00:00



> > There's very little value in greater than 64 bit addressing unless you
> > actually plan to address more than 4 billion GB of data.

> Not necessarily.

> Even if you don't ever come close to using 64 bits of address for
> physical memory, there is value in mapping smaller amounts of RAM to a
> 64- or 128-bit virtual address space.

> Large sparse matrix algorithms become far easier to implement if you can
> allocate terabytes of virtual memory and only map physical RAM to the
> parts of the matrix that are non-zero.

Algorithms for sparse matrices should make use of the sparseness, and
in doing so the need for linear addressing is not really that big a
deal. If you had the need of exceeding the 64-bit limit (18 exabytes),
it would take nearly a month just to traverse the memory once, under
the assumption that memory accesses are made at 128-bit width and
500 GHz.
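
The arithmetic behind that figure: $2^{64}\,\mathrm{B} \,/\, (16\,\mathrm{B} \times 5 \times 10^{11}\,\mathrm{s}^{-1}) \approx 2.3 \times 10^{6}\,\mathrm{s} \approx 27$ days.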

/Lars


128 bits? I don't think so...

Post by wa.. » Wed, 01 Sep 1999 04:00:00



>There are some fundamental problems with Moore's law.

It has been superseded.

Both bandwidth and storage density are increasing at a faster rate
than Moore's Law specifies for transistor density.

If you enter college as a freshman this fall, then by the time
you graduate you'll probably be able to buy a 4-terabyte non-volatile
memory module - with access speed comparable to the RAM on a 486
motherboard - for $50.  A new medium-high-end computer will have at
least two slots for these memory modules, with at least one slot being
external and hot-swappable.  There will be no functional distinction
between memory and mass storage, because these non-volatile memory
modules will be used as fixed media, removable media, and memory (with
a gigabyte or so of cache).

The pre-prototype development version of these memory modules is in
the lab right now.

Realize that a machine with 32-bit addresses cannot even address a
thousandth of one of these memory modules (2^32 bytes is only 4 GB) -
at least, not without a lot of external help.


128 bits? I don't think so...

Post by David » Thu, 02 Sep 1999 04:00:00




>>> There's very little value in greater than 64 bit addressing unless
>>> you actually plan to address more than 4 billion GB of data.

>> Not necessarily.

>> Even if you don't ever come close to using 64 bits of address for
>> physical memory, there is value in mapping smaller amounts of RAM to
>> a 64- or 128-bit virtual address space.

>> Large sparse matrix algorithms become far easier to implement if you
>> can allocate terabytes of virtual memory and only map physical RAM to
>> the parts of the matrix that are non-zero.

> Algorithms for sparse matrices should make use of the sparseness, and
> in doing so the need for linear addressing is not really that big a
> deal. If you had the need of exceeding the 64-bit limit (18 exabytes),
> it would take nearly a month just to traverse the memory once, under
> the assumption that memory accesses are made at 128-bit width and
> 500 GHz.

Depends on your application.

You may never need to traverse your sparse matrix completely.  If
you only need get/set operations, and perhaps local traversals over
regions of the matrix, that concern becomes unimportant.

On the other hand, if you can eliminate all the sparse-matrix-handling
code and let the memory management hardware do most of the work for you
(perhaps by using a 4K memory page for each populated matrix cell), it
may be beneficial for the application's overall performance.

Or maybe not.  It depends on the application.  One algorithm never fits
all.

-- David


128 bits? I don't think so...

Post by S » Sat, 04 Sep 1999 04:00:00


This is correct!  PEOPLE WAKE UP!





>"bits", "megahertz", "megs", "gigs", "polygons", etc... are marketing
>buzzwords simply thrown around by those that don't understand them.
>Those of us that do, know that they give absolutely no indication of
>anything important or measurable to the end-user.  Saying that a
>console has a 64-bit processor, is the equivalent of saying that your
>Hundai has a 3.045" diameter piston head, which is 0.0025" larger than
>a Daihatsu.  What does it mean to anyone, and who cares?


128 bits? I don't think so...

Post by Drew Northup » Mon, 06 Sep 1999 04:00:00


> >compared to a 128-bit integer multiply).  NOTE: this only applies to integer
> >math and not the FPU, whereas Intel chips give a performance boost by
> >dropping back into 16-bit mode.

Last I knew, the x87 float data type was 80 bits wide (a 64-bit
significand and a 15-bit exponent, plus a sign bit).  It is just a mite
difficult to process that in 16-bit mode.
My source is the latest Intel architecture manual--what's yours?
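
(For reference, the layout adds up as $80 = 1_{\mathrm{sign}} + 15_{\mathrm{exponent}} + 64_{\mathrm{significand}}$ bits.)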
Drew Northup, N1XIM

128 bits? I don't think so...

Post by John Wilco » Tue, 21 Sep 1999 04:00:00




> On Sat, 28 Aug 1999 20:56:19 GMT, Michael V. Ferranti wrote:

> >Here I was, minding my own business, and wouldn't you know it?

> >>I think this is the last move - 64 bits is big enough for everyone.
> >><chuckle>

> >Didn't they say that about having 640KB of RAM? <grins>  The next Sony
> >Playstation's going to be a 128bit system, so I hear.

> Does this merely mean that they're going to have some 128 bit data
> registers?

> Or does this mean that they are going to have a linear 128 bit address
> space?

> The former would be quite useful for doing graphics, as it lets you
> stick lots of data into a register and essentially hit several pixels
> in parallel.  And while it may make good press, it doesn't mean that
> they actually have 128 bit addressing.

> There's very little value in greater than 64 bit addressing unless you
> actually plan to address more than 4 billion GB of data.  And that is a
> *ferociously* large number; the only folks that could be touching that
> kind of bulk would be those doing heavy duty high energy physics.

Or anyone that will run Windows 2010 with Office 2010 & Indernet Exploder
15. ;)


> And even with applications that require those sorts of quantities of
> data, access times on Very, Very Very Large Disk Arrays are such that
> it is likely that segmented addressing wouldn't hurt performance overly
> much...
> --
> "It has every known bug fix to everything." -- KLH (out of context)