32 vs 24 bit PixelFormat, Anyone have a clue?

32 vs 24 bit PixelFormat, Anyone have a clue?

Post by Kelce Wilson » Fri, 25 Sep 1998 04:00:00




> It's me again.

> > If I specify the new Bitmap's format as pf32bit, the colors
> > are changed (from pf24bit).

> Yes. It's something I ran into yesterday afternoon. Testing proved that
> the bit orders are:
> pf24bit -> BB GG RR
> pf32bit -> 00 RR GG BB

This makes perfect sense in reconciling my experimentation with the
Bitmap format spec I have.
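
A quick way to see this (a minimal sketch, assuming the Delphi VCL and
its TBitmap.ScanLine property) is to dump the raw bytes of the first
pixel.  On a little-endian PC both formats start BB GG RR in memory;
the pf32bit pixel just carries a fourth byte, so read as a 32-bit
integer it comes out as $00RRGGBB:

uses Graphics, Dialogs, SysUtils;

procedure DumpFirstPixel(Bmp: TBitmap);
var
  P: PByte;
  BytesPerPixel, I: Integer;
  S: string;
begin
  case Bmp.PixelFormat of
    pf24bit: BytesPerPixel := 3;
    pf32bit: BytesPerPixel := 4;
  else
    Exit;  { only the two formats under discussion }
  end;
  P := Bmp.ScanLine[0];
  S := '';
  for I := 1 to BytesPerPixel do
  begin
    S := S + IntToHex(P^, 2) + ' ';  { bytes in memory order }
    Inc(P);
  end;
  ShowMessage(S);  { pf24bit -> "BB GG RR", pf32bit -> "BB GG RR 00" }
end;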

From your other posts, it seems like you could use a copy of it.
E-Mail me if you are interested and I'll send it to you.

However, I do have a question.  Perhaps I am a moron, but there
seems to be a fundamental disconnect between the name "32 bit"
and my arithmetic.

Note that the pf32bit format uses only 24 bits, JUST LIKE pf24bit.
The extra 8 bits are unused.  Shouldn't the name "32 bit color" be
changed to:
"24 bit color, but with a different byte order and 2 wasted bytes"?

I mean if you use 32 bits to carry only 24 bits of information, it
seems dishonest to call it "32 bit color" when it is no better than
"24 bit color".

What am I missing here?

You know, the same question goes for "16 bit color".  It only uses
15 bits (5 bits each for red, green, and blue).  The most significant
bit is unused.  Shouldn't it be renamed:
"15 bit color with 1 wasted bit"?

Suppose I were to build a race car with a V-6 engine.  I could put
a V-6 sticker on the fender and be honest.

But, then my competitor builds a car with an engine that has 8
cylinders.  He only puts pistons, spark plugs, and fuel injectors
in 6 and leaves the last 2 as just empty weight in the engine
compartment.  Well, he goes and slaps a V-8 sticker on the side
of his car.  Everyone is going to go buy his car, because we all
know more cylinders = more power.  Really, though, his engine
only has the same power as mine, but his car is carrying around
a bunch of dead weight.  My car is actually better than his!

Shouldn't the same rules apply to naming color format?
Or maybe there is something going on that I don't understand!

At least the 4 and 8 bit color types are honest (if a bit confusing,
since they go through a palette).  24 bit and monochrome are the only
color schemes that make sense.

-Kelce Wilson
The views in this post are entirely my own and do not reflect the
position of the U.S. Air Force.


32 vs 24 bit PixelFormat, Anyone have a clue?

Post by Chris Hill » Fri, 25 Sep 1998 04:00:00


On Thu, 24 Sep 1998 11:37:55 -0400, Kelce Wilson wrote:


>However, I do have a question.  Perhaps I am a moron, but there
>seems to be a fundamental disconnect between the name "32 bit"
>and my arithmetic.

There ARE 32 bits per pixel.  Some of them aren't used for color data,
but 32 bits more naturally align in memory than 24.
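
For instance, Windows DIB scan lines get padded up to a DWORD (4-byte)
boundary.  A sketch of that rule:

function RowStride(WidthPixels, BitsPerPixel: Integer): Integer;
begin
  { round the row up to a whole number of 4-byte DWORDs }
  Result := ((WidthPixels * BitsPerPixel + 31) div 32) * 4;
end;

At 24 bpp a 101-pixel row is 303 bytes and gets padded to 304; at 32
bpp the same row is exactly 404 bytes with no padding at all, and a
pixel's offset is a simple shift (x shl 2) instead of x * 3 plus
padding bookkeeping.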

>Note that the pf32bit format uses only 24 bits, JUST LIKE pf24bit.
>The extra 8 bits are unused.  Shouldn't the name "32 bit color" be
>changed to:
>"24 bit color, but with a different byte order and 2 wasted bytes"?

No, there is only 1 extra byte! :)  And that name is too tedious to
type/say.

>I mean if you use 32 bits to carry only 24 bits of information, it
>seems dishonest to call it "32 bit color" when it is no better than
>"24 bit color".

I agree, but for the purposes of programming it works well enough.
32-bits-per-pixel is a more descriptive term that is used in the Win32
SDK docs.

>You know, the same question goes for "16 bit color".  It only uses
>15 bits (5 bits each for red, green, and blue).  The most significant
>bit is unused.  Shouldn't it be renamed:
>"15 bit color with 1 wasted bit"?

Actually, some 16 bit schemes use the extra bit for one of the colors
(typically green, giving a 5-6-5 layout).  The human eye is more
sensitive to green than to red or blue, so the extra bit isn't always
wasted.
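
A sketch of unpacking such a pixel, assuming that 5-6-5 layout:

procedure Split565(Pixel: Word; var R, G, B: Byte);
begin
  R := (Pixel shr 11) and $1F;  { 5 bits of red }
  G := (Pixel shr 5) and $3F;   { 6 bits of green - the "extra" bit }
  B := Pixel and $1F;           { 5 bits of blue }
end;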

Chris Hill



32 vs 24 bit PixelFormat, Anyone have a clue?

Post by Jeff Cottingham » Fri, 25 Sep 1998 04:00:00


In some cases that "extra" byte is used for alpha channel info.
-goldilocks

> On Thu, 24 Sep 1998 11:37:55 -0400, Kelce Wilson wrote:

> >However, I do have a question.  Perhaps I am a moron, but there
> >seems to be a fundamental disconnect between the name "32 bit"
> >and my arithmetic.

> There ARE 32 bits per pixel.  Some of them aren't used for color data,
> but 32 bits more naturally align in memory than 24.

> >Note that the pf32bit format uses only 24 bits, JUST LIKE pf24bit.
> >The extra 8 bits are unused.  Shouldn't the name "32 bit color" be
> >changed to:
> >"24 bit color, but with a different byte order and 2 wasted bytes"?

> No, there is only 1 extra byte! :)  And that name is too tedious to
> type/say.

> >I mean if you use 32 bits to carry only 24 bits of information, it
> >seems dishonest to call it "32 bit color" when it is no better than
> >"24 bit color".

> I agree, but for the purposes of programming it works well enough.
> 32-bits-per-pixel is a more descriptive term that is used in the Win32
> SDK docs.

> >You know, the same question goes for "16 bit color".  It only uses
> >15 bits (5 bits each for red, green, and blue).  The most significant
> >bit is unused.  Shouldn't it be renamed:
> >"15 bit color with 1 wasted bit"?

> Actually, some 16 bit schemes use the extra bit for one of the colors
> (typically green, giving a 5-6-5 layout).  The human eye is more
> sensitive to green than to red or blue, so the extra bit isn't always
> wasted.

> Chris Hill



32 vs 24 bit PixelFormat, Anyone have a clue?

Post by Damien Saussac » Sat, 26 Sep 1998 04:00:00


> From your other posts, it seems like you could use a copy of it.
> E-Mail me if you are interested and I'll send it to you.


> Note that the pf32bit format uses only 24 bits, JUST LIKE pf24bit.
> The extra 8 bits are unused.  Shouldn't the name "32 bit color" be
> changed to:
> "24 bit color, but with a different byte order and 2 wasted bytes"?

That's not right.  The extra byte (one, not two) is sometimes used as
a "transparency" value, or as some other kind of value for computing
purposes: sometimes you need more information than just the visible
color.  For instance, in picture analysis you may need two blues that
look the same but can still be told apart, and the alpha byte fits
that purpose perfectly.
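
A sketch of that idea (the field order assumes the BB GG RR AA memory
layout discussed earlier in the thread):

type
  TPixel32 = packed record
    B, G, R, A: Byte;
  end;
  PPixel32 = ^TPixel32;

{ Two pixels can render as exactly the same blue on screen while
  carrying different values in the byte the display ignores. }
procedure TagPixel(P: PPixel32; Tag: Byte);
begin
  P^.A := Tag;
end;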

> You know, the same question goes for "16 bit color".  It only uses
> 15 bits (5 bits each for red, green, and blue).  The most significant
> bit is unused.  Shouldn't it be renamed:
> "15 bit color with 1 wasted bit"?
> At least the 4 and 8 bit color types are honest (if a bit confusing,
> since they go through a palette).  24 bit and monochrome are the only
> color schemes that make sense.

Color formats differ between types of computers: Silicon Graphics
machines, for example, don't always use a palette for 8 bit color,
and the bit ordering isn't the same either.  So what's true for the
PC isn't true for other computers.

--
----------------------------------------
Damien Saussac
Software Engineer
Institut National de l'Audiovisuel

ICQ# 148 00 862


32 vs 24 bit PixelFormat, Anyone have a clue?

Post by Caius Marius » Sun, 27 Sep 1998 04:00:00



>However, I do have a question.  Perhaps I am a moron, but there
>seems to be a fundamental disconnect between the name "32 bit"
>and my arithmetic.

Too easy.

>Note that the pf32bit format uses only 24 bits, JUST LIKE pf24bit.
>The extra 8 bits are unused.  Shouldn't the name "32 bit color" be
>changed to:
>"24 bit color, but with a different byte order and 2 wasted bytes"?

>I mean if you use 32 bits to carry only 24 bits of information, it
>seems dishonest to call it "32 bit color" when it is no better than
>"24 bit color".

>What am I missing here?

The format allows you to specify the number of bits per component and
their bit positions within the 4-byte value.  The default placement
leaves one unused byte, but that does not have to be the case.
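
In the Win32 world that is the BI_BITFIELDS compression type: the
three DWORDs following the header give the red, green and blue masks,
so you choose where each component sits.  A sketch (the record type
TBitFieldsInfo is my own name; the header fields and BI_BITFIELDS come
from the Windows unit):

uses Windows;

type
  TBitFieldsInfo = packed record
    Header: TBitmapInfoHeader;
    Masks: array[0..2] of DWORD;  { red, green, blue masks }
  end;

procedure InitBitFields32(var Info: TBitFieldsInfo; W, H: Integer);
begin
  FillChar(Info, SizeOf(Info), 0);
  Info.Header.biSize        := SizeOf(TBitmapInfoHeader);
  Info.Header.biWidth       := W;
  Info.Header.biHeight      := H;
  Info.Header.biPlanes      := 1;
  Info.Header.biBitCount    := 32;
  Info.Header.biCompression := BI_BITFIELDS;
  Info.Masks[0] := $00FF0000;  { red in bits 16..23 }
  Info.Masks[1] := $0000FF00;  { green in bits 8..15 }
  Info.Masks[2] := $000000FF;  { blue in bits 0..7 }
end;

Something like PBitmapInfo(@Info) can then be handed to
CreateDIBSection.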

John - N8086N
------------------------------------------------
Are you interested in a professional society or
guild for programmers?

See www.colosseumbuilders.com/american.htm

EMail Address:

|c.o.l.o.s.s.e.u.m.b.u.i.l.d.e.r.s.|
|c.o.m.|


Question re: 24-bit vs. 32-bit color

        Assuming that the "X" in X-bit color refers to the power of two that
gives you the number of colors in the image, then 8-bit gives you 256 colors,
15-bit gives you around 32K colors, 16-bit gives you around 64K colors, and
24-bit gives you about 16M colors.  
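
A quick sanity check of that arithmetic (a sketch, as a console
program):

program ColorCounts;
{$APPTYPE CONSOLE}
uses SysUtils;
const
  BitCounts: array[0..3] of Integer = (8, 15, 16, 24);
var
  I: Integer;
begin
  { N bits can index 2^N distinct colors }
  for I := Low(BitCounts) to High(BitCounts) do
    WriteLn(Format('%2d-bit -> %d colors',
      [BitCounts[I], 1 shl BitCounts[I]]));
end.

which prints 256, 32768, 65536 and 16777216.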

        Well, I remember hearing from what sounded like a reputable source
that the human eyeball, with NO degree of color blindness whatsoever (which
is very rare), is capable of distinguishing about 7M colors.  Since 2^23 is
about 8M, that would make anything above 23-bit color somewhat pointless.
Can humans, in the biological sense, really make use of 24-bit (and higher)
images?

        In a second case, assuming that no one out there has a monitor with
a higher resolution than 2048x2048 (the highest I've heard of), then only
22 bits (2048 x 2048 = 2^22 pixels) would be needed to ensure that each
pixel on the screen could have a different color.  Will more bits really
buy much of anything?

        I know that in each of these arguments I'm missing some important
fact, but if anyone out there would be willing to point it out to me, I'd be
delighted to listen.  Why isn't something like 23-bit the upper-limit
standard?
