If we first ask ourselves how many colors it takes to display a picture with all the colors the human eye can distinguish,
the answer is: about 16 million. (This isn't strictly true, though; the actual number is probably closer to 100 million.)
This value (16 million) is something a group of people decided would be enough to display pictures with photographic
quality. There were also a couple of practical reasons why they arrived at that conclusion. (The exact number is 16,777,216.)
16.8 million colors is also called 24-bit graphics, or even 'true color' or 'real color'.
It should be noted that in recent years some have been using even more colors, e.g. 48-bit graphics. (I'll explain the
deal with 32-bit graphics a bit later.)
A color on a computer is a combination of the three light components
Red, Green and Blue: RGB. Different colors are obtained by varying the amount of each of the three components.
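To make the idea concrete, here is a small Python sketch listing a few 24-bit RGB triples and the colors they produce. The specific triples are illustrative examples, not anything defined by a standard:

```python
# A few 24-bit RGB triples (0-255 per channel) and the colors they produce.
# Any pixel format that stores R, G and B separately works the same way.
colors = {
    (255, 0, 0): "pure red",
    (0, 255, 0): "pure green",
    (0, 0, 255): "pure blue",
    (255, 255, 0): "yellow (red + green light)",
    (0, 0, 0): "black (no light at all)",
    (255, 255, 255): "white (all channels at maximum)",
}

for (r, g, b), name in colors.items():
    print(f"R={r:3d} G={g:3d} B={b:3d} -> {name}")
```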
When it comes to 24-bit graphics (which is the highest number a computer normally can display), it works like this:
every component (R, G, B) has 256 nuances; these components are also known as 'channels'.
Because these can be combined with each other, we can calculate the total amount of colors by multiplying 256 (R)
by 256 (G) by 256 (B); the answer is, as we already know, 16,777,216 (16.8 million colors). You have probably
noticed that the expression '24-bit' is often used to describe 'true color' (the 16.8 million color mode).
When describing colors with bits, it's often referred to as color depth. The easiest way to calculate the number
of colors a certain number of bits allows is to punch it in on your calculator. A color depth of 24 bits means
'2 raised to the 24th power'. 8 bits allows '2 raised to the 8th power' (= 256).
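If you'd rather let the computer be the calculator, the same arithmetic looks like this in Python:

```python
# Number of displayable colors for a given color depth:
# 2 raised to the number of bits.
def colors_for_depth(bits: int) -> int:
    return 2 ** bits

print(colors_for_depth(8))    # 256 - nuances per 8-bit channel
print(colors_for_depth(24))   # 16777216 - 'true color'

# 24-bit color is equivalently 256 levels per channel, combined:
print(256 * 256 * 256)        # 16777216 again
```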
The earlier mentioned 48-bit color mode uses 16 bits for every channel (RGB). This means
that each one of the channels has 65,536 nuances (or 'steps' if you like). For instance, this allows for 65,536 greyscale levels
compared to the 256 allowed in 24-bit. (Grey nuances are obtained by using the exact same value in every channel;
e.g. in 24-bit RGB, R=0, G=0, B=0 is black, R=255, G=255, B=255 is white and R=128, G=128, B=128 is mid-grey.) Now,
16 bits per channel means a total of 65,536 (R) * 65,536 (G) * 65,536 (B) = about 2.8 * 10^14 colors. That's over 280 trillion colors!
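The 48-bit figures above, and the grey-means-equal-channels rule, can be checked with a few lines of Python:

```python
# 48-bit color: 16 bits per channel -> 65,536 levels in each of R, G and B.
levels_16bit = 2 ** 16
print(levels_16bit)            # 65536 possible grey levels

total_48bit = levels_16bit ** 3
print(total_48bit)             # 281474976710656 (~2.8 * 10^14) colors

# Grey means the same value in every channel, e.g. in 24-bit RGB:
black, mid_grey, white = (0, 0, 0), (128, 128, 128), (255, 255, 255)
for r, g, b in (black, mid_grey, white):
    assert r == g == b         # equal channels -> a neutral grey
```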
Now, how about 32-bit graphics? That's '2 raised to the 32nd power', which gives us 4,294,967,296 (~4 billion) colors, right? Wrong!
In the case of 32-bit graphics, we are still talking 24-bit color. The additional 8 bits come from a fourth channel. This channel contains
the transparency information and is called the 'alpha channel'. In this case it's not RGB, it's RGBA! If we add a 16-bit alpha channel to 48-bit
color graphics we actually get 64-bit graphics. This will probably be the next evolution in computer graphics.
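A sketch of how the four 8-bit channels share one 32-bit pixel. The RGBA byte order used here is just one convention I've picked for the example; real file formats and APIs also use ARGB, BGRA and others:

```python
# Packing one 32-bit RGBA pixel: 8 bits each for red, green, blue and alpha.
def pack_rgba(r: int, g: int, b: int, a: int) -> int:
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(pixel: int):
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

# Half-transparent pure red:
pixel = pack_rgba(255, 0, 0, 128)
print(hex(pixel))           # 0xff000080
print(unpack_rgba(pixel))   # (255, 0, 0, 128)
```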
It may seem unnecessary to have 16 bits of color information per channel, and that's true: the human eye cannot see that many colors. However, if we are
going to increase from 8 bits, the next logical step actually is 16 bits, since computers operate with bytes (1 byte = 8 bits) and 16 is the next multiple of 8.
Now, with the arrival of Microsoft's graphics API DirectX 9, manufacturers of graphics hardware
have moved to 64-bit and even 128-bit color. In these cases there are 16 and 32 bits per channel, respectively.
(16 bits per color channel means 65,536 nuances of a particular color, and 32 bits equals ~4 billion nuances.)
They now use floating-point values instead of integers to calculate the color values. While it is still true that
the human eye can barely distinguish more than 10 bits per color channel, it makes sense to do
all interim calculations within the graphics hardware with floating-point precision. The result is
images of greater quality, especially when showing images with both very dark and very
bright areas (high dynamic range).
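A toy illustration of why floating-point intermediates matter (this is my own simplified example, not how a real graphics pipeline works): darken some channel values to a quarter and then brighten them by four again. With integer math the dark steps collapse to zero; with floats the detail survives.

```python
# Round-tripping channel values through a darken-then-brighten operation.
values = [1, 2, 3, 200, 201]   # a few 8-bit channel values

# Integer intermediates: the division truncates, destroying dark detail.
int_roundtrip = [min(255, (v // 4) * 4) for v in values]

# Floating-point intermediates: the original values survive.
float_roundtrip = [round((v / 4.0) * 4.0) for v in values]

print(int_roundtrip)     # [0, 0, 0, 200, 200] - dark steps flattened
print(float_roundtrip)   # [1, 2, 3, 200, 201] - detail preserved
```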