Some help understanding colour depth

Hi! So, the lecture made me look into colour depth for the first time and really try to figure out what’s what. I’ve read a few articles and a general understanding started to come into focus, but there are a few things left that I would really appreciate some help with.

First off, it may be my misunderstanding here, but the lecture made it look like at first we were talking about texture resolution (as in X pixels by X pixels), not texture colour depth…? I mean, pixels were mentioned, followed by examples of textures often being “1 by 1, or 2 by 2, or 4 by 4 …or 256 by 256 (pixels…?)”. And the latter was then called 8 bit. However, from what I understand, colour depth is all about the maximum number of colours possible for an image of a particular depth, while the image can potentially be of any resolution. I’m sorry if I’m not making much sense; this one part of the lecture has me very confused. I would be glad if someone could explain just what it is that I’m not getting here. Maybe it was never about the pixel resolution of the texture and I just got it wrong? If that’s the case it would all start to make sense, because other than this I’ve pretty much figured out how colour depth works.

Another thing that’s confusing is that 8 bit, from what I understood, is quite a lot of colours. I mean, if that’s 256 per colour channel, that gives 256 (R) × 256 (G) × 256 (B) = around 16 million colours. I found out that all this time JPEG images have been 8 bit and they look fine (except maybe slight banding of darker colours where an otherwise smooth gradient would be). What makes me wonder here is that when you google “8 bit”, or for example “8 bit games”, you get screenshots with an obviously very limited colour palette (like Super Mario or Donkey Kong etc.). This feels contradictory to the 16 million colours I calculated before and I just can’t stop trying to crack this puzzle. If there’s someone out there who understands this better than I do, please be so kind as to help me out! Thank you! :grinning:


Not sure exactly what I can explain to you.

The image dimensions for a screen are measured in pixels.
One pixel has a colour value and sometimes an alpha (transparency) value, or even more info,
depending on the image format you’ve chosen: GIF, JPG, PNG, TIFF, RAW …

If you count the pixels in one inch, you get the density of the image, like 72 dots per inch (DPI), which was an old monitor (TV) standard.
For paper and printers we had things like 300 PPI (pixels per inch). Modern PCs, TVs, and mobile screens have a wide range of DPI/PPI values. The higher the number, the sharper the image, but the image byte size grows with it.
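To give a feeling for that growth, here is a little sketch (hypothetical numbers, assuming uncompressed 24-bit RGB with no alpha):

```python
# Sketch: uncompressed image size grows with pixel dimensions
# (3 bytes per pixel for 24-bit RGB, no alpha, no compression)
def raw_size_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

print(raw_size_bytes(640, 480))     # 921600 bytes (~0.9 MB, old VGA)
print(raw_size_bytes(3840, 2160))   # 24883200 bytes (~24.9 MB, 4K)
```

That is why compression matters: a single raw 4K frame is already tens of megabytes.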
In the old days, memory (floppy disk, chip) was expensive, so engineers invented ways to crunch the data in an image file, making the file smaller. This led to the many image file formats: ‘.JPG’, ‘.PNG’, ‘.GIF’ …

One of the old options was the ‘.GIF’ standard, which used only 256 colour values or fewer (128, 64, …). Less colour info means smaller files. You can choose your own 256 colours, like 256 shades of green. If you need rainbow colours, then you’ll get a noisy GIF image (sometimes seen on the web), because only 256 rainbow colours are used out of the practically unlimited rainbow colour palette.
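The idea behind indexed colour can be sketched in a few lines of Python (a toy example with a made-up 4-entry palette, not a real GIF encoder; real GIFs allow up to 256 entries):

```python
# Toy sketch of GIF-style indexed colour: each pixel stores a 1-byte index
# into a palette of RGB entries, instead of 3 bytes of raw RGB.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]  # up to 256 entries

def nearest_index(rgb):
    """Index of the palette entry closest to rgb (squared distance)."""
    return min(range(len(palette)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(palette[i], rgb)))

pixels = [(250, 10, 5), (3, 200, 40), (10, 10, 240)]   # some true-colour pixels
indexed = [nearest_index(p) for p in pixels]
print(indexed)   # [1, 2, 3]: each pixel is now a single palette index
# Storage: 1 byte per pixel + the palette, vs 3 bytes per pixel for raw RGB.
```

The “noisy GIF” effect comes from this snapping: every pixel gets pushed to its nearest palette entry, so smooth gradients turn into visible steps.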

Then there was ‘.JPG’, allowing millions of colours. More colours mean bigger file sizes and longer download times (internet over modems, in the old times). ‘.JPG’ solved this problem using an algorithm that throws away colour information! The human eye is bad at spotting small colour variations, so why keep them as data?
Fewer colours, smaller file size. The downside is that you lose colour data, so never store working images as JPG; use PNG.

‘.PNG’ was a newer standard, created to solve the ‘.JPG’ colour-loss problem. It doesn’t throw away colour info; it uses a lossless compression algorithm instead.

But technology evolves: higher-DPI screens, 4K and 8K TVs, and wider ‘dynamic’ colour ranges.
The screen pixel of the early days has evolved too:
from one bit (the green terminal monitor, on/off), to an 8-bit colour palette of 256 entries (GIF),
and on to millions of colours (8 bits for red, 8 bits for green, and 8 bits for blue: JPG).
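The steps in that evolution follow straight from the bit count, since each extra bit doubles the number of values a pixel can take:

```python
# Sketch: number of distinct colours at each bit depth mentioned above
def colours(bits):
    return 2 ** bits            # each extra bit doubles the count

print(colours(1))               # 2         (on/off terminal)
print(colours(8))               # 256       (GIF palette)
print(colours(24))              # 16777216  (8 bits each for R, G, B: ~16.7 million)
```

This also resolves the original question’s arithmetic: 24 bits per pixel is the same thing as “8 bit per channel”, and 2^24 = 256 × 256 × 256 ≈ 16.7 million.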

There are newer standards with 16 bits each for red, green, and blue, plus 8 bits for alpha and extra bits for light intensity, creating huge files with new compression algorithms.
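To see why such files get huge, here is a back-of-the-envelope sketch (assumed layout: 16 bits per RGB channel plus an 8-bit alpha, one uncompressed 4K frame):

```python
# Sketch: uncompressed size of one 4K frame at 16 bits per RGB channel + 8-bit alpha
width, height = 3840, 2160
bits_per_pixel = 16 * 3 + 8               # 56 bits = 7 bytes per pixel
frame_bytes = width * height * bits_per_pixel // 8
print(frame_bytes)                        # 58060800 bytes, roughly 55 MiB per frame
```

At video frame rates that would be gigabytes per minute, hence the need for new compression algorithms.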

And it all depends on what you want to do with it. Blender also has a special image construct which contains not only colour data but also Z-depth, ambient occlusion, etc. NASA uses 32-bit-per-channel RGB or more.

More info on ‘JPG’:
When creating a ‘JPG’ file you can say how much compression is needed. Most people keep the default value, but with a trial-and-error approach you can compress more and still keep good quality.
Though with gigabytes of space (or even unlimited cloud space), who bothers…?

‘JPG’ works best on busy pictures, like grass or crowds of people. But take a picture of the sky, with a lot of blue tints: raising the compression will give you noisy (blocky) results, because JPG throws away colour information.
If you watch a movie on the web and the internet slows down, you’ll see the same block effect, because the MPEG movie standard is based on the same compression ideas as JPG.

If you open a JPG file and save it as a JPG again, the compression kicks in again, lowering the quality of the image each time (it gets blockier).


Thanks for the careful explanation! I happen to work with images a lot, so most of it I’m familiar with, but I’ve barely ever had to deal with colour depth, especially the theoretical part. Now that I’ve read your reply, it feels like I probably got everything right; it might just be a false feeling that there’s more to it than I understood.

I guess I’m left a little confused as to why numbers like 1024×1024 px (powers of two) are used for texture dimensions after all. All the explanations make it clear what powers of two have to do with the maximum number of colours for a certain colour depth, but it’s still not clear what they have to do with the dimensions of an image and why the dimensions should be like that (albeit not strictly only that). But maybe it’s something that should just be taken for granted.

One confusing little bit remains, about the 8 bit games at the end of my original message — do you have any idea about that? Surely what we picture when we say “8 bit games” is not 16 million colours but a very limited colour palette, so that is still a mystery to me. If you have any idea, please do share! :slight_smile:


It’s all about efficient computer memory usage. It’s like disk space, which is divided into blocks of n bytes (a power of two).

If a disk block can store 4096 bytes and your file (a GIF) is only 500 bytes, then those 3596 (4096 − 500) bytes in that disk block are lost, not available for other files. It’s the same for computer memory. So a 1024×1024 texture fits 100% into the memory structure, and it helps performance-wise too. CPUs of 32 or 64 bits are used to manipulate the image, and especially GPUs, which are capable of manipulating huge memory blocks in a single cycle, still work in factors of two.
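The two ideas above, wasted space in fixed-size blocks and power-of-two alignment, can be sketched in a few lines of Python (a toy illustration, not real allocator code):

```python
# Sketch: internal fragmentation in fixed-size blocks, and a power-of-two check
BLOCK = 4096                                   # bytes per disk/memory block

def wasted(file_size, block=BLOCK):
    """Bytes lost in the last, partially filled block."""
    return (block - file_size % block) % block

print(wasted(500))             # 3596 bytes lost, as in the example above

def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0        # classic bit trick

print(is_power_of_two(1024))   # True:  1024x1024 textures align neatly
print(is_power_of_two(1000))   # False: 1000x1000 would leave awkward remainders
```

The bit trick works because a power of two has exactly one bit set, so subtracting one flips all the bits below it and the AND comes out zero.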

For my project I use 2048×2048, but for high-detail textures (fonts, signs) I go for 4096×4096, though then my laptop has difficulties. This factor of two is also implemented in the colour depths:
GIF (the old standard) uses 8 bits, 256 colours; full-colour images use 8 bits per RGB channel and sometimes 8 bits for alpha. And this is increased for high-definition images, with colour info the human eye is not even capable of seeing, but which can still be used to manipulate image information.


This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.