Nathaniel3W":slktrtj1 said:
I also thought the BGR image looked best. I thought the colors looked more vibrant, but that could just be a result of your camera (phone?) automatically adjusting white balance or something between photos.
What's the looping for? Does it have to happen sequentially? If you can run 1000 concurrent threads on your GPU, the process would take no time at all.
I am always amazed at the depth of technical minutiae you dive into on your projects. I would never worry about the pixel arrangement of a display; I have crafting recipes to make. Crafting recipes, and also a variety of reagents (not so few that everything uses the same ingredients, and not so many that an ingredient is only used for one or two recipes), recipe and ingredient drop rates, balance versus store-bought equipment, variety and cost of crafting tools, and probably other issues I haven't thought of yet.
I also thought the BGR image looked brighter, but on inspection I believe it's because BGR is filling the "gaps" between sub-pixels, so there is less black and more luminosity detail encoded (which was the plan). I took the photos in quick succession with as little camera movement as possible.
The looping goes across all pixels of the image, then compares every colour in the palette against every other colour in the palette - the palette being 2^15 entries. It was all done with parallel for-loops, so it was taking advantage of all 12 logical CPU threads. It would be an excellent candidate for GPU compute acceleration, but this was a quick test of an idea.
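Roughly, the search looks like this - a simplified sketch, not the exact code, assuming the palette-vs-palette comparison is hunting for a dither pair whose 50/50 mix best matches each pixel (all names are illustrative):

[code]
// Simplified sketch of the palette search (not the exact code). The full
// search is O(pixels * 2^15 * 2^15), which is why all 12 threads were busy
// and why a GPU looks attractive.
#include <cstdint>
#include <limits>
#include <utility>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Expand a 15-bit palette entry (red in the low 5 bits, as on the GBA)
// to 8 bits per channel.
static RGB expand555(uint16_t c) {
    return { uint8_t(((c      ) & 31) * 255 / 31),
             uint8_t(((c >>  5) & 31) * 255 / 31),
             uint8_t(((c >> 10) & 31) * 255 / 31) };
}

// Squared distance between two colours in RGB space.
static int dist2(RGB a, RGB b) {
    int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

// For one pixel, find the pair of palette colours whose average is nearest.
static std::pair<uint16_t, uint16_t> bestDitherPair(RGB target) {
    int best = std::numeric_limits<int>::max();
    std::pair<uint16_t, uint16_t> bestPair{0, 0};
    for (uint32_t a = 0; a < (1u << 15); ++a) {
        RGB ca = expand555(uint16_t(a));
        for (uint32_t b = a; b < (1u << 15); ++b) {
            RGB cb = expand555(uint16_t(b));
            RGB mix{ uint8_t((ca.r + cb.r) / 2),
                     uint8_t((ca.g + cb.g) / 2),
                     uint8_t((ca.b + cb.b) / 2) };
            int d = dist2(mix, target);
            if (d < best) { best = d; bestPair = {uint16_t(a), uint16_t(b)}; }
        }
    }
    return bestPair;
}

// Every pixel is independent, so a parallel for (or one GPU thread per
// pixel) maps straight onto this.
void quantiseImage(const std::vector<RGB>& in,
                   std::vector<std::pair<uint16_t, uint16_t>>& out) {
    #pragma omp parallel for
    for (long i = 0; i < long(in.size()); ++i)
        out[i] = bestDitherPair(in[i]);
}
[/code]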
It was a bank holiday Monday in the UK, so essentially a three-day weekend. I usually spend weekends researching interesting stuff, and this one came out of some interesting technology being used in my current project. Some video formats (YUV with chroma subsampling) encode more luminosity information than colour, just like JPEG image compression, and that got me thinking about sub-pixels encoding luminosity. Another requirement for this project is GPU-accelerated font rendering, which got me reading about font rendering in general.
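To put a number on the luminosity bias, here's a back-of-envelope illustration using the common 4:2:0 subsampling: each 2x2 block of pixels gets four luma samples but only one of each chroma sample, so a frame halves in size while keeping full-resolution luminosity.

[code]
// Back-of-envelope: 8-bit RGB vs YUV 4:2:0 for a 1080p frame. Luma (Y) keeps
// full resolution; each chroma plane (U, V) is quarter resolution.
#include <cstdio>

int main() {
    long w = 1920, h = 1080;
    long rgbBytes = w * h * 3;               // 3 bytes per pixel
    long yBytes   = w * h;                   // full-res luminosity
    long uvBytes  = 2 * (w / 2) * (h / 2);   // two quarter-res chroma planes
    std::printf("RGB: %ld bytes, YUV 4:2:0: %ld bytes (%.0f%% of RGB)\n",
                rgbBytes, yBytes + uvBytes,
                100.0 * double(yBytes + uvBytes) / double(rgbBytes));
}
[/code]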
I think everyone who has ever taken a screenshot on Windows and then zoomed in on the text has encountered ClearType's colour fringing, so I looked at this in the same way that YUV/JPEG works: trying to cram more into luminosity than colour. The GBA's low resolution and BGR layout made it a good candidate to experiment with. My first thought was "why does no-one use sub-pixels for image rendering?" - and indeed, images that need to be scaled down onto a display (a 64x64 icon displayed at 24x24) can take advantage of this. They probably don't due to a desire to retain colour "accuracy", even though accuracy is already lost in the down-scale and humans aren't as sensitive to colour as they are to luminosity.
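Mechanically, "sub-pixel aware" down-scaling just means each output channel samples the source at the horizontal position of its physical stripe. A minimal sketch - my own illustration for an RGB stripe panel, not the code behind the images below, and using crude nearest-neighbour sampling where a real down-scaler would filter:

[code]
// Sub-pixel-aware downscale sketch for an RGB stripe panel: each channel of
// an output pixel samples the source a third of a pixel apart, so the three
// physical sub-pixels carry three distinct luminosity samples instead of one.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Image {
    int w, h;
    std::vector<uint8_t> px; // interleaved RGB, 3 bytes per pixel

    // Nearest-neighbour sample of one channel at normalised coords (u, v).
    uint8_t sample(float u, float v, int channel) const {
        int x = std::clamp(int(u * w), 0, w - 1);
        int y = std::clamp(int(v * h), 0, h - 1);
        return px[(size_t(y) * w + x) * 3 + channel];
    }
};

// Downscale src to dstW x dstH. For a BGR panel, swap the uR and uB offsets.
Image subpixelDownscale(const Image& src, int dstW, int dstH) {
    Image dst{dstW, dstH, std::vector<uint8_t>(size_t(dstW) * dstH * 3)};
    for (int y = 0; y < dstH; ++y) {
        float v = (y + 0.5f) / dstH;
        for (int x = 0; x < dstW; ++x) {
            float uR = (x + 1.0f / 6.0f) / dstW; // left third  -> red stripe
            float uG = (x + 0.5f)        / dstW; // centre      -> green stripe
            float uB = (x + 5.0f / 6.0f) / dstW; // right third -> blue stripe
            uint8_t* p = &dst.px[(size_t(y) * dstW + x) * 3];
            p[0] = src.sample(uR, v, 0);
            p[1] = src.sample(uG, v, 1);
            p[2] = src.sample(uB, v, 2);
        }
    }
    return dst;
}
[/code]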
The dithering exploration came from the fact that the GBA has a 15-bit display rather than a 24-bit one. I think that aspect was a failure.
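For context, the quantisation the dithering was trying to hide is just a bit-drop per channel (illustrative helper, not from the test code):

[code]
// Packing 8-bit channels into the GBA's 15-bit format (red in the low 5
// bits) throws away the low 3 bits of each channel, so 8 distinct 24-bit
// values collapse onto each 15-bit one - the error dithering tries to spread.
#include <cstdint>

uint16_t packRGB555(uint8_t r, uint8_t g, uint8_t b) {
    return uint16_t((r >> 3) | ((g >> 3) << 5) | ((b >> 3) << 10));
}
[/code]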
None of this is usable in my current project, as it's video-encoding related - you cannot guarantee the pixel arrangement of everyone watching a video (it could be a PenTile display on a smartphone).
Here's another sub-pixel demonstration with my main-boy the WhatsApp bird:
Left is standard, right is sub-pixel.
Notice how, on an RGB monitor at 100% scale, the "eye" of the bird looks more rounded in the sub-pixel version (move your face closer to the screen), but generally the two images look almost identical. The zoomed image shows the RGB-awareness difference.
EDIT: Here's a version with the sub-pixels set to be 100% pronounced (Windows XP era font smoothing was like this).
On an RGB display the eye should look roundest in this version - however, the colour shifting is now clearly visible.
On a non-RGB display this bird would look like ass with crap colours. Here is what it looks like when made aware of a BGR display:
This should look terrible on an RGB display, simulating how crap the other images will look on a BGR display.
ANYWAY time to get back to actual work.