On Thu, 17 Jul 2014 17:39:39 +0530
Weldon Goree <weldon_AT_langurwallah.org> wrote:
Hey Weldon,
> I'm thinking back to my DSP class and trying to remember why time domain
> storage like this isn't in favor with most of the image formats. I guess
> TD storage limits you to lossless compression (you can lossy-ly compress
> frequency domain data and the brain is apparently more forgiving of the
> artifacts. Or so they say). And now that I think of it, this is roughly
> the image equivalent of PCM audio, isn't it? (Except that PCM has a more
> complex header, and the channels are stored sequentially rather than
> interleaved.)
Well, it's hard to compare audio with image encoding. With PCM we are
dealing with a method to store and represent sampled waveforms in an
efficient manner.
If you just regard the losslessness of both formats, you are right.
Apart from the losses introduced by quantization (PCM sample width vs.
8-bit channels), which are well below the threshold of perception, we
are in both cases simply dealing with lossless formats.
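To make the quantization point concrete, here is a rough C sketch (the
sample value 0.7 and the [0,1] range are arbitrary choices for
illustration, not anything the format defines): storing an intensity in
8 bits loses at most half a step, i.e. about 1/510.

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double x = 0.7; /* arbitrary intensity in [0,1] */
        /* quantize to an 8-bit channel value */
        unsigned char q = (unsigned char)lround(x * 255.0);

        printf("stored: %u, recovered: %f, error: %e\n",
               q, q / 255.0, fabs(x - q / 255.0));
        return 0;
    }

This prints an error of about 2e-3, which is exactly the half-step
bound and far below what anyone can see.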
> Now that I say that, I wonder if serial storage of the color channels
> would compress better, since you'll generally have low-frequency content
> for most of the channels most of the time.
The current interleaved storage is simpler to implement. To be fair, I
also don't have a clear understanding of how you would implement serial
channel storage without messing with the parser too much and making it
more intelligent.
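Just to sketch what serial (planar) storage would mean on the writer's
side (a rough illustration assuming plain 8-bit RGB; none of this is
part of the format): the deinterleaving itself is trivial, the parser
is where it gets hairy.

    #include <stddef.h>

    /* Split interleaved RGB pixels into three serial channel
     * planes. px holds w*h RGB triplets, planes must hold
     * 3*w*h bytes. */
    void
    deinterleave(const unsigned char *px, unsigned char *planes,
                 size_t w, size_t h)
    {
        size_t i, n = w * h;

        for (i = 0; i < n; i++) {
            planes[0 * n + i] = px[3 * i + 0]; /* R plane */
            planes[1 * n + i] = px[3 * i + 1]; /* G plane */
            planes[2 * n + i] = px[3 * i + 2]; /* B plane */
        }
    }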
> On that note, I'm now curious how this format would handle line
> noise/steganography. I'll report back if I find anything interesting on
> either question.
This is actually a very basic issue, and to put it briefly: the format
itself doesn't care. Given that you basically just store the pixels
line by line in a sequence, anything up to the point of having your
image file is not an issue.
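To illustrate why the format doesn't care, here is a rough sketch of
the classic LSB trick Weldon is hinting at (the one-byte-per-channel
buffer layout is an assumption for illustration): you can flip the
least significant bit of every byte and the file stays perfectly valid.

    #include <stddef.h>

    /* Hide nbits bits of msg in the least significant bits of
     * the pixel buffer px (nbytes bytes, one byte per channel
     * assumed, LSB-first within each message byte). */
    void
    embed_lsb(unsigned char *px, size_t nbytes,
              const unsigned char *msg, size_t nbits)
    {
        size_t i;

        for (i = 0; i < nbits && i < nbytes; i++)
            px[i] = (px[i] & ~1) | ((msg[i / 8] >> (i % 8)) & 1);
    }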
Now, when we move on to the second step, compressing the image, the
deal is simple: the more random the data, the less efficient the
compression. If you had an image piped directly from /dev/random, as
you probably know, compression wouldn't gain you anything.
However, if we are dealing with ordered information, for instance an
image with a clear black background and just a small square in the
middle:
##########
###....###
###....###
##########
The compression could reduce this to a dictionary and a run-length
sequence: the dictionary {#, .} and the encoded stream 13#,4.,6#,4.,13#
(reading the four rows as one sequence of 40 characters).
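A toy run-length encoder along those lines (just an illustration; the
format itself prescribes no compressor):

    #include <stdio.h>

    /* Print a run-length encoding of s, e.g. "13#,4.,..." */
    void
    rle(const char *s)
    {
        size_t n;

        while (*s) {
            /* count the run of the current character */
            for (n = 1; s[n] == *s; n++)
                ;
            printf("%zu%c,", n, *s);
            s += n;
        }
        putchar('\n');
    }

Feeding it the four rows above concatenated,
rle("##########" "###....###" "###....###" "##########"), prints
13#,4.,6#,4.,13#, (with a trailing separator), matching the stream
above.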
I was surprised that bz2 beat png in this regard (for sprites and the
like), so I'm looking forward to real tests with noisier images. I'm
sure, though, that finding patterns in such rather "random" data is
harder.
Let's see.
Cheers
FRIGN
--
FRIGN <dev_AT_frign.de>