On Fri, Sep 28, 2018 at 08:27:30PM +0200, Laslo Hunhold wrote:
> On Fri, 28 Sep 2018 13:38:03 +0000
> No, not even that. We only need normalization really if we want to do
> "perceptual" string comparisons, which is generally questionable for
> UNIX tools.
mmmh... for the reason I stated before, font files will probably be more and
more NFD-only (lighter font files, and significantly less work for font
designers). Font files will miss more and more precomposed (legacy) glyphs, so
full decomposition into base glyphs will be more and more required.
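
To illustrate with a small self-contained example (hard-coded byte sequences,
nothing font-specific): precomposed "é" is the single code point U+00E9, while
its NFD form is the base letter 'e' (U+0065) followed by the combining acute
accent (U+0301). An NFD-only font needs glyphs only for the base letter and
the combining mark:

#include <stdio.h>

int
main(void)
{
	/* precomposed form (NFC): U+00E9, one code point */
	const char nfc[] = "\xc3\xa9";
	/* decomposed form (NFD): U+0065 + U+0301, two code points */
	const char nfd[] = "e\xcc\x81";

	/* both render as "é", but the NFD form only needs the
	 * base glyph 'e' and the combining acute accent glyph */
	printf("NFC: %s (%zu bytes)\n", nfc, sizeof(nfc) - 1);
	printf("NFD: %s (%zu bytes)\n", nfd, sizeof(nfd) - 1);
	return 0;
}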
I have not gone into the details of the EGC boundary algorithm, but I'm really
curious how the Unicode Consortium's algorithm could know that a code point is
an EGC terminator without looking at the next code point.
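
From what I can tell from UAX #29, it cannot: the core rules are stated as
"break / do not break between code points X and Y", so deciding whether a
cluster ends after a code point always means peeking at the next one (or
hitting end of text). Here is a minimal sketch of that pairwise shape; the
property table is a tiny hard-coded subset purely for illustration (the real
data comes from GraphemeBreakProperty.txt, and some rules, like the emoji ZWJ
and regional-indicator ones, need extra state on top of this):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* tiny, hard-coded subset of the Grapheme_Cluster_Break property,
 * purely for illustration */
enum gcb { GCB_OTHER, GCB_EXTEND, GCB_CR, GCB_LF };

static enum gcb
gcb_of(uint32_t cp)
{
	if (cp == 0x000D) return GCB_CR;
	if (cp == 0x000A) return GCB_LF;
	if (cp >= 0x0300 && cp <= 0x036F) return GCB_EXTEND; /* combining marks */
	return GCB_OTHER;
}

/* is there an EGC boundary between code points a and b?
 * note that the decision needs *both* sides, i.e. the next code point */
static bool
is_boundary(uint32_t a, uint32_t b)
{
	enum gcb pa = gcb_of(a), pb = gcb_of(b);

	if (pa == GCB_CR && pb == GCB_LF) return false; /* GB3: keep CR LF together */
	if (pb == GCB_EXTEND)             return false; /* GB9: no break before Extend */
	return true;                                    /* GB999: otherwise, break */
}

int
main(void)
{
	/* 'e' followed by U+0301 COMBINING ACUTE ACCENT: one cluster */
	printf("%d\n", is_boundary(0x0065, 0x0301)); /* 0: no boundary */
	/* 'e' followed by 'x': boundary between them */
	printf("%d\n", is_boundary(0x0065, 0x0078)); /* 1: boundary */
	return 0;
}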
--
Sylvain