[Koha-bugs] [Bug 2246] Label printing doesn't work with Unicode characters

Mason James mason.loves.sushi at gmail.com
Wed Dec 3 04:12:30 CET 2008


On 2008/12/3, at 3:43 PM, bugzilla-daemon at pippin.metavore.com wrote:

> http://bugs.koha.org/cgi-bin/bugzilla/show_bug.cgi?id=2246
>
> ------- Comment #10 from joe.atzberger at liblime.com  2008-12-02 18:43 -------
> After extensive review, I am more inclined to agree with Mason inasmuch as
> PDFs are limited to 3 default encodings: MacRomanEncoding, MacExpertEncoding,
> or WinAnsiEncoding.  None of those cover as much Unicode as we need.
>
> The (1236-page!) Adobe reference book that I'm checking says: "For character
> encodings that are not predefined, the PDF file must contain a stream that
> defines the CMap."
>
> It looks like we would have to define a mapping for every non-basic-ASCII
> character that we might want to use.  This "ToUnicode Mapping File Tutorial"
> might be useful in pursuing this route:
> www.adobe.com/devnet/acrobat/pdfs/5411.ToUnicode.pdf
>
> I'm not sure how much of this prTTFont would encapsulate, but it does not
> look like fun.
>


heya atz

yeah, the more I look into this little "bug", the more bizarre the
whole PDF spec starts to look ;/

check out this thread as an example:

http://www.adobeforums.com/webx/.59b52c35

I think the gist of it is that PDF has half-assed handling of a small
subset of Unicode: it works just a little bit, and is really hard to do.


the short-term fix looks to be getting labels-print-pdf.pl to handle
problem items before the call to PDF::Reuse, perhaps by trying to
strip or convert the detected Unicode characters?
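For what it's worth, that strip/convert step could be sketched with nothing
beyond core Perl, along these lines (a rough sketch only; the `ascii_fold`
name is made up, not anything that exists in labels-print-pdf.pl or Koha):
decompose with Unicode::Normalize, drop the combining marks, then strip
whatever still falls outside ASCII.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use utf8;
use Unicode::Normalize qw(NFD);

# Hypothetical helper (not in labels-print-pdf.pl): fold item data down
# to plain ASCII before handing it to PDF::Reuse.
sub ascii_fold {
    my ($text) = @_;
    my $folded = NFD($text);        # decompose: "é" -> "e" + combining acute
    $folded =~ s/\p{Mn}//g;         # drop the combining marks
    $folded =~ s/[^\x00-\x7F]//g;   # strip anything still outside ASCII
    return $folded;
}

# "Café Zürich" comes out as "Cafe Zurich"
print ascii_fold("Caf\x{e9} Z\x{fc}rich"), "\n";
```

Obviously lossy for scripts with no ASCII decomposition (CJK, Cyrillic, etc.),
but it would at least keep PDF::Reuse from choking on accented Latin data.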

More information about the Koha-bugs mailing list