[OAI-implementers] XML encoding problems with DSpace at
Wed, 19 Feb 2003 09:40:28 -0500
The 1.1 provider is back up and running at
http://memory.loc.gov/cgi-bin/oai1_1 (and cgi-bin/oai for that
matter). Sorry for any inconvenience. The 2.0 version
(http://memory.loc.gov/cgi-bin/oai2_0) does supersede it, but we have
not (except by accident) disabled support for the 1.1 repository.
>>> Tim Brody <firstname.lastname@example.org> 02/18/03 10:30AM >>>
Celestial keeps a record of errors that occurred during harvesting:
I reset the errors occasionally to save space.
The mods format appears to be AWOL:
The OAI 1.1 memory.loc.gov interface is returning internal server
errors; has this interface been removed (does lcoa1 supersede it)?
How to determine what character encoding a PDF is in probably depends
on your PDF tool (unless you fancy writing a PDF parser :-)
Reading the PDF spec:
The default encoding is ISOLatin1; otherwise, quoting the doc:
"If text is encoded in Unicode the first two bytes of the text must be
the Unicode Byte Order marker, <FE FF>."
I guess that if a Text object in a PDF is in Unicode it uses UTF-16.
I've not done anything with PDF metadata to know for certain.
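The rule quoted from the spec above can be sketched in a few lines: if
a PDF text string starts with the byte order marker <FE FF>, treat it
as UTF-16 (big-endian); otherwise fall back to a Latin-1-style
single-byte decode. A minimal sketch, not a PDF parser; the function
name and sample bytes are illustrative only:

```python
def decode_pdf_text_string(raw: bytes) -> str:
    """Decode a PDF text string per the rule quoted above:
    a leading Unicode byte order marker <FE FF> means UTF-16BE;
    otherwise assume the spec's default single-byte (Latin-1-style)
    encoding. Hypothetical helper for illustration."""
    if raw[:2] == b"\xfe\xff":
        # BOM present: the rest of the string is UTF-16 big-endian
        return raw[2:].decode("utf-16-be")
    # No BOM: fall back to the default single-byte encoding
    return raw.decode("latin-1")

# Unicode-marked string vs. plain default-encoded string
print(decode_pdf_text_string(b"\xfe\xff\x00H\x00i"))  # Hi
print(decode_pdf_text_string(b"plain text"))          # plain text
```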
All the best,
Caroline Arms wrote:
> As a data provider, LC would like to know if it is generating bad
> characters. The gradual migration to UNICODE is going to give us
> problems, in part BECAUSE some systems work so hard to recognize
> character encodings and adjust. I'm with Hussein. Notify data
> providers of problems (even if you do adjust) so that the problem can
> be fixed as close to home as possible.
> As a related aside, if anyone has a suggestion for an efficient way
> (preferably unix-based) to check that the metadata in a PDF file is
> in UTF-8 encoding (or consistently in any other UNICODE encoding),
> please let me know.
> Caroline Arms
> Office of Strategic Initiatives
> Library of Congress
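For the unix-based UTF-8 check asked about above, one common approach
is to run the bytes through iconv, which exits non-zero at the first
invalid sequence. The filename is a placeholder; this only validates
the bytes you feed it (e.g. extracted metadata), it does not parse the
PDF itself:

```shell
# Validate that a byte stream is well-formed UTF-8.
# iconv fails (non-zero exit) on the first invalid sequence.
if iconv -f UTF-8 -t UTF-8 metadata.txt > /dev/null 2>&1; then
    echo "valid UTF-8"
else
    echo "not UTF-8"
fi
```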
OAI-implementers mailing list