[Koha-devel] mysqldump & restore issues
Christopher Curry
ccurry at amphilsoc.org
Tue Sep 21 16:40:48 CEST 2010
Thanks, Mike. I'll try your recommendations and post back what I find out.
Cheers,
Christopher Curry
Assistant Technical Librarian / Assistant IT Officer
American Philosophical Society
105 South Fifth Street
Philadelphia, PA 19106-3386
Tel. (215) 599-4299
ccurry at amphilsoc.org
Main Library number: (215)440-3400
APS website: http://www.amphilsoc.org
On 09/21/2010 10:11 AM, Mike Hafen wrote:
> It could still be memory, because MySQL keeps indexes and such in
> memory. It could also be a bad sector on the disk, because MySQL
> caches query results to disk when they are bigger than the space
> allocated in memory for query results.
>
> The only character I can think of that would cause trouble in the dump
> would be an unescaped single quote, or maybe half of a UTF-16 character
> that happens to have the same byte value as an ASCII single quote. It
> seems unlikely to me, though, that mysqldump wouldn't spot such a
> character as it was dumping.
>
> Have you tried something like 'select * from biblioitems limit
> 41537,1'? That should pull the specific row that has the error, and
> you could visually examine it, or see if MySQL throws an error on the
> row. That would tell you whether the problem is in the data file
> specifically, or somewhere else (memory, or the disk space used to
> cache the result).
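If single-row probes are clean but large ranges still fail, one way to localize the first failing offset is a binary search over the LIMIT offset. A minimal sketch, assuming any probe command that exits 0 when the read succeeds (the MySQL probe shown in the comment, its credentials, and the table name are taken from the thread but not run here):

```shell
# Binary-search the first LIMIT offset whose read fails.
# Precondition: $probe succeeds at offset $lo and fails at offset $hi.
find_bad_offset() {
  lo=$1; hi=$2; probe=$3
  while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    if "$probe" "$mid"; then
      lo=$mid            # still readable; raise the lower bound
    else
      hi=$mid            # failed; lower the upper bound
    fi
  done
  echo "$hi"             # first failing offset
}

# Hypothetical probe against MySQL (assumes the koha credentials; not run here):
# probe() { mysql -ukoha -p koha -e "SELECT 1 FROM biblioitems LIMIT $1,1" >/dev/null 2>&1; }
# find_bad_offset 20000 41536 probe
```

Each iteration halves the suspect range, so even a table of tens of thousands of rows narrows to the exact offset in under twenty probes.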
>
> I doubt it's the disk overrunning what it can physically allocate,
> since as you say the query results can't be more than a few
> gigabytes. Unless the disk is already nearly full, which I'm sure
> you've already checked.
>
> I think MySQL could try to allocate more memory than it can get, beyond
> the 3.x GB addressable by a 32-bit app, but the error message indicates
> a problem with the disk drive. Assuming the error message is
> accurate, I'd start with a bad block scan on the host's hard drive.
> That's what I recommend.
>
> Good luck.
>
> On Tue, Sep 21, 2010 at 7:39 AM, Christopher Curry
> <ccurry at amphilsoc.org> wrote:
>
> Mike,
>
> Thanks for the reply. I did try querying individual records
> around that row; in fact, I used "limit 41536,1" to find the
> specific biblioitemnumber. I checked the record in Koha and found
> no problems with it or with adjacent records, and I had no trouble
> querying those individual records with select statements.
>
> I did try a few queries like "select * from biblioitems limit
> 1,41537" and did get the same 2013 error when querying large
> datasets, but not when querying smaller data sets. "limit
> 1,41535" also threw the error, but "limit 1,20000" did not. I
> thought this might indicate a memory issue, but the syslog error,
> "InnoDB: Database page corruption on disk or a failed file read of
> page 58164", led me to think there was corrupted data in the database.
>
> The VM has 120 GB of dynamically expanding storage and the vdi is
> housed on a filesystem with 108 GB free, so there shouldn't be a
> problem with running out of disk space. The dump is less
> than 2 GB, so the database itself can't be much larger than that,
> can it?
>
> The VM currently has 1.5 GB of memory allocated, and I tried
> increasing this to 2 GB, which did not prevent the error. The host
> is 32-bit and maxed out at 4 GB of memory, so I can't go much
> higher than this without destabilizing my host.
>
> I had another theory that there might be an offending character in
> one of the MARCXML records that is breaking the format of the
> SQL statements in the .sql file. Does anyone know if this is a
> possibility? I'm no SQL expert, so I'm not sure what characters
> to look for.
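One quick, non-invasive check along these lines: validate that the dump file is well-formed UTF-8, since a truncated multi-byte sequence in a MARCXML blob would surface here. A sketch using iconv (the dump path in the comment is an example, not a path from the thread):

```shell
# Exit 0 if the file is valid UTF-8, non-zero otherwise.
check_utf8() {
  iconv -f UTF-8 -t UTF-8 "$1" > /dev/null 2>&1
}

# Hypothetical usage against a dump file:
# check_utf8 /home/koha/KohaServerBackups/koha.20100921.sql && echo "dump is valid UTF-8"
```

This only proves the dump text is decodable; it cannot rule out corruption that happens after import.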
>
> Cheers,
>
> Christopher Curry
>
>
>
> On 09/20/2010 06:17 PM, Mike Hafen wrote:
>> Sorry to hear you are having problems with your failover server.
>> I've had problems with failover servers and backups before; it's
>> not fun.
>>
>> Have you checked the memory and hard drive of the failover
>> server (the host)? That would be my first guess. It may also
>> be the filesystem on either the host or the guest, but I'm not up
>> on filesystems, so I can't say for sure.
>>
>> There's a good chance the row number in the error message relates
>> to the biblioitemnumber. You could try just querying the
>> database for rows around that, something like 'select * from
>> biblioitems where biblioitemnumber > 41470 and biblioitemnumber <
>> 41600'. If you see the error, it could be data, disk, or
>> filesystem related. If you don't, it's probably memory related,
>> but maybe filesystem related.
>>
>> Good luck.
>>
2010/9/17 Christopher Curry <ccurry at amphilsoc.org>
>>
>> Hello all,
>>
>> I'm trying to create a mirrored failover server with data
>> from our live Koha. In order to do so, I'm using *mysqldump*
>> and *mysql* for backup and restore.
>>
>> I've discovered a troubling problem and I can't determine the
>> cause.
>>
>> I run this command to backup the live server:
>>
>> *mysqldump --single-transaction -ukoha -p koha >
>> /home/koha/KohaServerBackups/koha.`/bin/date
>> +\%Y\%m\%d\%H\%M\%S`.sql*
>>
>> This seems to work correctly (and very quickly: a 1.7 GB
>> database exports in 2 minutes).
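For spotting corruption introduced between dump and restore, it can help to checksum the dump at creation time and re-verify it on the failover before importing. A sketch under the thread's paths and credentials (the helper name is made up for illustration; only the filename helper runs here):

```shell
# Build a timestamped dump filename like koha.20100921164048.sql
stamped_name() {
  echo "$1/koha.$(/bin/date +%Y%m%d%H%M%S).sql"
}

# Hypothetical usage (not run here; assumes the koha credentials from the thread):
# OUT=$(stamped_name /home/koha/KohaServerBackups)
# mysqldump --single-transaction -ukoha -p koha > "$OUT"
# sha256sum "$OUT" > "$OUT.sha256"    # verify on the failover with: sha256sum -c
```

A checksum match before import would rule out transfer corruption and point the finger at the restore side.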
>>
>> Then, I run this command:
>>
>> *mysql -v -ukoha -p koha <
>> /home/koha/KohaServerBackups/backupFileName.sql*
>>
>> This also seems to work, as I receive no warnings or error
>> messages.
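A silent-success restore can still differ from the source, so comparing the two databases before trusting the mirror may be worthwhile. MySQL's CHECKSUM TABLE statement gives a quick per-table fingerprint; the sketch below only diffs two saved result files (the mysql invocations and the table list in the comments are assumptions and are not run here):

```shell
# Exit 0 if two saved CHECKSUM TABLE result files match.
same_checksums() {
  diff -q "$1" "$2" > /dev/null
}

# Hypothetical usage, run on each server and compared afterwards:
# mysql -ukoha -p koha -N -e "CHECKSUM TABLE biblio, biblioitems, items" > live.txt
# mysql -ukoha -p koha -N -e "CHECKSUM TABLE biblio, biblioitems, items" > failover.txt
# same_checksums live.txt failover.txt && echo "tables match"
```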
>>
>> I'm exporting from a 3.00.05.001 system and importing to a
>> 3.00.05.003 system, so I then run the
>> *$KOHA_SOURCE/installer/data/mysql/updatedatabase.pl* script.
>>
>> Relevant specs:
>>
>> MySQL version: mysql Ver 14.12 Distrib 5.0.51a, for
>> debian-linux-gnu (i486) using readline 5.2
>> OS: Debian Lenny
>>
>> The failover server is virtual, running on VirtualBox 3.2.8, on
>> an Ubuntu 10.04 host.
>>
>>
>> All GUI functions of the Koha failover server seem to operate
>> correctly, but when I run *mysqlcheck -ukoha -p koha* the
>> check fails on koha.biblioitems with the following error message:
>>
>> mysqlcheck: Got error: 2013: Lost connection to MySQL server
>> during query when executing 'CHECK TABLE ... '
>>
>> If I try mysqldump, I get the same error, but it is more
>> specific, reporting that it fails on row 41536.
>>
>> If I check /var/log/syslog, I see this
>> http://pastebin.com/YuuFBHry
>>
>> "InnoDB: Database page corruption on disk or a failed file
>> read of page 58164", etc.
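For reference, when InnoDB reports page corruption like this, a common recovery path is to restart mysqld with innodb_force_recovery set just high enough that the data can be dumped, then rebuild the tables from that dump. A sketch of the my.cnf fragment, not a fix (take a file-level backup of the MySQL datadir first; values above 3 risk further damage, so start at 1):

```ini
# /etc/mysql/my.cnf (recovery sketch; remove the setting after dumping)
[mysqld]
innodb_force_recovery = 1
```

With the server up in this mode, mysqldump's --where option can also salvage the readable rows around the bad page, e.g. --where="biblioitemnumber < 41530" (that cutoff is illustrative, not from the thread).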
>>
>> Both mysqlcheck & mysqldump work without error on the live
>> server, so I'm thinking that something must be happening to
>> the data during the export or import that corrupts the InnoDB
>> data, but this is speculation, since I'm not a MySQL expert.
>>
>> Has anyone seen behavior like this? Any suggestions for
>> further troubleshooting/resolution?
>>
>> --
>>
>> Cheers,
>>
>> Christopher Curry
>>
>>
>>
>> _______________________________________________
>> Koha-devel mailing list
>> Koha-devel at lists.koha-community.org
>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>>
>>
>