From charl at prograbiz.com Wed Sep 1 06:25:29 2010 From: charl at prograbiz.com (charl at prograbiz.com) Date: Wed, 1 Sep 2010 06:25:29 +0200 (SAST) Subject: [Koha-devel] Customizing Opac Message-ID: <41973.41.220.143.5.1283315129.squirrel@webmail.lantic.net> Can anyone please advise how to move the Cart and Lists buttons further to the right and swap them around? In other words: first the Lists button, then the Cart button, and position them more to the right. Thanks for any help! From richard at katipo.co.nz Wed Sep 1 06:49:26 2010 From: richard at katipo.co.nz (Richard Anderson) Date: Wed, 01 Sep 2010 16:49:26 +1200 Subject: [Koha-devel] Customizing Opac In-Reply-To: <41973.41.220.143.5.1283315129.squirrel@webmail.lantic.net> References: <41973.41.220.143.5.1283315129.squirrel@webmail.lantic.net> Message-ID: <4C7DDB56.8050009@katipo.co.nz> Hi, Here's an excellent resource on customizing the OPAC http://www.myacpl.org/koha/?p=3 and http://www.myacpl.org/koha/?p=30 Regards, Richard -- Richard Anderson | Business Development Manager Katipo Communications Ltd PO Box 12487 | Wellington | New Zealand Email: richard at katipo.co.nz | http://www.katipo.co.nz Ph: (04) 934 1285 | Mob: 021 280 1504 Fax: (04) 934 1286 From fridolyn.somers at gmail.com Wed Sep 1 10:43:36 2010 From: fridolyn.somers at gmail.com (Fridolyn SOMERS) Date: Wed, 1 Sep 2010 10:43:36 +0200 Subject: [Koha-devel] Customizing Opac In-Reply-To: <4C7DDB56.8050009@katipo.co.nz> References: <41973.41.220.143.5.1283315129.squirrel@webmail.lantic.net> <4C7DDB56.8050009@katipo.co.nz> Message-ID: Try this: add a rule in opac.css or in the "OPACUserCSS" system preference: #listsmenulink,#cmspan { float: right; } It will set Lists and Cart to the right.
Regards, On Wed, Sep 1, 2010 at 6:49 AM, Richard Anderson wrote: > Hi, > > Here's an excellent resource on customizing the OPAC > http://www.myacpl.org/koha/?p=3 > and > http://www.myacpl.org/koha/?p=30 > > Regards, > Richard > -- > > Richard Anderson | Business Development Manager > Katipo Communications Ltd > PO Box 12487 | Wellington | New Zealand > Email: richard at katipo.co.nz | http://www.katipo.co.nz > Ph: (04) 934 1285 | Mob: 021 280 1504 > Fax: (04) 934 1286 > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -- Fridolyn SOMERS Information and Communication Technologies engineer Lyon - FRANCE fridolyn.somers at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From charl at prograbiz.com Wed Sep 1 23:58:14 2010 From: charl at prograbiz.com (charl at prograbiz.com) Date: Wed, 1 Sep 2010 23:58:14 +0200 (SAST) Subject: [Koha-devel] Customizing Cataloging Screen Message-ID: <51043.70.29.56.245.1283378294.squirrel@webmail.lantic.net> Is it possible to put the advanced search link in the opening cataloging screen? (Currently only the basic "Cataloging search" is available) Thank you! From chrisc at catalyst.net.nz Thu Sep 2 00:01:55 2010 From: chrisc at catalyst.net.nz (Chris Cormack) Date: Thu, 2 Sep 2010 10:01:55 +1200 Subject: [Koha-devel] Customizing Cataloging Screen In-Reply-To: <51043.70.29.56.245.1283378294.squirrel@webmail.lantic.net> References: <51043.70.29.56.245.1283378294.squirrel@webmail.lantic.net> Message-ID: <20100901220155.GM4455@rorohiko> * charl at prograbiz.com (charl at prograbiz.com) wrote: > Is it possible to put the advanced search link in the opening cataloging > screen? (Currently only the basic "Cataloging search" is available) > You just want a link to the advanced search there as well?
Instead of just clicking on the search link across the top? I think I must have misunderstood you, as the link to the advanced search is in the top navigation on every page. Chris > Thank you! > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel -- Chris Cormack Catalyst IT Ltd. +64 4 803 2238 PO Box 11-053, Manners St, Wellington 6142, New Zealand -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From nengard at gmail.com Thu Sep 2 13:54:32 2010 From: nengard at gmail.com (Nicole Engard) Date: Thu, 2 Sep 2010 07:54:32 -0400 Subject: [Koha-devel] Signing off on Patches Message-ID: Is there a tutorial on how to sign off on someone's patch on the wiki or elsewhere? If not, can someone write one since this is the new process we want to try and follow for all 3.4 patches? Thanks a bunch! Nicole From frederic at tamil.fr Thu Sep 2 14:09:03 2010 From: frederic at tamil.fr (Frederic Demians) Date: Thu, 02 Sep 2010 14:09:03 +0200 Subject: [Koha-devel] Signing off on Patches In-Reply-To: References: Message-ID: <4C7F93DF.5060008@tamil.fr> > Is there a tutorial on how to sign off on someone's patch on the wiki > or elsewhere? If not, can someone write one since this is the new > process we want to try and follow for all 3.4 patches Isn't it as simple as: * You apply someone's patch to your local repo. * You test it. * If the patch is ok, you prepare a new one for it (git format-patch -s). -s = signoff * Send the new patch as usual. -- Frédéric -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nengard at gmail.com Thu Sep 2 14:09:11 2010 From: nengard at gmail.com (Nicole Engard) Date: Thu, 2 Sep 2010 08:09:11 -0400 Subject: [Koha-devel] Another tutorial request - generating public ssh keys Message-ID: Hi all, To access the manual people need to generate public ssh keys - I used to have this documented somewhere, but can't find it. I'm wondering if someone could write up a tutorial for the wiki on how to generate a public ssh key for the various different operating systems that people might be using? Thanks in advance! Nicole From nengard at gmail.com Thu Sep 2 14:29:01 2010 From: nengard at gmail.com (Nicole Engard) Date: Thu, 2 Sep 2010 08:29:01 -0400 Subject: [Koha-devel] Signing off on Patches In-Reply-To: <4C7F93DF.5060008@tamil.fr> References: <4C7F93DF.5060008@tamil.fr> Message-ID: If it is then I'll add this to the wiki for others ... cause while that sounds easy, it's not something I knew off the top of my head :) (and I assume others don't know either). Nicole 2010/9/2 Frederic Demians : > > Is there a tutorial on how to sign off on someone's patch on the wiki > or elsewhere? If not, can someone write one since this is the new > process we want to try and follow for all 3.4 patches > > Isn't it as simple as: > > You apply someone patch to your local repo. > You test it. > If the patch is ok, you prepare a new one for it (git format-patch -s). -s = > signoff > Send the new patch as usual. 
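[Editor's note: for anyone wanting to try the sign-off step quoted above, here is a self-contained sketch in a throwaway repository. All names and paths are illustrative, not part of any Koha workflow; in practice you would apply the submitted patch with `git am` and test it before re-exporting.]

```shell
# Demo of the sign-off step: build a throwaway repo, commit a sample
# change, then re-export the commit with `git format-patch -s`, which
# appends a Signed-off-by trailer. Names here are illustrative.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo && cd demo
git config user.name "Test Signer"
git config user.email "signer@example.com"
echo "sample fix" > fix.txt
git add fix.txt
git commit -q -m "Sample fix"
# -s adds the Signed-off-by line; -1 exports only the latest commit.
patch_file=$(git format-patch -s -1)
grep "Signed-off-by" "$patch_file"
# prints: Signed-off-by: Test Signer <signer@example.com>
```

The resulting 0001-*.patch file is what you would then send to the list as usual.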
> > -- > Fr?d?ric > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > From nengard at gmail.com Thu Sep 2 14:48:33 2010 From: nengard at gmail.com (Nicole Engard) Date: Thu, 2 Sep 2010 08:48:33 -0400 Subject: [Koha-devel] Signing off on Patches In-Reply-To: References: <4C7F93DF.5060008@tamil.fr> Message-ID: I created a page: http://wiki.koha-community.org/wiki/Sign_off_on_patches I ask you all to participate in beefing it up a bit. For example - instead of saying 'apply the patch' - I need someone to write up how to apply someone else's patch because we don't all know how to do that. Thanks Nicole On Thu, Sep 2, 2010 at 8:29 AM, Nicole Engard wrote: > If it is then I'll add this to the wiki for others ... cause while > that sounds easy, it's not something I knew off the top of my head :) > (and I assume others don't know either). > > Nicole > > 2010/9/2 Frederic Demians : >> >> Is there a tutorial on how to sign off on someone's patch on the wiki >> or elsewhere? If not, can someone write one since this is the new >> process we want to try and follow for all 3.4 patches >> >> Isn't it as simple as: >> >> You apply someone patch to your local repo. >> You test it. >> If the patch is ok, you prepare a new one for it (git format-patch -s). -s = >> signoff >> Send the new patch as usual. 
>> >> -- >> Frédéric >> >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> > From fridolyn.somers at gmail.com Thu Sep 2 15:49:49 2010 From: fridolyn.somers at gmail.com (Fridolyn SOMERS) Date: Thu, 2 Sep 2010 15:49:49 +0200 Subject: [Koha-devel] Customizing Cataloging Screen In-Reply-To: <51043.70.29.56.245.1283378294.squirrel@webmail.lantic.net> References: <51043.70.29.56.245.1283378294.squirrel@webmail.lantic.net> Message-ID: You may use this code in the "intranetuserjs" system preference: $(document).ready(function() { $("li>form#searchform").parent().after('<li><a href="/cgi-bin/koha/catalogue/search.pl">Advanced Search</a></li>'); }); It adds a link to advanced search right after the simple search textbox. Enjoy, On Wed, Sep 1, 2010 at 11:58 PM, wrote: > Is it possible to put the advanced search link in the opening cataloging > screen? (Currently only the basic "Cataloging search" is available) > > Thank you! > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -- Fridolyn SOMERS Information and Communication Technologies engineer Lyon - FRANCE fridolyn.somers at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ohiocore at gmail.com Thu Sep 2 17:05:54 2010 From: ohiocore at gmail.com (Joe Atzberger) Date: Thu, 2 Sep 2010 11:05:54 -0400 Subject: [Koha-devel] Another tutorial request - generating public ssh keys In-Reply-To: References: Message-ID: On linux: ssh-keygen --joe On Thu, Sep 2, 2010 at 8:09 AM, Nicole Engard wrote: > Hi all, > > To access the manual people need to generate public ssh keys - I used > to have this documented somewhere, but can't find it. I'm wondering > if someone could write up a tutorial for the wiki on how to generate a > public ssh key for the various different operating systems that people > might be using? > > Thanks in advance! > Nicole > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nengard at gmail.com Thu Sep 2 17:16:35 2010 From: nengard at gmail.com (Nicole Engard) Date: Thu, 2 Sep 2010 11:16:35 -0400 Subject: [Koha-devel] Another tutorial request - generating public ssh keys In-Reply-To: References: Message-ID: Page created: http://wiki.koha-community.org/wiki/Generating_Public_SSH_Keys Feel free to add to it for other operating systems if you have that know-how. Thanks Nicole On Thu, Sep 2, 2010 at 11:05 AM, Joe Atzberger wrote: > On linux: ssh-keygen > > --joe > On Thu, Sep 2, 2010 at 8:09 AM, Nicole Engard wrote: >> >> Hi all, >> >> To access the manual people need to generate public ssh keys - I used >> to have this documented somewhere, but can't find it. I'm wondering >> if someone could write up a tutorial for the wiki on how to generate a >> public ssh key for the various different operating systems that people >> might be using? >> >> Thanks in advance! >> Nicole >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > > From nengard at gmail.com Tue Sep 7 15:03:04 2010 From: nengard at gmail.com (Nicole Engard) Date: Tue, 7 Sep 2010 09:03:04 -0400 Subject: [Koha-devel] September Newsletter: Call for Articles Message-ID: It's that time again: the next newsletter will be published in 8 days. I need any and all announcements/tips/tricks/news/koha events sent to me by the 13th of September if I'm to get the newsletter out on time. Articles should be short, and if you have a lot to say then send along a link to the full article. Remember you don't have to write a whole lot, just a short 2 liner is A-OK! Thanks in advance, Nicole C.
Engard From nengard at gmail.com Wed Sep 8 18:38:00 2010 From: nengard at gmail.com (Nicole Engard) Date: Wed, 8 Sep 2010 12:38:00 -0400 Subject: [Koha-devel] Amazon API Changes Message-ID: Hi all, Got this today - do we need to make any changes to Koha code to accommodate this? ----------- Dear Product Advertising API Developer, On November 8, 2010 the Reviews response group of the Product Advertising API will no longer return customer reviews content and instead will return a link to customer reviews content hosted on Amazon.com. You will be able to display customer reviews on your site using that link. Please refer to the Product Advertising API Developer guide found here for more details. The Reviews response group will continue to function as before until November 8 and the new link to customer reviews is available to you now through the Product Advertising API as well. Thank you for advertising products for sale on Amazon.com. Sincerely, The Product Advertising API Team From rpurus01 at students.poly.edu Thu Sep 9 21:27:38 2010 From: rpurus01 at students.poly.edu (rohith purushotham) Date: Thu, 9 Sep 2010 15:27:38 -0400 Subject: [Koha-devel] perl installation on centos Message-ID: Hi, I'm trying to run Koha's Perl installer on CentOS. When I run this command - perl Makefile.PL - I get the following message: [root at koha ~]# perl Makefile.PL Can't open perl script "Makefile.PL": No such file or directory [root at koha ~]# I believe the solution to this is running the above command on -chmod... but could anyone please help me with how to run the -chmod for perl installation? -Ro -------------- next part -------------- An HTML attachment was scrubbed...
URL: From colin.campbell at ptfs-europe.com Fri Sep 10 00:00:23 2010 From: colin.campbell at ptfs-europe.com (Colin Campbell) Date: Thu, 09 Sep 2010 23:00:23 +0100 Subject: [Koha-devel] perl installation on centos In-Reply-To: References: Message-ID: <4C8958F7.70802@ptfs-europe.com> On 09/09/10 20:27, rohith purushotham wrote: > > Hi, > > I'm trying to run Koha's Perl installer on CentOS > > when I run this command - perl Makefile.PL - I get the following message... > > [root at koha ~]# perl Makefile.PL > Can't open perl script "Makefile.PL": No such file or directory > [root at koha ~]# > Sounds like you are not in the root directory of the Koha source. You should be in the same directory as Makefile.PL. You may still need to give the full path, i.e. perl ./Makefile.PL -- Colin Campbell Chief Software Engineer, PTFS Europe Limited Content Management and Library Solutions +44 (0) 208 366 1295 (phone) +44 (0) 7759 633626 (mobile) colin.campbell at ptfs-europe.com skype: colin_campbell2 http://www.ptfs-europe.com From tajoli at cilea.it Mon Sep 13 14:55:39 2010 From: tajoli at cilea.it (Zeno Tajoli) Date: Mon, 13 Sep 2010 14:55:39 +0200 Subject: [Koha-devel] Where discuss RFCs ? Message-ID: <4C8E1F4B.2070907@cilea.it> Hi to all, I write this mail to the koha and koha-devel mailing lists because this topic is quite wide. So where do you think we need to discuss RFCs? An example is this RFC: http://wiki.koha-community.org/wiki/Analytic_Record_support The original post is from Bywater. Then in the "talk" section there is a reply from Savitra Sirohi. I want to add my considerations also (a user asked us for a development about analytics on UNIMARC). But where? In the main section of the wiki page? In the "Talk" section? On the mailing lists?
Bye Zeno Tajoli -- Zeno Tajoli CILEA - Segrate (MI) tajoliAT_SPAM_no_prendiATcilea.it (Anti-spam masked address; replace the parts between AT with @) From ian.walls at bywatersolutions.com Mon Sep 13 16:55:26 2010 From: ian.walls at bywatersolutions.com (Ian Walls) Date: Mon, 13 Sep 2010 10:55:26 -0400 Subject: [Koha-devel] [Koha] Where discuss RFCs ? In-Reply-To: <4C8E1F4B.2070907@cilea.it> References: <4C8E1F4B.2070907@cilea.it> Message-ID: My initial goal in posting the Analytics Record support RFC was to get people talking, and it looks like I've been successful in that. Now we just need to decide where to hold the conversation. I think that for asking questions and coordinating work, we can do that on the main Koha list. I recommend this over the devel list, since we want to get non-developers providing their thoughts and requirements. For enumerating new additions/modifications to the open spec, we should use the wiki. I have no problem with others editing the original RFC, but maybe starting out on the talk page is the best way to go, so we can collect our thoughts before editing the 'core' document. Once we're underway with actual coding, nitty-gritty details of how to make this work should go to the devel list, or come up on IRC. Sound good? I hope everyone who has ideas on how to make this development better will feel free to speak up and help us improve the spec. I really want to help build something that'll work for all kinds of Koha libraries the world over, but I can't presume to know what all those libraries may require without feedback. Cheers, -Ian On Mon, Sep 13, 2010 at 8:55 AM, Zeno Tajoli wrote: > Hi to all, > > I write this mail to the koha and koha-devel mailing lists because this > topic is quite wide. > > So where do you think we need to discuss RFCs? > > An example is this RFC: > http://wiki.koha-community.org/wiki/Analytic_Record_support > > The original post is from Bywater > Then in the "talk" section there is a reply from Savitra Sirohi.
> I want to add my considerations also (a user asked us for a development about > analytics > on UNIMARC). > > But where? > In the main section of the wiki page? > In the "Talk" section? > On the mailing lists? > > Bye > Zeno Tajoli > > -- > Zeno Tajoli > CILEA - Segrate (MI) > tajoliAT_SPAM_no_prendiATcilea.it > (Anti-spam masked address; replace the parts between AT with @) > > _______________________________________________ > Koha mailing list > Koha at lists.katipo.co.nz > http://lists.katipo.co.nz/mailman/listinfo/koha > -- Ian Walls Lead Development Specialist ByWater Solutions Phone # (888) 900-8944 http://bywatersolutions.com ian.walls at bywatersolutions.com Twitter: @sekjal -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at bigballofwax.co.nz Tue Sep 14 12:04:05 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Tue, 14 Sep 2010 22:04:05 +1200 Subject: [Koha-devel] [Koha] 3.4 RFCs - Acquisitions, Serials and SDI In-Reply-To: References: <4C8F3478.40908@biblibre.com> Message-ID: On 14 September 2010 21:38, savitra sirohi wrote: > Henri, yes it is tricky, but here is our current plan: > > Acquisitions - currency, discounts, charges [SEPT] Regarding the discounts: did you see Robin made a note? We have already done some work on discounts, very similar to your proposal; the patch is attached to the bug. So hopefully that will save some effort :) > Serials - binding loose issues [SEPT] > Analytical Records/Article Indexing [OCT] > Acquisitions - Approvals [NOV] > Acquisitions - Payments [NOV] > SDI [DEC] > Integrating Subscriptions with Acquisitions [JAN] > These all look good. In parallel, we will be working on the Template Toolkit work, plus the other RFC. I hope to push up a wip branch with the Template Toolkit work in the near future so there isn't too much double up.
Chris From gmcharlt at gmail.com Tue Sep 14 17:21:09 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Tue, 14 Sep 2010 11:21:09 -0400 Subject: [Koha-devel] Bugzilla change - adding regression as a keyword Message-ID: Hi, At Chris Nighswonger's request, I've set up a way to mark a bug as a regression. Specifically, I've defined a bugzilla keyword, "regression". This has the effect of adding a keyword field on the bug entry form and the bug search form. Please see bug 4867 for an example. There are a couple other ways we could choose to mark bugs as regressions, e.g., by defining a new severity or priority. Feedback welcome. Regards, Galen -- Galen Charlton gmcharlt at gmail.com From irmalibraries at gmail.com Wed Sep 15 03:43:49 2010 From: irmalibraries at gmail.com (Irma Birchall) Date: Wed, 15 Sep 2010 11:43:49 +1000 Subject: [Koha-devel] Bugzilla change - adding regression as a keyword In-Reply-To: References: Message-ID: <4C9024D5.6070809@gmail.com> Galen, By "regression" do you intend it to mean: "arrested development: an abnormal state in which development has stopped prematurely" OR do you mean: "currently on hold" ??? Thanks, Irma On 15/09/2010 1:21 AM, Galen Charlton wrote: > Hi, > > At Chris Nighswonger's request, I've set up a way to mark a bug as a > regression. Specifically, I've defined a bugzilla keyword, > "regression". This has the effect of adding a keyword field on the > bug entry form and the bug search form. Please see bug 4867 for an > example. > > There are a couple other ways we could choose to mark bugs as > regressions, e.g., by defining a new severity or priority. Feedback > welcome. > > Regards, > > Galen -- Irma Birchall CALYX information essentials Koha & Kete implementations and services T:(02) 80061603 M: 0413 510 717 _irma at calyx.net.au_ _www.calyx.net.au_ _http://koha-community.org/_ _http://kete.net.nz/_ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris at bigballofwax.co.nz Wed Sep 15 03:48:23 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Wed, 15 Sep 2010 13:48:23 +1200 Subject: [Koha-devel] Bugzilla change - adding regression as a keyword In-Reply-To: <4C9024D5.6070809@gmail.com> References: <4C9024D5.6070809@gmail.com> Message-ID: 2010/9/15 Irma Birchall : > Galen, > By "regression" do you intend it to mean: "arrested development: an abnormal > state in which development has stopped prematurely" > OR do you mean: "currently on hold" ??? Hi Irma In computer science parlance, a regression bug is one that has broken previously functional behaviour. It has actually regressed: "regressed", past participle, past tense of re·gress (verb): 1. Return to a former or less developed state. In that sense. When you get a code base the size of Koha, you often see this type of bug, where someone has fixed or added something new and in the process broken something that previously worked. Whenever this happens, we should do two things: 1/ revert the change that broke it, fix that and then re-apply it 2/ write a unit test so that we can test programmatically that the previously broken thing is now working (and can test it is still working from now and into the future). Chris From robin at catalyst.net.nz Wed Sep 15 03:49:08 2010 From: robin at catalyst.net.nz (Robin Sheat) Date: Wed, 15 Sep 2010 13:49:08 +1200 Subject: [Koha-devel] Bugzilla change - adding regression as a keyword In-Reply-To: <4C9024D5.6070809@gmail.com> References: <4C9024D5.6070809@gmail.com> Message-ID: <1284515348.21409.212.camel@zarathud> On Wednesday 15-09-2010 at 11:43 [timezone +1000], Irma Birchall wrote: > By "regression" do you intend it to mean: "arrested development: an > abnormal state in which development has stopped prematurely" > OR do you mean: "currently on hold" ??? In software development, a "regression" is when something has regressed. That is, gone from working to not working.
So, if adding new borrowers works now, and you upgrade and it stops, that's a regression. So, of your two, it's closest to the first one. They're usually considered to be pretty important, because everyone hates it when things that did work break, more than when some new feature you weren't relying on isn't quite right. -- Robin Sheat Catalyst IT Ltd. +64 4 803 2204 GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This message part is digitally signed URL: From henridamien at koha-fr.org Wed Sep 15 09:02:01 2010 From: henridamien at koha-fr.org (LAURENT) Date: Wed, 15 Sep 2010 09:02:01 +0200 Subject: [Koha-devel] Bugzilla change - adding regression as a keyword In-Reply-To: References: Message-ID: <4C906F69.4060806@koha-fr.org> On 14/09/2010 at 17:21, Galen Charlton wrote: > Hi, > > At Chris Nighswonger's request, I've set up a way to mark a bug as a > regression. Specifically, I've defined a bugzilla keyword, > "regression". This has the effect of adding a keyword field on the > bug entry form and the bug search form. Please see bug 4867 for an > example. > > There are a couple other ways we could choose to mark bugs as > regressions, e.g., by defining a new severity or priority. Feedback > welcome. > > Regards, > > Galen I warmly welcome this new status. From gmcharlt at gmail.com Wed Sep 15 14:18:43 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Wed, 15 Sep 2010 08:18:43 -0400 Subject: [Koha-devel] Bugzilla change - adding regression as a keyword In-Reply-To: <4C9024D5.6070809@gmail.com> References: <4C9024D5.6070809@gmail.com> Message-ID: Hi, 2010/9/14 Irma Birchall : > By "regression" do you intend it to mean: "arrested development: an abnormal > state in which development has stopped prematurely" > OR do you mean: "currently on hold" ???
Chris and Robin explained it well - a "regression" is a bug where that which was working now does not, to much wailing and gnashing of teeth. :) Regards, Galen -- Galen Charlton gmcharlt at gmail.com From tomascohen at gmail.com Wed Sep 15 19:41:02 2010 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Wed, 15 Sep 2010 14:41:02 -0300 Subject: [Koha-devel] Error on Index 'Any' when searching authorities Message-ID: When searching for authorities putting my search string on "Anywhere" I get no results, and the logs show: auth_finder.pl: oAuth error: Unsupported Use attribute (114) Any Bib-1 I was first recommended to review some threads on the topic of the 'Heading-main' index (thanks jcamins), but I had no positive results. Reindexing yielded the same (null) results every time. Then I remembered having switched recently to the 'dom' model for authorities... because it was recommended during the install (at least in the 3.0.x branch; if I'm not wrong it was alleged to be more efficient). The thing is that when I switched back to grs1, I got my query results back! I think 'dom' usage should be discouraged until this is fixed. Thanks To+ From nengard at gmail.com Wed Sep 15 21:28:20 2010 From: nengard at gmail.com (Nicole Engard) Date: Wed, 15 Sep 2010 15:28:20 -0400 Subject: [Koha-devel] Official Koha Newsletter: Volume 1, Issue 9: September 2010 Message-ID: The newsletter can be read online (with live links) here: http://koha-community.org/koha-newsletter-volume-1issue-9-september-2010 and subscribed to via RSS here: http://feeds.feedburner.com/KohaNewsletter Official Koha Newsletter (ISSN 2153-8328) Volume 1, Issue 9: September 2010 Table of Contents * Koha in Libraries o NHUSD Live on Koha 3 o Nîmes public libraries now live with Koha * Koha Tips & Tricks o Advanced Search Link on Cataloging Screen o Spine Label Customizations * Koha Events o Thank you to our sponsors! o 6 Weeks 'til KohaCon - Are you registered?
o Powhiri at Kawiu Marae Koha in Libraries NHUSD Live on Koha 3 by Chris Hobbs New Haven Unified School District (NHUSD) in Union City, California, has completed its migration to Koha 3. NHUSD relied on the stellar support available from the Koha community as well as internal resources to successfully migrate nearly 200,000 items to Koha for 13 schools. NHUSD serves nearly 13,000 K-12 students and their families in the San Francisco Bay area. Inquiries about the project may be made to the Director of Technology at NHUSD, Chris Hobbs (chobbs at nhusd.k12.ca.us). Nîmes public libraries now live with Koha by Paul Poulain BibLibre announce that at the beginning of July, Nîmes Public Libraries successfully migrated from Loris to Koha and are now live. "We are very happy with Koha. We used a lot of free software for technical purposes, but it's the first time we move to a free software for the libraries," said Niels Lor, IT Project Manager at Nîmes. "Nîmes perfectly understand that Free Software is also a matter of sharing, that's why they are sponsoring many improvements. Some of them are already written, and will be integrated in Koha 3.4, some are still underway," said Paul Poulain, BibLibre CEO. Read more… Koha Tips & Tricks Advanced Search Link on Cataloging Screen by Fridolyn Somers Q: Is it possible to put the advanced search link in the opening cataloging screen? A: Use this code in the "intranetuserjs" system preference: $(document).ready(function() { $("li>form#searchform").parent().after('<li><a href="/cgi-bin/koha/catalogue/search.pl">Advanced Search</a></li>'); }); It adds a link to advanced search right after the simple search textbox. Spine Label Customizations by David Schuster You can adjust the font/etc. with the new quick spine labeler! I wanted BOLD and LARGE on my 1×1 Dymo spine. Hmmm, headers, footers and 10 labels came out… time to investigate… So this is what I had to do: 1.
I had to go into Firefox and make a couple of modifications. 2. File > Page Setup. 3. Margins & Header/Footer tab: change all 4 margins to 0.0. 4. Set the header/footer drop-down options to blank. (Thanks to INCOLSA for these tips.) Then in the SpineLabelFormat system preference I put the following.

    After I saved it, BAM! A bold spine label on my Dymo 1×1 label… Way cool - thanks Chris N for this feature. Now I'm working on how I can get that on the item level screen after we add the item! ENHANCEMENT! I had to tweak some of my settings but it works! Koha Events Thank you to our sponsors! by Nicole C. Engard With KohaCon right around the corner, it is time to give recognition to our amazing sponsors. * Biblibre * Butte Public Library * Bywater Solutions * Calyx * Catalyst IT * Equinox * Horowhenua District Council * Horowhenua Library Trust * KohaAloha * Libriotech * The Library * Xercode Without these sponsors KohaCon would not only not be free, but not be possible! Join me in thanking them all for their generosity and dedication to Koha and open source. 6 Weeks 'til KohaCon - Are you registered? by Russell Garlick KohaCon10 starts on October 25th in Wellington, New Zealand. We have an exciting line-up of speakers on a range of topics related to Koha and Open Source and Open Standards in libraries. See our programme for details. KohaCon10 is a free conference (that is right, it will cost nothing for you to attend), but you still need to register to reserve your place. Registrations from the international Koha community have been very strong. Over half of all available spaces are already taken. If you have been holding off on the premise that you will have plenty of time to do this later, then please register now. Please do not rely on there being free spaces on the day. Registration is quick and easy via the website. http://www.kohacon10.org.nz/2010/registration/ We look forward to seeing you in Wellington. Powhiri at Kawiu Marae by Joann Ransom Koha Conference attendees are invited to Levin on the 28th October for a very special event. The Library Trust is honoured that Muaupoko, the tangata whenua or people of the land, are hosting us at Kawiu Marae following the Mayoral reception in Council chambers.
This is a first for the Library Trust and the purpose of this post is to encourage Conference attendees to join us for what will be a very special evening. We will be welcomed onto the Marae in a traditional manner. This will include the karanga where we are called on the Marae, a wero or taiaha challenge (to see if we come in peace), the laying down of koha, speeches, waiata (songs), a hakari or meal and kapa haka entertainment (singing, poi, haka). Information about the various components of the powhiri and what to expect can be found here. Read more… Newsletter edited by Nicole C. Engard, Koha Documentation Manager. Please send future story ideas to nengard at gmail.com From fridolyn.somers at gmail.com Thu Sep 16 09:28:01 2010 From: fridolyn.somers at gmail.com (Fridolyn SOMERS) Date: Thu, 16 Sep 2010 09:28:01 +0200 Subject: [Koha-devel] Error on Index 'Any' when searching authorities In-Reply-To: References: Message-ID: I agree, the DOM configuration did not work after install with the default configuration. Did you re-index after switching from DOM to GRS1? Does search on an index work (i.e. Heading)? Do you have "all any" in zebradb/marc_defs/unimarc/authorities/record.abs? Regards, -- Fridolyn SOMERS ICT engineer PROGILONE - Lyon - France fridolyn.somers at gmail.com On Wed, Sep 15, 2010 at 7:41 PM, Tomas Cohen Arazi wrote: > When searching for authorities putting my search string on "Anywhere" > I get no results, and the logs show: > > auth_finder.pl: oAuth error: Unsupported Use attribute (114) Any Bib-1 > > I was first recommended to review some threads about the topic (thanks > jcamins) of the 'Heading-main' index, but I had no positive results. > Reindexing yielded the same (null) results every time. > > Then I remembered having switched recently to the 'dom' model for > authorities... because it was recommended during the install (at least > in the 3.0.x branch; if I'm not wrong it was alleged to be more efficient).
> > The thing is that when I switched back to grs1, I got my query results > back! I think 'dom' usage should be discouraged until this is fixed. > > Thanks > To+ > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomascohen at gmail.com Thu Sep 16 13:37:18 2010 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Thu, 16 Sep 2010 08:37:18 -0300 Subject: [Koha-devel] Error on Index 'Any' when searching authorities In-Reply-To: References: Message-ID: 2010/9/16 Fridolyn SOMERS : > I agree, DOM configuration did not work after install with default > configuration. > > Did you re-index after switching from DOM to GRS1 ? I reindexed auth; linking to newly created authorities didn't work until I reindexed biblios too (?), > Does search on an index work (ie Heading) ? It worked with DOM, it works with GRS1. > Do you have "all any" in zebradb/marc_defs/unimarc/authorities/record.abs ? No, I don't. Is that how it must be added in record.abs so that grs1 stops complaining about the undefined 'any' index? What's the proper solution for this, and why is it not a default setting? To+ From laurenthdl at alinto.com Thu Sep 16 14:04:18 2010 From: laurenthdl at alinto.com (LAURENT Henri-Damien) Date: Thu, 16 Sep 2010 14:04:18 +0200 Subject: [Koha-devel] Error on Index 'Any' when searching authorities In-Reply-To: References: Message-ID: <4C9207C2.5020700@alinto.com> On 16/09/2010 13:37, Tomas Cohen Arazi wrote: > 2010/9/16 Fridolyn SOMERS : >> I agree, DOM configuration did not work after install with default >> configuration. >> >> Did you re-index after switching from DOM to GRS1 ? > > I reindexed auth; linking to newly created authorities didn't work > until I reindexed biblios too (?), > > >> Does search on an index work (ie Heading) ?
> > It worked with DOM, it works with GRS1. > >> Do you have "all any" in zebradb/marc_defs/unimarc/authorities/record.abs ? > > No, I don't. Is that how it must be added in record.abs so that grs1 stops > complaining about the undefined 'any' index? > What's the proper solution for this, and why is it not a default setting? > "all any" is not taken into account with DOM (from what I have tested; it is one of the regressions you can have when changing from one option to another, grs1 to DOM, or chr to icu: same struggles and surprises. Maybe Indexdata fixed that, but I have seen no news about it in changelogs lately.) The DOM config file should be modified to add any or Any as an index wherever you index some data. Thus, it should even go into the xsl configuration file. It is not beautiful, but it would work. My 2 cts... -- Henri-Damien LAURENT From fridolyn.somers at gmail.com Thu Sep 16 14:46:05 2010 From: fridolyn.somers at gmail.com (Fridolyn SOMERS) Date: Thu, 16 Sep 2010 14:46:05 +0200 Subject: [Koha-devel] Error on Index 'Any' when searching authorities In-Reply-To: <4C9207C2.5020700@alinto.com> References: <4C9207C2.5020700@alinto.com> Message-ID: "all any" is contained in the default configuration. I think "all" is an instruction for setting an index that means "all indexes". Regards. On Thu, Sep 16, 2010 at 2:04 PM, LAURENT Henri-Damien wrote: > On 16/09/2010 13:37, Tomas Cohen Arazi wrote: > > 2010/9/16 Fridolyn SOMERS : > >> I agree, DOM configuration did not work after install with default > >> configuration. > >> > >> Did you re-index after switching from DOM to GRS1 ? > > > > I reindexed auth; linking to newly created authorities didn't work > > until I reindexed biblios too (?), > > > > > >> Does search on an index work (ie Heading) ?
Is that how it must be added in record.abs so that grs1 stops > > complaining about the undefined 'any' index? > > What's the proper solution for this, and why is it not a default setting? > > > "all any" is not taken into account with DOM (from what I have tested; it > is one of the regressions you can have when changing from one option to > another, grs1 to DOM, or chr to icu: same struggles and surprises. Maybe > Indexdata fixed that, but I have seen no news about it in changelogs > lately.) > The DOM config file should be modified to add any or Any as an index wherever > you index some data. > Thus, it should even go into the xsl configuration file. > It is not beautiful, but it would work. > > My 2 cts... > > -- > Henri-Damien LAURENT > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -- Fridolyn SOMERS ICT engineer PROGILONE - Lyon - France fridolyn.somers at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomascohen at gmail.com Thu Sep 16 15:11:14 2010 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Thu, 16 Sep 2010 10:11:14 -0300 Subject: [Koha-devel] Error on Index 'Any' when searching authorities In-Reply-To: References: <4C9207C2.5020700@alinto.com> Message-ID: 2010/9/16 Fridolyn SOMERS : > "all any" is contained in the default configuration. True story. But I get [warn] Index 'any' not found in attset(s) when reindexing authorities. Here, people say it is not enough. We need to have the mapping in authorities/etc/bib1.att http://old.nabble.com/Cron-Daemon-Warning-td22472776.html Should it be set as a default? Do we need to add 'Any' too? (i.e.
all any,Any) Thanks To+ From henridamien at koha-fr.org Thu Sep 16 15:30:11 2010 From: henridamien at koha-fr.org (LAURENT) Date: Thu, 16 Sep 2010 15:30:11 +0200 Subject: [Koha-devel] Error on Index 'Any' when searching authorities In-Reply-To: References: <4C9207C2.5020700@alinto.com> Message-ID: <4C921BE3.5040404@koha-fr.org> On 16/09/2010 15:11, Tomas Cohen Arazi wrote: > 2010/9/16 Fridolyn SOMERS : >> "all any" is contained in the default configuration. > > True story. But I get > > [warn] Index 'any' not found in attset(s) > > when reindexing authorities. > > Here, people say it is not enough. We need to have the mapping in > authorities/etc/bib1.att > http://old.nabble.com/Cron-Daemon-Warning-td22472776.html > > Should it be set as a default? Do we need to add 'Any' too? (i.e. all any,Any) Index names are case insensitive (at least with record.abs...) So imho, for what it's worth, any is better than Any HTH -- Henri-Damien LAURENT From tomascohen at gmail.com Thu Sep 16 15:37:41 2010 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Thu, 16 Sep 2010 10:37:41 -0300 Subject: [Koha-devel] Error on Index 'Any' when searching authorities In-Reply-To: <4C921BE3.5040404@koha-fr.org> References: <4C9207C2.5020700@alinto.com> <4C921BE3.5040404@koha-fr.org> Message-ID: On Thu, Sep 16, 2010 at 10:30 AM, LAURENT wrote: > On 16/09/2010 15:11, Tomas Cohen Arazi wrote: >> 2010/9/16 Fridolyn SOMERS : >>> "all any" is contained in the default configuration. >> >> True story. But I get >> >> [warn] Index 'any' not found in attset(s) >> >> when reindexing authorities. >> >> Here, people say it is not enough. We need to have the mapping in >> authorities/etc/bib1.att >> http://old.nabble.com/Cron-Daemon-Warning-td22472776.html >> >> Should it be set as a default? Do we need to add 'Any' too? (i.e. all any,Any) > Index names are case insensitive (at least with record.abs...)
> So imho, for what it's worth, any is better than Any What I'm trying to state is that the default setting warns the user about a missing index. I just want to reach a default configuration that doesn't say such things to the users. As Fridolyn said, adding Any to bib1.att solves this warning. This means adding this line to the file: att 1016 Any If this is correct, it should be added as a default. To+ From ian.walls at bywatersolutions.com Thu Sep 16 22:18:07 2010 From: ian.walls at bywatersolutions.com (Ian Walls) Date: Thu, 16 Sep 2010 16:18:07 -0400 Subject: [Koha-devel] Invitation to join KUDOS Message-ID: Everyone, The KUDOS board is inviting all North American institutions with an installation of Koha to become a member. If you join now as a charter member, there will be no registration fee through 2011 (since we don't yet have non-profit status, nor an agreed-upon dues structure). If you wish to become a member of KUDOS, we ask that you fill out this simple online form. Of course, we are looking for volunteers to do all kinds of things, so please check off the categories in which you are interested. Positions on the board are available! Once we have a good base of interest, we can all begin moving forward with determining how KUDOS can serve the greater Koha community. Thank you for your interest. Ian Walls (Acting) President, KUDOS -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccurry at amphilsoc.org Fri Sep 17 21:22:57 2010 From: ccurry at amphilsoc.org (Christopher Curry) Date: Fri, 17 Sep 2010 15:22:57 -0400 Subject: [Koha-devel] mysqldump & restore issues Message-ID: <4C93C011.3040506@amphilsoc.org> Hello all, I'm trying to create a mirrored failover server with data from our live Koha. In order to do so, I'm using *mysqldump* and *mysql* for backup and restore. I've discovered a troubling problem and I can't determine the cause.
I run this command to backup the live server: *mysqldump --single-transaction -ukoha -p koha > /home/koha/KohaServerBackups/koha.`/bin/date +\%Y\%m\%d\%H\%M\%S`.sql* This seems to work correctly (and very quickly! 1.7 GB database exports in 2 min) Then, I run this command: *mysql -v -ukoha -p koha < /home/koha/KohaServerBackups/backupFileName.sql* This also seems to work, as I receive no warnings or error messages. I'm exporting from a 3.00.05.001 system and importing to a 3.00.05.003 system, so I then run the *$KOHA_SOURCE/installer/data/mysql/updatedatabase.pl* script. Relevant specs: MySQL version: mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2 OS: Debian Lenny Failover server is virtual, running on VirtualBox 3.2.8, on an Ubuntu 10.4 host. All GUI functions of the Koha failover server seem to operate correctly, but when I run *mysqlcheck -ukoha -p koha* the check fails on koha.biblioitems with the following error message: mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... ' If I try mysqldump, I get the same error, but it is more specific, reporting that it falls on row 41536. If I check /var/log/syslog, I see this http://pastebin.com/YuuFBHry "InnoDB: Database page corruption on disk or a failed file read of page 58164", etc. Both mysqlcheck & mysqldump work without error on the live server, so I'm thinking that something must be happening to the data during the export or import that corrupts the InnoDB data, but this is speculation, since I'm not a MySQL expert. Has anyone seen behavior like this? Any suggestions for further troubleshooting/resolution? -- Cheers, Christopher Curry Assistant Technical Librarian / Assistant IT Officer American Philosophical Society 105 South Fifth Street Philadelphia, PA 19106-3386 Tel. 
(215) 599-4299 ccurry at amphilsoc.org Main Library number: (215)440-3400 APS website: http://www.amphilsoc.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleonard at myacpl.org Mon Sep 20 18:17:16 2010 From: oleonard at myacpl.org (Owen Leonard) Date: Mon, 20 Sep 2010 12:17:16 -0400 Subject: [Koha-devel] Should CSS file be outside the translated path? Message-ID: I've been working on Bug 4048, "js libs must be outside of translated paths." http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=4048 Is there any reason why CSS files should not also be moved? -- Owen -- Web Developer Athens County Public Libraries http://www.myacpl.org From henridamien at koha-fr.org Mon Sep 20 18:28:04 2010 From: henridamien at koha-fr.org (LAURENT) Date: Mon, 20 Sep 2010 18:28:04 +0200 Subject: [Koha-devel] Should CSS file be outside the translated path? In-Reply-To: References: Message-ID: <4C978B94.3050305@koha-fr.org> On 20/09/2010 18:17, Owen Leonard wrote: > I've been working on Bug 4048, "js libs must be outside of translated paths." > > http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=4048 > > Is there any reason why CSS files should not also be moved? > > -- Owen > The only reason I can see would be if the CSS added some links to data when an authid or other number is displayed. This is not the case at the moment, and it could/should be part of a distinct CSS file imho. So owen++ -- Henri-Damien LAURENT From mdhafen at tech.washk12.org Tue Sep 21 00:17:05 2010 From: mdhafen at tech.washk12.org (Mike Hafen) Date: Mon, 20 Sep 2010 16:17:05 -0600 Subject: [Koha-devel] mysqldump & restore issues In-Reply-To: <4C93C011.3040506@amphilsoc.org> References: <4C93C011.3040506@amphilsoc.org> Message-ID: Sorry to hear you are having problems with your failover server. I've had problems with failover servers and backups before, it's not fun. Have you checked the memory and hard drive of the failover server? (the host) That would be my first guess.
Also, it may be the filesystem on either the host or the guest, but I'm not up on file systems so I can't say for sure. There's a good chance the row number in the error message relates to the biblioitemnumber. You could try just querying the database for rows around that. Something like 'select * from biblioitems where biblioitemnumber > 41470 and biblioitemnumber < 41600. If you see the error it could be data, disk, or filesystem related. If you don't it's probably memory related, but maybe filesystem related. Good luck. 2010/9/17 Christopher Curry > Hello all, > > I'm trying to create a mirrored failover server with data from our live > Koha. In order to do so, I'm using *mysqldump* and *mysql* for backup and > restore. > > I've discovered a troubling problem and I can't determine the cause. > > I run this command to backup the live server: > > *mysqldump --single-transaction -ukoha -p koha > > /home/koha/KohaServerBackups/koha.`/bin/date +\%Y\%m\%d\%H\%M\%S`.sql* > > This seems to work correctly (and very quickly! 1.7 GB database exports in > 2 min) > > Then, I run this command: > > *mysql -v -ukoha -p koha < /home/koha/KohaServerBackups/backupFileName.sql > * > > This also seems to work, as I receive no warnings or error messages. > > I'm exporting from a 3.00.05.001 system and importing to a 3.00.05.003 > system, so I then run the *$KOHA_SOURCE/installer/data/mysql/ > updatedatabase.pl* script. > > Relevant specs: > > MySQL version: mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) > using readline 5.2 > OS: Debian Lenny > > Failover server is virtual, running on VirtualBox 3.2.8, on an Ubuntu 10.4 > host. > > > All GUI functions of the Koha failover server seem to operate correctly, > but when I run *mysqlcheck -ukoha -p koha* the check fails on > koha.biblioitems with the following error message: > > mysqlcheck: Got error: 2013: Lost connection to MySQL server during query > when executing 'CHECK TABLE ... 
' > > If I try mysqldump, I get the same error, but it is more specific, > reporting that it falls on row 41536. > > If I check /var/log/syslog, I see this http://pastebin.com/YuuFBHry > > "InnoDB: Database page corruption on disk or a failed file read of page > 58164", etc. > > Both mysqlcheck & mysqldump work without error on the live server, so I'm > thinking that something must be happening to the data during the export or > import that corrupts the InnoDB data, but this is speculation, since I'm not > a MySQL expert. > > Has anyone seen behavior like this? Any suggestions for further > troubleshooting/resolution? > > -- > > Cheers, > > Christopher Curry > Assistant Technical Librarian / Assistant IT Officer > > American Philosophical Society > 105 South Fifth Street > Philadelphia, PA 19106-3386 > Tel. (215) 599-4299 > > ccurry at amphilsoc.org > Main Library number: (215)440-3400 > APS website: http://www.amphilsoc.org > > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccurry at amphilsoc.org Tue Sep 21 15:39:50 2010 From: ccurry at amphilsoc.org (Christopher Curry) Date: Tue, 21 Sep 2010 09:39:50 -0400 Subject: [Koha-devel] mysqldump & restore issues In-Reply-To: References: <4C93C011.3040506@amphilsoc.org> Message-ID: <4C98B5A6.5020100@amphilsoc.org> Mike, Thanks for the reply. I did try querying individual records around that row; in fact, I used "limit 41536,1" to find out the specific biblioitemnumber. I checked the record in Koha and found no problems with it or adjacent records and I had no problem querying these individual records with select statements. I did try a few queries like "select * from biblioitems limit 1,41537" and did get the same 2013 error when querying large datasets, but not when querying smaller data sets. 
"limit 1,41535" also threw the error, but "limit 1,20000" did not. I thought this might indicate a memory issue, but the syslog error: "InnoDB: Database page corruption on disk or a failed file read of page 58164" led me to think there was corrupted data in the database. The VM has 120 GB of dynamically expanding storage and the vdi is housed on a filesystem with 108GB free, so there shouldn't be a problem with running out of disk space. The dump is less than 2GB, so the database itself can't be much larger than that, can it? The VM currently has 1.5 GB allocated for memory, and I tried increasing this to 2GB, which did not prevent the error. The host is 32-bit and is maxed out at 4GB of memory, so I can't go much higher than this without destabilizing my host. I had another theory that there might be an offending character in one of the MARCXML records that is messing with the format of the SQL commands in the .sql file. Anyone know if this is a possibility? I'm no SQL expert, so I'm not sure what characters to look for. Cheers, Christopher Curry Assistant Technical Librarian / Assistant IT Officer American Philosophical Society 105 South Fifth Street Philadelphia, PA 19106-3386 Tel. (215) 599-4299 ccurry at amphilsoc.org Main Library number: (215)440-3400 APS website: http://www.amphilsoc.org On 09/20/2010 06:17 PM, Mike Hafen wrote: > Sorry to hear you are having problems with your failover server. I've > had problems with failover servers and backups before, it's not fun. > > Have you checked the memory and hard drive of the failover server? > (the host) That would be my first guess. Also, it may be the > filesystem on either the host or the guest, but I'm not up on file > systems so I can't say for sure. > > There's a good chance the row number in the error message relates to > the biblioitemnumber. You could try just querying the database for > rows around that.
Something like 'select * from biblioitems where > biblioitemnumber > 41470 and biblioitemnumber < 41600. If you see the > error it could be data, disk, or filesystem related. If you don't > it's probably memory related, but maybe filesystem related. > > Good luck. > > 2010/9/17 Christopher Curry > > > Hello all, > > I'm trying to create a mirrored failover server with data from our > live Koha. In order to do so, I'm using *mysqldump* and *mysql* > for backup and restore. > > I've discovered a troubling problem and I can't determine the cause. > > I run this command to backup the live server: > > *mysqldump --single-transaction -ukoha -p koha > > /home/koha/KohaServerBackups/koha.`/bin/date +\%Y\%m\%d\%H\%M\%S`.sql* > > This seems to work correctly (and very quickly! 1.7 GB database > exports in 2 min) > > Then, I run this command: > > *mysql -v -ukoha -p koha < > /home/koha/KohaServerBackups/backupFileName.sql* > > This also seems to work, as I receive no warnings or error messages. > > I'm exporting from a 3.00.05.001 system and importing to a > 3.00.05.003 system, so I then run the > *$KOHA_SOURCE/installer/data/mysql/updatedatabase.pl > * script. > > Relevant specs: > > MySQL version: mysql Ver 14.12 Distrib 5.0.51a, for > debian-linux-gnu (i486) using readline 5.2 > OS: Debian Lenny > > Failover server is virtual, running on VirtualBox 3.2.8, on an > Ubuntu 10.4 host. > > > All GUI functions of the Koha failover server seem to operate > correctly, but when I run *mysqlcheck -ukoha -p koha* the check > fails on koha.biblioitems with the following error message: > > mysqlcheck: Got error: 2013: Lost connection to MySQL server > during query when executing 'CHECK TABLE ... ' > > If I try mysqldump, I get the same error, but it is more specific, > reporting that it falls on row 41536. > > If I check /var/log/syslog, I see this http://pastebin.com/YuuFBHry > > "InnoDB: Database page corruption on disk or a failed file read of > page 58164", etc. 
> > Both mysqlcheck & mysqldump work without error on the live server, > so I'm thinking that something must be happening to the data > during the export or import that corrupts the InnoDB data, but > this is speculation, since I'm not a MySQL expert. > > Has anyone seen behavior like this? Any suggestions for further > troubleshooting/resolution? > > -- > > Cheers, > > Christopher Curry > Assistant Technical Librarian / Assistant IT Officer > > American Philosophical Society > 105 South Fifth Street > Philadelphia, PA 19106-3386 > Tel. (215) 599-4299 > > ccurry at amphilsoc.org > > Main Library number: (215)440-3400 > APS website: http://www.amphilsoc.org > > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdhafen at tech.washk12.org Tue Sep 21 16:11:11 2010 From: mdhafen at tech.washk12.org (Mike Hafen) Date: Tue, 21 Sep 2010 08:11:11 -0600 Subject: [Koha-devel] mysqldump & restore issues In-Reply-To: <4C98B5A6.5020100@amphilsoc.org> References: <4C93C011.3040506@amphilsoc.org> <4C98B5A6.5020100@amphilsoc.org> Message-ID: It could still be memory, because MySQL keeps indexes and such in memory. It could also be a bad sector on the disk, because MySQL caches query results to disk when they are bigger than the space allocated in memory for query results. The only character I can think of that would cause trouble in the dump would be an unescaped single quote, or maybe half of a UTF16 character that happens to have the same value as an ASCII single quote. It seems unlikely to me though that mysqldump wouldn't spot such a character as it was dumping. Have you tried something like 'select * from biblioitems limit 41537,1'. 
That should pull the specific row that has the error, and you could visually examine it, or see if MySQL throws an error on the row. That could tell you if the problem is in the data file specifically, or if it's somewhere else (memory or disk space used to cache the result). I doubt it's the disk overrunning what it can physically allocate, since as you say the query results can't be more than a few gigabytes. Unless the disk is already nearly full, which I'm sure you've already checked. I think MySQL could try to allocate memory beyond what it can, beyond the 3.x GB addressable by a 32bit app, but the error message indicates it's a problem with the disk drive. Assuming the error message is accurate, I'd start with a bad block scan on the host's hard drive. That's what I recommend. Good luck. On Tue, Sep 21, 2010 at 7:39 AM, Christopher Curry wrote: > Mike, > > Thanks for the reply. I did try querying individual records around that > row; in fact, I used "limit 41536,1" to find out the specific > biblioitemnumber. I checked the record in Koha and found no problems with > it or adjacent records and I had no problem querying these individual > records with select statements. > > I did try a few queries like "select * from biblioitems limit 1,41537" and > did get the same 2013 error when querying large datasets, but not when > querying smaller data sets.
> > The VM currently has 1.5 GB allocated for memory, and I tried increasing > this to 2GB, which did not prevent the error. The host is 32-bit and is > maxed out at 4GB of memory, so I can't go much higher than this without > destabilizing my host. > > I had another theory that there might be an offending character in one of > the MARCXML records that is messing with the format of the SQL commands in > the .sql file. Anyone know if this is a possibility? I'm no SQL expert, so > I'm not sure what characters to look for. > > Cheers, > > Christopher Curry > Assistant Technical Librarian / Assistant IT Officer > > American Philosophical Society > 105 South Fifth Street > Philadelphia, PA 19106-3386 > Tel. (215) 599-4299 > > ccurry at amphilsoc.org > Main Library number: (215)440-3400 > APS website: http://www.amphilsoc.org > > > > On 09/20/2010 06:17 PM, Mike Hafen wrote: > > Sorry to hear you are having problems with your failover server. I've had > problems with failovers servers and backups before, it's not fun. > > Have you checked the memory and hard drive of the failover server? (the > host) That would be my first guess. Also, it may be the filesystem on > either the host or the guest, but I'm not up on file systems so I can't say > for sure. > > There's a good chance the row number in the error message relates to the > biblioitemnumber. You could try just querying the database for rows around > that. Something like 'select * from biblioitems where biblioitemnumber > > 41470 and biblioitemnumber < 41600. If you see the error it could be data, > disk, or filesystem related. If you don't it's probably memory related, but > maybe filesystem related. > > Good luck. > > 2010/9/17 Christopher Curry > >> Hello all, >> >> I'm trying to create a mirrored failover server with data from our live >> Koha. In order to do so, I'm using *mysqldump* and *mysql* for backup >> and restore. >> >> I've discovered a troubling problem and I can't determine the cause. 
>> >> I run this command to backup the live server: >> >> *mysqldump --single-transaction -ukoha -p koha > >> /home/koha/KohaServerBackups/koha.`/bin/date +\%Y\%m\%d\%H\%M\%S`.sql* >> >> This seems to work correctly (and very quickly! 1.7 GB database exports >> in 2 min) >> >> Then, I run this command: >> >> *mysql -v -ukoha -p koha < >> /home/koha/KohaServerBackups/backupFileName.sql* >> >> This also seems to work, as I receive no warnings or error messages. >> >> I'm exporting from a 3.00.05.001 system and importing to a 3.00.05.003 >> system, so I then run the *$KOHA_SOURCE/installer/data/mysql/ >> updatedatabase.pl* script. >> >> Relevant specs: >> >> MySQL version: mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu >> (i486) using readline 5.2 >> OS: Debian Lenny >> >> Failover server is virtual, running on VirtualBox 3.2.8, on an Ubuntu 10.4 >> host. >> >> >> All GUI functions of the Koha failover server seem to operate correctly, >> but when I run *mysqlcheck -ukoha -p koha* the check fails on >> koha.biblioitems with the following error message: >> >> mysqlcheck: Got error: 2013: Lost connection to MySQL server during query >> when executing 'CHECK TABLE ... ' >> >> If I try mysqldump, I get the same error, but it is more specific, >> reporting that it falls on row 41536. >> >> If I check /var/log/syslog, I see this http://pastebin.com/YuuFBHry >> >> "InnoDB: Database page corruption on disk or a failed file read of page >> 58164", etc. >> >> Both mysqlcheck & mysqldump work without error on the live server, so I'm >> thinking that something must be happening to the data during the export or >> import that corrupts the InnoDB data, but this is speculation, since I'm not >> a MySQL expert. >> >> Has anyone seen behavior like this? Any suggestions for further >> troubleshooting/resolution? 
>> >> -- >> >> Cheers, >> >> Christopher Curry >> Assistant Technical Librarian / Assistant IT Officer >> >> American Philosophical Society >> 105 South Fifth Street >> Philadelphia, PA 19106-3386 >> Tel. (215) 599-4299 >> >> ccurry at amphilsoc.org >> Main Library number: (215)440-3400 >> APS website: http://www.amphilsoc.org >> >> >> >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccurry at amphilsoc.org Tue Sep 21 16:40:48 2010 From: ccurry at amphilsoc.org (Christopher Curry) Date: Tue, 21 Sep 2010 10:40:48 -0400 Subject: [Koha-devel] mysqldump & restore issues In-Reply-To: References: <4C93C011.3040506@amphilsoc.org> <4C98B5A6.5020100@amphilsoc.org> Message-ID: <4C98C3F0.7030202@amphilsoc.org> Thanks, Mike. I'll try your recommendations and post back what I find out. Cheers, Christopher Curry Assistant Technical Librarian / Assistant IT Officer American Philosophical Society 105 South Fifth Street Philadelphia, PA 19106-3386 Tel. (215) 599-4299 ccurry at amphilsoc.org Main Library number: (215)440-3400 APS website: http://www.amphilsoc.org On 09/21/2010 10:11 AM, Mike Hafen wrote: > It could still be memory, because MySQL keeps indexes and such in > memory. It could also be a bad sector on the disk, because MySQL > caches query results to disk when they are bigger than the space > allocated in memory for query results. > > The only character I can think of that would cause trouble in the dump > would be an unescaped single quote, or maybe half of a UTF16 character > that happens to have the same value as an ASCII single quote. It > seems unlikely to me though that mysqldump wouldn't spot such a > character as it was dumping. > > Have you tried something like 'select * from biblioitems limit > 41537,1'. 
That should pull the specific row that has the error, and > you could visually examine it, or see if MySQL throws an error on the > row. That could tell you if the problem is in the data file > specifically, or if it's somewhere else (memory or disk spaced used to > cache the result). > > I doubt it's the disk overrunning what it can physically allocate, > since as you say the query results can't be more than a few > gigabytes. Unless the disk is already nearly full, which I'm sure > you've already checked. > > I think MySQL could try to allocate memory beyond what it can, beyond > the 3.x GB addressable by a 32bit app, but the error message indicates > it's a problem with the disk drive. Assuming the error message is > accurate I'd start with a bad block scan on the host's hard drive. > That's what I recommend. > > Good luck. > > On Tue, Sep 21, 2010 at 7:39 AM, Christopher Curry > > wrote: > > Mike, > > Thanks for the reply. I did try querying individual records > around that row; in fact, I used "limit 41536,1" to find out the > specific biblioitemnumber. I checked the record in Koha and found > no problems with it or adjacent records and I had no problem > querying these individual records with select statements. > > I did try a few queries like "select * from biblioitems limit > 1,41537" and did get the same 2013 error when querying large > datasets, but not when querying smaller data sets. "limit > 1,41535" also threw the error, but "limit 1,20,000" did not. I > thought this might indicate a memory issue, but the syslog error: > "InnoDB: Database page corruption on disk or a failed file read of > page 58164" led me to think there was corrupted data in the database. > > The VM has 120 GB of dynamically expanding storage and the vdi is > housed on a filesystem with 108GB free, so there shouldn't be a > problem with with running out of disk space. The dump is less > than 2GB, so the database itself can't be much larger than that, > can it? 
> > The VM currently has 1.5 GB allocated for memory, and I tried > increasing this to 2GB, which did not prevent the error. The host > is 32-bit and is maxed out at 4GB of memory, so I can't go much > higher than this without destabilizing my host. > > I had another theory that there might be an offending character in > one of the MARCXML records that is messing with the format of the > SQL commands in the .sql file. Anyone know if this is a > possibility? I'm no SQL expert, so I'm not sure what characters > to look for. > > Cheers, > > Christopher Curry > Assistant Technical Librarian / Assistant IT Officer > > American Philosophical Society > 105 South Fifth Street > Philadelphia, PA 19106-3386 > Tel. (215) 599-4299 > > ccurry at amphilsoc.org > > Main Library number: (215)440-3400 > APS website: http://www.amphilsoc.org > > > > On 09/20/2010 06:17 PM, Mike Hafen wrote: >> Sorry to hear you are having problems with your failover server. >> I've had problems with failover servers and backups before, it's >> not fun. >> >> Have you checked the memory and hard drive of the failover >> server? (the host) That would be my first guess. Also, it may >> be the filesystem on either the host or the guest, but I'm not up >> on file systems so I can't say for sure. >> >> There's a good chance the row number in the error message relates >> to the biblioitemnumber. You could try just querying the >> database for rows around that. Something like 'select * from >> biblioitems where biblioitemnumber > 41470 and biblioitemnumber < >> 41600'. If you see the error it could be data, disk, or >> filesystem related. If you don't it's probably memory related, >> but maybe filesystem related. >> >> Good luck. >> >> 2010/9/17 Christopher Curry > > >> >> Hello all, >> >> I'm trying to create a mirrored failover server with data >> from our live Koha. In order to do so, I'm using *mysqldump* >> and *mysql* for backup and restore. 
>> >> I've discovered a troubling problem and I can't determine the >> cause. >> >> I run this command to back up the live server: >> >> *mysqldump --single-transaction -ukoha -p koha > >> /home/koha/KohaServerBackups/koha.`/bin/date >> +\%Y\%m\%d\%H\%M\%S`.sql* >> >> This seems to work correctly (and very quickly! 1.7 GB >> database exports in 2 min) >> >> Then, I run this command: >> >> *mysql -v -ukoha -p koha < >> /home/koha/KohaServerBackups/backupFileName.sql* >> >> This also seems to work, as I receive no warnings or error >> messages. >> >> I'm exporting from a 3.00.05.001 system and importing to a >> 3.00.05.003 system, so I then run the >> *$KOHA_SOURCE/installer/data/mysql/updatedatabase.pl >> * script. >> >> Relevant specs: >> >> MySQL version: mysql Ver 14.12 Distrib 5.0.51a, for >> debian-linux-gnu (i486) using readline 5.2 >> OS: Debian Lenny >> >> Failover server is virtual, running on VirtualBox 3.2.8, on >> an Ubuntu 10.4 host. >> >> >> All GUI functions of the Koha failover server seem to operate >> correctly, but when I run *mysqlcheck -ukoha -p koha* the >> check fails on koha.biblioitems with the following error message: >> >> mysqlcheck: Got error: 2013: Lost connection to MySQL server >> during query when executing 'CHECK TABLE ... ' >> >> If I try mysqldump, I get the same error, but it is more >> specific, reporting that it fails on row 41536. >> >> If I check /var/log/syslog, I see this >> http://pastebin.com/YuuFBHry >> >> "InnoDB: Database page corruption on disk or a failed file >> read of page 58164", etc. >> >> Both mysqlcheck & mysqldump work without error on the live >> server, so I'm thinking that something must be happening to >> the data during the export or import that corrupts the InnoDB >> data, but this is speculation, since I'm not a MySQL expert. >> >> Has anyone seen behavior like this? Any suggestions for >> further troubleshooting/resolution? 
>> >> -- >> >> Cheers, >> >> Christopher Curry >> Assistant Technical Librarian / Assistant IT Officer >> >> American Philosophical Society >> 105 South Fifth Street >> Philadelphia, PA 19106-3386 >> Tel. (215) 599-4299 >> >> ccurry at amphilsoc.org >> >> Main Library number: (215)440-3400 >> APS website: http://www.amphilsoc.org >> >> >> >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ohiocore at gmail.com Tue Sep 21 17:15:35 2010 From: ohiocore at gmail.com (Joe Atzberger) Date: Tue, 21 Sep 2010 11:15:35 -0400 Subject: [Koha-devel] mysqldump & restore issues In-Reply-To: References: <4C93C011.3040506@amphilsoc.org> <4C98B5A6.5020100@amphilsoc.org> Message-ID: I wouldn't be surprised if this was related to dynamic allocation on the VM. Essentially, it can inject a random very large I/O latency to any operation. It may also change the hardware model used by the VM. I would recommend looking at InnoDB diagnostics, and possibly starting up w/ innodb_force_recovery=1 set in my.cnf. Run "check table" on each table (or at least the ones you've detected problems with) for closer analysis. --Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From ohiocore at gmail.com Tue Sep 21 17:17:47 2010 From: ohiocore at gmail.com (Joe Atzberger) Date: Tue, 21 Sep 2010 11:17:47 -0400 Subject: [Koha-devel] mysqldump & restore issues In-Reply-To: References: <4C93C011.3040506@amphilsoc.org> <4C98B5A6.5020100@amphilsoc.org> Message-ID: For those who didn't read the pasted logs, one link they mentioned is here: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html -------------- next part -------------- An HTML attachment was scrubbed... 
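Joe's forced-recovery suggestion boils down to a one-line my.cnf change; a hedged sketch (the option and section name are standard MySQL, but the file location depends on your distribution's layout):

```ini
# my.cnf -- temporary crash-recovery setting only.
# With innodb_force_recovery greater than 0, InnoDB starts in a
# restricted, read-mostly mode so that damaged tables can still be
# read and dumped. Remove the line (or set it back to 0) and restart
# once the repair or dump is done.
[mysqld]
innodb_force_recovery = 1
```

After restarting mysqld, running CHECK TABLE biblioitems; (and the same for any other suspect tables) from the mysql client gives the closer analysis Joe describes. Higher values of innodb_force_recovery skip progressively more of InnoDB's startup recovery, so it is safest to start at 1 as he suggests.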
URL: From ccurry at amphilsoc.org Tue Sep 21 22:26:20 2010 From: ccurry at amphilsoc.org (Christopher Curry) Date: Tue, 21 Sep 2010 16:26:20 -0400 Subject: [Koha-devel] mysqldump & restore issues In-Reply-To: References: <4C93C011.3040506@amphilsoc.org> <4C98B5A6.5020100@amphilsoc.org> Message-ID: <4C9914EC.8030605@amphilsoc.org> Joe, Thanks for the information. I think I'll just try restoring the mysqldump on a physical host and see if I have the same problems. If it works, I'll build another VM with fixed-size storage and hopefully that will be the end of my worries. I'm hoping to not have to look too closely at the InnoDB stuff. I wasn't aware that dynamic allocation could have this effect. I was trying to minimize the footprint of the VM image so I could thoughtlessly copy the image to back up my work. Now that I have the process all mapped out, I should be able to decide on a sufficient size and still have some easy backup capability for templates, et al. Cheers, Christopher Curry Assistant Technical Librarian / Assistant IT Officer American Philosophical Society 105 South Fifth Street Philadelphia, PA 19106-3386 Tel. (215) 599-4299 ccurry at amphilsoc.org Main Library number: (215)440-3400 APS website: http://www.amphilsoc.org On 09/21/2010 11:15 AM, Joe Atzberger wrote: > I wouldn't be surprised if this was related to dynamic allocation on > the VM. Essentially, it can inject a random very large I/O latency to > any operation. It may also change the hardware model used by the VM. > > I would recommend looking at InnoDB diagnostics, and possibly starting > up w/ innodb_force_recovery=1 set in my.cnf. Run "check table" on > each table (or at least the ones you've detected problems with) for > closer analysis. > > --Joe -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tajoli at cilea.it Fri Sep 24 17:34:57 2010 From: tajoli at cilea.it (Zeno Tajoli) Date: Fri, 24 Sep 2010 17:34:57 +0200 Subject: [Koha-devel] The work on bibliographic relationships (RFC 3.4 Analytic Record support) Message-ID: <4C9CC521.8020409@cilea.it> Hi to all, as I said, I have a sponsor to develop this RFC. Savitra Sirohi wrote here that he wants to do the work in Oct 2010. I can also work on this RFC in Oct 2010, but also later. Probably one person from ByWater also has a task on this RFC (Ian Walls?). Are there other developers? I think we need to start to coordinate ourselves. Do we need a bug on bugzilla? A specific git server / branch? How do we develop in parallel? I have added a chat log about this RFC at http://wiki.koha-community.org/wiki/Talk:Analytic_Record_support I have also added some considerations about MARC21 and Unimarc here: http://wiki.koha-community.org/wiki/Analytic_Record_support#MARC21_and_Unimarc_about_relationships My considerations: -- I don't want to use the embedded fields in Unimarc; they are too complex to use. -- In the near future I will add an optional plugin in the z39.50 section to translate Unimarc 4xx fields built with embedded fields into 4xx fields with standard fields. In Italy all Unimarc z39.50 servers use embedded fields. -- I like the idea of having all relations in SQL tables, but with the option to insert them into MARC data. 
For me the default is 'insert them into MARC, select to not insert'. -- I suggest working mainly with cataloguing plugins. Bye Zeno -- Zeno Tajoli CILEA - Segrate (MI) tajoliAT_SPAM_no_prendiATcilea.it (Anti-spam masked address; replace everything between the ATs with @) From nengard at gmail.com Sat Sep 25 16:30:45 2010 From: nengard at gmail.com (Nicole Engard) Date: Sat, 25 Sep 2010 10:30:45 -0400 Subject: [Koha-devel] Proposal to change phone and email language on patron In-Reply-To: References: <1275007046.2430.201.camel@zarathud> Message-ID: Enhancement request added: http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=5252 Feel free to discuss other options there. On Sun, May 30, 2010 at 8:51 PM, Nicole Engard wrote: > Page created: http://wiki.koha-community.org/wiki/JQuery_Library I > used the same template we use for the SQL Report Library. > > Nicole > > On Sun, May 30, 2010 at 2:32 PM, Brendan Gallagher > wrote: >> I think a page on the wiki - similar to the SQL library would be good - >> something like JQuery Library. This could be a good piece to add and I know >> that we've got a few that we would be interested in adding. >> >> Thanks, >> Brendan >> >> On Sun, May 30, 2010 at 12:51 AM, Liz Rea wrote: >>> >>> Yep, will hit you with it Tuesday. :) >>> >>> Liz >>> >>> On Fri, May 28, 2010 at 7:59 PM, Nicole Engard wrote: >>> > Liz, that is exactly why I was asking for this - it was confusing >>> > librarians that the home was all that printed to the screen. But it >>> > sounds like jquery is the answer for now ... want to share that code >>> > with me? >>> > >>> > Nicole >>> > >>> > On Fri, May 28, 2010 at 5:45 PM, Liz Rea wrote: >>> >> NEKLS has changed all of the labels for phone to "Phone (Primary)" and >>> >> "Phone (Secondary)" instead of home/mobile, using jquery. Same for email. 
We >>> >> also added a little note that says "Will be printed on transit slips" beside >>> >> the correct fields, as an additional reminder as to which one will print >>> >> out. The reason for this was that Koha only shows the primary phone/email >>> >> address on printed slips. People who only had mobile phones, and had those >>> >> phone numbers added to the mobile field were not getting their number >>> >> printed out on hold slips, confusing library staff and making it hard to >>> >> contact them regarding their holds. Of course, the easy answer is not >>> >> technological: just put the mobile number in the home phone field. As it >>> >> turns out, some of our staff are very literal in their interpretation of the >>> >> labels on the fields. >>> >> >>> >> Which actually brings up another thing that probably needs to be added >>> >> as a "boy it would be nice if..." bug: if there is only a mobile number in >>> >> the record, that is the one that should be printed on the slip(s). That is, >>> >> if we're not going to change the labels permanently. >>> >> >>> >> The logic should be as follows: if the home phone is empty, then print >>> >> the mobile. (same with email addresses: if no home email, print the work >>> >> email on the slip) >>> >> >>> >> And happy weekend, everybody. >>> >> >>> >> Liz Rea >>> >> NEKLS >>> >> >>> >> On May 27, 2010, at 7:37 PM, Robin Sheat wrote: >>> >> >>> >>> On Thursday 27-05-2010 at 20:29 [timezone -0400], Nicole >>> >>> Engard wrote: >>> >>>> What do you all think? Anyone have any strong feelings either way? I >>> >>>> will gladly make the change if others think this makes sense, but I >>> >>>> wanted to bring the question to you all first. >>> >>> >>> >>> This would suit some work we've been doing with a library internal to >>> >>> a >>> >>> company: in this situation 'Phone (home)' doesn't make sense, and >>> >>> 'primary' would be much more meaningful. >>> >>> >>> >>> -- >>> >>> Robin Sheat >>> >>> Catalyst IT Ltd. 
>>> >>> +64 4 803 2204 >>> >>> _______________________________________________ >>> >>> Koha-devel mailing list >>> >>> Koha-devel at lists.koha-community.org >>> >>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >>> >> >>> >> >>> > >>> _______________________________________________ >>> Koha-devel mailing list >>> Koha-devel at lists.koha-community.org >>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> >> >> -- >> --------------------------------------------------------------------------------------------------------------- >> Brendan A. Gallagher >> ByWater Solutions >> CEO, Director of Innovation >> Support and Consulting for Open Source Software >> Installation, Data Migration, Training, Customization, Hosting >> and Complete Support Packages >> Headquarters: Santa Barbara, CA - Office: West Haven, CT >> Phone # (888) 900-8944 >> http://bywatersolutions.com >> info at bywatersolutions.com >> >> >> See us at ALA : BOOTH # 817 >> > From M.de.Rooy at rijksmuseum.nl Mon Sep 27 13:34:35 2010 From: M.de.Rooy at rijksmuseum.nl (Marcel de Rooy) Date: Mon, 27 Sep 2010 11:34:35 +0000 Subject: [Koha-devel] Authority merge Message-ID: <809BE39CD64BFD4EB9036172EBCCFA3103D556@S-MAIL-1B.rijksmuseum.intra> Hi developers, I have a question on the update of biblio records after changing an authority. The 3.0 / 3.2 code in authorities.pl/AuthoritiesMarc.pm apparently replaces the complete MARC field in the biblio record (say 700) with the report tag of the authority record (say 100 for PERSO_NAME). This means that if I have an additional subfield e.g. 4 (relator code) in the biblio record with such an authority, an update of the authority record (without such a relator code) simply discards such extra subfields on the biblio side. My question is: Are we misunderstanding MARC in my library and should we always put the relator code on the authority side, or is this code doing something unintentional? 
If the first is true and I would have two separate relator codes for one authority, should I then make one authority record per relator code? It seems somewhat odd. If the latter should be the case, I could write a bug report and submit a patch for it. Thanks for your time. Marcel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccurry at amphilsoc.org Wed Sep 29 22:18:28 2010 From: ccurry at amphilsoc.org (Christopher Curry) Date: Wed, 29 Sep 2010 16:18:28 -0400 Subject: [Koha-devel] mysqldump & restore issues In-Reply-To: <4C9914EC.8030605@amphilsoc.org> References: <4C93C011.3040506@amphilsoc.org> <4C98B5A6.5020100@amphilsoc.org> <4C9914EC.8030605@amphilsoc.org> Message-ID: <4CA39F14.90104@amphilsoc.org> Joe, You were right. I tried this restore on a physical host and my problem disappeared, so I rebuilt the VM with fixed-size storage and confirmed that dynamic allocation was the source of the problem. Cheers, Christopher Curry Assistant Technical Librarian / Assistant IT Officer American Philosophical Society 105 South Fifth Street Philadelphia, PA 19106-3386 Tel. (215) 599-4299 ccurry at amphilsoc.org Main Library number: (215)440-3400 APS website: http://www.amphilsoc.org On 09/21/2010 04:26 PM, Christopher Curry wrote: > Joe, > > Thanks for the information. I think I'll just try restoring the > mysqldump on a physical host and see if I have the same problems. > If it works, I'll build another VM with fixed-size storage and > hopefully that will be the end of my worries. I'm hoping to not have > to look too closely at the InnoDB stuff. > > I wasn't aware that dynamic allocation could have this effect. I was > trying to minimize the footprint of the VM image so I could > thoughtlessly copy the image to back up my work. Now that I have the > process all mapped out, I should be able to decide on a sufficient > size and still have some easy backup capability for templates, et al. 
> > Cheers, > > Christopher Curry > Assistant Technical Librarian / Assistant IT Officer > > American Philosophical Society > 105 South Fifth Street > Philadelphia, PA 19106-3386 > Tel. (215) 599-4299 > > ccurry at amphilsoc.org > > Main Library number: (215)440-3400 > APS website: http://www.amphilsoc.org > > > > On 09/21/2010 11:15 AM, Joe Atzberger wrote: >> I wouldn't be surprised if this was related to dynamic allocation on >> the VM. Essentially, it can inject a random very large I/O latency >> to any operation. It may also change the hardware model used by the VM. >> >> I would recommend looking at InnoDB diagnostics, and possibly >> starting up w/ innodb_force_recovery=1 set in my.cnf. Run "check >> table" on each table (or at least the ones you've detected problems >> with) for closer analysis. >> >> --Joe > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From rijal.it at gmail.com Thu Sep 30 07:59:15 2010 From: rijal.it at gmail.com (Nitesh Rijal) Date: Thu, 30 Sep 2010 11:44:15 +0545 Subject: [Koha-devel] [koha] mail configuration help Message-ID: Hello all. I have been running Koha on my server for about 10 libraries. Each of them has a different public IP for access. What are the things that I need in order to use the mail functionality for sending overdue notices and other related functions? Is there some step by step guide for it? We already have a mail server at some other IP address, so is it possible to use that server as the mail server for Koha as well, or is it necessary that the machine running the Koha server also have a mail server configured on it? Please reply. Regards. 
-- Nitesh Rijal BE IT rijal.it at gmail.com http://niteshrijal.com.np http://facebook.com/openrijal http://twitter.com/openrijal +9779841458173 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdhafen at tech.washk12.org Thu Sep 30 16:20:14 2010 From: mdhafen at tech.washk12.org (Mike Hafen) Date: Thu, 30 Sep 2010 08:20:14 -0600 Subject: [Koha-devel] [koha] mail configuraiton help In-Reply-To: References: Message-ID: Koha relies a lot on the mail functions of the server. For example I have my server running postfix and set to send email through another mail server. As far as Koha itself, a lot of the mail functionality runs from cron. There are a couple perl scripts, most notably in this case the following: overdue_notices.pl, advance_notices.pl, and process_message_queue.pl. These are in the crontab.example. 2010/9/29 Nitesh Rijal > Hello all. > > I have been running koha in my server for about 10 libraries. Each of them > have a different public IP for accesssing. > > What are the things that I need inorder to use the mail functionality for > sending overdue notices and other related functions? > > Is there some step by step guide for it? > > We already have a mail server at some other IP address, so is it possible > to use that server as mail server for koha as well or is it necessary that > the machine that has koha server, should also have mail server configured in > it? > > Please reply. > > Regards. > > -- > Nitesh Rijal > BE IT > rijal.it at gmail.com > http://niteshrijal.com.np > http://facebook.com/openrijal > http://twitter.com/openrijal > +9779841458173 > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nengard at gmail.com Thu Sep 30 21:50:46 2010 From: nengard at gmail.com (Nicole Engard) Date: Thu, 30 Sep 2010 15:50:46 -0400 Subject: [Koha-devel] October Newsletter Call for Articles Message-ID: It's that time again, the next newsletter will be published in a couple of weeks. I need any and all announcements/tips/tricks/news/koha events sent to me by the 13th of October if I'm to get the newsletter out on time. Articles should be short and if you have a lot to say then send along a link to the full article. Remember you don't have to write a whole lot, just a short 2 liner is A-OK! Thanks in advance, Nicole C. Engard