From tomascohen at gmail.com Mon Nov 2 02:52:27 2015
From: tomascohen at gmail.com (Tomas Cohen Arazi)
Date: Sun, 1 Nov 2015 22:52:27 -0300
Subject: [Koha-devel] IMPORTANT: Global Bug Squashing Week
Message-ID: 

Hi everyone, I'm sorry I failed to send this email earlier. I have set up a Trello board for managing the teamwork on the bugfixes we need for the upcoming 3.22 release.

There are several minor ones, but also a couple of nasty ones (Plack-related).

The point is to work together on those, keep communication flowing about what we are doing and, most importantly, have fun while we do important things for the release. Video conference sessions if we need them, discussions about a specific bug solution on IRC. Whatever you find useful.

To use the Trello board, you need to have access to Trello (you need to sign into an account). But it is not different from what you can see in Bugzilla, only categorized by priority (set by participants) and with information about who is doing what at a given time. If you think something needs to have higher priority, just say so. If you want to work on a bug but don't want to join the proprietary Trello, just tell me you are working on that bug, so I can mark it on the board and others know, and spend time on other bugs.

Please help Koha, and join us!
The Koha development team

--
Tomás Cohen Arazi Theke Solutions (http://theke.io) ☎ +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robin at catalyst.net.nz Mon Nov 2 23:00:12 2015
From: robin at catalyst.net.nz (Robin Sheat)
Date: Tue, 03 Nov 2015 11:00:12 +1300
Subject: [Koha-devel] [Koha] Zebra (Koha 3.20.05) - Run it as a cron job or use default daemon?
In-Reply-To: 
References: <76B657F0CE544943AD3B3CD99B9C1C05532B6C@RCMMX1.rcmusic.local> <5635EA2F.2020008@web.de>
Message-ID: <1446501612.28981.69.camel@catalyst.net.nz>

Tomas Cohen Arazi schreef op ma 02-11-2015 om 14:55 [-0300]:
> To enable it you need to
> - Comment the rebuild_zebra.pl line in /etc/cron.d/koha-common
> - Enable the indexer daemon in /etc/default/koha-common
> - Restart Koha's daemons:
> $ sudo service koha-common stop ; sudo service koha-common start

Y'know, it could be reasonable to have the rebuilder check the state of the indexer daemon switch and not do anything if it's on.

--
Robin Sheat Catalyst IT Ltd. ☎ +64 4 803 2204 GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: This is a digitally signed message part
URL: 

From bargioni at pusc.it Tue Nov 3 10:13:49 2015
From: bargioni at pusc.it (Stefano Bargioni)
Date: Tue, 3 Nov 2015 10:13:49 +0100
Subject: [Koha-devel] Link to the Current Stable Release is broken
Message-ID: 

At page http://koha-community.org/download-koha/, the link to the "Current Stable Release (.tar.gz)" is broken.
Thx. Stefano

From akafortes at gmail.com Tue Nov 3 13:16:22 2015
From: akafortes at gmail.com (akafortes)
Date: Tue, 3 Nov 2015 05:16:22 -0700 (MST)
Subject: [Koha-devel] Can't hide fields in Normal View on OPAC
Message-ID: <1446552982220-5859835.post@n5.nabble.com>

I have edited my MARC bibliographic framework and changed the visibility of the fields that I don't want to be shown in the OPAC. The problem is that in the "Normal View" I still get some of the fields that were supposed to be hidden. They are only hidden in the "MARC View".
Is there a way to change this?

--
View this message in context: http://koha.1045719.n5.nabble.com/Can-t-hide-fields-in-Normal-View-on-OPAC-tp5859835.html
Sent from the Koha-devel mailing list archive at Nabble.com.

From fridolin.somers at biblibre.com Tue Nov 3 13:36:05 2015
From: fridolin.somers at biblibre.com (Fridolin SOMERS)
Date: Tue, 03 Nov 2015 13:36:05 +0100
Subject: [Koha-devel] koha 3.14.17 released
In-Reply-To: <55898913.6060106@biblibre.com>
References: <54C25C09.4040006@biblibre.com> <55898913.6060106@biblibre.com>
Message-ID: <5638AA35.1070800@biblibre.com>

The Koha community is proud to announce the release of 3.14.17. This is a maintenance release.
As always, you can download the release from http://download.koha-community.org.
Have a look at the release post: http://koha-community.org/koha-3-14-17-released
Since 3.22 is going to be released this month, this will be the last 3.14 version.

Regards,
--
Fridolin SOMERS Biblibre - Pôles support et système fridolin.somers at biblibre.com

From gmc at esilibrary.com Tue Nov 3 14:27:10 2015
From: gmc at esilibrary.com (Galen Charlton)
Date: Tue, 3 Nov 2015 08:27:10 -0500
Subject: [Koha-devel] Link to the Current Stable Release is broken
In-Reply-To: 
References: 
Message-ID: 

Hi Stefano,

On Tue, Nov 3, 2015 at 4:13 AM, Stefano Bargioni wrote:
> At page http://koha-community.org/download-koha/, the link to the "Current Stable Release (.tar.gz)" is broken.
> Thx. Stefano

I checked, and it looks like somebody corrected the link earlier today.

Regards,

Galen
--
Galen Charlton Infrastructure and Added Services Manager Equinox Software, Inc. / The Open Source Experts
email: gmc at esilibrary.com direct: +1 770-709-5581 cell: +1 404-984-4366 skype: gmcharlt web: http://www.esilibrary.com/
Supporting Koha and Evergreen: http://koha-community.org & http://evergreen-ils.org

From tomascohen at gmail.com Tue Nov 3 14:28:35 2015
From: tomascohen at gmail.com (Tomas Cohen Arazi)
Date: Tue, 3 Nov 2015 10:28:35 -0300
Subject: [Koha-devel] Link to the Current Stable Release is broken
In-Reply-To: 
References: 
Message-ID: 

Thanks for reporting!

2015-11-03 6:13 GMT-03:00 Stefano Bargioni :
> At page http://koha-community.org/download-koha/, the link to the
> "Current Stable Release (.tar.gz)" <
> http://download.koha-community.org/koha-latest.tar.gz> is broken.
> Thx. Stefano
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/
>

--
Tomás Cohen Arazi Theke Solutions (http://theke.io) ☎ +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From barton at bywatersolutions.com Tue Nov 3 15:00:36 2015
From: barton at bywatersolutions.com (Barton Chittenden)
Date: Tue, 3 Nov 2015 09:00:36 -0500
Subject: [Koha-devel] [Koha] Zebra (Koha 3.20.05) - Run it as a cron job or use default daemon?
In-Reply-To: <1446501612.28981.69.camel@catalyst.net.nz> References: <76B657F0CE544943AD3B3CD99B9C1C05532B6C@RCMMX1.rcmusic.local> <5635EA2F.2020008@web.de> <1446501612.28981.69.camel@catalyst.net.nz> Message-ID: On Mon, Nov 2, 2015 at 5:00 PM, Robin Sheat wrote: > Tomas Cohen Arazi schreef op ma 02-11-2015 om 14:55 [-0300]: > > To enable it you need to > > - Comment the rebuild_zebra.pl line in /etc/cron.d/koha-common > > - Enable the indexer daemon in /etc/default/koha-common > > - Restart Koha's daemons: > > $ sudo service koha-common stop ; sudo service koha-common start > > Y'know, it could be reasonable to have the rebuilder check the state of > the indexer daemon switch and not do anything if it's on. > Or, to take this a step further, have some sort of trigger that would request that all items get rebuilt by the daemon, obviating the need for a full rebuild. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomascohen at gmail.com Tue Nov 3 15:02:26 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Tue, 3 Nov 2015 11:02:26 -0300 Subject: [Koha-devel] [Koha] Zebra (Koha 3.20.05) - Run it as a cron job or use default daemon? In-Reply-To: References: <76B657F0CE544943AD3B3CD99B9C1C05532B6C@RCMMX1.rcmusic.local> <5635EA2F.2020008@web.de> <1446501612.28981.69.camel@catalyst.net.nz> Message-ID: 2015-11-03 11:00 GMT-03:00 Barton Chittenden : > > > On Mon, Nov 2, 2015 at 5:00 PM, Robin Sheat wrote: > >> Tomas Cohen Arazi schreef op ma 02-11-2015 om 14:55 [-0300]: >> > To enable it you need to >> > - Comment the rebuild_zebra.pl line in /etc/cron.d/koha-common >> > - Enable the indexer daemon in /etc/default/koha-common >> > - Restart Koha's daemons: >> > $ sudo service koha-common stop ; sudo service koha-common start >> >> Y'know, it could be reasonable to have the rebuilder check the state of >> the indexer daemon switch and not do anything if it's on. >> > > Or, to take this a step further, have some sort of trigger that would > request that all items get rebuilt by the daemon, obviating the need for a > full rebuild. > Nah, this is an incremental update thing. We could discuss polling vs. using a real queue management tool. But that's for another thread I guess. -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomascohen at gmail.com Tue Nov 3 15:35:08 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Tue, 3 Nov 2015 11:35:08 -0300 Subject: [Koha-devel] IMPORTANT: Global Bug Squashing Week In-Reply-To: References: Message-ID: Hi everyone, this is just a reminder for those interested on spending time fixing bugs for the release that join us on the Trello board and/or contact me or any other QA team member so we join forces to fix the small remaining issues for the release. 2015-11-01 22:52 GMT-03:00 Tomas Cohen Arazi : > Hi everyone, I'm sorry I missed to send this email earlier. I have set a > Trello board for managing the team-work to have the bugfixes we need for > the upcoming 3.22 release. > > There are several minor ones, but also a couple nasty ones (Plack-related). > > The point is to work together on those, have a fluent communication on > what are we doing, and most important, have fun while we do important > things for the release. Video conference sessions if we need them, > discussions about a specific bug solution on IRC. Whatever you find useful. 
> > For using the Trello board, you need to have access to Trello (you need to > sign into an account). But it is not different than what you can see in > bugzilla. Only categorized by priority (set by participants) and with > information about who is doing what at a given time. If you think something > needs to have higher priority, just say so. If you want to work on a bug, > but don't want to join the propietary Trello, just tell me you are working > on that bug, so I mark it on the board and others know, and spend time on > another bugs. > > Please help Koha, and join us! > The Koha development team > > -- > Tom?s Cohen Arazi > Theke Solutions (http://theke.io) > ? +54 9351 3513384 > GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F > -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From veron at veron.ch Tue Nov 3 16:42:34 2015 From: veron at veron.ch (=?UTF-8?Q?Marc_V=c3=a9ron?=) Date: Tue, 3 Nov 2015 16:42:34 +0100 Subject: [Koha-devel] IMPORTANT: Global Bug Squashing Week In-Reply-To: References: Message-ID: <5638D5EA.1010105@veron.ch> An HTML attachment was scrubbed... URL: From mtompset at hotmail.com Tue Nov 3 16:54:54 2015 From: mtompset at hotmail.com (Mark Tompsett) Date: Tue, 3 Nov 2015 10:54:54 -0500 Subject: [Koha-devel] Can't hide fields in Normal View on OPAC In-Reply-To: <1446552982220-5859835.post@n5.nabble.com> References: <1446552982220-5859835.post@n5.nabble.com> Message-ID: Greetings, I believe bug 11592 would help solve that, but I have yet to get back to it. GPML, Mark Tompsett -----Original Message----- From: akafortes Sent: Tuesday, November 03, 2015 7:16 AM To: koha-devel at lists.koha-community.org Subject: [Koha-devel] Can't hide fields in Normal View on OPAC I have edited my MARC bibliographic framework and changed the visibility of the fields that I don't want to be shown in the OPAC. The problem is that in the "Normal View" I still get some of the fields that were supposed to be hidden. They are only hidden in the "MARC View". Is there a way to change this? -- View this message in context: http://koha.1045719.n5.nabble.com/Can-t-hide-fields-in-Normal-View-on-OPAC-tp5859835.html Sent from the Koha-devel mailing list archive at Nabble.com. _______________________________________________ Koha-devel mailing list Koha-devel at lists.koha-community.org http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel website : http://www.koha-community.org/ git : http://git.koha-community.org/ bugs : http://bugs.koha-community.org/ From tomascohen at gmail.com Tue Nov 3 17:13:47 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Tue, 3 Nov 2015 13:13:47 -0300 Subject: [Koha-devel] IMPORTANT: Global Bug Squashing Week In-Reply-To: References: Message-ID: And i forgot to add the url to the email: https://trello.com/b/U2qRMwJO El 3 nov. 2015 11:35 a. m., "Tomas Cohen Arazi" escribi?: > Hi everyone, this is just a reminder for those interested on spending time > fixing bugs for the release that join us on the Trello board and/or contact > me or any other QA team member so we join forces to fix the small remaining > issues for the release. > > 2015-11-01 22:52 GMT-03:00 Tomas Cohen Arazi : > >> Hi everyone, I'm sorry I missed to send this email earlier. I have set a >> Trello board for managing the team-work to have the bugfixes we need for >> the upcoming 3.22 release. 
>> >> There are several minor ones, but also a couple nasty ones >> (Plack-related). >> >> The point is to work together on those, have a fluent communication on >> what are we doing, and most important, have fun while we do important >> things for the release. Video conference sessions if we need them, >> discussions about a specific bug solution on IRC. Whatever you find useful. >> >> For using the Trello board, you need to have access to Trello (you need >> to sign into an account). But it is not different than what you can see in >> bugzilla. Only categorized by priority (set by participants) and with >> information about who is doing what at a given time. If you think something >> needs to have higher priority, just say so. If you want to work on a bug, >> but don't want to join the propietary Trello, just tell me you are working >> on that bug, so I mark it on the board and others know, and spend time on >> another bugs. >> >> Please help Koha, and join us! >> The Koha development team >> >> -- >> Tom?s Cohen Arazi >> Theke Solutions (http://theke.io) >> ? +54 9351 3513384 >> GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F >> > > > > -- > Tom?s Cohen Arazi > Theke Solutions (http://theke.io) > ? +54 9351 3513384 > GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.moravec at gmail.com Tue Nov 3 21:05:40 2015 From: josef.moravec at gmail.com (Josef Moravec) Date: Tue, 03 Nov 2015 20:05:40 +0000 Subject: [Koha-devel] IMPORTANT: Global Bug Squashing Week In-Reply-To: References: Message-ID: Hello Tomas, Trello says I need an invitation to join, could you please invite me? Thanks Josef ?t 3. 11. 2015 v 17:13 odes?latel Tomas Cohen Arazi napsal: > And i forgot to add the url to the email: > > https://trello.com/b/U2qRMwJO > El 3 nov. 2015 11:35 a. m., "Tomas Cohen Arazi" > escribi?: > >> Hi everyone, this is just a reminder for those interested on spending >> time fixing bugs for the release that join us on the Trello board and/or >> contact me or any other QA team member so we join forces to fix the small >> remaining issues for the release. >> >> 2015-11-01 22:52 GMT-03:00 Tomas Cohen Arazi : >> >>> Hi everyone, I'm sorry I missed to send this email earlier. I have set a >>> Trello board for managing the team-work to have the bugfixes we need for >>> the upcoming 3.22 release. >>> >>> There are several minor ones, but also a couple nasty ones >>> (Plack-related). >>> >>> The point is to work together on those, have a fluent communication on >>> what are we doing, and most important, have fun while we do important >>> things for the release. Video conference sessions if we need them, >>> discussions about a specific bug solution on IRC. Whatever you find useful. >>> >>> For using the Trello board, you need to have access to Trello (you need >>> to sign into an account). But it is not different than what you can see in >>> bugzilla. Only categorized by priority (set by participants) and with >>> information about who is doing what at a given time. If you think something >>> needs to have higher priority, just say so. If you want to work on a bug, >>> but don't want to join the propietary Trello, just tell me you are working >>> on that bug, so I mark it on the board and others know, and spend time on >>> another bugs. >>> >>> Please help Koha, and join us! >>> The Koha development team >>> >>> -- >>> Tom?s Cohen Arazi >>> Theke Solutions (http://theke.io) >>> ? 
+54 9351 3513384 >>> GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F >>> >> >> >> >> -- >> Tom?s Cohen Arazi >> Theke Solutions (http://theke.io) >> ? +54 9351 3513384 >> GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F >> > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andreas.Hedstrom.Mace at sub.su.se Wed Nov 4 09:54:36 2015 From: Andreas.Hedstrom.Mace at sub.su.se (=?utf-8?B?QW5kcmVhcyBIZWRzdHLDtm0gTWFjZQ==?=) Date: Wed, 4 Nov 2015 08:54:36 +0000 Subject: [Koha-devel] IMPORTANT: Global Bug Squashing Week In-Reply-To: References: Message-ID: <92AE32DC9C30BE42B2675D0FD82D91F2105E4552@mail.intranet.sub.su.se> Or maybe make the Trello board public? Best regards, Andreas ____________________________________ Andreas Hedstr?m Mace Librarian Stockholm University Library Stockholm University 106 91 Stockholm Tel: +46 (0) 8-16 49 17 www.sub.su.se ____________________________________ Fr?n: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel-bounces at lists.koha-community.org] F?r Josef Moravec Skickat: den 3 november 2015 21:06 Till: Tomas Cohen Arazi; koha-devel ?mne: Re: [Koha-devel] IMPORTANT: Global Bug Squashing Week Hello Tomas, Trello says I need an invitation to join, could you please invite me? Thanks Josef ?t 3. 11. 2015 v 17:13 odes?latel Tomas Cohen Arazi > napsal: And i forgot to add the url to the email: https://trello.com/b/U2qRMwJO El 3 nov. 2015 11:35 a. m., "Tomas Cohen Arazi" > escribi?: Hi everyone, this is just a reminder for those interested on spending time fixing bugs for the release that join us on the Trello board and/or contact me or any other QA team member so we join forces to fix the small remaining issues for the release. 2015-11-01 22:52 GMT-03:00 Tomas Cohen Arazi >: Hi everyone, I'm sorry I missed to send this email earlier. I have set a Trello board for managing the team-work to have the bugfixes we need for the upcoming 3.22 release. There are several minor ones, but also a couple nasty ones (Plack-related). The point is to work together on those, have a fluent communication on what are we doing, and most important, have fun while we do important things for the release. Video conference sessions if we need them, discussions about a specific bug solution on IRC. Whatever you find useful. For using the Trello board, you need to have access to Trello (you need to sign into an account). But it is not different than what you can see in bugzilla. Only categorized by priority (set by participants) and with information about who is doing what at a given time. If you think something needs to have higher priority, just say so. If you want to work on a bug, but don't want to join the propietary Trello, just tell me you are working on that bug, so I mark it on the board and others know, and spend time on another bugs. Please help Koha, and join us! The Koha development team -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? 
+54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F
_______________________________________________
Koha-devel mailing list
Koha-devel at lists.koha-community.org
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mirko at abunchofthings.net Wed Nov 4 10:32:11 2015
From: mirko at abunchofthings.net (Mirko Tietgen)
Date: Wed, 04 Nov 2015 10:32:11 +0100
Subject: [Koha-devel] General IRC meeting and 3.24 elections in 30 minutes
Message-ID: <5639D09B.4090302@abunchofthings.net>

Hello everyone,

just a short reminder that we have a community IRC meeting in half an hour, and we are electing the team for the next release cycle. Please attend and show your support for our volunteers.

The meeting agenda can be found at http://wiki.koha-community.org/wiki/General_IRC_meeting_4_November_2015
Candidates for the 3.24 release cycle can be found here http://wiki.koha-community.org/wiki/Roles_for_3.24

See you there http://koha-community.org/get-involved/irc/

Mirko

--
Mirko Tietgen mirko at abunchofthings.net http://koha.abunchofthings.net http://meinkoha.de
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: 

From z.tajoli at cineca.it Wed Nov 4 18:12:23 2015
From: z.tajoli at cineca.it (Tajoli Zeno)
Date: Wed, 4 Nov 2015 18:12:23 +0100
Subject: [Koha-devel] IMPORTANT: Global Bug Squashing Week
In-Reply-To: <92AE32DC9C30BE42B2675D0FD82D91F2105E4552@mail.intranet.sub.su.se>
References: <92AE32DC9C30BE42B2675D0FD82D91F2105E4552@mail.intranet.sub.su.se>
Message-ID: <563A3C77.1090503@cineca.it>

Hi to all,

Il 04/11/2015 09:54, Andreas Hedström Mace ha scritto:
> Or maybe make the Trello board public?

the Trello board is now public, but to work on it you need to be signed up to the board.

On 05/11/2015 I will try to sign off bug 14969, "Remove C4::Dates from serials/*.pl files".

Bye
Zeno Tajoli

--
Zeno Tajoli /Dipartimento Sviluppi Innovativi/ - Automazione Biblioteche Email: z.tajoli at cineca.it Fax: 051/6132198 *CINECA* Consorzio Interuniversitario - Sede operativa di Segrate (MI)

From fridolin.somers at biblibre.com Thu Nov 5 09:33:54 2015
From: fridolin.somers at biblibre.com (Fridolin SOMERS)
Date: Thu, 05 Nov 2015 09:33:54 +0100
Subject: [Koha-devel] Indexes of Physical presentation
Message-ID: <563B1472.3050000@biblibre.com>

Hi,

In intranet search, physical presentation uses the index ff8-23.
In OPAC search, physical presentation uses the index Material-type.

This is strange, because the search always uses the coded values, meaning ff8-23 is the correct index.
Do you agree?

Regards,
--
Fridolin SOMERS Biblibre - Pôles support et système fridolin.somers at biblibre.com

From akafortes at gmail.com Thu Nov 5 12:55:52 2015
From: akafortes at gmail.com (akafortes)
Date: Thu, 5 Nov 2015 04:55:52 -0700 (MST)
Subject: [Koha-devel] Delete report group
Message-ID: <1446724552275-5860244.post@n5.nabble.com>

Is there a way to edit or delete a Report group?

--
View this message in context: http://koha.1045719.n5.nabble.com/Delete-report-group-tp5860244.html
Sent from the Koha-devel mailing list archive at Nabble.com.
From oleonard at myacpl.org Thu Nov 5 13:53:57 2015
From: oleonard at myacpl.org (Owen Leonard)
Date: Thu, 5 Nov 2015 07:53:57 -0500
Subject: [Koha-devel] Delete report group
In-Reply-To: <1446724552275-5860244.post@n5.nabble.com>
References: <1446724552275-5860244.post@n5.nabble.com>
Message-ID: 

> Is there a way to edit or delete a Report group?

Yes. Report groups and subgroups are stored as authorized values. They can be edited and deleted from the Authorized values interface in Administration:

Administration -> Authorized values -> Category REPORT_GROUP
Administration -> Authorized values -> Category REPORT_SUBGROUP

-- Owen

--
Web Developer Athens County Public Libraries http://www.myacpl.org

From fridolin.somers at biblibre.com Thu Nov 5 15:36:35 2015
From: fridolin.somers at biblibre.com (Fridolin SOMERS)
Date: Thu, 05 Nov 2015 15:36:35 +0100
Subject: [Koha-devel] UNIMARC Titles facet does not work
Message-ID: <563B6973.9050003@biblibre.com>

Hi,

A question for UNIMARC specialists in Bug 15142. It may need discussion on how to fix it.

Thanks for your help
--
Fridolin SOMERS Biblibre - Pôles support et système fridolin.somers at biblibre.com

From barton at bywatersolutions.com Thu Nov 5 17:47:16 2015
From: barton at bywatersolutions.com (Barton Chittenden)
Date: Thu, 5 Nov 2015 11:47:16 -0500
Subject: [Koha-devel] Searching numeric ranges
Message-ID: 

I am working on searching lexile number ranges. ccl.properties shows

lex 1=9903 r=r

The 'r=r' bit means that I should be able to search using a numeric range separated by a dash, e.g.

500-600

Should return any numeric results from 500 to 600.

The following query works:

cgi-bin/koha/catalogue/search.pl?q=ccl%3Dlex%2Cst-numeric%3D500-600

However, when I try adding that as an item in the search menu, as follows:

$(document).ready(function(){
    //add lexile to search pull downs
    $("select[name='idx']").append("<option value='lex,st-numeric'>Lexile</option>");
});

That gets munged... the url reads

cgi-bin/koha/catalogue/search.pl?idx=lex%2Cst-numeric&q=500-600

And I get the following message:

No results found
No results match your search for 'lex,st-numeric: 500-600'.

--Barton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chrisc at catalyst.net.nz Sun Nov 8 20:19:58 2015
From: chrisc at catalyst.net.nz (Chris Cormack)
Date: Mon, 9 Nov 2015 08:19:58 +1300
Subject: [Koha-devel] New Release team
Message-ID: <20151108191958.GI18151@rorohiko.wgtn.cat-it.co.nz>

Hi All

Could the new people for the 3.24 release cycle who need rights to push to git.koha-community.org please send me their ssh keys.

Thanks
Chris

--
Chris Cormack Catalyst IT Ltd. +64 4 803 2238 PO Box 11-053, Manners St, Wellington 6142, New Zealand
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: Digital signature
URL: 

From dcook at prosentient.com.au Mon Nov 9 07:25:07 2015
From: dcook at prosentient.com.au (David Cook)
Date: Mon, 9 Nov 2015 17:25:07 +1100
Subject: [Koha-devel] What is the largest library collection you've heard of? / Also different metadata formats / RDF (aka all the things)
Message-ID: <052f01d11ab7$60bb4cc0$2231e640$@prosentient.com.au>

Hi all:

I was just wondering... what's the biggest library collection you've heard of? What's the biggest Koha collection you've heard of?
Most recently, I recall there being a mention of 13-14 million bibliographic records in a Turkish public library consortium: http://koha.1045719.n5.nabble.com/KOHA-with-PostgreSQL-td5856359.html I did some Googling and arrived at this list: https://en.wikipedia.org/wiki/List_of_largest_libraries. The two biggest are the British Library with 170+ million items and the Library of Congress with 160+ millions. Library and Archives Canada come in third with 54 million, and the New York Public Library comes in fourth with 53 million. It drops off pretty fast after that. 2 in the 40s, 4 in the 30s, 6 in the 20s, and 3 in the teens. I think the largest Koha I've managed had a little over 1 million items and 1 million biblios. I'm guessing that many cases of "large" libraries must be in the 1-10 million range, which doesn't actually seem that bad. I sometimes wonder about the merit of a table that stored something like ("id","type","metadata"). The primary key has an index by default I believe and then maybe add one to "type" if we find necessary although it might only ever need to be accessed after the row is already retrieved. Just thinking about adding different metadata formats to Koha. In theory, Zebra can handle any XML metadata format we throw at it. I think you can index different record types into Zebra. We'd need to change how the indexing runs and add some XSLTs for indexing/retrieving those metadata formats, but I think it's doable. There's a few ways of handling the results afterward. you could add templates to the existing XSLTs so you could feed the metadata to the same XSLT regardless of format. Or we could adopt an intermediary data format (when retrieving data from Zebra, we can define our own XSLTs per record type I believe) and do our displays based on that intermediary format. The remaining troubles would then be with other places in Koha that use the MARCXML directly. such as cataloguing, which relies on mappings between the relational database and MARC and items which are composed/decomposed to/from MARCXML. But I think that's all achievable. Of course, I don't have a project at the moment that would involve adding metadata formats. Thinking more about RDF, but I think that's a bit of a barrel of monkeys. While I think RDF has merit when it comes to browsing records, I still don't see how you could effectively retrieve an RDF record from a local triplestore if you're relying on data stored on a remote server. Your RDF record might have the title you want to find, but what if you want to find a record by author? There's no local data referring to the author. You just have a triple in the record that contains an IRI pointing to the author record on another server. I don't have a lot of experience with RDF or triplestore or linked data in general. but I assume that there must be some sort of local caching of data in search indexes? David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fridolin.somers at biblibre.com Mon Nov 9 10:39:04 2015 From: fridolin.somers at biblibre.com (Fridolin SOMERS) Date: Mon, 09 Nov 2015 10:39:04 +0100 Subject: [Koha-devel] Link to the Current Stable Release is broken In-Reply-To: References: Message-ID: <564069B8.7090301@biblibre.com> Le 03/11/2015 14:27, Galen Charlton a ?crit : > Hi Stefano, > > On Tue, Nov 3, 2015 at 4:13 AM, Stefano Bargioni wrote: >> At page http://koha-community.org/download-koha/, the link to the "Current Stable Release (.tar.gz)" is broken. >> Thx. Stefano > > I checked, and it looks like somebody corrected the link earlier today. Yep its me when I released 3.14.17. > > Regards, > > Galen > -- Fridolin SOMERS Biblibre - P?les support et syst?me fridolin.somers at biblibre.com From jonathan.druart at bugs.koha-community.org Mon Nov 9 17:30:27 2015 From: jonathan.druart at bugs.koha-community.org (Jonathan Druart) Date: Mon, 9 Nov 2015 16:30:27 +0000 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release Message-ID: Hi devs, Please have a look at the these benchmarks: http://wiki.koha-community.org/wiki/Benchmark_for_3.22 I let you draw the appropriate conclusions. Cheers, Jonathan From paul.a at navalmarinearchive.com Mon Nov 9 18:56:46 2015 From: paul.a at navalmarinearchive.com (Paul A) Date: Mon, 09 Nov 2015 12:56:46 -0500 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: Message-ID: <5.2.1.1.2.20151109121553.03f9c968@pop.navalmarinearchive.com> At 04:30 PM 11/9/2015 +0000, Jonathan Druart wrote: >Hi devs, > >Please have a look at the these benchmarks: > >http://wiki.koha-community.org/wiki/Benchmark_for_3.22 > >I let you draw the appropriate conclusions. Jonathan, Is there a write up (perhaps specific code examples) on how these benchmarks are obtained? How fast are the machines (server and client)? allow any caching? if from a work station, what browser, and how is latency compensated for? For example, using one of our cataloguers' workstations -- not exactly modern, cache disabled, and using a 10/100 ethernet card on our Gigabit LAN -- with Firebug "Net", I can "add item" in 1.66s (onload: 1.68s) -- the actual POST additem.pl is 698ms, and nearly all the remainder is GETting the various *.js (utilities.js slowest at 156ms) and *.css (skin.css slowest at 125ms) But -- I'm maybe comparing "apples and pears" to the benchmarks on the Wiki. Best -- Paul From jonathan.druart at bugs.koha-community.org Mon Nov 9 19:26:01 2015 From: jonathan.druart at bugs.koha-community.org (Jonathan Druart) Date: Mon, 9 Nov 2015 18:26:01 +0000 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: <5.2.1.1.2.20151109121553.03f9c968@pop.navalmarinearchive.com> References: <5.2.1.1.2.20151109121553.03f9c968@pop.navalmarinearchive.com> Message-ID: 2015-11-09 17:56 GMT+00:00 Paul A : > At 04:30 PM 11/9/2015 +0000, Jonathan Druart wrote: >> >> Hi devs, >> >> Please have a look at the these benchmarks: >> >> http://wiki.koha-community.org/wiki/Benchmark_for_3.22 >> >> I let you draw the appropriate conclusions. > > > Jonathan, > > Is there a write up (perhaps specific code examples) on how these benchmarks > are obtained? How fast are the machines (server and client)? allow any > caching? if from a work station, what browser, and how is latency > compensated for? 
> > For example, using one of our cataloguers' workstations -- not exactly > modern, cache disabled, and using a 10/100 ethernet card on our Gigabit LAN > -- with Firebug "Net", I can "add item" in 1.66s (onload: 1.68s) -- the > actual POST additem.pl is 698ms, and nearly all the remainder is GETting the > various *.js (utilities.js slowest at 156ms) and *.css (skin.css slowest at > 125ms) > > But -- I'm maybe comparing "apples and pears" to the benchmarks on the Wiki. Yes you are, see first lines on the wiki page. > > Best -- Paul > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ From tomascohen at gmail.com Mon Nov 9 21:19:40 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Mon, 9 Nov 2015 17:19:40 -0300 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: References: Message-ID: 2015-11-09 13:30 GMT-03:00 Jonathan Druart < jonathan.druart at bugs.koha-community.org>: > Hi devs, > > Please have a look at the these benchmarks: > > http://wiki.koha-community.org/wiki/Benchmark_for_3.22 > It is expected that broader DBIC usage would have more footprint [1]. I wonder what your Plack setup is, as the packages integration uses Starman with prefork, so the load time should not be user-noticeable. Hum... I agree with Paul A, that running in Plack feels so close to zero-time, that those numbers sound unrealistic. Maybe we should go back to caching sysprefs (you could test that if you have the VM ready for it) and have the workers last shorter time (something between 1 and 5 requests) [2]. [1] Only as an example, we are now retireving sysprefs through DBIC. [2] Dobrica mentioned this in Marseille. I don't recall how many beers we had before that. -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.a at navalmarinearchive.com Mon Nov 9 22:19:16 2015 From: paul.a at navalmarinearchive.com (Paul A) Date: Mon, 09 Nov 2015 16:19:16 -0500 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: References: <5.2.1.1.2.20151109121553.03f9c968@pop.navalmarinearchive.com> <5.2.1.1.2.20151109121553.03f9c968@pop.navalmarinearchive.com> Message-ID: <5.2.1.1.2.20151109155626.04b12af0@pop.navalmarinearchive.com> At 06:26 PM 11/9/2015 +0000, Jonathan Druart wrote: >2015-11-09 17:56 GMT+00:00 Paul A : > > At 04:30 PM 11/9/2015 +0000, Jonathan Druart wrote: > >> > >> Hi devs, > >> > >> Please have a look at the these benchmarks: > >> > >> http://wiki.koha-community.org/wiki/Benchmark_for_3.22 > >> > >> I let you draw the appropriate conclusions. > > > > > > Jonathan, > > > > Is there a write up (perhaps specific code examples) on how these > benchmarks > > are obtained? How fast are the machines (server and client)? allow any > > caching? if from a work station, what browser, and how is latency > > compensated for? 
> > > > For example, using one of our cataloguers' workstations -- not exactly > > modern, cache disabled, and using a 10/100 ethernet card on our Gigabit LAN > > -- with Firebug "Net", I can "add item" in 1.66s (onload: 1.68s) -- the > > actual POST additem.pl is 698ms, and nearly all the remainder is > GETting the > > various *.js (utilities.js slowest at 156ms) and *.css (skin.css slowest at > > 125ms) > > > > But -- I'm maybe comparing "apples and pears" to the benchmarks on the > Wiki. > >Yes you are, see first lines on the wiki page. Thanks for the pointer. Next week I can probably mirror our prod. machine back to a sandbox. Is your script specific to 3.16|18 or should it run on 3.8.24 on Ubuntu 14.04? If there's a chance of success, I can certainly give it a try... Best -- Paul From dcook at prosentient.com.au Tue Nov 10 04:56:35 2015 From: dcook at prosentient.com.au (David Cook) Date: Tue, 10 Nov 2015 14:56:35 +1100 Subject: [Koha-devel] Searching numeric ranges In-Reply-To: References: Message-ID: <059601d11b6b$cb864500$6292cf00$@prosentient.com.au> After sending an email to Adam at Indexdata (and the Zebra/YAZ lists), I need to revise my last comment on this thread? The URL cgi-bin/koha/catalogue/search.pl?q=ccl%3Dlex%2Cst-numeric%3D500-600 is better read as lex,st-numeric:500-600 which works. However, the URL cgi-bin/koha/catalogue/search.pl?idx=lex%2Cst-numeric &q=500-600 creates the CCL query (rk=( lex,st-numeric="300-600")) which is translated into PQF as @attr 1=9903 @attr 4=109 @attr 2=102 300-600, which generates the error [117] Unsupported Relation attribute -- v2 addinfo ''. The problem is due to the rk=() and the double quotes. The double quotes make CCL2RPN think that it?s a single term and not a range. So it doesn?t do the r=r/r=o magic. If you remove those quotes to form a CCL query of (rk=( lex,st-numeric=300-600)), you?ll get a PQF query of @and @attr 1=9903 @attr 4=109 @attr 2=102 300 @attr 1=9903 @attr 4=109 @attr 2=102 600, which still generates the error [117] Unsupported Relation attribute -- v2 addinfo ''? but it?s more in line with what we?d expect the PQF query to look like. You might notice here that the problem with the query is that the relation attributes of @attr 2=4 and @attr 2=2 are being replaced with @attr 2=102. This means it won?t be treated as a range, which is definitely not what we want! But we don?t even get that far? we get that error because it seems that the @attr 4=109 structure attribute doesn?t seem to play nicely with @attr 2=102 (relevance). I think it?s a bug in the CCL2RPN conversion. The rk=() is specified at a higher level than the r=r/r=o special attributes, so the @attr 2=4 and @attr 2=2 attributes shouldn?t be overwritten. However, we?ll see what Indexdata has to say about the issue? David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel-bounces at lists.koha-community.org] On Behalf Of Barton Chittenden Sent: Friday, 6 November 2015 3:47 AM To: Koha-devel Subject: [Koha-devel] Searching numeric ranges I am working on searching lexile number ranges. ccl.properties shows lex 1=9903 r=r The 'r=r' bit means that I should be able to search using a numeric range separated by a dash, e.g. 500-600 Should return any numeric results from 500 to 600. 
The following query works: cgi-bin/koha/catalogue/search.pl?q=ccl%3Dlex%2Cst-numeric%3D500-600 However, when I try adding that as an item in the search menu, as follows: $(document).ready(function(){ //add lexile to search pull downs $("select[name='idx']").append(""); }); That gets munged... the url reads cgi-bin/koha/catalogue/search.pl?idx=lex%2Cst-numeric &q=500-600 And I get the following message: No results found No results match your search for 'lex,st-numeric: 500-600'. --Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.druart at bugs.koha-community.org Tue Nov 10 09:20:37 2015 From: jonathan.druart at bugs.koha-community.org (Jonathan Druart) Date: Tue, 10 Nov 2015 08:20:37 +0000 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: References: Message-ID: I use plack with the koha.psgi file from the source (misc/plack/koha.psgi) and the Proxy/ProxyPass directives in the apache config. For more info on what does the tests, please see the first lines on the wiki page, which point to bug 13690 comment 2: === I have created a selenium script (see bug 13691) which: - goes on the mainpage and processes a log in (main) - creates a patron category (add patron category) - creates a patron (add patron) - adds 3 items (add items) - checks the 3 items out to the patron (checkout) - checks the 3 items in (checkin) === 2015-11-09 20:19 GMT+00:00 Tomas Cohen Arazi : > > > 2015-11-09 13:30 GMT-03:00 Jonathan Druart > : >> >> Hi devs, >> >> Please have a look at the these benchmarks: >> >> http://wiki.koha-community.org/wiki/Benchmark_for_3.22 > > > It is expected that broader DBIC usage would have more footprint [1]. I > wonder what your Plack setup is, as the packages integration uses Starman > with prefork, so the load time should not be user-noticeable. > > Hum... > > I agree with Paul A, that running in Plack feels so close to zero-time, that > those numbers sound unrealistic. Maybe we should go back to caching sysprefs > (you could test that if you have the VM ready for it) and have the workers > last shorter time (something between 1 and 5 requests) [2]. > > [1] Only as an example, we are now retireving sysprefs through DBIC. > [2] Dobrica mentioned this in Marseille. I don't recall how many beers we > had before that. > > -- > Tom?s Cohen Arazi > Theke Solutions (http://theke.io) > ? +54 9351 3513384 > GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F From dcook at prosentient.com.au Tue Nov 10 22:31:01 2015 From: dcook at prosentient.com.au (David Cook) Date: Wed, 11 Nov 2015 08:31:01 +1100 Subject: [Koha-devel] Searching numeric ranges In-Reply-To: References: Message-ID: <066901d11bff$18807fe0$49817fa0$@prosentient.com.au> I?ve got a response from Adam at Indexdata. Impressed as always by the speed in which he replies! David: ?it seems to me that CCL2RPN isn?t working correctly when a CCL qualifier for @attr 2=102 is used in conjunction with a qualifier containing r=o/r=r, and Zebra isn?t capable of processing the @attr 2=102 relation attribute in conjunction with the @attr 4=109 structure attribute. Adam: Yep. The "local r=r or r=o" does not take precedence over the outer one. That's a bug in YAZ. It's been fixed and is part of next release of YAZ - version 5.15.0. I feel vindicated in that it was a bug as I suspected, although this fact doesn?t necessarily help us at the moment. The YAZ packaged with Wheezy is version 4.2.30? a long shot away from 5.15.0. 
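To recap the two query shapes seen earlier in this thread (illustrative only; the exact CCL that Koha builds depends on your search-related system preferences):

ccl=lex,st-numeric=500-600     -- works today: no relevance wrapper, so r=r/r=o generate the range relations
rk=(lex,st-numeric=500-600)    -- fails until YAZ 5.15.0: the rk=() relevance wrapper injects 2=102 over the range relations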
Even on OpenSuse 13.2, I'm still only running YAZ 5.1.2. I'm optimistic about getting a newer version of YAZ into the OpenSuse repositories, as I was successful in getting them to package Zebra 2.0.60, but I suspect it will be difficult for Debian users since Debian hasn't even acknowledged the issue raised by Robin in regards to updating Zebra to 2.0.60.

I have to run at the moment, but just an FYI. I've asked Adam more specifically about relevance relation attributes and numeric, year, and date registers... hopefully he can provide some insight there as well...

David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007

From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel-bounces at lists.koha-community.org] On Behalf Of Barton Chittenden
Sent: Friday, 6 November 2015 3:47 AM
To: Koha-devel
Subject: [Koha-devel] Searching numeric ranges

I am working on searching lexile number ranges. ccl.properties shows

lex 1=9903 r=r

The 'r=r' bit means that I should be able to search using a numeric range separated by a dash, e.g.

500-600

Should return any numeric results from 500 to 600.

The following query works:

cgi-bin/koha/catalogue/search.pl?q=ccl%3Dlex%2Cst-numeric%3D500-600

However, when I try adding that as an item in the search menu, as follows:

$(document).ready(function(){
    //add lexile to search pull downs
    $("select[name='idx']").append("<option value='lex,st-numeric'>Lexile</option>");
});

That gets munged... the url reads

cgi-bin/koha/catalogue/search.pl?idx=lex%2Cst-numeric&q=500-600

And I get the following message:

No results found
No results match your search for 'lex,st-numeric: 500-600'.

--Barton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Katrin.Fischer.83 at web.de Tue Nov 10 23:14:21 2015
From: Katrin.Fischer.83 at web.de (Katrin Fischer)
Date: Tue, 10 Nov 2015 23:14:21 +0100
Subject: [Koha-devel] Indexes of Physical presentation
In-Reply-To: <563B1472.3050000@biblibre.com>
References: <563B1472.3050000@biblibre.com>
Message-ID: <56426C3D.6040005@web.de>

Hi Fridolin,

I have tried to look into this, but I am not sure what you mean by 'physical presentation'. Can you explain a bit more? Is this UNIMARC specific?

Katrin

Am 05.11.2015 um 09:33 schrieb Fridolin SOMERS:
> Hie,
>
> In intranet search, physical presentation uses index ff8-23.
> In opac search, physical presentation uses index Material-type.
>
> This is strange because the search always use the coded values, meaning
> it is ff8-23 the correct index.
> Do you agree ?
>
> Regards,
>

From dcook at prosentient.com.au Thu Nov 12 02:21:39 2015
From: dcook at prosentient.com.au (David Cook)
Date: Thu, 12 Nov 2015 12:21:39 +1100
Subject: [Koha-devel] Searching numeric ranges
In-Reply-To: 
References: 
Message-ID: <071001d11ce8$7b531d50$71f957f0$@prosentient.com.au>

The latest note on the matter...

Barton, good work on highlighting this issue. It was indeed a bug in YAZ, and it's one that Indexdata fixed just 35 hours ago: http://git.indexdata.com/?p=yaz.git;a=commit;h=31596bdcae098f8acea695d44c44ee5f646b4c1f

So version 5.15.0 is the latest version of YAZ. I haven't actually tested the fix yet, but the key problem was with the 2=102 relevance relation attribute incorrectly overriding the relation attributes created by r=o/r=r in the CCL2RPN conversion. The error message that followed was due to the relevance 2=102 attribute being incompatible with numeric indexes.
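To make the before/after concrete, here is a sketch pieced together from the PQF already quoted in this thread (attribute values assume the stock lex/st-numeric lines in ccl.properties; not re-tested against 5.15.0, so treat it as illustrative rather than authoritative):

Broken conversion (the relevance attribute 2=102 overrides the range relations that r=r/r=o should produce):
  @and @attr 1=9903 @attr 4=109 @attr 2=102 500 @attr 1=9903 @attr 4=109 @attr 2=102 600

Expected conversion (greater-than-or-equal on the low bound, less-than-or-equal on the high bound):
  @and @attr 1=9903 @attr 4=109 @attr 2=4 500 @attr 1=9903 @attr 4=109 @attr 2=2 600

The second query can be sent to Zebra directly with yaz-client's "find" command (PQF is its default query type; the connection target depends on your setup), which is a handy way to confirm that the numeric register itself handles the range, independently of the CCL layer.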
I talked to Adam Dickmeiss from Indexdata, and he said that relevance works with every query structure except 4=107 (zebra local number) and 4=109 (numeric strings/indexes). In those cases, he suggests using r=o/r=r, and if you look at ccl.properties? we indeed do add r=o for st-numeric? so that?s OK. He did suggest that he might add support for relevance on numeric indexes, but not sure about that one. Barton: I don?t think you?re going to be able to add lexile searching to the advanced search without making a workaround. After all, even with a new version of YAZ released, it?s not going to be available via any package managers yet? and it could break things for people with a version of YAZ less than 5.15.0? David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel-bounces at lists.koha-community.org] On Behalf Of Barton Chittenden Sent: Friday, 6 November 2015 3:47 AM To: Koha-devel Subject: [Koha-devel] Searching numeric ranges I am working on searching lexile number ranges. ccl.properties shows lex 1=9903 r=r The 'r=r' bit means that I should be able to search using a numeric range separated by a dash, e.g. 500-600 Should return any numeric results from 500 to 600. The following query works: cgi-bin/koha/catalogue/search.pl?q=ccl%3Dlex%2Cst-numeric%3D500-600 However, when I try adding that as an item in the search menu, as follows: $(document).ready(function(){ //add lexile to search pull downs $("select[name='idx']").append(""); }); That gets munged... the url reads cgi-bin/koha/catalogue/search.pl?idx=lex%2Cst-numeric &q=500-600 And I get the following message: No results found No results match your search for 'lex,st-numeric: 500-600'. --Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.druart at bugs.koha-community.org Thu Nov 12 11:06:16 2015 From: jonathan.druart at bugs.koha-community.org (Jonathan Druart) Date: Thu, 12 Nov 2015 10:06:16 +0000 Subject: [Koha-devel] INSTALL.debian is outdated Message-ID: 2015-11-11 22:37 GMT+00:00 Robin Sheat : > Aparrish schreef op wo 11-11-2015 om 08:13 [-0700]: >> I followed the official instructions: >> http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=INSTALL.debian;hb=HEAD > > We need to delete those, they're from 2012 and are super out of date. Shall we 1) delete them, 2)replace them with the content of the wiki page or 3) just put a link to the wiki page? From tomascohen at gmail.com Thu Nov 12 15:35:55 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Thu, 12 Nov 2015 11:35:55 -0300 Subject: [Koha-devel] Koha 3.22 beta released Message-ID: A beta release of Koha 3.22 is now available for download. We encourage all users of Koha to consider downloading and testing the beta prior to its release later this month. Debian packages for this beta will be available soon on the unstable repository. Koha 3.22 is a major release, that comes with many new features. This beta preview is released for testing purposes. Its use on production sites is discouraged. Draft release notes and download links can be found below. Share and enjoy (and test)! Download: http://koha-community.org/download-koha/ Draft release notes: http://koha-community.org/koha-3-22-beta-released/ Note: if you want to try the new Plack integration, read the instructions here: http://wiki.koha-community.org/wiki/Plack Thanks everyone! -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? 
+54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From dcook at prosentient.com.au Thu Nov 12 23:54:45 2015 From: dcook at prosentient.com.au (David Cook) Date: Fri, 13 Nov 2015 09:54:45 +1100 Subject: [Koha-devel] Trying to contact Zeno Tajoli at Cineca... Message-ID: <078201d11d9d$203a07c0$60ae1740$@prosentient.com.au> Hi Zeno: I CCed you into my koha list response, but I'm getting unrecoverable errors trying to send to you. Here's the relevant line: <-- 554 5.7.1 >: Recipient address rejected: Failed SPF check; prosentient.com.au, Unknown mechanism type 'ip' in 'v=spf1' record I'm guessing that your v=spf1 record might contain "ip" instead of "ip4"? I don't know much about SPF, but that's my best guess. Thought I'd try to send you a head's up about it via the listserv in any case. Maybe it's a problem on my end but it's not happening when I send email to anyone else. David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dcook at prosentient.com.au Fri Nov 13 00:03:01 2015 From: dcook at prosentient.com.au (David Cook) Date: Fri, 13 Nov 2015 10:03:01 +1100 Subject: [Koha-devel] Searching numeric ranges In-Reply-To: References: Message-ID: <079101d11d9e$476f9bb0$d64ed310$@prosentient.com.au> Hopefully this will be my last email on the topic. As I mentioned before, YAZ 5.15.0 contains the CCL2RPN fix, which fixes the error that Barton was encountering during the search process outlined below. It might also be worth mentioning that Zebra 2.0.62 will contain a fix that allows 4=109 searches to use the 2=102 relevance attribute. This probably won?t affect us too much since st-numeric uses r=o, but it might be useful in cases where we?re searching for notforloan as that?s a CCL qualifier for 1=8008 4=109. With versions of Zebra prior to 2.0.62, doing a search for rk=(notforloan = 1) will result in a 117 error. The short-term fix would be to just add r=o to notforloan as well, but remembering to add r=o anytime you use 4=109 is prone to human error. Anyway, as I said above, hopefully that?s my last post about this topic, so I?ll stop spamming everyone ;). David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel-bounces at lists.koha-community.org] On Behalf Of Barton Chittenden Sent: Friday, 6 November 2015 3:47 AM To: Koha-devel Subject: [Koha-devel] Searching numeric ranges I am working on searching lexile number ranges. ccl.properties shows lex 1=9903 r=r The 'r=r' bit means that I should be able to search using a numeric range separated by a dash, e.g. 500-600 Should return any numeric results from 500 to 600. The following query works: cgi-bin/koha/catalogue/search.pl?q=ccl%3Dlex%2Cst-numeric%3D500-600 However, when I try adding that as an item in the search menu, as follows: $(document).ready(function(){ //add lexile to search pull downs $("select[name='idx']").append(""); }); That gets munged... the url reads cgi-bin/koha/catalogue/search.pl?idx=lex%2Cst-numeric &q=500-600 And I get the following message: No results found No results match your search for 'lex,st-numeric: 500-600'. --Barton -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From info at bywatersolutions.com Fri Nov 13 00:56:03 2015 From: info at bywatersolutions.com (Brendan Gallagher) Date: Thu, 12 Nov 2015 17:56:03 -0600 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: References: Message-ID: Maybe we just bring this up for a vote at an IRC meeting? On Thu, Nov 12, 2015 at 4:06 AM, Jonathan Druart < jonathan.druart at bugs.koha-community.org> wrote: > 2015-11-11 22:37 GMT+00:00 Robin Sheat : > > Aparrish schreef op wo 11-11-2015 om 08:13 [-0700]: > >> I followed the official instructions: > >> > http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=INSTALL.debian;hb=HEAD > > > > We need to delete those, they're from 2012 and are super out of date. > > Shall we 1) delete them, 2)replace them with the content of the wiki > page or 3) just put a link to the wiki page? > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- --------------------------------------------------------------------------------------------------------------- Brendan A. Gallagher ByWater Solutions CEO Support and Consulting for Open Source Software Installation, Data Migration, Training, Customization, Hosting and Complete Support Packages Headquarters: Santa Barbara, CA - Office: Redding, CT Phone # (888) 900-8944 http://bywatersolutions.com info at bywatersolutions.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From robin at catalyst.net.nz Fri Nov 13 01:23:26 2015 From: robin at catalyst.net.nz (Robin Sheat) Date: Fri, 13 Nov 2015 13:23:26 +1300 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: References: Message-ID: <1447374206.9799.0.camel@catalyst.net.nz> Jonathan Druart schreef op do 12-11-2015 om 10:06 [+0000]: > Shall we 1) delete them, 2)replace them with the content of the wiki > page or 3) just put a link to the wiki page? I'd say 3, unless we have a process to 2 each release. But probably 3 is still better. -- Robin Sheat Catalyst IT Ltd. ? +64 4 803 2204 GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part URL: From mjr at phonecoop.coop Fri Nov 13 12:13:38 2015 From: mjr at phonecoop.coop (MJ Ray) Date: Fri, 13 Nov 2015 11:13:38 +0000 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: <1447374206.9799.0.camel@catalyst.net.nz> References: <1447374206.9799.0.camel@catalyst.net.nz> Message-ID: <5645C5E2.7050006@phonecoop.coop> On 11/13/15 00:23, Robin Sheat wrote: > Jonathan Druart schreef op do 12-11-2015 om 10:06 [+0000]: >> Shall we 1) delete them, 2)replace them with the content of the wiki >> page or 3) just put a link to the wiki page? > > I'd say 3, unless we have a process to 2 each release. But probably 3 is > still better. It's a nuisance when you're at a site with a restricted internet connection and you go to look at the included documentation and it just refers you to some web site that you can't look at... but as long as the main INSTALL always remains, it's probably OK for INSTALL.debian to just mention that packages are available and point to the internet address where. 
Hope that explains, -- MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op http://koha-community.org supporter, web and library systems developer. In My Opinion Only: see http://mjr.towers.org.uk/email.html Available for hire (including development) at http://www.software.coop/ From tomascohen at gmail.com Fri Nov 13 13:22:22 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Fri, 13 Nov 2015 09:22:22 -0300 Subject: [Koha-devel] People's names on the Release Notes 3.22 Message-ID: Hi everyone, if you look at the relesae notes for the beta, you might notice some glitches on the names/addresses/companies/library names. Those are extracted from the commits, so not my fault :-D But most of them can be fixed manually. Just let me know and I'll do it. -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.moravec at gmail.com Fri Nov 13 13:33:13 2015 From: josef.moravec at gmail.com (Josef Moravec) Date: Fri, 13 Nov 2015 12:33:13 +0000 Subject: [Koha-devel] People's names on the Release Notes 3.22 In-Reply-To: References: Message-ID: Hello Tom?s, could you please the names "tadeasm" and "Tadeasm" to "Tadeas Moravec" in list of people who tested patches? He's my brother ;) Thanks Josef Moravec p? 13. 11. 2015 v 13:22 odes?latel Tomas Cohen Arazi napsal: > Hi everyone, if you look at the relesae notes for the beta, you might > notice some glitches on the names/addresses/companies/library names. Those > are extracted from the commits, so not my fault :-D But most of them can be > fixed manually. Just let me know and I'll do it. > > -- > Tom?s Cohen Arazi > Theke Solutions (http://theke.io) > ? +54 9351 3513384 > GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kohanews at gmail.com Sat Nov 14 05:27:17 2015 From: kohanews at gmail.com (kohanews) Date: Fri, 13 Nov 2015 20:27:17 -0800 Subject: [Koha-devel] Call for News: November Newsletter Message-ID: <5646B825.3060305@gmail.com> Fellow Koha users ~ I'm collecting news for the November newsletter. Send anything noteworthy to: k o h a news AT gmail dot com News criteria: --------------------------- ** For events **: - Please include dates for past events. If I can't find dates I may not add it. - Announcements for future events with dates T.B.A. are fine ...Eg., Kohacon - For past events -- **** one month back is the cut-off ***. * News items can be of any length. * Images are fine * Anything and everything Koha. * Submit by the 26th of the month. If you are working on an interesting project or development related to Koha, please let me know and I'll include it in the development section. -- Chad Roseburg From kohanews at gmail.com Sat Nov 14 05:28:31 2015 From: kohanews at gmail.com (kohanews) Date: Fri, 13 Nov 2015 20:28:31 -0800 Subject: [Koha-devel] Call for Development News: November Newsletter Message-ID: <5646B86F.8060607@gmail.com> Let the community hear what cool things you're working on! Send me some bugs that need testing, sign-offs and user feedback. 
k o h a news AT gmail dot com Thank you! -- Chad Roseburg Editor, Koha Newsletter From robin at catalyst.net.nz Mon Nov 16 00:11:51 2015 From: robin at catalyst.net.nz (Robin Sheat) Date: Mon, 16 Nov 2015 12:11:51 +1300 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: <5645C5E2.7050006@phonecoop.coop> References: <1447374206.9799.0.camel@catalyst.net.nz> <5645C5E2.7050006@phonecoop.coop> Message-ID: <1447629111.9799.7.camel@catalyst.net.nz> MJ Ray schreef op vr 13-11-2015 om 11:13 [+0000]: > It's a nuisance when you're at a site with a restricted internet > connection and you go to look at the included documentation and it > just > refers you to some web site that you can't look at... but as long as > the > main INSTALL always remains, it's probably OK for INSTALL.debian to > just > mention that packages are available and point to the internet address > where. That's a good point. Even if each release it was scraped from the wiki page, with a note at the top saying that the preferred method is go to the wiki. That way we get the best of both worlds. -- Robin Sheat Catalyst IT Ltd. ? +64 4 803 2204 GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part URL: From fridolin.somers at biblibre.com Mon Nov 16 09:25:23 2015 From: fridolin.somers at biblibre.com (Fridolin SOMERS) Date: Mon, 16 Nov 2015 09:25:23 +0100 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: References: Message-ID: <564992F3.9000707@biblibre.com> 3) A single README.txt with some links to wiki and other community sites. Otherwise it will never be up-to-date. Le 12/11/2015 11:06, Jonathan Druart a ?crit : > 2015-11-11 22:37 GMT+00:00 Robin Sheat : >> Aparrish schreef op wo 11-11-2015 om 08:13 [-0700]: >>> I followed the official instructions: >>> http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=INSTALL.debian;hb=HEAD >> >> We need to delete those, they're from 2012 and are super out of date. > > Shall we 1) delete them, 2)replace them with the content of the wiki > page or 3) just put a link to the wiki page? > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Fridolin SOMERS Biblibre - P?les support et syst?me fridolin.somers at biblibre.com From barton at bywatersolutions.com Mon Nov 16 21:48:13 2015 From: barton at bywatersolutions.com (Barton Chittenden) Date: Mon, 16 Nov 2015 15:48:13 -0500 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: References: Message-ID: I added this to the agenda: http://wiki.koha-community.org/wiki/General_IRC_meeting_2_December_2015#Agenda I would like to add that there is, at this point, no up-to-date documentation for doing a git install of Koha. Yes, the packages are far easier to deal with than the git installs, and yes, most of what is done in the packages can be retro-fitted to fit the git install via gitify --- but there are some corner conditions where an honest-to-god git install is the only way -- creating custom indexes for zebra comes to mind.... so I'd like to make sure that this avenue remains well documented and supported. 
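For anyone hunting for that missing documentation, the rough shape of a
development (git) install is still the classic Makefile.PL dance; treat the
following as a sketch to check against the wiki rather than a supported
recipe, since the prompts and dependencies change between releases and the
koha-gitify invocation shown is an assumption:

    git clone git://git.koha-community.org/koha.git kohaclone
    cd kohaclone
    perl Makefile.PL     # pick 'dev' when asked for the installation mode
    make && make test
    sudo make install

Alternatively, an existing package instance can be pointed at the clone with
something along the lines of koha-gitify INSTANCENAME /path/to/kohaclone.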
Cheers, --Barton On Thu, Nov 12, 2015 at 6:56 PM, Brendan Gallagher < info at bywatersolutions.com> wrote: > Maybe we just bring this up for a vote at an IRC meeting? > > On Thu, Nov 12, 2015 at 4:06 AM, Jonathan Druart < > jonathan.druart at bugs.koha-community.org> wrote: > >> 2015-11-11 22:37 GMT+00:00 Robin Sheat : >> > Aparrish schreef op wo 11-11-2015 om 08:13 [-0700]: >> >> I followed the official instructions: >> >> >> http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=INSTALL.debian;hb=HEAD >> > >> > We need to delete those, they're from 2012 and are super out of date. >> >> Shall we 1) delete them, 2)replace them with the content of the wiki >> page or 3) just put a link to the wiki page? >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> website : http://www.koha-community.org/ >> git : http://git.koha-community.org/ >> bugs : http://bugs.koha-community.org/ >> > > > > -- > > --------------------------------------------------------------------------------------------------------------- > Brendan A. Gallagher > ByWater Solutions > CEO > > Support and Consulting for Open Source Software > Installation, Data Migration, Training, Customization, Hosting > and Complete Support Packages > Headquarters: Santa Barbara, CA - Office: Redding, CT > Phone # (888) 900-8944 > http://bywatersolutions.com > info at bywatersolutions.com > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at bigballofwax.co.nz Mon Nov 16 21:51:00 2015 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Tue, 17 Nov 2015 09:51:00 +1300 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: References: Message-ID: On 17 November 2015 at 09:48, Barton Chittenden wrote: > I added this to the agenda: > http://wiki.koha-community.org/wiki/General_IRC_meeting_2_December_2015#Agenda > > I would like to add that there is, at this point, no up-to-date > documentation for doing a git install of Koha. Yes, the packages are far > easier to deal with than the git installs, and yes, most of what is done in > the packages can be retro-fitted to fit the git install via gitify --- but > there are some corner conditions where an honest-to-god git install is the > only way -- creating custom indexes for zebra comes to mind.... so I'd like > to make sure that this avenue remains well documented and supported. > But only for developers right? There are massive security implications of running a git install in production, and we should never encourage that. 
Chris From Katrin.Fischer.83 at web.de Mon Nov 16 22:02:36 2015 From: Katrin.Fischer.83 at web.de (Katrin Fischer) Date: Mon, 16 Nov 2015 22:02:36 +0100 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: References: Message-ID: <564A446C.6090701@web.de> Am 16.11.2015 um 21:51 schrieb Chris Cormack: > On 17 November 2015 at 09:48, Barton Chittenden > wrote: >> I added this to the agenda: >> http://wiki.koha-community.org/wiki/General_IRC_meeting_2_December_2015#Agenda >> >> I would like to add that there is, at this point, no up-to-date >> documentation for doing a git install of Koha. Yes, the packages are far >> easier to deal with than the git installs, and yes, most of what is done in >> the packages can be retro-fitted to fit the git install via gitify --- but >> there are some corner conditions where an honest-to-god git install is the >> only way -- creating custom indexes for zebra comes to mind.... so I'd like >> to make sure that this avenue remains well documented and supported. >> > But only for developers right? There are massive security implications > of running a git install in production, > and we should never encourage that. > > Chris I think it should be possible with 3.22 to have customized indexes for with packages: Bug 12216 - One should be able to override zebra configuration on a per instance basis http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=12216 :) From barton at bywatersolutions.com Mon Nov 16 22:14:51 2015 From: barton at bywatersolutions.com (Barton Chittenden) Date: Mon, 16 Nov 2015 16:14:51 -0500 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: References: Message-ID: Correct -- I'm not suggesting that the general public should install Koha via git. We should, none-the-less have a current, well documented path for installing via git... after all, that's how we get new developers :-). --Barton On Mon, Nov 16, 2015 at 3:51 PM, Chris Cormack wrote: > On 17 November 2015 at 09:48, Barton Chittenden > wrote: > > I added this to the agenda: > > > http://wiki.koha-community.org/wiki/General_IRC_meeting_2_December_2015#Agenda > > > > I would like to add that there is, at this point, no up-to-date > > documentation for doing a git install of Koha. Yes, the packages are far > > easier to deal with than the git installs, and yes, most of what is done > in > > the packages can be retro-fitted to fit the git install via gitify --- > but > > there are some corner conditions where an honest-to-god git install is > the > > only way -- creating custom indexes for zebra comes to mind.... so I'd > like > > to make sure that this avenue remains well documented and supported. > > > But only for developers right? There are massive security implications > of running a git install in production, > and we should never encourage that. > > Chris > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- An HTML attachment was scrubbed... 
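If the vote lands on option 3 (keep the file, point at the wiki), the whole
of INSTALL.debian could shrink to something like the sketch below; the wiki
page name is the one normally cited for package installs, so confirm it is
still current before committing anything:

    Koha on Debian/Ubuntu is normally installed from the community packages,
    not from this source tree. Up-to-date instructions are maintained at:

        http://wiki.koha-community.org/wiki/Koha_on_Debian

    If you are offline, or need a source or git-based install (developers,
    custom Zebra indexing, etc.), see the INSTALL file shipped alongside
    this one.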
URL: 

From dcook at prosentient.com.au Tue Nov 17 01:43:25 2015
From: dcook at prosentient.com.au (David Cook)
Date: Tue, 17 Nov 2015 11:43:25 +1100
Subject: [Koha-devel] Working around annoying OpacSuppression 114 error in Zebra using special attribute 14
Message-ID: <002401d120d0$f8135d00$e83a1700$@prosentient.com.au>

Hey all:

I just opened Bug 15198
(http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15198) which
contains some instructions for how to make it so that you can turn on
OpacSuppression and still retrieve search results, even if you don't have
any suppressed records (942$n = 1) in Zebra.

I ramble about it below but everything you actually need to know is in the
bug report.

I would supply a patch myself, but I'm having issues building Perl
dependencies for Master, so I won't be contributing any patches until I get
that sorted.

Cheers,

-David

--

So I was reading the Zebra docs again (as you do), and I noticed a special
attribute type of 14 which could help out with our OpacSuppression issue
(whereby you get 0 results - well actually a 114 error - if you don't have
any records suppressed but are sending queries checking for suppression).

Observe:

Z> find @attr 14=1 @not @attr 1=4 test @attr 1=9011 1
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 8, setno 20
SearchResult-1: term=test cnt=8, term=1 cnt=0
records returned: 0
Elapsed: 0.000678

Z> find @not @attr 1=4 test @attr 1=9011 1
Sent searchRequest.
Received SearchResponse.
Search was a bloomin' failure.
Number of hits: 0, setno 21
Result Set Status: none
records returned: 0
Diagnostic message(s) from database:
[114] Unsupported Use attribute -- v2 addinfo '9011'
Elapsed: 0.000650

Here's the info about attribute type 14 from the Zebra docs
(http://www.indexdata.com/zebra/doc/querymodel-zebra.html):

Specifies whether un-indexed fields should be ignored. A zero value
(default) throws a diagnostic when an un-indexed field is specified. A
non-zero value makes it return 0 hits.

Cheers to Jesse Weaver for realizing the syntax was @attr 14=1.

For those of you who don't read PQF, I'll do it in CCL too:

Z> find ignore-empty=(kw=test not Suppress=1)
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 97, setno 2
SearchResult-1: term=test cnt=97, term=1 cnt=0
records returned: 0
Elapsed: 0.000969

Z> find kw=test not Suppress=1
Sent searchRequest.
Received SearchResponse.
Search was a bloomin' failure.
Number of hits: 0, setno 3
Result Set Status: none
records returned: 0
Diagnostic message(s) from database:
[114] Unsupported Use attribute -- v2 addinfo '9011'
Elapsed: 0.000841

*Note that I just added "ignore-empty 14=1" to my ccl.properties to get that
ignore-empty=() to work.

Actually, the lightest touch for this issue would be to change:

Suppress 1=9011

To

Suppress 1=9011 14=1

Check that out:

Z> find kw=test not Suppress=1
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 97, setno 1
SearchResult-1: term=test cnt=97, term=1 cnt=0
records returned: 0
Elapsed: 0.005849

David Cook
Systems Librarian
Prosentient Systems
72/330 Wattle St, Ultimo, NSW 2007
-------------- next part --------------
An HTML attachment was scrubbed...
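For anyone who wants to reproduce the sessions above on a Debian package
install, the same checks can be run with yaz-client against the instance's
biblio database. The socket path below is the usual packaged location and
INSTANCE is a placeholder, so adjust both; you may also need to authenticate
first (yaz-client's auth command) with the user/password from the instance's
koha-conf.xml:

    $ sudo yaz-client unix:/var/run/koha/INSTANCE/bibliosocket
    Z> base biblios
    Z> find @attr 14=1 @not @attr 1=4 test @attr 1=9011 1
    Z> find @not @attr 1=4 test @attr 1=9011 1

On an instance with no suppressed records, the first find should succeed and
the second should raise the [114] diagnostic -- and after changing the
Suppress line in ccl.properties to include 14=1, the plain CCL form
(kw=test not Suppress=1) should succeed as well.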
URL: From mirko at abunchofthings.net Tue Nov 17 02:56:06 2015 From: mirko at abunchofthings.net (Mirko Tietgen) Date: Tue, 17 Nov 2015 01:56:06 GMT Subject: [Koha-devel] Working around annoying OpacSuppression 114 error in Zebra using special attribute 14 References: <002401d120d0$f8135d00$e83a1700$@prosentient.com.au> Message-ID: <1447725367447.1baec168c752e@mozgaia> An HTML attachment was scrubbed... URL: From dcook at prosentient.com.au Tue Nov 17 04:19:09 2015 From: dcook at prosentient.com.au (David Cook) Date: Tue, 17 Nov 2015 14:19:09 +1100 Subject: [Koha-devel] Improving Zebra logging Message-ID: <003901d120e6$b993c090$2cbb41b0$@prosentient.com.au> Hi all: As if you weren't all sick of me going on and on about Zebra. So you may or may not know that the Zebra logs are fairly useless in a package install of Koha. At Prosentient, we don't use package installs much, so I haven't noticed this until now, but here's a tip: If you want to see what requests are being made to Zebra, you can do the following: 1) vi /usr/sbin/koha-start-zebra 2) Under start_zebra_instance, you can change "-v none,fatal,warn" to "-v none,fatal,warn,request" (note that "request" is an undocumented log level but it should work. alternately remove none and requests should appear in the log) 3) sudo koha-zebra-stop 4) sudo koha-zebra-start 5) profit (or at least see the PQF which is sent to the server and whether you're getting errors or OK query responses) This is the kind of additional output you should expect to see: 14:15:00-17/11 zebrasrv(2) [request] Auth idPass kohauser - 14:15:00-17/11 zebrasrv(2) [request] Init OK - ID:81 Name:ZOOM-C/YAZ Version:4.2.30 98864b44c654645bc16b2c54f822dc2e45a93031 14:15:00-17/11 zebrasrv(2) [request] Search biblios OK 0 1 1+0 RPN @attrset Bib-1 @attr 1=1016 @attr 4=6 @attr 5=1 test Anyway, I wouldn't *actually* recommend changing /usr/sbin/koha-start-zebra since that's part of the koha-common package. but maybe someone would be interested in making change to the Zebra commands to allow for different logging levels. Other things to note: koha-zebra-restart isn't the same thing as koha-zebra-stop && koha-zebra-start. If you restart Zebra after changing the flags for daemon or zebrasrv, the changes won't be brought across.it's necessary to stop and start it again. even though the process is actually restarted. it seems to be restarted using the original configuration used when launching daemon in the first place. David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dcook at prosentient.com.au Tue Nov 17 04:51:33 2015 From: dcook at prosentient.com.au (David Cook) Date: Tue, 17 Nov 2015 14:51:33 +1100 Subject: [Koha-devel] Working around annoying OpacSuppression 114 error in Zebra using special attribute 14 In-Reply-To: <1447725367447.1baec168c752e@mozgaia> References: <002401d120d0$f8135d00$e83a1700$@prosentient.com.au> <1447725367447.1baec168c752e@mozgaia> Message-ID: <004801d120eb$40135ff0$c03a1fd0$@prosentient.com.au> Hi Mirko: I had the same question, but I forgot to test it. Unfortunately, I just tried and I got the following error: [25] Specified element set name not valid for specified database -- v2 addinfo '' That's very different from the OpacSuppression search error: [114] Unsupported Use attribute -- v2 addinfo '9011' Personally, I'm thinking that the empty facet thing is a bug in Zebra. 
I think it should return something like the following: I've already been sending Adam Dickmeiss a million emails about Zebra, so perhaps someone should send one regarding that issue. I've found him very responsive, so he might be willing to fix that as well. or create an option to allow that sort of response. David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel-bounces at lists.koha-community.org] On Behalf Of Mirko Tietgen Sent: Tuesday, 17 November 2015 12:56 PM To: Koha-devel Subject: Re: [Koha-devel] Working around annoying OpacSuppression 114 error in Zebra using special attribute 14 Hi David, Nice! Would that also work for our problem with empty facets by any chance? Cheers, Mirko David Cook schrieb: Hey all: I just opened Bug 15198 (http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15198) which contains some instructions for how to make it so that you can turn on OpacSuppression and still retrieve search results, even if you don't have any suppressed records (942$n = 1) in Zebra. I ramble about it below but everything you actually need to know is in the bug report. I would supply a patch myself, but I'm having issues building Perl dependencies for Master, so I won't be contributing any patches until I get that sorted. Cheers, -David -- So I was reading the Zebra docs again (as you do), and I noticed a special attribute type of 14 which could help out with our OpacSuppression issue (whereby you get 0 results - well actually a 114 error - if you don't have any records suppressed but are sending queries checking for suppression). Observe: Z> find @attr 14=1 @not @attr 1=4 test @attr 1=9011 1 Sent searchRequest. Received SearchResponse. Search was a success. Number of hits: 8, setno 20 SearchResult-1: term=test cnt=8, term=1 cnt=0 records returned: 0 Elapsed: 0.000678 Z> find @not @attr 1=4 test @attr 1=9011 1 Sent searchRequest. Received SearchResponse. Search was a bloomin' failure. Number of hits: 0, setno 21 Result Set Status: none records returned: 0 Diagnostic message(s) from database: [114] Unsupported Use attribute -- v2 addinfo '9011' Elapsed: 0.000650 Here's the info about attribute type 14 from the Zebra docs (http://www.indexdata.com/zebra/doc/querymodel-zebra.html): Specifies whether un-indexed fields should be ignored. A zero value (default) throws a diagnostic when an un-indexed field is specified. A non-zero value makes it return 0 hits. Cheers to Jesse Weaver for realizing the syntax was @attr 14=1. For those of you who don't read PQF, I'll do it in CCL too: Z> find ignore-empty=(kw=test not Suppress=1) Sent searchRequest. Received SearchResponse. Search was a success. Number of hits: 97, setno 2 SearchResult-1: term=test cnt=97, term=1 cnt=0 records returned: 0 Elapsed: 0.000969 Z> find kw=test not Suppress=1 Sent searchRequest. Received SearchResponse. Search was a bloomin' failure. Number of hits: 0, setno 3 Result Set Status: none records returned: 0 Diagnostic message(s) from database: [114] Unsupported Use attribute -- v2 addinfo '9011' Elapsed: 0.000841 *Note that I just added "ignore-empty 14=1" to my ccl.properties to get that ignore-empty=() to work. Actually, the lightest touch for this issue would be to change: Suppress 1=9011 To Suppress 1=9011 14=1 Check that out: Z> find kw=test not Suppress=1 Sent searchRequest. Received SearchResponse. Search was a success. 
Number of hits: 97, setno 1 SearchResult-1: term=test cnt=97, term=1 cnt=0 records returned: 0 Elapsed: 0.005849 David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomascohen at gmail.com Tue Nov 17 14:10:34 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Tue, 17 Nov 2015 10:10:34 -0300 Subject: [Koha-devel] Improving Zebra logging In-Reply-To: <003901d120e6$b993c090$2cbb41b0$@prosentient.com.au> References: <003901d120e6$b993c090$2cbb41b0$@prosentient.com.au> Message-ID: The koha-*-zebra scripts should be merged into koha-zebra with several option switches, just as koha-indexer works... Same for SIP-related scripts. 2015-11-17 0:19 GMT-03:00 David Cook : > Hi all: > > > > As if you weren?t all sick of me going on and on about Zebra. > > > > So you may or may not know that the Zebra logs are fairly useless in a > package install of Koha. > > > > At Prosentient, we don?t use package installs much, so I haven?t noticed > this until now, but here?s a tip: > > > > If you want to see what requests are being made to Zebra, you can do the > following: > > > > 1) vi /usr/sbin/koha-start-zebra > > 2) Under start_zebra_instance, you can change ?-v none,fatal,warn? > to ?-v none,fatal,warn,request? (note that ?request? is an undocumented log > level but it should work? alternately remove none and requests should > appear in the log) > > 3) sudo koha-zebra-stop > > 4) sudo koha-zebra-start > > 5) profit (or at least see the PQF which is sent to the server and > whether you?re getting errors or OK query responses) > > > > This is the kind of additional output you should expect to see: > > 14:15:00-17/11 zebrasrv(2) [request] Auth idPass kohauser - > > 14:15:00-17/11 zebrasrv(2) [request] Init OK - ID:81 Name:ZOOM-C/YAZ > Version:4.2.30 98864b44c654645bc16b2c54f822dc2e45a93031 > > 14:15:00-17/11 zebrasrv(2) [request] Search biblios OK 0 1 1+0 RPN > @attrset Bib-1 @attr 1=1016 @attr 4=6 @attr 5=1 test > > > > Anyway, I wouldn?t **actually** recommend changing > /usr/sbin/koha-start-zebra since that?s part of the koha-common package? > but maybe someone would be interested in making change to the Zebra > commands to allow for different logging levels. > > > > Other things to note: > > > > koha-zebra-restart isn?t the same thing as koha-zebra-stop && > koha-zebra-start. If you restart Zebra after changing the flags for daemon > or zebrasrv, the changes won?t be brought across?it?s necessary to stop and > start it again? even though the process is actually restarted? it seems to > be restarted using the original configuration used when launching daemon in > the first place. > > > > David Cook > > Systems Librarian > > Prosentient Systems > > 72/330 Wattle St, Ultimo, NSW 2007 > > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... 
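Purely as a sketch of what such a merge might look like -- none of these
switches exists today; the names below just mirror the way koha-indexer is
already driven, and the log-level switch is hypothetical:

    koha-zebra --start   instancename
    koha-zebra --stop    instancename
    koha-zebra --restart instancename
    koha-zebra --status  instancename
    koha-zebra --start --verbose request instancename   # hypothetical log-level switch

A single wrapper like this would also give one obvious place to hang the
extra zebrasrv logging options discussed earlier in the thread.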
URL: From indradg at gmail.com Tue Nov 17 22:04:18 2015 From: indradg at gmail.com (Indranil Das Gupta) Date: Wed, 18 Nov 2015 02:34:18 +0530 Subject: [Koha-devel] incremental backup of the DB Message-ID: Hi all, I'm facing a situation that between full backups, I would like to take incremental backups. Anyone have played with mysql incremental backup and Koha? any known issues? thanks in advance -- Indranil Das Gupta Phone : +91-98300-20971 Blog : http://indradg.randomink.org/blog IRC : indradg on irc://irc.freenode.net Twitter : indradg -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=- Please exchange editable Office documents only in ODF Format. No other format is acceptable. Support Open Standards. For a free editor supporting ODF, please visit LibreOffice - http://www.documentfoundation.org From dcook at prosentient.com.au Tue Nov 17 22:54:25 2015 From: dcook at prosentient.com.au (David Cook) Date: Wed, 18 Nov 2015 08:54:25 +1100 Subject: [Koha-devel] Improving Zebra logging In-Reply-To: References: <003901d120e6$b993c090$2cbb41b0$@prosentient.com.au> Message-ID: <00a801d12182$8684f2e0$938ed8a0$@prosentient.com.au> Agreed. I think that would make a lot of sense. David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 From: Tomas Cohen Arazi [mailto:tomascohen at gmail.com] Sent: Wednesday, 18 November 2015 12:11 AM To: David Cook Cc: Koha-devel Subject: Re: [Koha-devel] Improving Zebra logging The koha-*-zebra scripts should be merged into koha-zebra with several option switches, just as koha-indexer works... Same for SIP-related scripts. 2015-11-17 0:19 GMT-03:00 David Cook >: Hi all: As if you weren?t all sick of me going on and on about Zebra. So you may or may not know that the Zebra logs are fairly useless in a package install of Koha. At Prosentient, we don?t use package installs much, so I haven?t noticed this until now, but here?s a tip: If you want to see what requests are being made to Zebra, you can do the following: 1) vi /usr/sbin/koha-start-zebra 2) Under start_zebra_instance, you can change ?-v none,fatal,warn? to ?-v none,fatal,warn,request? (note that ?request? is an undocumented log level but it should work? alternately remove none and requests should appear in the log) 3) sudo koha-zebra-stop 4) sudo koha-zebra-start 5) profit (or at least see the PQF which is sent to the server and whether you?re getting errors or OK query responses) This is the kind of additional output you should expect to see: 14:15:00-17/11 zebrasrv(2) [request] Auth idPass kohauser - 14:15:00-17/11 zebrasrv(2) [request] Init OK - ID:81 Name:ZOOM-C/YAZ Version:4.2.30 98864b44c654645bc16b2c54f822dc2e45a93031 14:15:00-17/11 zebrasrv(2) [request] Search biblios OK 0 1 1+0 RPN @attrset Bib-1 @attr 1=1016 @attr 4=6 @attr 5=1 test Anyway, I wouldn?t *actually* recommend changing /usr/sbin/koha-start-zebra since that?s part of the koha-common package? but maybe someone would be interested in making change to the Zebra commands to allow for different logging levels. Other things to note: koha-zebra-restart isn?t the same thing as koha-zebra-stop && koha-zebra-start. If you restart Zebra after changing the flags for daemon or zebrasrv, the changes won?t be brought across?it?s necessary to stop and start it again? even though the process is actually restarted? it seems to be restarted using the original configuration used when launching daemon in the first place. 
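A compressed recap of the recipe quoted above, for anyone trying it on a
package install -- the paths, log file name and INSTANCE placeholder are the
usual Debian-package defaults and may differ on your system:

    # 1) In /usr/sbin/koha-start-zebra, inside start_zebra_instance(),
    #    change "-v none,fatal,warn" to "-v none,fatal,warn,request"
    # 2) Then (per the note above, koha-zebra-restart is NOT enough):
    sudo koha-zebra-stop INSTANCE && sudo koha-zebra-start INSTANCE
    # 3) Watch the requests arrive:
    sudo tail -f /var/log/koha/INSTANCE/zebra-output.log

Bear in mind the edited script belongs to the koha-common package and will
be overwritten on upgrade, which is exactly why a supported log-level option
would be preferable.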
David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 _______________________________________________ Koha-devel mailing list Koha-devel at lists.koha-community.org http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel website : http://www.koha-community.org/ git : http://git.koha-community.org/ bugs : http://bugs.koha-community.org/ -- Tom?s Cohen Arazi Theke Solutions (http://theke.io ) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From cajetanonyeneke at gmail.com Tue Nov 17 22:56:40 2015 From: cajetanonyeneke at gmail.com (Cajetan Onyeneke) Date: Tue, 17 Nov 2015 13:56:40 -0800 Subject: [Koha-devel] Getting an IP address for a koha server Message-ID: Please does anyone have idea of how to get or issue dynamic IP address to a koha server running on Linux environment. The server is configured but cannot be accessed within the local network because of this IP issue. Thanks. Cajetan Onyeneke Digital Librarian Imo State University, Owerri, Nigeria -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.a at navalmarinearchive.com Tue Nov 17 23:24:26 2015 From: paul.a at navalmarinearchive.com (Paul A) Date: Tue, 17 Nov 2015 17:24:26 -0500 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: Message-ID: <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> At 02:34 AM 11/18/2015 +0530, Indranil Das Gupta wrote: >Hi all, > >I'm facing a situation that between full backups, I would like to take >incremental backups. Anyone have played with mysql incremental backup >and Koha? any known issues? Yup -- [probably, depends, no compression] and if you ever "restore" you'll get back all your deletes since the last full backup... We just clean the MySQL db a couple of times a day and dump it -- takes 2104ms for a 754 Meg dump, so there's really no down time at all. Write some sort of [bash] script along the following lines for your Koha user in /usr/share/koha: ./bin/cronjobs/cleanup_database.pl --import 1 && ./bin/cronjobs/cleanup_database.pl --sessions && ./bin/cronjobs/cleanup_database.pl --logs 10 && ./bin/cronjobs/cleanup_database.pl --zebraqueue 1 && mysqldump --user=whomever --password=whatever your_koha_db > /mnt/data7/backup/your_koha_db_dump_[rotate_number].sql Prepend ./bin/cronjobs/cleanup_database.pl --mail -v && if required, vary your variables 'au go?t du jour' and bingo... Best -- Paul >thanks in advance > >-- >Indranil Das Gupta > >Phone : +91-98300-20971 >Blog : http://indradg.randomink.org/blog >IRC : indradg on irc://irc.freenode.net >Twitter : indradg > >-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=- >Please exchange editable Office documents only in ODF Format. No other >format is acceptable. Support Open Standards. 
> >For a free editor supporting ODF, please visit LibreOffice - >http://www.documentfoundation.org >_______________________________________________ >Koha-devel mailing list >Koha-devel at lists.koha-community.org >http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >website : http://www.koha-community.org/ >git : http://git.koha-community.org/ >bugs : http://bugs.koha-community.org/ From dcook at prosentient.com.au Wed Nov 18 01:25:14 2015 From: dcook at prosentient.com.au (David Cook) Date: Wed, 18 Nov 2015 11:25:14 +1100 Subject: [Koha-devel] Problematic Zebra Charmaps Equivalences Message-ID: <00bc01d12197$98055680$c8100380$@prosentient.com.au> Hi all: Yet another Zebra email from this guy. I don?t know how many of you are using CHR vs ICU, but CHR is the default for installs, so I?m guessing that it?s quite a few. Well, there are some issues with how we use the equivalent directive. Hopefully the UTF-8 won?t be stripped out of this message, although I?m guessing it might? Here?s all instances of the directive in word-phrase-utf.chr: # Characters to be considered equivalent for sorting purposes equivalent a??????????? equivalent ??(ae) equivalent ?(aa) equivalent i?????????? equivalent ?(ie) equivalent ?(ii) equivalent u?????????? equivalent ?(ue) equivalent ?(uu) equivalent e?????????? equivalent ??(ee) equivalent o??????????? equivalent ????(oe) equivalent ?(oo) Firstly, that comment is wrong. ?equivalent? isn?t just for sorting purposes. It?s for searching purposes. Indexdata have confirmed that the documentation is wrong about the sorting thing. So ?ie? and ? (if you can?t see this character, it?s the UTF-8 representation of ï) are equivalent. That means searches for ?siemon? will get results for ?siemon? and ?s?mon?. Now, there is also a ?map? directive: map ? i This means that ?s?mon? is the same as ?simon?. Now, ?map? affects both indexing and searching. If you have ?s?mon? in a record, you can see that it is actually stored as ?simon? in Zebra, if you do a search for it and use ?format xml? and ?elements zebra::index?. So your search for ?siemon? will really get results for ?siemon? and ?simon?. This really isn?t ideal. However, you can see why you?d want equivalences. In Scandinavian languages, I think ??? and ?aa? are roughly equivalent. They?re spelled differently but they?re the same sound. So if you search for ?Gaard?, you might want hits for ?G?rd? as well. But you might not want ?career? to be equivalent to ?carer? as they?re two different words. Or ?choose? to be equivalent to ?chose?, ?sloop? - "slop?, ?reef? - "ref?, etc. -- Unfortunately, I don?t really know what the solution is. For one client, I?ve disabled the equivalent directive where it creates an equivalence between any two letter combination with a one letter combination, as they only have records in English, and it?ll just cause them headaches. I can see this being useful for multilingual records? although I think many people with multilingual records use ICU. I don?t know ICU well enough to know how it manages characters that English speakers would think of as accents or ligatures. I know you can provide your own normalization with ICU, but I think it does a fair amount on its own as well? I think some of the difficulties are mentioned here: http://userguide.icu-project.org/collation/icu-string-search-service. It also mentions the Danish ?/aa example. I don?t know how ICU would know how to handle particular languages? that webpage seems to indicate you can provide a locale to deal with it. 
Of course, that doesn?t necessarily solve things. If you have multilingual records with multilingual users, how do you choose your rules? Sure, you might be able to specify a locale at search time (note you can?t do this with Zebra), but what rules did you specify at index time? As anyone who has watched this video (https://www.youtube.com/watch?v=0j74jcxSunY) would know, internationalis(z)ing code has many challenges? -- Anyway, the reason for this email is mostly just to make you all aware of this issue, and how ?equivalent? and ?map? work in the Charmap files when using CHR indexing. Oh, also, if you look at ?default.idx?, you?ll see that ?sort s? references ?charmap sort-string-utf.chr?, but I don?t think sort-string-utf.chr actually exists anywhere? David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 -------------- next part -------------- An HTML attachment was scrubbed... URL: From M.de.Rooy at rijksmuseum.nl Wed Nov 18 08:12:32 2015 From: M.de.Rooy at rijksmuseum.nl (Marcel de Rooy) Date: Wed, 18 Nov 2015 07:12:32 +0000 Subject: [Koha-devel] Problematic Zebra Charmaps Equivalences In-Reply-To: <00bc01d12197$98055680$c8100380$@prosentient.com.au> References: <00bc01d12197$98055680$c8100380$@prosentient.com.au> Message-ID: <809BE39CD64BFD4EB9036172EBCCFA315AFE52AE@S-MAIL-1B.rijksmuseum.intra> I recently "downgraded" ICU back to CHR in order to overcome Zebra segmentation faults on a complete reindex. Should still investigate some further, but have the impression that some Chinese characters made zebraidx crash. ________________________________ Van: koha-devel-bounces at lists.koha-community.org [koha-devel-bounces at lists.koha-community.org] namens David Cook [dcook at prosentient.com.au] Verzonden: woensdag 18 november 2015 1:25 Aan: 'Koha-devel' Onderwerp: [Koha-devel] Problematic Zebra Charmaps Equivalences Hi all: Yet another Zebra email from this guy. I don?t know how many of you are using CHR vs ICU, but CHR is the default for installs, so I?m guessing that it?s quite a few. Well, there are some issues with how we use the equivalent directive. Hopefully the UTF-8 won?t be stripped out of this message, although I?m guessing it might? Here?s all instances of the directive in word-phrase-utf.chr: # Characters to be considered equivalent for sorting purposes equivalent a??????????? equivalent ??(ae) equivalent ?(aa) equivalent i?????????? equivalent ?(ie) equivalent ?(ii) equivalent u?????????? equivalent ?(ue) equivalent ?(uu) equivalent e?????????? equivalent ??(ee) equivalent o??????????? equivalent ????(oe) equivalent ?(oo) Firstly, that comment is wrong. ?equivalent? isn?t just for sorting purposes. It?s for searching purposes. Indexdata have confirmed that the documentation is wrong about the sorting thing. So ?ie? and ? (if you can?t see this character, it?s the UTF-8 representation of ï) are equivalent. That means searches for ?siemon? will get results for ?siemon? and ?s?mon?. Now, there is also a ?map? directive: map ? i This means that ?s?mon? is the same as ?simon?. Now, ?map? affects both indexing and searching. If you have ?s?mon? in a record, you can see that it is actually stored as ?simon? in Zebra, if you do a search for it and use ?format xml? and ?elements zebra::index?. So your search for ?siemon? will really get results for ?siemon? and ?simon?. This really isn?t ideal. However, you can see why you?d want equivalences. In Scandinavian languages, I think ??? and ?aa? are roughly equivalent. 
They?re spelled differently but they?re the same sound. So if you search for ?Gaard?, you might want hits for ?G?rd? as well. But you might not want ?career? to be equivalent to ?carer? as they?re two different words. Or ?choose? to be equivalent to ?chose?, ?sloop? - "slop?, ?reef? - "ref?, etc. -- Unfortunately, I don?t really know what the solution is. For one client, I?ve disabled the equivalent directive where it creates an equivalence between any two letter combination with a one letter combination, as they only have records in English, and it?ll just cause them headaches. I can see this being useful for multilingual records? although I think many people with multilingual records use ICU. I don?t know ICU well enough to know how it manages characters that English speakers would think of as accents or ligatures. I know you can provide your own normalization with ICU, but I think it does a fair amount on its own as well? I think some of the difficulties are mentioned here: http://userguide.icu-project.org/collation/icu-string-search-service. It also mentions the Danish ?/aa example. I don?t know how ICU would know how to handle particular languages? that webpage seems to indicate you can provide a locale to deal with it. Of course, that doesn?t necessarily solve things. If you have multilingual records with multilingual users, how do you choose your rules? Sure, you might be able to specify a locale at search time (note you can?t do this with Zebra), but what rules did you specify at index time? As anyone who has watched this video (https://www.youtube.com/watch?v=0j74jcxSunY) would know, internationalis(z)ing code has many challenges? -- Anyway, the reason for this email is mostly just to make you all aware of this issue, and how ?equivalent? and ?map? work in the Charmap files when using CHR indexing. Oh, also, if you look at ?default.idx?, you?ll see that ?sort s? references ?charmap sort-string-utf.chr?, but I don?t think sort-string-utf.chr actually exists anywhere? David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fridolin.somers at biblibre.com Wed Nov 18 17:06:04 2015 From: fridolin.somers at biblibre.com (Fridolin SOMERS) Date: Wed, 18 Nov 2015 17:06:04 +0100 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> References: <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> Message-ID: <564CA1EC.5000700@biblibre.com> You may use gzip compression. I suggest with level 3 (default is 6) to get a fast yet very compressed file : mysqldump my_koha_db | gzip -3 > my_koha_dump.sql.gz Le 17/11/2015 23:24, Paul A a ?crit : > At 02:34 AM 11/18/2015 +0530, Indranil Das Gupta wrote: >> Hi all, >> >> I'm facing a situation that between full backups, I would like to take >> incremental backups. Anyone have played with mysql incremental backup >> and Koha? any known issues? > > Yup -- [probably, depends, no compression] and if you ever "restore" > you'll get back all your deletes since the last full backup... > > We just clean the MySQL db a couple of times a day and dump it -- takes > 2104ms for a 754 Meg dump, so there's really no down time at all. 
Write > some sort of [bash] script along the following lines for your Koha user > in /usr/share/koha: > > ./bin/cronjobs/cleanup_database.pl --import 1 && > ./bin/cronjobs/cleanup_database.pl --sessions && > ./bin/cronjobs/cleanup_database.pl --logs 10 && > ./bin/cronjobs/cleanup_database.pl --zebraqueue 1 && > mysqldump --user=whomever --password=whatever your_koha_db > > /mnt/data7/backup/your_koha_db_dump_[rotate_number].sql > > Prepend > ./bin/cronjobs/cleanup_database.pl --mail -v && > if required, vary your variables 'au go?t du jour' and bingo... > > Best -- Paul > > >> thanks in advance >> >> -- >> Indranil Das Gupta >> >> Phone : +91-98300-20971 >> Blog : http://indradg.randomink.org/blog >> IRC : indradg on irc://irc.freenode.net >> Twitter : indradg >> >> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=- >> Please exchange editable Office documents only in ODF Format. No other >> format is acceptable. Support Open Standards. >> >> For a free editor supporting ODF, please visit LibreOffice - >> http://www.documentfoundation.org >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> website : http://www.koha-community.org/ >> git : http://git.koha-community.org/ >> bugs : http://bugs.koha-community.org/ > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ -- Fridolin SOMERS Biblibre - P?les support et syst?me fridolin.somers at biblibre.com From fridolin.somers at biblibre.com Wed Nov 18 17:15:36 2015 From: fridolin.somers at biblibre.com (Fridolin SOMERS) Date: Wed, 18 Nov 2015 17:15:36 +0100 Subject: [Koha-devel] Indexes of Physical presentation In-Reply-To: <56426C3D.6040005@web.de> References: <563B1472.3050000@biblibre.com> <56426C3D.6040005@web.de> Message-ID: <564CA428.7090301@biblibre.com> Hie Katrin, 'physical presentation' is just the name of one of the search fields in advanced search. In intranet you must click on 'Coded information filters' to see it. I don't know if it is UNIMARC-only, can you check with the demo websites ? Best regards. Le 10/11/2015 23:14, Katrin Fischer a ?crit : > Hi Fridolin, > > I have tried to look into this, but I am not sure what you mean with > 'physical presentation'. Can you explain a bit more? Is this UNIMARC > specific? > > Katrin > > Am 05.11.2015 um 09:33 schrieb Fridolin SOMERS: >> Hie, >> >> In intranet search, physical presentation uses index ff8-23. >> In opac search, physical presentation uses index Material-type. >> >> This is strange because the search always use the coded values, meaning >> it is ff8-23 the correct index. >> Do you agree ? 
>> >> Regards, >> > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Fridolin SOMERS Biblibre - P?les support et syst?me fridolin.somers at biblibre.com From bgkriegel at gmail.com Wed Nov 18 17:49:01 2015 From: bgkriegel at gmail.com (Bernardo Gonzalez Kriegel) Date: Wed, 18 Nov 2015 13:49:01 -0300 Subject: [Koha-devel] Indexes of Physical presentation In-Reply-To: <564CA428.7090301@biblibre.com> References: <563B1472.3050000@biblibre.com> <56426C3D.6040005@web.de> <564CA428.7090301@biblibre.com> Message-ID: It's UNIMARC specific :) -- Bernardo Gonzalez Kriegel bgkriegel at gmail.com On Wed, Nov 18, 2015 at 1:15 PM, Fridolin SOMERS < fridolin.somers at biblibre.com> wrote: > Hie Katrin, > > 'physical presentation' is just the name of one of the search fields in > advanced search. > In intranet you must click on 'Coded information filters' to see it. > > I don't know if it is UNIMARC-only, can you check with the demo websites ? > > Best regards. > > > Le 10/11/2015 23:14, Katrin Fischer a ?crit : > >> Hi Fridolin, >> >> I have tried to look into this, but I am not sure what you mean with >> 'physical presentation'. Can you explain a bit more? Is this UNIMARC >> specific? >> >> Katrin >> >> Am 05.11.2015 um 09:33 schrieb Fridolin SOMERS: >> >>> Hie, >>> >>> In intranet search, physical presentation uses index ff8-23. >>> In opac search, physical presentation uses index Material-type. >>> >>> This is strange because the search always use the coded values, meaning >>> it is ff8-23 the correct index. >>> Do you agree ? >>> >>> Regards, >>> >>> >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> website : http://www.koha-community.org/ >> git : http://git.koha-community.org/ >> bugs : http://bugs.koha-community.org/ >> >> > -- > Fridolin SOMERS > Biblibre - P?les support et syst?me > fridolin.somers at biblibre.com > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.a at navalmarinearchive.com Wed Nov 18 18:15:00 2015 From: paul.a at navalmarinearchive.com (Paul A) Date: Wed, 18 Nov 2015 12:15:00 -0500 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: <564CA1EC.5000700@biblibre.com> References: <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> Message-ID: <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> At 05:06 PM 11/18/2015 +0100, Fridolin SOMERS wrote: >You may use gzip compression. >I suggest with level 3 (default is 6) to get a fast yet very compressed file : >mysqldump my_koha_db | gzip -3 > my_koha_dump.sql.gz For "dump" you are obviously quite correct that compression, size and speed can be tweaked. 
However, the original post asked about "incremental backup" and I assumed that this referred to MySQL "enterprise backup" because I am unaware of an incremental *dump*. With the "Enterprise backup" we found it impossible to compress the incrementals, but YMMV. Best -- Paul >Le 17/11/2015 23:24, Paul A a ?crit : >>At 02:34 AM 11/18/2015 +0530, Indranil Das Gupta wrote: >>>Hi all, >>> >>>I'm facing a situation that between full backups, I would like to take >>>incremental backups. Anyone have played with mysql incremental backup >>>and Koha? any known issues? >> >>Yup -- [probably, depends, no compression] and if you ever "restore" >>you'll get back all your deletes since the last full backup... >> >>We just clean the MySQL db a couple of times a day and dump it -- takes >>2104ms for a 754 Meg dump, so there's really no down time at all. Write >>some sort of [bash] script along the following lines for your Koha user >>in /usr/share/koha: >> >>./bin/cronjobs/cleanup_database.pl --import 1 && >>./bin/cronjobs/cleanup_database.pl --sessions && >>./bin/cronjobs/cleanup_database.pl --logs 10 && >>./bin/cronjobs/cleanup_database.pl --zebraqueue 1 && >>mysqldump --user=whomever --password=whatever your_koha_db > >>/mnt/data7/backup/your_koha_db_dump_[rotate_number].sql >> >>Prepend >>./bin/cronjobs/cleanup_database.pl --mail -v && >>if required, vary your variables 'au go?t du jour' and bingo... >> >>Best -- Paul >> >> >>>thanks in advance >>> >>>-- >>>Indranil Das Gupta >>> >>>Phone : +91-98300-20971 >>>Blog : http://indradg.randomink.org/blog >>>IRC : indradg on irc://irc.freenode.net >>>Twitter : indradg >>> >>>-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=- >>>Please exchange editable Office documents only in ODF Format. No other >>>format is acceptable. Support Open Standards. >>> >>>For a free editor supporting ODF, please visit LibreOffice - >>>http://www.documentfoundation.org >>>_______________________________________________ >>>Koha-devel mailing list >>>Koha-devel at lists.koha-community.org >>>http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >>>website : http://www.koha-community.org/ >>>git : http://git.koha-community.org/ >>>bugs : http://bugs.koha-community.org/ >> >>_______________________________________________ >>Koha-devel mailing list >>Koha-devel at lists.koha-community.org >>http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >>website : http://www.koha-community.org/ >>git : http://git.koha-community.org/ >>bugs : http://bugs.koha-community.org/ > >-- >Fridolin SOMERS >Biblibre - P?les support et syst?me >fridolin.somers at biblibre.com >_______________________________________________ >Koha-devel mailing list >Koha-devel at lists.koha-community.org >http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >website : http://www.koha-community.org/ >git : http://git.koha-community.org/ >bugs : http://bugs.koha-community.org/ > --- Maritime heritage and history, preservation and conservation, research and education through the written word and the arts. and From michael.hafen at washk12.org Wed Nov 18 19:37:04 2015 From: michael.hafen at washk12.org (Michael Hafen) Date: Wed, 18 Nov 2015 11:37:04 -0700 Subject: [Koha-devel] Getting an IP address for a koha server In-Reply-To: References: Message-ID: I think the best way would be to add an address reservation in the DHCP service for your Koha server. In practice having an IP address that changes on a web server doesn't work well. 
A reservation in DHCP should guarantee that the Koha server gets a 'dynamic' address that is always the same. On Tue, Nov 17, 2015 at 2:56 PM, Cajetan Onyeneke wrote: > Please does anyone have idea of how to get or issue dynamic IP address to > a koha server running on Linux environment. The server is configured but > cannot be accessed within the local network because of this IP issue. > Thanks. > Cajetan Onyeneke > Digital Librarian > Imo State University, Owerri, Nigeria > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Michael Hafen Washington County School District Technology Department Systems Analyst -------------- next part -------------- An HTML attachment was scrubbed... URL: From Katrin.Fischer.83 at web.de Wed Nov 18 21:02:56 2015 From: Katrin.Fischer.83 at web.de (Katrin Fischer) Date: Wed, 18 Nov 2015 21:02:56 +0100 Subject: [Koha-devel] Indexes of Physical presentation In-Reply-To: References: <563B1472.3050000@biblibre.com> <56426C3D.6040005@web.de> <564CA428.7090301@biblibre.com> Message-ID: <564CD970.60803@web.de> Ah, that explains, thx Bernardo! :) Am 18.11.2015 um 17:49 schrieb Bernardo Gonzalez Kriegel: > It's UNIMARC specific :) > > -- > Bernardo Gonzalez Kriegel > bgkriegel at gmail.com > > On Wed, Nov 18, 2015 at 1:15 PM, Fridolin SOMERS > > wrote: > > Hie Katrin, > > 'physical presentation' is just the name of one of the search fields > in advanced search. > In intranet you must click on 'Coded information filters' to see it. > > I don't know if it is UNIMARC-only, can you check with the demo > websites ? > > Best regards. > > > Le 10/11/2015 23:14, Katrin Fischer a ?crit : > > Hi Fridolin, > > I have tried to look into this, but I am not sure what you mean with > 'physical presentation'. Can you explain a bit more? Is this UNIMARC > specific? > > Katrin > > Am 05.11.2015 um 09:33 schrieb Fridolin SOMERS: > > Hie, > > In intranet search, physical presentation uses index ff8-23. > In opac search, physical presentation uses index Material-type. > > This is strange because the search always use the coded > values, meaning > it is ff8-23 the correct index. > Do you agree ? 
> > Regards, > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > > > -- > Fridolin SOMERS > Biblibre - P?les support et syst?me > fridolin.somers at biblibre.com > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > > > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > From robin at catalyst.net.nz Wed Nov 18 21:51:38 2015 From: robin at catalyst.net.nz (Robin Sheat) Date: Thu, 19 Nov 2015 09:51:38 +1300 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> References: <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> Message-ID: <1447879898.15402.8.camel@catalyst.net.nz> Paul A schreef op wo 18-11-2015 om 12:15 [-0500]: > However, the original post asked about "incremental backup" and I > assumed > that this referred to MySQL "enterprise backup" > because I > am > unaware of an incremental *dump*. With the "Enterprise backup" we > found it > impossible to compress the incrementals, but YMMV. It could also be referring to binary log shipping/replication. https://dev.mysql.com/doc/refman/5.7/en/replication-configuration.html Indranil should probably explain exactly what he's asking about. For what it's worth, we do replication as a form of backup with at least one of our Koha systems, and I'm not aware of it causing problems. -- Robin Sheat Catalyst IT Ltd. ? +64 4 803 2204 GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part URL: From dcook at prosentient.com.au Wed Nov 18 23:33:05 2015 From: dcook at prosentient.com.au (David Cook) Date: Thu, 19 Nov 2015 09:33:05 +1100 Subject: [Koha-devel] Problematic Zebra Charmaps Equivalences In-Reply-To: <809BE39CD64BFD4EB9036172EBCCFA315AFE52AE@S-MAIL-1B.rijksmuseum.intra> References: <00bc01d12197$98055680$c8100380$@prosentient.com.au> <809BE39CD64BFD4EB9036172EBCCFA315AFE52AE@S-MAIL-1B.rijksmuseum.intra> Message-ID: <010601d12251$179d1180$46d73480$@prosentient.com.au> Yikes, that?s not good. That would be great if you could investigate further, and let us know how it goes, and let Indexdata know as well. 
David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel-bounces at lists.koha-community.org] On Behalf Of Marcel de Rooy Sent: Wednesday, 18 November 2015 6:13 PM To: 'Koha-devel' Subject: Re: [Koha-devel] Problematic Zebra Charmaps Equivalences I recently "downgraded" ICU back to CHR in order to overcome Zebra segmentation faults on a complete reindex. Should still investigate some further, but have the impression that some Chinese characters made zebraidx crash. _____ Van: koha-devel-bounces at lists.koha-community.org [koha-devel-bounces at lists.koha-community.org] namens David Cook [dcook at prosentient.com.au] Verzonden: woensdag 18 november 2015 1:25 Aan: 'Koha-devel' Onderwerp: [Koha-devel] Problematic Zebra Charmaps Equivalences Hi all: Yet another Zebra email from this guy. I don?t know how many of you are using CHR vs ICU, but CHR is the default for installs, so I?m guessing that it?s quite a few. Well, there are some issues with how we use the equivalent directive. Hopefully the UTF-8 won?t be stripped out of this message, although I?m guessing it might? Here?s all instances of the directive in word-phrase-utf.chr: # Characters to be considered equivalent for sorting purposes equivalent a??????????? equivalent ??(ae) equivalent ?(aa) equivalent i?????????? equivalent ?(ie) equivalent ?(ii) equivalent u?????????? equivalent ?(ue) equivalent ?(uu) equivalent e?????????? equivalent ??(ee) equivalent o??????????? equivalent ????(oe) equivalent ?(oo) Firstly, that comment is wrong. ?equivalent? isn?t just for sorting purposes. It?s for searching purposes. Indexdata have confirmed that the documentation is wrong about the sorting thing. So ?ie? and ? (if you can?t see this character, it?s the UTF-8 representation of ï) are equivalent. That means searches for ?siemon? will get results for ?siemon? and ?s?mon?. Now, there is also a ?map? directive: map ? i This means that ?s?mon? is the same as ?simon?. Now, ?map? affects both indexing and searching. If you have ?s?mon? in a record, you can see that it is actually stored as ?simon? in Zebra, if you do a search for it and use ?format xml? and ?elements zebra::index?. So your search for ?siemon? will really get results for ?siemon? and ?simon?. This really isn?t ideal. However, you can see why you?d want equivalences. In Scandinavian languages, I think ??? and ?aa? are roughly equivalent. They?re spelled differently but they?re the same sound. So if you search for ?Gaard?, you might want hits for ?G?rd? as well. But you might not want ?career? to be equivalent to ?carer? as they?re two different words. Or ?choose? to be equivalent to ?chose?, ?sloop? - "slop?, ?reef? - "ref?, etc. -- Unfortunately, I don?t really know what the solution is. For one client, I?ve disabled the equivalent directive where it creates an equivalence between any two letter combination with a one letter combination, as they only have records in English, and it?ll just cause them headaches. I can see this being useful for multilingual records? although I think many people with multilingual records use ICU. I don?t know ICU well enough to know how it manages characters that English speakers would think of as accents or ligatures. I know you can provide your own normalization with ICU, but I think it does a fair amount on its own as well? I think some of the difficulties are mentioned here: http://userguide.icu-project.org/collation/icu-string-search-service. 
It also mentions the Danish ?/aa example. I don?t know how ICU would know how to handle particular languages? that webpage seems to indicate you can provide a locale to deal with it. Of course, that doesn?t necessarily solve things. If you have multilingual records with multilingual users, how do you choose your rules? Sure, you might be able to specify a locale at search time (note you can?t do this with Zebra), but what rules did you specify at index time? As anyone who has watched this video (https://www.youtube.com/watch?v=0j74jcxSunY) would know, internationalis(z)ing code has many challenges? -- Anyway, the reason for this email is mostly just to make you all aware of this issue, and how ?equivalent? and ?map? work in the Charmap files when using CHR indexing. Oh, also, if you look at ?default.idx?, you?ll see that ?sort s? references ?charmap sort-string-utf.chr?, but I don?t think sort-string-utf.chr actually exists anywhere? David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.a at navalmarinearchive.com Wed Nov 18 23:49:11 2015 From: paul.a at navalmarinearchive.com (Paul A) Date: Wed, 18 Nov 2015 17:49:11 -0500 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: <1447879898.15402.8.camel@catalyst.net.nz> References: <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> Message-ID: <5.2.1.1.2.20151118173429.02bb3b08@pop.navalmarinearchive.com> At 09:51 AM 11/19/2015 +1300, Robin Sheat wrote: >Paul A schreef op wo 18-11-2015 om 12:15 [-0500]: > > However, the original post asked about "incremental backup" and I > > assumed > > that this referred to MySQL "enterprise backup" > > because I > > am > > unaware of an incremental *dump*. With the "Enterprise backup" we > > found it > > impossible to compress the incrementals, but YMMV. > >It could also be referring to binary log shipping/replication. > >https://dev.mysql.com/doc/refman/5.7/en/replication-configuration.html > >Indranil should probably explain exactly what he's asking about. > >For what it's worth, we do replication as a form of backup with at least >one of our Koha systems, and I'm not aware of it causing problems. Very interesting, can you share more details? We had a look at this (MySQL 5.5) and did not continue with it. From my notes (November 2012): "Not to be used for backup; any error that corrupts master also instantly corrupts slave[s] so no possibility to restore. Perhaps useful in the distributed portion of our system. We will maintain dump cron jobs." 
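For completeness, the usual way to get something "incremental" out of MySQL without
trusting replication as the only copy is a periodic full dump plus the binary logs,
which can be replayed -- or deliberately stopped short of a bad statement -- at
restore time. A rough sketch only (the user, database name and paths are hypothetical,
and it assumes log_bin is already enabled in my.cnf):

# Nightly full dump; --flush-logs rotates the binary log so later increments
# start from a known file, and --master-data=2 records the log coordinates as
# a comment inside the dump. --single-transaction keeps it consistent for InnoDB.
mysqldump --single-transaction --flush-logs --master-data=2 \
    -u koha_backup -p koha_library | gzip > /backup/koha-full-$(date +%F).sql.gz

# During the day, close the current binary log and copy the finished ones off
# the server -- these files are the "increments".
mysqladmin -u koha_backup -p flush-logs
rsync -a /var/lib/mysql/mysql-bin.[0-9]* /backup/binlogs/

# Restore = load the last full dump, then replay the binary logs, optionally
# stopping just before the statement that did the damage:
#   gunzip < /backup/koha-full-2015-11-18.sql.gz | mysql -u root -p koha_library
#   mysqlbinlog --stop-datetime="2015-11-18 16:00:00" /backup/binlogs/mysql-bin.* | mysql -u root -p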
Best -- Paul From robin at catalyst.net.nz Thu Nov 19 00:15:54 2015 From: robin at catalyst.net.nz (Robin Sheat) Date: Thu, 19 Nov 2015 12:15:54 +1300 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: <5.2.1.1.2.20151118173429.02bb3b08@pop.navalmarinearchive.com> References: <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151118173429.02bb3b08@pop.navalmarinearchive.com> Message-ID: <1447888554.15402.11.camel@catalyst.net.nz> Paul A schreef op wo 18-11-2015 om 17:49 [-0500]: > Very interesting, can you share more details? Not really, I didn't implement it. But essentially it provides an offsite database replicant that can then be backed up using snapshotting or whatever. I think regularly dumping a running database would not be best practice, as it puts a significant load on the server, and also may lock certain actions (don't quote me on that though.) -- Robin Sheat Catalyst IT Ltd. ? +64 4 803 2204 GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part URL: From robin at catalyst.net.nz Thu Nov 19 03:33:58 2015 From: robin at catalyst.net.nz (Robin Sheat) Date: Thu, 19 Nov 2015 15:33:58 +1300 Subject: [Koha-devel] =?utf-8?q?E_noho_r=C4=81=2C_Koha?= Message-ID: <1447900438.15402.35.camel@catalyst.net.nz> Hi, Koha folks. Tomorrow is my last day at Catalyst, and the end of my last week in New Zealand. From December, I'm starting a new job that is located only 1,450km distance from as far away as it is possible to get from Wellington on earth. That is, in Amsterdam*. It's been a pleasure meeting those of you that I've met, emailing and IRCing those that I haven't, and I'm sure you'll all keep the project heading on its current course to taking over the world :) Interesting note: my first commit was on April 20th, 2010, signed off by Galen, who has happily taken over my package manager role with almost no blackmai^Wbriber^Wconvincing needed. I'll probably still be lurking in IRC though, just in a quite different timezone to what it was, and skimming emails trying not to answer them. Ka kite an? au i a koutou, -- Robin Sheat Catalyst IT Ltd. ? +64 4 803 2204 GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF * Marcel, I'm going to have to get you to show me the Rijksmuseum library some time :) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part URL: From dcook at prosentient.com.au Thu Nov 19 03:55:09 2015 From: dcook at prosentient.com.au (David Cook) Date: Thu, 19 Nov 2015 13:55:09 +1100 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: <1447888554.15402.11.camel@catalyst.net.nz> References: <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151118173429.02bb3b08@pop.navalmarinearchive.com> <1447888554.15402.11.camel@catalyst.net.nz> Message-ID: <015001d12275$b3d3df60$1b7b9e20$@prosentient.com.au> I agree with Robin. I've seen another system that does database dumps during the middle of the day while the database server is running, and the increased server load is quite significant. I'm also guessing that locking takes place as sometimes the load will skyrocket during this time. Besides the huge lowering of performance, your data might be inaccurate or even corrupt anyway since you might be doing a backup midway through a script action. David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 > -----Original Message----- > From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel- > bounces at lists.koha-community.org] On Behalf Of Robin Sheat > Sent: Thursday, 19 November 2015 10:16 AM > To: koha-devel at lists.koha-community.org > Subject: Re: [Koha-devel] incremental backup of the DB > > Paul A schreef op wo 18-11-2015 om 17:49 [-0500]: > > Very interesting, can you share more details? > > Not really, I didn't implement it. > > But essentially it provides an offsite database replicant that can then be > backed up using snapshotting or whatever. I think regularly dumping a > running database would not be best practice, as it puts a significant load on > the server, and also may lock certain actions (don't quote me on that > though.) > > -- > Robin Sheat > Catalyst IT Ltd. > ? +64 4 803 2204 > GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF From dcook at prosentient.com.au Thu Nov 19 04:00:15 2015 From: dcook at prosentient.com.au (David Cook) Date: Thu, 19 Nov 2015 14:00:15 +1100 Subject: [Koha-devel] =?utf-8?q?E_noho_r=C4=81=2C_Koha?= In-Reply-To: <1447900438.15402.35.camel@catalyst.net.nz> References: <1447900438.15402.35.camel@catalyst.net.nz> Message-ID: <015101d12276$6a48db60$3eda9220$@prosentient.com.au> Congratulations on the new job! We're sorry to see you go, but hope that you enjoy Amsterdam! It was a pleasure meeting you in Tasmania, and sharing the Oceanic timezone with you on IRC! David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 > -----Original Message----- > From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel- > bounces at lists.koha-community.org] On Behalf Of Robin Sheat > Sent: Thursday, 19 November 2015 1:34 PM > To: koha at lists.katipo.co.nz; koha-devel community.org> > Subject: [Koha-devel] E noho r?, Koha > > Hi, Koha folks. > > Tomorrow is my last day at Catalyst, and the end of my last week in New > Zealand. > > From December, I'm starting a new job that is located only 1,450km distance > from as far away as it is possible to get from Wellington on earth. That is, in > Amsterdam*. 
> > It's been a pleasure meeting those of you that I've met, emailing and IRCing > those that I haven't, and I'm sure you'll all keep the project heading on its > current course to taking over the world :) > > Interesting note: my first commit was on April 20th, 2010, signed off by > Galen, who has happily taken over my package manager role with almost no > blackmai^Wbriber^Wconvincing needed. > > I'll probably still be lurking in IRC though, just in a quite different timezone to > what it was, and skimming emails trying not to answer them. > > Ka kite an? au i a koutou, > -- > Robin Sheat > Catalyst IT Ltd. > ? +64 4 803 2204 > GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF > > * Marcel, I'm going to have to get you to show me the Rijksmuseum library > some time :) From mtompset at hotmail.com Thu Nov 19 04:52:17 2015 From: mtompset at hotmail.com (Mark Tompsett) Date: Wed, 18 Nov 2015 22:52:17 -0500 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: <015001d12275$b3d3df60$1b7b9e20$@prosentient.com.au> References: <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151118173429.02bb3b08@pop.navalmarinearchive.com><1447888554.15402.11.camel@catalyst.net.nz> <015001d12275$b3d3df60$1b7b9e20$@prosentient.com.au> Message-ID: Greetings, That's why when I was running Koha 3.6.x on a machine with only 512MB RAM with no swap and 20GB disk space, I would after hours: - restart mysql - shut off apache - backup - turn on apache This would free up some much needed RAM so that the backup would work. :) Now we've (both Koha and the library group) grown such that I would never dream of running with less than 1GB RAM and some swap. GPML, Mark Tompsett -----Original Message----- From: David Cook Sent: Wednesday, November 18, 2015 9:55 PM To: 'Robin Sheat' ; koha-devel at lists.koha-community.org Subject: Re: [Koha-devel] incremental backup of the DB I agree with Robin. I've seen another system that does database dumps during the middle of the day while the database server is running, and the increased server load is quite significant. I'm also guessing that locking takes place as sometimes the load will skyrocket during this time. Besides the huge lowering of performance, your data might be inaccurate or even corrupt anyway since you might be doing a backup midway through a script action. From M.de.Rooy at rijksmuseum.nl Thu Nov 19 08:08:52 2015 From: M.de.Rooy at rijksmuseum.nl (Marcel de Rooy) Date: Thu, 19 Nov 2015 07:08:52 +0000 Subject: [Koha-devel] =?utf-8?q?E_noho_r=C4=81=2C_Koha?= In-Reply-To: <1447900438.15402.35.camel@catalyst.net.nz> References: <1447900438.15402.35.camel@catalyst.net.nz> Message-ID: <809BE39CD64BFD4EB9036172EBCCFA315AFEA513@S-MAIL-1B.rijksmuseum.intra> Amsterdam is okay with me, but how sad to see you go. Success on the new job. Yes, you have to come to the Rijksmuseum. ________________________________________ Van: koha-devel-bounces at lists.koha-community.org [koha-devel-bounces at lists.koha-community.org] namens Robin Sheat [robin at catalyst.net.nz] Verzonden: donderdag 19 november 2015 3:33 Aan: koha at lists.katipo.co.nz; koha-devel Onderwerp: [Koha-devel] E noho r?, Koha Hi, Koha folks. Tomorrow is my last day at Catalyst, and the end of my last week in New Zealand. 
From December, I'm starting a new job that is located only 1,450km distance from as far away as it is possible to get from Wellington on earth. That is, in Amsterdam*. It's been a pleasure meeting those of you that I've met, emailing and IRCing those that I haven't, and I'm sure you'll all keep the project heading on its current course to taking over the world :) Interesting note: my first commit was on April 20th, 2010, signed off by Galen, who has happily taken over my package manager role with almost no blackmai^Wbriber^Wconvincing needed. I'll probably still be lurking in IRC though, just in a quite different timezone to what it was, and skimming emails trying not to answer them. Ka kite an? au i a koutou, -- Robin Sheat Catalyst IT Ltd. ? +64 4 803 2204 GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF * Marcel, I'm going to have to get you to show me the Rijksmuseum library some time :) From paul.a at navalmarinearchive.com Thu Nov 19 18:12:10 2015 From: paul.a at navalmarinearchive.com (Paul A) Date: Thu, 19 Nov 2015 12:12:10 -0500 Subject: [Koha-devel] incremental backup of the DB In-Reply-To: <1447888554.15402.11.camel@catalyst.net.nz> References: <5.2.1.1.2.20151118173429.02bb3b08@pop.navalmarinearchive.com> <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151117170818.02459de0@pop.navalmarinearchive.com> <5.2.1.1.2.20151118120717.05517500@pop.navalmarinearchive.com> <5.2.1.1.2.20151118173429.02bb3b08@pop.navalmarinearchive.com> Message-ID: <5.2.1.1.2.20151119093548.03433020@pop.navalmarinearchive.com> At 12:15 PM 11/19/2015 +1300, Robin Sheat wrote: >Paul A schreef op wo 18-11-2015 om 17:49 [-0500]: > > Very interesting, can you share more details? > >Not really, I didn't implement it. > >But essentially it provides an offsite database replicant that can then >be backed up using snapshotting or whatever. I think regularly dumping a >running database would not be best practice, as it puts a significant >load on the server, and also may lock certain actions (don't quote me on >that though.) Robin -- first and foremost, my best wishes for your move and new job, I'm sure that everyone here will miss your enthusiasm, experience and wisdom. As to the intricacies of backing up a MySQL database, we're probably going outside the "Koha envelope" but Mark T. and David C. both echoed your concern on "load on the server." I am of the opinion that this is purely hardware related, and server investment should reflect in-production use plus a suitable margin. We HAProxy two servers, each with 8-core CPUs and 16G RAM; the swap files have never been written to. The "system", all Koha files and MySQL + InnoDB are on raided solid state drives. As I wrote originally, the Koha routines to "clean" the db (especially sessions, unused Z39-50 data etc more than a day old) plus the dump takes ~2 seconds. I'm fairly certain that the lock time on the db is less than this (as the lock is released before the write happens -- I think as soon as dump has read binlog, but I'd have to immerse myself again in the intricacies, possibly of flag --single-transaction.) 
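For reference, a nightly job of that sort can be as small as the sketch below -- this
is not our actual script, the user, database name and paths are made up, and real
credentials would normally live in ~/.my.cnf rather than on the command line:

# --single-transaction takes a consistent InnoDB snapshot without holding table
# locks for the duration of the dump (which is why the lock window stays tiny);
# --quick streams rows instead of buffering whole tables in memory.
mysqldump --single-transaction --quick -u koha_backup -p \
    koha_library | gzip > /backup/koha-$(date +%F).sql.gz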
So, what we see in about 4 years of production practice is (a) that we have never found any backup corruption, (b) our staff and cataloguers have never noticed any slow-downs that substantially differ from normal LAN|router|cable latency, and (c) even under extreme conditions (Bing, Google and Baidu once hit between them 186,000 requests/hour) the limiting factor is never MySQL but always Zebra (with or without ZEBRAOPTIONS "-T" flag.) Hoping this response wasn't too long, and again, best wishes for your future... Paul --- Maritime heritage and history, preservation and conservation, research and education through the written word and the arts. and From martin.renvoize at ptfs-europe.com Thu Nov 19 22:54:01 2015 From: martin.renvoize at ptfs-europe.com (Renvoize, Martin) Date: Thu, 19 Nov 2015 21:54:01 +0000 Subject: [Koha-devel] =?utf-8?b?W0tvaGFdIEUgbm9obyByxIEsIEtvaGE=?= In-Reply-To: <1447900438.15402.35.camel@catalyst.net.nz> References: <1447900438.15402.35.camel@catalyst.net.nz> Message-ID: All the best in your new roles mate, looking forward to having you that bit closer and hoping we'll get the opportunities to catch up over a pint a little more often. The community will certainly miss you. On 19 Nov 2015 2:34 am, "Robin Sheat" wrote: > Hi, Koha folks. > > Tomorrow is my last day at Catalyst, and the end of my last week in New > Zealand. > > From December, I'm starting a new job that is located only 1,450km > distance from as far away as it is possible to get from Wellington on > earth. That is, in Amsterdam*. > > It's been a pleasure meeting those of you that I've met, emailing and > IRCing those that I haven't, and I'm sure you'll all keep the project > heading on its current course to taking over the world :) > > Interesting note: my first commit was on April 20th, 2010, signed off by > Galen, who has happily taken over my package manager role with almost no > blackmai^Wbriber^Wconvincing needed. > > I'll probably still be lurking in IRC though, just in a quite different > timezone to what it was, and skimming emails trying not to answer them. > > Ka kite an? au i a koutou, > -- > Robin Sheat > Catalyst IT Ltd. > ? +64 4 803 2204 > GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF > > * Marcel, I'm going to have to get you to show me the Rijksmuseum > library some time :) > > > _______________________________________________ > Koha mailing list http://koha-community.org > Koha at lists.katipo.co.nz > https://lists.katipo.co.nz/mailman/listinfo/koha > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.renvoize at ptfs-europe.com Thu Nov 19 23:16:18 2015 From: martin.renvoize at ptfs-europe.com (Renvoize, Martin) Date: Thu, 19 Nov 2015 22:16:18 +0000 Subject: [Koha-devel] INSTALL.debian is outdated In-Reply-To: References: Message-ID: Nest, my package patch got in.. I'd completely forgotten about that! Would agree with linking out for docs. I'd have the git install guide under the how to develop categories on the wiki. The story hasn't changed much in years for that install route, I always found it odd having a guide per version. On 16 Nov 2015 9:15 pm, "Barton Chittenden" wrote: > Correct -- I'm not suggesting that the general public should install Koha > via git. We should, none-the-less have a current, well documented path for > installing via git... after all, that's how we get new developers :-). 
> > --Barton > > On Mon, Nov 16, 2015 at 3:51 PM, Chris Cormack > wrote: > >> On 17 November 2015 at 09:48, Barton Chittenden >> wrote: >> > I added this to the agenda: >> > >> http://wiki.koha-community.org/wiki/General_IRC_meeting_2_December_2015#Agenda >> > >> > I would like to add that there is, at this point, no up-to-date >> > documentation for doing a git install of Koha. Yes, the packages are far >> > easier to deal with than the git installs, and yes, most of what is >> done in >> > the packages can be retro-fitted to fit the git install via gitify --- >> but >> > there are some corner conditions where an honest-to-god git install is >> the >> > only way -- creating custom indexes for zebra comes to mind.... so I'd >> like >> > to make sure that this avenue remains well documented and supported. >> > >> But only for developers right? There are massive security implications >> of running a git install in production, >> and we should never encourage that. >> >> Chris >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> website : http://www.koha-community.org/ >> git : http://git.koha-community.org/ >> bugs : http://bugs.koha-community.org/ >> > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomascohen at gmail.com Fri Nov 20 23:42:22 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Fri, 20 Nov 2015 19:42:22 -0300 Subject: [Koha-devel] =?utf-8?q?E_noho_r=C4=81=2C_Koha?= In-Reply-To: <1447900438.15402.35.camel@catalyst.net.nz> References: <1447900438.15402.35.camel@catalyst.net.nz> Message-ID: Robin, I've been trying to think how the project will be without you around... We'll survive :-D but it won't be the same. I hope you keep good memories from the things we shared, and of course enjoy your new life a lot. Cheers! I will certaintly share a jar of fernet with coke on your behalf :-D Right Bernardo? :-D 2015-11-18 23:33 GMT-03:00 Robin Sheat : > Hi, Koha folks. > > Tomorrow is my last day at Catalyst, and the end of my last week in New > Zealand. > > From December, I'm starting a new job that is located only 1,450km > distance from as far away as it is possible to get from Wellington on > earth. That is, in Amsterdam*. > > It's been a pleasure meeting those of you that I've met, emailing and > IRCing those that I haven't, and I'm sure you'll all keep the project > heading on its current course to taking over the world :) > > Interesting note: my first commit was on April 20th, 2010, signed off by > Galen, who has happily taken over my package manager role with almost no > blackmai^Wbriber^Wconvincing needed. > > I'll probably still be lurking in IRC though, just in a quite different > timezone to what it was, and skimming emails trying not to answer them. > > Ka kite an? au i a koutou, > -- > Robin Sheat > Catalyst IT Ltd. > ? 
+64 4 803 2204
> GPG: 5FA7 4B49 1E4D CAA4 4C38 8505 77F5 B724 F871 3BDF
>
> * Marcel, I'm going to have to get you to show me the Rijksmuseum
> library some time :)
>
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

--
Tomás Cohen Arazi
Theke Solutions (http://theke.io)
? +54 9351 3513384
GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From z.tajoli at cineca.it  Sat Nov 21 09:51:54 2015
From: z.tajoli at cineca.it (Zeno Tajoli)
Date: Sat, 21 Nov 2015 09:51:54 +0100 (CET)
Subject: [Koha-devel] About level of collation in MySQL
Message-ID: <2055899394.29970906.1448095914086.JavaMail.root@cineca.it>

Hi to all,

working on bug 15207, Error on upgrade from 3.20.5 to 3.22 beta,
http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15207
Bernardo found some missing collation info in the sound alerts table and he proposed a patch for it.

As far as I can see, collation info is not mandatory for us at column or table level; we set it at database level.
I see that MySQL works with levels to define collation [server, database, table, column].

So which level of collation do we want?

Bye
Zeno Tajoli

From z.tajoli at cineca.it  Sat Nov 21 11:57:04 2015
From: z.tajoli at cineca.it (Zeno Tajoli)
Date: Sat, 21 Nov 2015 11:57:04 +0100 (CET)
Subject: [Koha-devel] Help about bug 15207
Message-ID: <1591498787.30017140.1448103424537.JavaMail.root@cineca.it>

Hi to all,

I am trying to resolve bug 15207, Error on upgrade from 3.20.5 to 3.22 beta
http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15207.

The second error (insert defaults in the table 'permissions') is easy: I changed 'INSERT' to 'INSERT IGNORE' and all is OK.
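Something like this, with made-up permission values just to show the shape --
INSERT IGNORE silently skips any row whose primary key already exists, so the
database update can be re-run against a 3.20.x database that already contains
the rows:

# Column names are from memory of the permissions table; the code and
# description below are placeholders, not the real rows touched by bug 15207.
mysql -u koha_admin -p koha_library <<'SQL'
INSERT IGNORE INTO permissions (module_bit, code, description)
VALUES (13, 'example_new_permission', 'Placeholder description');
SQL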
The first one is much more difficult: the update adds a CONSTRAINT, but if you upgrade from 3.20.5 this CONSTRAINT is already present.

I can't find a similar situation in other parts of installer/data/mysql/updatedatabase.pl.

I tried a solution (checking for the presence of the CONSTRAINT using information_schema.table_constraints)
but it doesn't always work. For example it doesn't work for Katrin.

I don't know what to do now.
Do you have any idea about this topic?

Bye
Zeno Tajoli

From Katrin.Fischer.83 at web.de  Sat Nov 21 12:54:40 2015
From: Katrin.Fischer.83 at web.de (Katrin Fischer)
Date: Sat, 21 Nov 2015 12:54:40 +0100
Subject: [Koha-devel] Help about bug 15207
In-Reply-To: <1591498787.30017140.1448103424537.JavaMail.root@cineca.it>
References: <1591498787.30017140.1448103424537.JavaMail.root@cineca.it>
Message-ID: <56505B80.1000101@web.de>

Hi Zeno,

first of all - thanks for testing the update process and your work on
fixing the problems. I think I might have found a solution looking at
older updates where we solved a similar problem. In my tests it's
looking good so far. Could you take a look at my follow-up patch?

Hope it works,

Katrin

Am 21.11.2015 um 11:57 schrieb Zeno Tajoli:
> Hi to all,
>
> I trying to resolve bug 15207,
> Error on upgrade from 3.20.5 to 3.22 beta
> http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15207.
>
> The second error (insert defaults in the table 'permissions')
> it easy, I change 'INSERT' with 'INSERT IGNORE' and all is OK.
as well. > > > > But you might not want ?career? to be equivalent to ?carer? as they?re two different words. Or ?choose? to be equivalent to ?chose?, ?sloop? - "slop?, ?reef? - "ref?, etc. > > > > -- > > > > Unfortunately, I don?t really know what the solution is. For one client, I?ve disabled the equivalent directive where it creates an equivalence between any two letter combination with a one letter combination, as they only have records in English, and it?ll just cause them headaches. > > > > I can see this being useful for multilingual records? although I think many people with multilingual records use ICU. I don?t know ICU well enough to know how it manages characters that English speakers would think of as accents or ligatures. I know you can provide your own normalization with ICU, but I think it does a fair amount on its own as well? > > > > I think some of the difficulties are mentioned here: http://userguide.icu-project.org/collation/icu-string-search-service. It also mentions the Danish ?/aa example. I don?t know how ICU would know how to handle particular languages? that webpage seems to indicate you can provide a locale to deal with it. > > > > Of course, that doesn?t necessarily solve things. If you have multilingual records with multilingual users, how do you choose your rules? Sure, you might be able to specify a locale at search time (note you can?t do this with Zebra), but what rules did you specify at index time? > > > > As anyone who has watched this video (https://www.youtube.com/watch?v=0j74jcxSunY) would know, internationalis(z)ing code has many challenges? > > > > -- > > > > Anyway, the reason for this email is mostly just to make you all aware of this issue, and how ?equivalent? and ?map? work in the Charmap files when using CHR indexing. > > > > Oh, also, if you look at ?default.idx?, you?ll see that ?sort s? references ?charmap sort-string-utf.chr?, but I don?t think sort-string-utf.chr actually exists anywhere? > > > > David Cook > > Systems Librarian > > Prosentient Systems > > 72/330 Wattle St, Ultimo, NSW 2007 > > > > > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Fridolin SOMERS Biblibre - P?les support et syst?me fridolin.somers at biblibre.com From fridolin.somers at biblibre.com Tue Nov 24 09:08:45 2015 From: fridolin.somers at biblibre.com (Fridolin SOMERS) Date: Tue, 24 Nov 2015 09:08:45 +0100 Subject: [Koha-devel] Indexes of Physical presentation In-Reply-To: <564CD970.60803@web.de> References: <563B1472.3050000@biblibre.com> <56426C3D.6040005@web.de> <564CA428.7090301@biblibre.com> <564CD970.60803@web.de> Message-ID: <56541B0D.6060106@biblibre.com> Thanks Bernardo ;) Le 18/11/2015 21:02, Katrin Fischer a ?crit : > Ah, that explains, thx Bernardo! :) > > Am 18.11.2015 um 17:49 schrieb Bernardo Gonzalez Kriegel: >> It's UNIMARC specific :) >> >> -- >> Bernardo Gonzalez Kriegel >> bgkriegel at gmail.com >> >> On Wed, Nov 18, 2015 at 1:15 PM, Fridolin SOMERS >> > wrote: >> >> Hie Katrin, >> >> 'physical presentation' is just the name of one of the search fields >> in advanced search. >> In intranet you must click on 'Coded information filters' to see it. >> >> I don't know if it is UNIMARC-only, can you check with the demo >> websites ? >> >> Best regards. 
>> >> >> Le 10/11/2015 23:14, Katrin Fischer a ?crit : >> >> Hi Fridolin, >> >> I have tried to look into this, but I am not sure what you mean with >> 'physical presentation'. Can you explain a bit more? Is this UNIMARC >> specific? >> >> Katrin >> >> Am 05.11.2015 um 09:33 schrieb Fridolin SOMERS: >> >> Hie, >> >> In intranet search, physical presentation uses index ff8-23. >> In opac search, physical presentation uses index Material-type. >> >> This is strange because the search always use the coded >> values, meaning >> it is ff8-23 the correct index. >> Do you agree ? >> >> Regards, >> >> >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> website : http://www.koha-community.org/ >> git : http://git.koha-community.org/ >> bugs : http://bugs.koha-community.org/ >> >> >> -- >> Fridolin SOMERS >> Biblibre - P?les support et syst?me >> fridolin.somers at biblibre.com >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> website : http://www.koha-community.org/ >> git : http://git.koha-community.org/ >> bugs : http://bugs.koha-community.org/ >> >> >> >> >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> website : http://www.koha-community.org/ >> git : http://git.koha-community.org/ >> bugs : http://bugs.koha-community.org/ >> > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Fridolin SOMERS Biblibre - P?les support et syst?me fridolin.somers at biblibre.com From tomascohen at gmail.com Thu Nov 26 20:36:14 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Thu, 26 Nov 2015 16:36:14 -0300 Subject: [Koha-devel] Koha 3.22 released Message-ID: It is with great pleasure that we announce the release of Koha 3.22, a major release of the Koha open source integrated library system. Koha 3.22 is a major release, that comes with many new features. See the full release notes here: https://koha-community.org/koha-3-22-released/ Download: http://koha-community.org/download-koha/ Debian packages for this beta will be available soon on the unstable repository. Thanks everyone! -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From M.de.Rooy at rijksmuseum.nl Thu Nov 26 22:02:18 2015 From: M.de.Rooy at rijksmuseum.nl (Marcel de Rooy) Date: Thu, 26 Nov 2015 21:02:18 +0000 Subject: [Koha-devel] Koha 3.22 released In-Reply-To: References: Message-ID: <809BE39CD64BFD4EB9036172EBCCFA315AFEF248@S-MAIL-1B.rijksmuseum.intra> Great, Tomas! Thanks for your hard work. 
Marcel ________________________________ Van: koha-devel-bounces at lists.koha-community.org [koha-devel-bounces at lists.koha-community.org] namens Tomas Cohen Arazi [tomascohen at gmail.com] Verzonden: donderdag 26 november 2015 20:36 Aan: koha-devel Onderwerp: [Koha-devel] Koha 3.22 released It is with great pleasure that we announce the release of Koha 3.22, a major release of the Koha open source integrated library system. Koha 3.22 is a major release, that comes with many new features. See the full release notes here: https://koha-community.org/koha-3-22-released/ Download: http://koha-community.org/download-koha/ Debian packages for this beta will be available soon on the unstable repository. Thanks everyone! -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andreas.Hedstrom.Mace at sub.su.se Thu Nov 26 23:07:33 2015 From: Andreas.Hedstrom.Mace at sub.su.se (=?utf-8?B?QW5kcmVhcyBIZWRzdHLDtm0gTWFjZQ==?=) Date: Thu, 26 Nov 2015 22:07:33 +0000 Subject: [Koha-devel] Koha 3.22 released In-Reply-To: References: Message-ID: Wonderful Tom?s! /Andreas Fr?n: Tomas Cohen Arazi > Datum: torsdag 26 november 2015 20:36 Till: koha-devel > ?mne: [Koha-devel] Koha 3.22 released It is with great pleasure that we announce the release of Koha 3.22, a major release of the Koha open source integrated library system. Koha 3.22 is a major release, that comes with many new features. See the full release notes here: https://koha-community.org/koha-3-22-released/ Download: http://koha-community.org/download-koha/ Debian packages for this beta will be available soon on the unstable repository. Thanks everyone! -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From hagud at orex.es Fri Nov 27 00:09:44 2015 From: hagud at orex.es (Hugo Agud) Date: Fri, 27 Nov 2015 00:09:44 +0100 Subject: [Koha-devel] Koha 3.22 released In-Reply-To: References: Message-ID: Incredible tomas! 2015-11-26 23:07 GMT+01:00 Andreas Hedstr?m Mace < Andreas.Hedstrom.Mace at sub.su.se>: > Wonderful Tom?s! > > /Andreas > > Fr?n: Tomas Cohen Arazi > Datum: torsdag 26 november 2015 20:36 > Till: koha-devel > ?mne: [Koha-devel] Koha 3.22 released > > It is with great pleasure that we announce the release of Koha 3.22, a > major release of the Koha open source integrated library system. > > Koha 3.22 is a major release, that comes with many new features. > > See the full release notes here: > https://koha-community.org/koha-3-22-released/ > Download: http://koha-community.org/download-koha/ > Debian packages for this beta will be available soon on the unstable > repository. > > Thanks everyone! > > -- > Tom?s Cohen Arazi > Theke Solutions (http://theke.io) > ? +54 9351 3513384 > GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- *Hugo Agud - Orex Digital * *www.orex.es * Director Passatge de la Llan?adera, 3 ? 08338 Premi? de Dalt - Tel: 93 539 40 70 hagud at orex.es ? 
http://www.orex.es/ No imprima este mensaje a no ser que sea necesario. Una tonelada de papel implica la tala de 15 ?rboles y el consumo de 250.000 litros de agua. Aviso de confidencialidad Este mensaje contiene informaci?n que puede ser CONFIDENCIAL y/o de USO RESTRINGIDO. Si usted no es el receptor deseado del mensaje (ni est? autorizado a recibirlo por el remitente), no est? autorizado a copiar, reenviar o divulgar el mensaje o su contenido. Si ha recibido este mensaje por error, por favor, notif?quenoslo inmediatamente y b?rrelo de su sistema. -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at bywatersolutions.com Fri Nov 27 06:01:38 2015 From: info at bywatersolutions.com (Brendan Gallagher) Date: Thu, 26 Nov 2015 21:01:38 -0800 Subject: [Koha-devel] Koha 3.22 released In-Reply-To: References: Message-ID: Excellent - great run Tomas On Thu, Nov 26, 2015 at 3:09 PM, Hugo Agud wrote: > Incredible tomas! > > 2015-11-26 23:07 GMT+01:00 Andreas Hedstr?m Mace < > Andreas.Hedstrom.Mace at sub.su.se>: > >> Wonderful Tom?s! >> >> /Andreas >> >> Fr?n: Tomas Cohen Arazi >> Datum: torsdag 26 november 2015 20:36 >> Till: koha-devel >> ?mne: [Koha-devel] Koha 3.22 released >> >> It is with great pleasure that we announce the release of Koha 3.22, a >> major release of the Koha open source integrated library system. >> >> Koha 3.22 is a major release, that comes with many new features. >> >> See the full release notes here: >> https://koha-community.org/koha-3-22-released/ >> Download: http://koha-community.org/download-koha/ >> Debian packages for this beta will be available soon on the unstable >> repository. >> >> Thanks everyone! >> >> -- >> Tom?s Cohen Arazi >> Theke Solutions (http://theke.io) >> ? +54 9351 3513384 >> GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F >> >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> website : http://www.koha-community.org/ >> git : http://git.koha-community.org/ >> bugs : http://bugs.koha-community.org/ >> > > > > -- > > *Hugo Agud - Orex Digital * > > *www.orex.es * > > > Director > > Passatge de la Llan?adera, 3 ? 08338 Premi? de Dalt - Tel: 93 539 40 70 > hagud at orex.es ? http://www.orex.es/ > > > > No imprima este mensaje a no ser que sea necesario. Una tonelada de papel > implica la tala de 15 ?rboles y el consumo de 250.000 litros de agua. > > > > Aviso de confidencialidad > Este mensaje contiene informaci?n que puede ser CONFIDENCIAL y/o de USO > RESTRINGIDO. Si usted no es el receptor deseado del mensaje (ni > est? autorizado a recibirlo por el remitente), no est? autorizado a > copiar, reenviar o divulgar el mensaje o su contenido. Si ha recibido este > mensaje > por error, por favor, notif?quenoslo inmediatamente y b?rrelo de su > sistema. > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- --------------------------------------------------------------------------------------------------------------- Brendan A. 
Gallagher ByWater Solutions CEO Support and Consulting for Open Source Software Installation, Data Migration, Training, Customization, Hosting and Complete Support Packages Headquarters: Santa Barbara, CA - Office: Redding, CT Phone # (888) 900-8944 http://bywatersolutions.com info at bywatersolutions.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.poulain at biblibre.com Fri Nov 27 15:59:06 2015 From: paul.poulain at biblibre.com (Paul Poulain) Date: Fri, 27 Nov 2015 15:59:06 +0100 Subject: [Koha-devel] Koha 3.22 released In-Reply-To: References: Message-ID: <56586FBA.1000507@biblibre.com> Le 27/11/2015 00:09, Hugo Agud a ?crit : > Incredible tomas! Is the one with the biggest superlative winning ? ;-) (Thanks Tomas, you deserve hall of fame. 3 releases in a row, with a starting company, a starting family, burglary... really, you deserve HOF, for sure ! ) -- Paul Poulain, Associ?-g?rant / co-owner BibLibre, Services en logiciels libres pour les biblioth?ques BibLibre, Open Source software and services for libraries From fridolin.somers at biblibre.com Fri Nov 27 18:05:27 2015 From: fridolin.somers at biblibre.com (Fridolin SOMERS) Date: Fri, 27 Nov 2015 18:05:27 +0100 Subject: [Koha-devel] Koha 3.22 released In-Reply-To: References: Message-ID: <56588D57.5000908@biblibre.com> 1000000+ Congratulations Le 26/11/2015 20:36, Tomas Cohen Arazi a ?crit : > It is with great pleasure that we announce the release of Koha 3.22, a > major release of the Koha open source integrated library system. > > Koha 3.22 is a major release, that comes with many new features. > > See the full release notes here: > https://koha-community.org/koha-3-22-released/ > Download: http://koha-community.org/download-koha/ > Debian packages for this beta will be available soon on the unstable > repository. > > Thanks everyone! > > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Fridolin SOMERS Biblibre - P?les support et syst?me fridolin.somers at biblibre.com From mirko at abunchofthings.net Fri Nov 27 19:36:07 2015 From: mirko at abunchofthings.net (Mirko Tietgen) Date: Fri, 27 Nov 2015 19:36:07 +0100 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: References: Message-ID: <5658A297.40309@abunchofthings.net> I set up Koha 3.22 on a Raspberry Pi 2, thinking about releasing an image so people can play with it. I was hoping to get results in the way I got back when I did it with the very first Raspi, which was slow, but kind of usable with Plack. With Plack enabled I get pretty much the search times I had with the Raspberry Pi 1 (the single cpu, 256mb RAM one) without Plack, running Koha 3.8. (zebra facets disabled) I have not tried the old benchmark script (or tried Jonathan's test) yet, but so far I share Jonathan's findings. That is in no way meant to bad-mouth a great new release, but the question probably is, do we just accept it that way or can we do something about it? And what could that be? Or is it the cost that comes with having a lot of great new features and internal changes? Hardware is getting faster too, so maybe comparing a 2015/16 release to one from 1,5 years (3.16) or even 3,5 years (3.8) ago does not make sense? 
Cheers, Mirko Jonathan Druart schrieb am 10.11.2015 > I use plack with the koha.psgi file from the source > (misc/plack/koha.psgi) and the Proxy/ProxyPass directives in the > apache config. > > For more info on what does the tests, please see the first lines on > the wiki page, which point to bug 13690 comment 2: > === > I have created a selenium script (see bug 13691) which: > - goes on the mainpage and processes a log in (main) > - creates a patron category (add patron category) > - creates a patron (add patron) > - adds 3 items (add items) > - checks the 3 items out to the patron (checkout) > - checks the 3 items in (checkin) > === > > > 2015-11-09 20:19 GMT+00:00 Tomas Cohen Arazi : >> >> >> 2015-11-09 13:30 GMT-03:00 Jonathan Druart >> : >>> >>> Hi devs, >>> >>> Please have a look at the these benchmarks: >>> >>> http://wiki.koha-community.org/wiki/Benchmark_for_3.22 >> >> >> It is expected that broader DBIC usage would have more footprint [1]. I >> wonder what your Plack setup is, as the packages integration uses Starman >> with prefork, so the load time should not be user-noticeable. >> >> Hum... >> >> I agree with Paul A, that running in Plack feels so close to zero-time, that >> those numbers sound unrealistic. Maybe we should go back to caching sysprefs >> (you could test that if you have the VM ready for it) and have the workers >> last shorter time (something between 1 and 5 requests) [2]. >> >> [1] Only as an example, we are now retireving sysprefs through DBIC. >> [2] Dobrica mentioned this in Marseille. I don't recall how many beers we >> had before that. >> >> -- >> Tom?s Cohen Arazi >> Theke Solutions (http://theke.io) >> ? +54 9351 3513384 >> GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From tomascohen at gmail.com Fri Nov 27 23:04:24 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Fri, 27 Nov 2015 19:04:24 -0300 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: References: Message-ID: Could you try reverting to old DBI syspref retrieval in C4::Context? 2015-11-10 5:20 GMT-03:00 Jonathan Druart < jonathan.druart at bugs.koha-community.org>: > I use plack with the koha.psgi file from the source > (misc/plack/koha.psgi) and the Proxy/ProxyPass directives in the > apache config. 
> > For more info on what does the tests, please see the first lines on > the wiki page, which point to bug 13690 comment 2: > === > I have created a selenium script (see bug 13691) which: > - goes on the mainpage and processes a log in (main) > - creates a patron category (add patron category) > - creates a patron (add patron) > - adds 3 items (add items) > - checks the 3 items out to the patron (checkout) > - checks the 3 items in (checkin) > === > > > 2015-11-09 20:19 GMT+00:00 Tomas Cohen Arazi : > > > > > > 2015-11-09 13:30 GMT-03:00 Jonathan Druart > > : > >> > >> Hi devs, > >> > >> Please have a look at the these benchmarks: > >> > >> http://wiki.koha-community.org/wiki/Benchmark_for_3.22 > > > > > > It is expected that broader DBIC usage would have more footprint [1]. I > > wonder what your Plack setup is, as the packages integration uses Starman > > with prefork, so the load time should not be user-noticeable. > > > > Hum... > > > > I agree with Paul A, that running in Plack feels so close to zero-time, > that > > those numbers sound unrealistic. Maybe we should go back to caching > sysprefs > > (you could test that if you have the VM ready for it) and have the > workers > > last shorter time (something between 1 and 5 requests) [2]. > > > > [1] Only as an example, we are now retireving sysprefs through DBIC. > > [2] Dobrica mentioned this in Marseille. I don't recall how many beers we > > had before that. > > > > -- > > Tom?s Cohen Arazi > > Theke Solutions (http://theke.io) > > ? +54 9351 3513384 > > GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Tom?s Cohen Arazi Theke Solutions (http://theke.io) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From mirko at abunchofthings.net Fri Nov 27 23:35:02 2015 From: mirko at abunchofthings.net (Mirko Tietgen) Date: Fri, 27 Nov 2015 23:35:02 +0100 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: <5658A297.40309@abunchofthings.net> References: <5658A297.40309@abunchofthings.net> Message-ID: <5658DA96.1080302@abunchofthings.net> Mirko Tietgen schrieb am 27.11.2015 > I have not tried the old benchmark script (or tried Jonathan's test) > yet, but so far I share Jonathan's findings. I added the benchmark results to the wiki page at http://wiki.koha-community.org/wiki/Benchmark_for_3.22#Koha_3.8_on_Raspberry_Pi_1_vs._Koha_3.22_on_Raspberry_Pi_2 Cheers, Mirko -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From mirko at abunchofthings.net Sat Nov 28 11:56:31 2015 From: mirko at abunchofthings.net (Mirko Tietgen) Date: Sat, 28 Nov 2015 11:56:31 +0100 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: <5658DA96.1080302@abunchofthings.net> References: <5658A297.40309@abunchofthings.net> <5658DA96.1080302@abunchofthings.net> Message-ID: <5659885F.1010905@abunchofthings.net> NYTProfile for searches in the OPAC and in the staff client, run on the RPi2 with Koha 3.22 For /usr/share/koha/opac/cgi-bin/opac/opac-search.pl: http://abunchofthings.net/koha/profile/opacsearch/ For /usr/share/koha/intranet/cgi-bin/catalogue/search.pl: http://abunchofthings.net/koha/profile/staffsearch/ Cheers, Mirko -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From paul.a at navalmarinearchive.com Sat Nov 28 17:57:10 2015 From: paul.a at navalmarinearchive.com (Paul A) Date: Sat, 28 Nov 2015 11:57:10 -0500 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: <5659885F.1010905@abunchofthings.net> References: <5658DA96.1080302@abunchofthings.net> <5658A297.40309@abunchofthings.net> <5658DA96.1080302@abunchofthings.net> Message-ID: <5.2.1.1.2.20151128112558.05577bd8@pop.navalmarinearchive.com> At 11:56 AM 11/28/2015 +0100, Mirko Tietgen wrote: >NYTProfile for searches in the OPAC and in the staff client, run on >the RPi2 with Koha 3.22 > >For /usr/share/koha/opac/cgi-bin/opac/opac-search.pl: >http://abunchofthings.net/koha/profile/opacsearch/ > >For /usr/share/koha/intranet/cgi-bin/catalogue/search.pl: >http://abunchofthings.net/koha/profile/staffsearch/ Mirko, Your numbers appear close to the testing we did (compare OPAC search 3.08.05 and 3.18.3) in January this year (see ) -- you obviously use 3.22 and a different database (how many biblios? and how many results found? I tested, by varying the search string, for anywhere between 0 and 25540 "results found" in a db with ~26k biblios and 50k items -- details on our website as above.) I could try and assist with 3.08 on a fast server (*not* a Raspberry Pi ;={ ) if there's any advantage to you (I'd try it on a production back-up server, hopefully minimum disruption, but would probably need your scripts and db.) While I no longer have NYTProf installed, I could do any "lesser" testing, or maybe even reinstall it... But then again, maybe the numbers I posted are sufficient for your comparisons? As you so diplomatically suggest in your first email on this subject, the search speed was the deciding factor that keeps us on 3.08 (now release 24) -- we just couldn't envision twenty second searches in production. Best regards -- Paul From info at bywatersolutions.com Sat Nov 28 21:53:48 2015 From: info at bywatersolutions.com (Brendan Gallagher) Date: Sat, 28 Nov 2015 12:53:48 -0800 Subject: [Koha-devel] Start of new release. Some updates. Message-ID: Now that the release is out, I wanted to give a quick little update for the following release. I will be traveling out of the country from December 1st till December 10th. Really what my plan is that the first few weeks we will not really be pushing new features. We want to concentrate on any bugs that are needed for 3.22.x and master. Kyle will be in the office and handling those needs. He will push and work on those bugs for us. 
Once I am back and settled in I will concentrate on bugs and new features. I am very open to discussion and I would like to talk with as many of you as I can about the great work that you're doing. Perhaps I will get a chance to speak with each of you about your development plans for the next few months - so we can have a clear path on timing, getting you help testing your developments, and of course working with the whole team to help QA code. I think as long as we keep our expectations on the same page, we'll have a lot of success helping you meet the goals of a great stable release. The more we can do to encourage initial sign-offs (probably our biggest bottleneck), the better. Let's see what we can put in place to encourage more of us to test the code. I will be encouraging our partner libraries and our staff to do as much as they can, even with our busy schedules. Be proud! I plan on doing a weekly release newsletter. Thanks and as always don't hesitate to ask some questions. Brendan A. Gallagher -------------- next part -------------- An HTML attachment was scrubbed... URL: From mirko at abunchofthings.net Sun Nov 29 10:16:52 2015 From: mirko at abunchofthings.net (Mirko Tietgen) Date: Sun, 29 Nov 2015 10:16:52 +0100 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: <5.2.1.1.2.20151128112558.05577bd8@pop.navalmarinearchive.com> References: <5658DA96.1080302@abunchofthings.net> <5658A297.40309@abunchofthings.net> <5658DA96.1080302@abunchofthings.net> <5.2.1.1.2.20151128112558.05577bd8@pop.navalmarinearchive.com> Message-ID: <565AC284.7040803@abunchofthings.net> Hi Paul, I only used figures for 3.8 because I already had them. I am not particularly interested in that version, as it is very out of date and has not had a maintenance release in almost two years. You should consider upgrading. 3.16 has been maintained until recently and would have been an option when you did your tests in January. I am doing the benchmarks with 3.16 and Plack atm. -- Mirko -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From paul.a at navalmarinearchive.com Sun Nov 29 18:42:35 2015 From: paul.a at navalmarinearchive.com (Paul A) Date: Sun, 29 Nov 2015 12:42:35 -0500 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: <565AC284.7040803@abunchofthings.net> References: <5.2.1.1.2.20151128112558.05577bd8@pop.navalmarinearchive.com> <5658DA96.1080302@abunchofthings.net> <5658A297.40309@abunchofthings.net> <5658DA96.1080302@abunchofthings.net> <5.2.1.1.2.20151128112558.05577bd8@pop.navalmarinearchive.com> Message-ID: <5.2.1.1.2.20151129105942.04a691a8@pop.navalmarinearchive.com> At 10:16 AM 11/29/2015 +0100, Mirko Tietgen wrote: >Hi Paul, >I only used figures for 3.8 because I already had them. I am not >particularly interested in that version, as it is very out of date >and has not had a maintenance release in almost two years. You >should consider upgrading. Hi Mirko, Many thanks for the reply. We update our production servers every two years, so in Jan. of 2015 we compared our existing 3.8.05 with the "latest" 3.18.3. The bottom line was that 3.8 could completely render "normal" searches (say less than 200 results) in < 750 ms (new faster server < 550ms) while 3.18 had trouble beating 2000ms, and "fell off the edge of the world" at 20+ seconds for the (abnormal?) cases of > 19,000 results.
[1] The staff interface for searching authorities (and pulling them up to edit them) took a slightly worse hit. I look forward to your trials with Plack, but *if* I understand Plack correctly, it will improve the web rendering only -- not the Koha/perl/Zebra/MySQL component of searching, which is where 3.18 took a bad hit for identical searches: 3.08: 604881 statements and 124374 subroutine calls in 306 source files and 84 string evals. 3.18: 1005756 statements and 247583 subroutine calls in 590 source files and 132 string evals. I can't explain this, but maybe you can? Your benchmarking for 3.22 (OK, different db) shows: 3.22: 1809166 statements and 578656 subroutine calls in 657 source files and 142 string evals. So this progression through the versions, for "just plain searching", has doubled the subroutine calls and source files (3.22 probably more than that.) Is this mission creep? or security? or even necessary? I certainly don't want to be negative about all the tremendous work that goes into Koha, but have to wonder why "core functions" have apparently been negatively impacted by added utilities, a more modern web experience and all the other highly desirable goodies. Best -- Paul [1] due partly to Zebra swamping a single core of the Intel I7 CPU (we tried the "-T" option, but "multi-threading" still went to a single core.) >3.16 has been maintained until recently and would have been an >option when you did your tests in January. > >I am doing the benchmarks with 3.16 and Plack atm. > >-- Mirko > > >_______________________________________________ >Koha-devel mailing list >Koha-devel at lists.koha-community.org >http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >website : http://www.koha-community.org/ >git : http://git.koha-community.org/ >bugs : http://bugs.koha-community.org/ --- Maritime heritage and history, preservation and conservation, research and education through the written word and the arts. and From dcook at prosentient.com.au Mon Nov 30 01:52:38 2015 From: dcook at prosentient.com.au (David Cook) Date: Mon, 30 Nov 2015 11:52:38 +1100 Subject: [Koha-devel] Proposed "metadata" table for Koha Message-ID: <006101d12b09$68dcdce0$3a9696a0$@prosentient.com.au> Hi all: For those not following along at http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662, we've recently started talking about the possibility of adding a "metadata" table to Koha. The basic schema I have in mind would be something like: metadata.id, metadata.record_id, metadata.scheme, metadata.qualifier, metadata.value. The row would look like: 1, 1, marc21, 001, 123456789 It might also be necessary to store "metadata.record_type" so as to know where metadata.record_id points. This obviously has a lot of disadvantages... redundant data between "metadata" rows, no database cascades via foreign keys, etc. However, it might be necessary in the short term as a temporary measure. Of course, adding "yet another place" to store metadata might not seem like a great idea. We already store metadata in biblioitems.marcxml (and biblioitems.marc), Zebra, and other biblio/biblioitems/items relational database fields. Do we really need a new place to worry about data? That said, if we're ever going to move away from MARC as the internal metadata format, we need to start transitioning to something new. I've noticed this "metadata" table model in DSpace and other library systems, and it seems to work reasonably well.
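To make that concrete, here is a rough sketch of the proposed schema in plain Perl/DBI. The table name, columns and example row are the ones described above; the DSN, credentials, column sizes and the index are only assumptions added for illustration:

use strict;
use warnings;
use DBI;

# Hypothetical connection details; adjust for a real Koha database.
my $dbh = DBI->connect( 'DBI:mysql:database=koha', 'koha_user', 'koha_pass',
    { RaiseError => 1 } );

# Illustrative DDL for the proposed table.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS metadata (
        id        INT(11)     NOT NULL AUTO_INCREMENT,
        record_id INT(11)     NOT NULL,   -- e.g. a biblionumber, for now
        scheme    VARCHAR(40) NOT NULL,   -- e.g. 'marc21'
        qualifier VARCHAR(40) NOT NULL,   -- e.g. '001'
        value     MEDIUMTEXT,
        PRIMARY KEY (id),
        KEY scheme_qualifier_value (scheme, qualifier, value(191))
    ) ENGINE=InnoDB
});

# The example row: 1, 1, marc21, 001, 123456789
$dbh->do(
    q{INSERT INTO metadata (record_id, scheme, qualifier, value) VALUES (?, ?, ?, ?)},
    undef, 1, 'marc21', '001', '123456789'
);

# Look up a record by its 001 without going through Zebra.
my ($record_id) = $dbh->selectrow_array(
    q{SELECT record_id FROM metadata WHERE scheme = ? AND qualifier = ? AND value = ?},
    undef, 'marc21', '001', '123456789'
);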
I don't know if we'd break down the whole record into this structure, or if we'd just break down certain fields as defined by a configuration file. In the short term, I'd like to use something like this to access a record's 001 without going to Zebra, which can be slow to update. I need to be able to query a record using the 001 as soon as it's added to the database, and I can't necessarily get that from Zebra. I also need to be able to query a record, even if Zebra is down. Failing the "metadata" table idea, I'm not sure how else we'd expose the 001 and any number of other fields without using Zebra. We store the 020 and 022 in biblioitems.isbn and biblioitems.issn, but we're putting multiple values in a single field, and that's not so great for searching. We might also want to add the 035 to the fields we're searching, so I don't think just adding to the biblio or biblioitems tables will really do, especially since we're trying to move away from MARC. Anyway, please let me know your thoughts. David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dcook at prosentient.com.au Mon Nov 30 04:30:27 2015 From: dcook at prosentient.com.au (David Cook) Date: Mon, 30 Nov 2015 14:30:27 +1100 Subject: [Koha-devel] Proposed "metadata" table for Koha In-Reply-To: References: <006101d12b09$68dcdce0$3a9696a0$@prosentient.com.au> Message-ID: <006d01d12b1f$74a5ab90$5df102b0$@prosentient.com.au> For now, I think the metadata.record_id would link to biblionumber, but long-term it would probably link to some "record" table. So if you wanted to get all bibliographic records, you'd do something like: select * from record join metadata ON record.id = metadata.record_id where record.type = 'bibliographic' Or maybe you want to search for a bibliographic record with a 001 of 123456789: select * from record join metadata ON record.id = metadata.record_id where record.type = 'bibliographic' and metadata.qualifier = '001' and metadata.value = '123456789' -- Of course, off the top of my head, I don't know how you'd store indicators and subfields in an extensible way. I suppose indicators are attributes and subfields are child elements... I suppose DSpace actually does an "element" and "qualifier" approach for DC. So you'd have a "dc", "author", "primary". Or "marc21", "100", "a". Of course, that creates a limit of a single level of hierarchy, which may or may not be desirable... and still doesn't account for indicators/attributes. I suppose there is more thinking to do there. David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 From: Barton Chittenden [mailto:barton at bywatersolutions.com] Sent: Monday, 30 November 2015 2:17 PM To: David Cook Subject: Re: [Koha-devel] Proposed "metadata" table for Koha > The basic schema I have in mind would be something like: metadata.id, metadata.record_id, metadata.scheme, metadata.qualifier, metadata.value. > > > > The row would look like: 1, 1, marc21, 001, 123456789 I think this is an interesting idea... Obviously the replication of biblio data is not ideal, but I think that that's a necessary and worthwhile trade-off in terms of moving away from MARC. How do you propose linking the metadata fields to the biblio records? Does the metadata.record_id link to biblionumber? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kohanews at gmail.com Mon Nov 30 05:41:45 2015 From: kohanews at gmail.com (Koha Newsletter) Date: Sun, 29 Nov 2015 20:41:45 -0800 Subject: [Koha-devel] Koha Community Newsletter: November 2015 Message-ID: Fellow Koha users: The Koha Community Newsletter for November 2015 is here: https://koha-community.org/koha-community-newsletter-november-2015/ Many thanks to the folks who submitted articles and news to this month's newsletter. Please feel free to email me with any corrections or suggestions. Thanks! Chad Roseburg Editor, Koha Community Newsletter -------------- next part -------------- An HTML attachment was scrubbed... URL: From barton at bywatersolutions.com Mon Nov 30 06:02:29 2015 From: barton at bywatersolutions.com (Barton Chittenden) Date: Mon, 30 Nov 2015 00:02:29 -0500 Subject: [Koha-devel] Proposed "metadata" table for Koha In-Reply-To: <006d01d12b1f$74a5ab90$5df102b0$@prosentient.com.au> References: <006101d12b09$68dcdce0$3a9696a0$@prosentient.com.au> <006d01d12b1f$74a5ab90$5df102b0$@prosentient.com.au> Message-ID: > > > > > Of course, off the top of my head, I don?t know how you?d store indicators > and subfields in an extensible way. I suppose indicators are attributes and > subfields are child elements... > > > > I suppose DSpace actually does a ?element? and ?qualifier? approach for > DC. So you?d have a ?dc?, ?author?, ?primary?. Or ?marc21? ?100? ?a?. Of > course, that creates a limit of a single level of hierarchy which may or > may not be desirable? and still doesn?t account for indicators/attributes. > > > > I suppose there is more thinking to do there. > My mind flew off into several different schemes for recursively sub-dividing metadata. I had to reboot my brain because I ran out of stack space. Dang infinite recursion. This reminded me of a Larry Wall quote ... my memory of the quote was about abstraction, but there was a bit more to it: I think that the biggest mistake people make is latching onto the first > idea that comes to them and trying to do that. It really comes to a thing > that my folks taught me about money. Don't buy something unless you've > wanted it three times. Similarly, don't throw in a feature when you first > think of it. Think if there's a way to generalize it, think if it should be > generalized. Sometimes you can generalize things too much. I think like the > things in Scheme were generalized too much. There is a level of abstraction > beyond which people don't want to go. Take a good look at what you want to > do, and try to come up with the long-term lazy way, not the short-term lazy > way. So... what's the long-term lazy way of handling the sub-division of metadata? --Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.hafen at washk12.org Mon Nov 30 07:10:28 2015 From: michael.hafen at washk12.org (Michael Hafen) Date: Sun, 29 Nov 2015 23:10:28 -0700 Subject: [Koha-devel] Proposed "metadata" table for Koha In-Reply-To: References: <006101d12b09$68dcdce0$3a9696a0$@prosentient.com.au> <006d01d12b1f$74a5ab90$5df102b0$@prosentient.com.au> Message-ID: I've been thinking along these lines too recently. I've been thinking 'wouldn't it be nice to do a No-SQL or directory hierarchy sort of thing where you just add subfields and attributes to the record as needed'. Of course in a relational database you would do that by having an attributed field that was serialized by some standard method. 
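A very rough sketch of that serialized-attributes idea, for illustration only: the 'ii' attribute name comes from the example further down, while the JSON encoding and the other keys are just assumptions, not an agreed format.

use strict;
use warnings;
use JSON qw( encode_json decode_json );

# One field row where everything beyond the plain value (indicators,
# authority links, ...) lives in a serialized attribute blob.
my $field = {
    scheme     => 'marc21',
    qualifier  => '245a',
    value      => 'The history of the library',
    attributes => encode_json( { ii => 4, authid => 1234 } ),  # 'ii': ignore first N chars when indexing
};

# Reading the attributes back means decoding the blob first.
my $attrs = decode_json( $field->{attributes} );
printf "skip %d characters when filing/indexing\n", $attrs->{ii};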
But you could only have non-critical information there, since it would have to be unserialized to query it. As far as indicators you would have to have some internally consistent way to map them to serialized attributes (and some subfields would be able to be handled that way too). For example with the indicator for MARC21 245$a you would have an attribute like 'ii',3 for 'indexing ignore first three characters', and you could do the same with authority id's instead of using $9 (of the top of my head) And of course you would have to have some framework(s) in order to convert the metadata to other formats (MARC21, UNIMARC, NORMARC, and DC for example), which would make the requirements for attributes quite large (to handle all the possible indicators and serializable subfields). Something like that I suppose. On Sun, Nov 29, 2015 at 10:02 PM, Barton Chittenden < barton at bywatersolutions.com> wrote: > >> >> >> Of course, off the top of my head, I don?t know how you?d store >> indicators and subfields in an extensible way. I suppose indicators are >> attributes and subfields are child elements... >> >> >> >> I suppose DSpace actually does a ?element? and ?qualifier? approach for >> DC. So you?d have a ?dc?, ?author?, ?primary?. Or ?marc21? ?100? ?a?. Of >> course, that creates a limit of a single level of hierarchy which may or >> may not be desirable? and still doesn?t account for indicators/attributes. >> >> >> >> I suppose there is more thinking to do there. >> > > My mind flew off into several different schemes for recursively > sub-dividing metadata. I had to reboot my brain because I ran out of stack > space. Dang infinite recursion. This reminded me of a Larry Wall quote ... > my memory of the quote was about abstraction, but there was a bit more to > it: > > I think that the biggest mistake people make is latching onto the first >> idea that comes to them and trying to do that. It really comes to a thing >> that my folks taught me about money. Don't buy something unless you've >> wanted it three times. Similarly, don't throw in a feature when you first >> think of it. Think if there's a way to generalize it, think if it should be >> generalized. Sometimes you can generalize things too much. I think like the >> things in Scheme were generalized too much. There is a level of abstraction >> beyond which people don't want to go. Take a good look at what you want to >> do, and try to come up with the long-term lazy way, not the short-term lazy >> way. > > > So... what's the long-term lazy way of handling the sub-division of > metadata? > > --Barton > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Michael Hafen Washington County School District Technology Department Systems Analyst -------------- next part -------------- An HTML attachment was scrubbed... URL: From mirko at abunchofthings.net Mon Nov 30 11:07:40 2015 From: mirko at abunchofthings.net (Mirko Tietgen) Date: Mon, 30 Nov 2015 11:07:40 +0100 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: References: Message-ID: <565C1FEC.4000909@abunchofthings.net> Tomas Cohen Arazi schrieb am 27.11.2015 > Could you try reverting to old DBI syspref retrieval in C4::Context? 
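As an aside on point 1, the kind of reuse that is lost when the cache gets flushed can be sketched roughly like this. It is illustrative only -- it assumes the GetMarcStructure($forlibrarian, $frameworkcode) signature from C4::Biblio and is not meant as an actual patch:

use strict;
use warnings;
use C4::Biblio qw( GetMarcStructure );

# Fetch each framework structure once per request and reuse it for every
# result on the page, instead of hitting the database per record.
my %structure_cache;

sub cached_marc_structure {
    my ( $forlibrarian, $frameworkcode ) = @_;
    my $key = ( $forlibrarian ? 1 : 0 ) . ':' . ( $frameworkcode // '' );
    $structure_cache{$key} //= GetMarcStructure( $forlibrarian, $frameworkcode );
    return $structure_cache{$key};
}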
I did a lot of speed tests over the weekend, and I think that is not the main problem. I do not see a huge drop in speed in all areas, but search speed with Plack in 3.18 is 1/3 of that in 3.16. I found two areas so far that seem problematic. 1. The anti-cache stuff introduced in bug 11842 http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=11842 The cache is deleted all the time, eg. the framework structure is fetched again and again for every single result in a search (related to 2. below) When I revert that patch, I speed up search from 30 to 23 seconds for a repeated search in OPAC with multiple results (once the frameworks are cached). Not enough, but a good start. 2. The handling of XSLT when building the results page, related to C4::XSLT. When I tested Koha 3.8 on the Raspberry Pi in 2012, a search with one result took 2x the time of one with multiple results, because after finding the single result, you get redirected to the detail page automatically. Nowadays, the search with a single result is much faster than that of one with multiple results, because we lose a lot of time building the results page. Also, a search with multiple results is twice as fast when XSLT view is disabled in the sysprefs. I have not tried how it was in 3.16 yet. For every single result in a search, a lot of things are repeated all over again. A huge bunch of sysprefs is fetched every time, even though it is not likely they will change while fetching all results of one search query. GetMarcStructure seems to be a problem (see 1. above) but others may be too. Bug 11051 seems related. http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=11051 There is probably more and I will do more testing. Jonathan: could you check if the first point may be relevant for your selenium tests? Cheers, Mirko -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From mirko at abunchofthings.net Mon Nov 30 12:05:09 2015 From: mirko at abunchofthings.net (Mirko Tietgen) Date: Mon, 30 Nov 2015 12:05:09 +0100 Subject: [Koha-devel] Benchmarks - prior to the 3.22 release In-Reply-To: <565C1FEC.4000909@abunchofthings.net> References: <565C1FEC.4000909@abunchofthings.net> Message-ID: <565C2D65.4030301@abunchofthings.net> I opened Bug 15262. http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=15262 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From barton at bywatersolutions.com Mon Nov 30 14:35:34 2015 From: barton at bywatersolutions.com (Barton Chittenden) Date: Mon, 30 Nov 2015 08:35:34 -0500 Subject: [Koha-devel] Debugging in vim Message-ID: Hi, has anyone gotten debugging to work in vim, using the instructions here? http://wiki.koha-community.org/wiki/Debugging_in_VIM I tried it, and ran into dependency hell... If I remember correctly, Komodo-PerlRemoteDebugging-4.4.1 didn't play nicely with https://github.com/joonty/vdebug. If someone *has* gotten this working, I'd like to know how it was done, so that I can update the wiki. Thanks, --Barton -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tomascohen at gmail.com Mon Nov 30 15:19:34 2015 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Mon, 30 Nov 2015 11:19:34 -0300 Subject: [Koha-devel] Proposed "metadata" table for Koha In-Reply-To: <006101d12b09$68dcdce0$3a9696a0$@prosentient.com.au> References: <006101d12b09$68dcdce0$3a9696a0$@prosentient.com.au> Message-ID: 2015-11-29 21:52 GMT-03:00 David Cook : > Hi all: > > > > For those not following along at > http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662, we've > recently started talking about the possibility of adding a "metadata" table > to Koha. > > > > The basic schema I have in mind would be something like: metadata.id, > metadata.record_id, metadata.scheme, metadata.qualifier, metadata.value. > > > > The row would look like: 1, 1, marc21, 001, 123456789 > > > > It might also be necessary to store "metadata.record_type" so as to know > where metadata.record_id points. This obviously has a lot of disadvantages... > redundant data between "metadata" rows, no database cascades via foreign > keys, etc. However, it might be necessary in the short term as a temporary > measure. > > > > Of course, adding "yet another place" to store metadata might not seem > like a great idea. We already store metadata in biblioitems.marcxml (and > biblioitems.marc), Zebra, and other biblio/biblioitems/items relational > database fields. Do we really need a new place to worry about data? > I think we should have a metadata_record table storing the serialized metadata, and more needed information (basically the fields Koha::MetadataRecord has...) and let the fulltext engine do the job for accessing those values. The codebase is already too bloated trying to band-aid our "minimal" usage of the search engines' features. Of course, while trying to fix that we might find our search engine has problems and/or broken functionalities (Zebra facets are so slow that they are not cool). But we should definitely get rid of tons of code in favour of using the search engine more, and probably have QueryParser be the standard, having a driver for ES... -- Tomás Cohen Arazi Theke Solutions (http://theke.io) ☎ +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From julian.maurice at biblibre.com Mon Nov 30 15:30:22 2015 From: julian.maurice at biblibre.com (Julian Maurice) Date: Mon, 30 Nov 2015 15:30:22 +0100 Subject: [Koha-devel] Debugging in vim In-Reply-To: References: Message-ID: <565C5D7E.3020203@biblibre.com> I do it daily, it's a life changer! :) But everything seems ok in the wiki. What is the dependency hell you are talking about? Le 30/11/2015 14:35, Barton Chittenden a écrit : > Hi, has anyone gotten debugging to work in vim, using the instructions here? > > http://wiki.koha-community.org/wiki/Debugging_in_VIM > > I tried it, and ran into dependency hell... If I remember > correctly, Komodo-PerlRemoteDebugging-4.4.1 didn't play nicely > with https://github.com/joonty/vdebug. > > If someone *has* gotten this working, I'd like to know how it was done, > so that I can update the wiki.
> > Thanks, > > --Barton > > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Julian Maurice BibLibre From dcook at prosentient.com.au Mon Nov 30 23:36:22 2015 From: dcook at prosentient.com.au (David Cook) Date: Tue, 1 Dec 2015 09:36:22 +1100 Subject: [Koha-devel] Proposed "metadata" table for Koha In-Reply-To: References: <006101d12b09$68dcdce0$3a9696a0$@prosentient.com.au> Message-ID: <00b701d12bbf$8a0b2870$9e217950$@prosentient.com.au> I'm not 100% sure what I think yet, but in the past I was certainly in favour of a metadata_record table that stored the serialized metadata and whatever else it needed to support that. I still think it's an all right idea to have that table. In general, I'm in favour of using the full text engine for searching, although as Katrin has noted on http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10662, Zebra isn't necessarily updated fast enough to be used for "matching" when importing records, especially when records are being downloaded and imported every 2-3 seconds. Also, what happens if Zebra goes down? Suddenly your catalogue gets flooded with duplicates. I suppose one way of getting around that is querying Zebra to make sure it is alive before starting an import. However, that doesn't solve the speed problem. I don't think there's any reliable way of knowing if the record you want to match on has already been indexed (or indexed again) in Koha. Don't we only update Zebra once every 60 seconds? The OAI-PMH import wouldn't be the only one affected by the indexing. The OCLC Connexion daemon and any Staged Import both use Zebra for matching. If Zebra hasn't indexed relevant additions or updates, the matching won't work when it should work. For records in the hundreds, thousands, and millions, that can cause major problems both with duplicates and failed updates. Maybe a "metadata" table is overkill. In fact, I can't necessarily see a lot of advantages to storing mass quantities of metadata in the relational database off the top of my head, but perhaps some way of keeping record identifiers in the relational database would be doable. If we think about the metadata in terms of a "source of truth", the relational database is always going to contain the source of truth. Zebra is basically just an indexed cache, and when it comes to importing records... I'd rather be querying a source of truth than a cache, as the cache might be stale. At the moment, it's going to be stale by at least 1-59 seconds... longer if Zebra has a lot of indexing jobs to do when it receives an update. Maybe there's a way to mitigate that. Like... waiting to do an import until Zebra has reported that it's emptied the zebraqueue X seconds ago, although zebraqueue may never be empty. There's always that possibility that you're going to miss something, and that possibility doesn't exist in the relational database, as it's the source of truth. If the identifier doesn't exist in the database, then it doesn't exist for that record (or there's a software bug which can be fixed). While we probably could use the search engine more throughout Koha, I think it might not be wise to use it during an import. (As for the QueryParser, I totally agree about it being the standard, and creating a driver for ES.
I chatted with Robin about this a bit over the past few months, but I haven't had time to help out with that. The QueryParser also isn't quite right for Zebra either yet, so it would probably make sense to focus on finalizing the PQF driver first.) David Cook Systems Librarian Prosentient Systems 72/330 Wattle St, Ultimo, NSW 2007 From: Tomas Cohen Arazi [mailto:tomascohen at gmail.com] Sent: Tuesday, 1 December 2015 1:20 AM To: David Cook Cc: koha-devel Subject: Re: [Koha-devel] Proposed "metadata" table for Koha I think we should have a metadata_record table storing the serialized metadata, and more needed information (basically the fields Koha::MetadataRecord has...) and let the fulltext engine do the job for accessing those values. The codebase is already too bloated trying to band-aid our "minimal" usage of the search engines' features. Of course, while trying to fix that we might find our search engine has problems and/or broken functionalities (zebra facets are so slow that are not cool). But we should definitely get rid of tons of code in favour of using the search engine more, and probably have QueryParser be the standard, having a driver for ES... -- Tomás Cohen Arazi Theke Solutions (http://theke.io) ☎ +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: From dcook at prosentient.com.au Mon Nov 30 23:42:55 2015 From: dcook at prosentient.com.au (David Cook) Date: Tue, 1 Dec 2015 09:42:55 +1100 Subject: [Koha-devel] Proposed "metadata" table for Koha In-Reply-To: References: <006101d12b09$68dcdce0$3a9696a0$@prosentient.com.au> Message-ID: <00bc01d12bc0$7420b1f0$5c6215d0$@prosentient.com.au>
The codebase is already too bloated trying to band-aid our "minimal" usage of the search engines' features. Of course, while trying to fix that we might find our search engine has problems and/or broken functionalities (zebra facets are so slow that are not cool). But we should definitely get rid of tons of code in favour of using the search engine more, and probably have QueryParser be the standard, having a driver for ES... -- Tom?s Cohen Arazi Theke Solutions (http://theke.io ) ? +54 9351 3513384 GPG: B76C 6E7C 2D80 551A C765 E225 0A27 2EA1 B2F3 C15F -------------- next part -------------- An HTML attachment was scrubbed... URL: