From rijal.it at gmail.com Fri Oct 1 13:28:15 2010 From: rijal.it at gmail.com (Nitesh Rijal) Date: Fri, 1 Oct 2010 17:13:15 +0545 Subject: [Koha-devel] [koha] mail configuraiton help In-Reply-To: References: Message-ID: Thanks, I was wondering whether it is possible to use koha server in one server and use existing mail server from another server. So, if possible, please guide me through the steps. Regards. On Thu, Sep 30, 2010 at 8:05 PM, Mike Hafen wrote: > Koha relies a lot on the mail functions of the server. For example I have > my server running postfix and set to send email through another mail server. > > As far as Koha itself, a lot of the mail functionality runs from cron. > There are a couple perl scripts, most notably in this case the following: > overdue_notices.pl, advance_notices.pl, and process_message_queue.pl. > These are in the crontab.example. > > 2010/9/29 Nitesh Rijal > >> Hello all. >> >> I have been running koha in my server for about 10 libraries. Each of them >> have a different public IP for accesssing. >> >> What are the things that I need inorder to use the mail functionality for >> sending overdue notices and other related functions? >> >> Is there some step by step guide for it? >> >> We already have a mail server at some other IP address, so is it possible >> to use that server as mail server for koha as well or is it necessary that >> the machine that has koha server, should also have mail server configured in >> it? >> >> Please reply. >> >> Regards. >> >> -- >> Nitesh Rijal >> BE IT >> rijal.it at gmail.com >> http://niteshrijal.com.np >> http://facebook.com/openrijal >> http://twitter.com/openrijal >> +9779841458173 >> >> _______________________________________________ >> Koha-devel mailing list >> Koha-devel at lists.koha-community.org >> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >> > > -- Nitesh Rijal BE IT rijal.it at gmail.com http://niteshrijal.com.np http://facebook.com/openrijal http://twitter.com/openrijal +9779841458173 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdhafen at tech.washk12.org Fri Oct 1 16:39:27 2010 From: mdhafen at tech.washk12.org (Mike Hafen) Date: Fri, 1 Oct 2010 08:39:27 -0600 Subject: [Koha-devel] [koha] mail configuraiton help In-Reply-To: References: Message-ID: I'm afraid I can't help much. I have set my server with the postfix mail package, and set relayhost to outbound.washk12.org. C4::Letters uses the Mail::Sendmail module, which uses the mail package installed on the server (postfix in my case, which provides a sendmail like wrapper). What I can't help with is how outbound is setup. It is in dns with spf and mx entries, and it had to be set to allow relaying for my server. That was all done by one of my coworkers. Maybe someone else on the list can provide further information. On Fri, Oct 1, 2010 at 5:28 AM, Nitesh Rijal wrote: > Thanks, > > I was wondering whether it is possible to use koha server in one server and > use existing mail server from another server. So, if possible, please guide > me through the steps. > > Regards. > > > On Thu, Sep 30, 2010 at 8:05 PM, Mike Hafen wrote: > >> Koha relies a lot on the mail functions of the server. For example I have >> my server running postfix and set to send email through another mail server. >> >> As far as Koha itself, a lot of the mail functionality runs from cron. 
>> There are a couple perl scripts, most notably in this case the following: >> overdue_notices.pl, advance_notices.pl, and process_message_queue.pl. >> These are in the crontab.example. >> >> 2010/9/29 Nitesh Rijal >> >>> Hello all. >>> >>> I have been running koha in my server for about 10 libraries. Each of >>> them have a different public IP for accesssing. >>> >>> What are the things that I need inorder to use the mail functionality for >>> sending overdue notices and other related functions? >>> >>> Is there some step by step guide for it? >>> >>> We already have a mail server at some other IP address, so is it possible >>> to use that server as mail server for koha as well or is it necessary that >>> the machine that has koha server, should also have mail server configured in >>> it? >>> >>> Please reply. >>> >>> Regards. >>> >>> -- >>> Nitesh Rijal >>> BE IT >>> rijal.it at gmail.com >>> http://niteshrijal.com.np >>> http://facebook.com/openrijal >>> http://twitter.com/openrijal >>> +9779841458173 >>> >>> _______________________________________________ >>> Koha-devel mailing list >>> Koha-devel at lists.koha-community.org >>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >>> >> >> > > > -- > Nitesh Rijal > BE IT > rijal.it at gmail.com > http://niteshrijal.com.np > http://facebook.com/openrijal > http://twitter.com/openrijal > +9779841458173 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ohiocore at gmail.com Fri Oct 1 17:25:56 2010 From: ohiocore at gmail.com (Joe Atzberger) Date: Fri, 1 Oct 2010 11:25:56 -0400 Subject: [Koha-devel] [koha] mail configuraiton help In-Reply-To: References: Message-ID: You still need something to get messages from the Koha server to your other mailserver. In all cases, the simplest way to accomplish that is w/ a mailserver on the Koha box. It can be configured such that it doesn't connect to any other systems, but only to your other mailserver. --Joe 2010/10/1 Nitesh Rijal > Thanks, > > I was wondering whether it is possible to use koha server in one server and > use existing mail server from another server. So, if possible, please guide > me through the steps. > > Regards. > > > On Thu, Sep 30, 2010 at 8:05 PM, Mike Hafen wrote: > >> Koha relies a lot on the mail functions of the server. For example I have >> my server running postfix and set to send email through another mail server. >> >> As far as Koha itself, a lot of the mail functionality runs from cron. >> There are a couple perl scripts, most notably in this case the following: >> overdue_notices.pl, advance_notices.pl, and process_message_queue.pl. >> These are in the crontab.example. >> >> 2010/9/29 Nitesh Rijal >> >>> Hello all. >>> >>> I have been running koha in my server for about 10 libraries. Each of >>> them have a different public IP for accesssing. >>> >>> What are the things that I need inorder to use the mail functionality for >>> sending overdue notices and other related functions? >>> >>> Is there some step by step guide for it? >>> >>> We already have a mail server at some other IP address, so is it possible >>> to use that server as mail server for koha as well or is it necessary that >>> the machine that has koha server, should also have mail server configured in >>> it? >>> >>> Please reply. >>> >>> Regards. 
>>> >>> -- >>> Nitesh Rijal >>> BE IT >>> rijal.it at gmail.com >>> http://niteshrijal.com.np >>> http://facebook.com/openrijal >>> http://twitter.com/openrijal >>> +9779841458173 >>> >>> _______________________________________________ >>> Koha-devel mailing list >>> Koha-devel at lists.koha-community.org >>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >>> >> >> > > > -- > Nitesh Rijal > BE IT > rijal.it at gmail.com > http://niteshrijal.com.np > http://facebook.com/openrijal > http://twitter.com/openrijal > +9779841458173 > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rijal.it at gmail.com Fri Oct 1 17:44:48 2010 From: rijal.it at gmail.com (Nitesh Rijal) Date: Fri, 1 Oct 2010 21:29:48 +0545 Subject: [Koha-devel] [koha] mail configuraiton help In-Reply-To: References: Message-ID: thanks... I have installed postfix in my koha box... its host name is kohamail.healthnet.org.np when I use #mailx nitesh at healthnet.org.np it has errors to send the mail, the log has some error messages, moreover, how is the email address managed? I guess it will be like "abc at kohamail.healthnet.org.np", is this kind of email address allowed? can some one please send me the postfix configuration that I need to change in order to get mail working in my KOHA box? that would really be a great help. I cannot find any good resource for setting up the koha mail. regards. On Fri, Oct 1, 2010 at 9:10 PM, Joe Atzberger wrote: > You still need something to get messages from the Koha server to your other > mailserver. In all cases, the simplest way to accomplish that is w/ a > mailserver on the Koha box. It can be configured such that it doesn't > connect to any other systems, but only to your other mailserver. > > --Joe > > 2010/10/1 Nitesh Rijal > > Thanks, >> >> I was wondering whether it is possible to use koha server in one server >> and use existing mail server from another server. So, if possible, please >> guide me through the steps. >> >> Regards. >> >> >> On Thu, Sep 30, 2010 at 8:05 PM, Mike Hafen wrote: >> >>> Koha relies a lot on the mail functions of the server. For example I >>> have my server running postfix and set to send email through another mail >>> server. >>> >>> As far as Koha itself, a lot of the mail functionality runs from cron. >>> There are a couple perl scripts, most notably in this case the following: >>> overdue_notices.pl, advance_notices.pl, and process_message_queue.pl. >>> These are in the crontab.example. >>> >>> 2010/9/29 Nitesh Rijal >>> >>>> Hello all. >>>> >>>> I have been running koha in my server for about 10 libraries. Each of >>>> them have a different public IP for accesssing. >>>> >>>> What are the things that I need inorder to use the mail functionality >>>> for sending overdue notices and other related functions? >>>> >>>> Is there some step by step guide for it? >>>> >>>> We already have a mail server at some other IP address, so is it >>>> possible to use that server as mail server for koha as well or is it >>>> necessary that the machine that has koha server, should also have mail >>>> server configured in it? >>>> >>>> Please reply. >>>> >>>> Regards. 
>>>>
>>>> --
>>>> Nitesh Rijal
>>>> BE IT
>>>> rijal.it at gmail.com
>>>> http://niteshrijal.com.np
>>>> http://facebook.com/openrijal
>>>> http://twitter.com/openrijal
>>>> +9779841458173
>>>>
>>>> _______________________________________________
>>>> Koha-devel mailing list
>>>> Koha-devel at lists.koha-community.org
>>>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>>>>
>>>
>>>
>>
>>
>> --
>> Nitesh Rijal
>> BE IT
>> rijal.it at gmail.com
>> http://niteshrijal.com.np
>> http://facebook.com/openrijal
>> http://twitter.com/openrijal
>> +9779841458173
>>
>> _______________________________________________
>> Koha-devel mailing list
>> Koha-devel at lists.koha-community.org
>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>>
>

--
Nitesh Rijal
BE IT
rijal.it at gmail.com
http://niteshrijal.com.np
http://facebook.com/openrijal
http://twitter.com/openrijal
+9779841458173
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From henridamien.laurent at biblibre.com  Mon Oct  4 10:10:41 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Mon, 04 Oct 2010 10:10:41 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
Message-ID: <4CA98C01.8080709@biblibre.com>

Hi,

As you already read in Paul's previous message about "BibLibre strategy for 3.4 and next version", we are growing and want to stay as involved in the community as before. Paul promised some POCs; here is one. We also worked on Plack and support, and we built a base of scripts to hunt for memory leaks. We'll demonstrate that later.

Zebra is fast and embeds a native Z39.50 server. But it also has some major drawbacks that we cope with every day and that make it quite difficult to maintain.

1. Zebra config files are a nightmare. You can't drive the configuration easily: indexes cannot be edited via HTTP or through configuration, because everything is hardcoded in files on disk. => you can't list indexes, you can't change indexes, you can't edit indexes, you can't say "I want this index in the OPAC, that one in the intranet". (It could be done by scraping ccl.properties, then record.abs and bib1.att... but what a HELL.) So you cannot easily customize the configuration to define the indexes you want. And people do not get a translation of the indexes, since all the indexes are hardcoded in ccl.properties and we do not have a translation process that would let CCL attributes be translated into different languages.

2. No real-time indexing: the use of a crontab is poor. When you add an authority while creating a biblio, you have to wait some minutes to finish your biblio. (This might be solvable, since Zebra has a way to index biblios via Z39.50 extended services, but it is hard, it should be tested, and when the community first tested it a performance problem was raised on indexing.)

3. No way to access, process or delete data easily. If you have stale indexes or problems with your data, you have to reindex the whole catalogue, and indexing errors are quite difficult to detect.

4. During the indexing of a file, if you have a problem in your data, zebraidx just fails silently... and this is NOT safe: you have no way to know WHICH biblio made the process crash. We had a LOT of trouble with the Aix-Marseille universities, which have some Arabic transliterated biblios that make Zebra/ICU crash completely! We had to write a recursive script to find the 14 biblios out of 730,000 that crash Zebra (even though they are properly stored and displayed).

5. Facets are not working properly: they are built only on the displayed results, because there are problems with diacritics and facets that can't be solved as of today, and no one can provide a solution (we spoke about that with Index Data and no clear solution was really offered).

6. Zebra does not evolve anymore. There is no real community around it; it is just an open-source Index Data product. We sent many questions to the list and never got answers. We could pay for better support, but the required fee is quite a deterrent and the benefit is still questionable.

7. ICU and Zebra are colleagues, not really friends: right truncation not working, fuzzy search not working, and facets.

8. We use a deprecated way to define indexes for biblios (GRS-1), and the tool developed by Index Data to change to DOM indexing has many flaws. We could manage and make do with it, but is it worth the effort?

I think that everyone agrees that we have to refactor C4::Search. Indeed, the query parser is not able to manage all the configuration options independently. And the use of USMARC as the internal format for biblios comes with a serious limitation of 9999 bytes, which is not enough for big biblios with many items.

BibLibre investigated a catalogue based on Solr. A university in France contracted us for that development. This university is in contact with the whole community here in France, and Solr will certainly be adopted by libraries France-wide. We are planning to release the code on our git early next spring and rebase onto whatever Koha version is released at that time, 3.4 or 3.6.

Why?

Solr indexes data over HTTP. It can provide fuzzy search, search on synonyms, and suggestions. It can provide facet search and stemming. UTF-8 support is embedded. The community is impressively reactive, numerous and efficient, and the documentation is very good and exhaustive.

You can see the results on solr.biblibre.com and catalogue.solr.biblibre.com:

http://catalogue.solr.biblibre.com/cgi-bin/koha/opac-search.pl?q=jean
http://solr.biblibre.com/cgi-bin/koha/admin/admin-home.pl

You can log in there with the demo/demo login/password.

http://solr.biblibre.com/cgi-bin/koha/solr/indexes.pl is the page where people can manage their indexes and links.

a) Librarians can define their own indexes, and there is a plugin that fetches data from rejected authorities and from authorised_values (that could/should have been achievable with Zebra, but only with major work on XSLT).

b) The line count of C4/Search.pm could be shrunk ten times.

You can test from the poc_solr branch on git://git.biblibre.com/koha_biblibre.git but you have to install Solr.

Any feedback/ideas welcome.
--
Henri-Damien LAURENT
BibLibre

From tajoli at cilea.it  Mon Oct  4 11:04:47 2010
From: tajoli at cilea.it (Zeno Tajoli)
Date: Mon, 04 Oct 2010 11:04:47 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA98C01.8080709@biblibre.com>
References: <4CA98C01.8080709@biblibre.com>
Message-ID: <4CA998AF.7040204@cilea.it>

Hi to all,

>Any feedback/idea welcome.

the main problem that I see is that Zebra is much lighter on RAM. Solr needs Java plus a J2EE app server (Tomcat, Jetty?). If we select Solr, can we set up a library with 512 MB of RAM in the server?
Bye
--
Zeno Tajoli
CILEA - Segrate (MI)
tajoliAT_SPAM_no_prendiATcilea.it
(Anti-spam masked address: replace the AT parts with @)

From salvazm at masmedios.com  Mon Oct  4 11:30:31 2010
From: salvazm at masmedios.com (Salvador Zaragoza Rubio)
Date: Mon, 04 Oct 2010 11:30:31 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA998AF.7040204@cilea.it>
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it>
Message-ID: <4CA99EB7.8060702@masmedios.com>

Hi,

Maybe a possible alternative to Lucene with Java could be CLucene (http://clucene.sourceforge.net/), to increase performance. But it seems to be at a less mature stage than its Java sibling, and it is only the library behind the search engine. It has a CPAN module: http://search.cpan.org/~tbusch/Lucene-0.18/lib/Lucene.pm

Salva

El 04/10/2010 11:04, Zeno Tajoli escribió:
> Hi to all,
>
>> Any feedback/idea welcome.
>
> the main problem that I see is that Zebra is much lighter
> on RAM.
> Solr needs Java plus a J2EE app server (Tomcat, Jetty?).
> If we select Solr, can we set up a library with 512 MB of RAM in the
> server?
>
> Bye
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5527 bytes
Desc: S/MIME Cryptographic Signature
URL:

From colin.campbell at ptfs-europe.com  Mon Oct  4 11:40:57 2010
From: colin.campbell at ptfs-europe.com (Colin Campbell)
Date: Mon, 04 Oct 2010 10:40:57 +0100
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA99EB7.8060702@masmedios.com>
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA99EB7.8060702@masmedios.com>
Message-ID: <4CA9A129.8010905@ptfs-europe.com>

On 04/10/10 10:30, Salvador Zaragoza Rubio wrote:
> Hi,
>
> Maybe a possible alternative to Lucene with Java could be CLucene
> (http://clucene.sourceforge.net/), to increase performance.
> But it seems to be at a less mature stage than its Java sibling, and it
> is only the library behind the search engine.
> It has a CPAN module: http://search.cpan.org/~tbusch/Lucene-0.18/lib/Lucene.pm
>

If you are looking at CPAN modules, take a look at KinoSearch as well.

--
Colin Campbell
Chief Software Engineer,
PTFS Europe Limited
Content Management and Library Solutions
+44 (0) 208 366 1295 (phone)
+44 (0) 7759 633626 (mobile)
colin.campbell at ptfs-europe.com
skype: colin_campbell2
http://www.ptfs-europe.com

From laurenthdl at alinto.com  Mon Oct  4 12:38:54 2010
From: laurenthdl at alinto.com (LAURENT Henri-Damien)
Date: Mon, 04 Oct 2010 12:38:54 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA998AF.7040204@cilea.it>
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it>
Message-ID: <4CA9AEBE.2040505@alinto.com>

Le 04/10/2010 11:04, Zeno Tajoli a écrit :
> Hi to all,
>
>> Any feedback/idea welcome.
>
> the main problem that I see is that Zebra is much lighter
> on RAM.
> Solr needs Java plus a J2EE app server (Tomcat, Jetty?).
> If we select Solr, can we set up a library with 512 MB of RAM in the
> server?
>
> Bye
>

Looking at most users we know, the problem of little RAM is the least of their problems; accurate faceting with correct encoding, and the ease of using, adding and displaying indexes, are more of their concern.
So yes, RAM is a problem, but it could be overcome with a common web service, say on solr.koha-community.org for instance, that people could provide on an external server to index data and fetch search results from.

We chose to base our POC on Data::SearchEngine, if you want to know more details about the dependencies.

--
Henri-Damien LAURENT

From mdhafen at tech.washk12.org  Mon Oct  4 15:46:49 2010
From: mdhafen at tech.washk12.org (Mike Hafen)
Date: Mon, 4 Oct 2010 07:46:49 -0600
Subject: [Koha-devel] [koha] mail configuraiton help
In-Reply-To: 
References: 
Message-ID: 

I'll post here the important settings from my server's postfix main.cf:

myhostname = koha.washk12.org
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = koha.washk12.org, localhost.washk12.org, localhost
relayhost = outbound.washk12.org
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all

Maybe that will help.

On Fri, Oct 1, 2010 at 9:44 AM, Nitesh Rijal wrote:
> thanks...
>
> I have installed postfix in my koha box...
>
> its host name is kohamail.healthnet.org.np
>
> when I use #mailx nitesh at healthnet.org.np
>
> it has errors to send the mail, the log has some error messages,
>
> moreover, how is the email address managed?
>
> I guess it will be like "abc at kohamail.healthnet.org.np", is this kind of
> email address allowed?
>
> can some one please send me the postfix configuration that I need to change
> in order to get mail working in my KOHA box?
>
> that would really be a great help. I cannot find any good resource for
> setting up the koha mail.
>
> regards.
>
>
> On Fri, Oct 1, 2010 at 9:10 PM, Joe Atzberger wrote:
>
>> You still need something to get messages from the Koha server to your
>> other mailserver. In all cases, the simplest way to accomplish that is w/ a
>> mailserver on the Koha box. It can be configured such that it doesn't
>> connect to any other systems, but only to your other mailserver.
>>
>> --Joe
>>
>> 2010/10/1 Nitesh Rijal
>>
>> Thanks,
>>>
>>> I was wondering whether it is possible to use koha server in one server
>>> and use existing mail server from another server. So, if possible, please
>>> guide me through the steps.
>>>
>>> Regards.
>>>
>>>
>>> On Thu, Sep 30, 2010 at 8:05 PM, Mike Hafen wrote:
>>>
>>>> Koha relies a lot on the mail functions of the server. For example I
>>>> have my server running postfix and set to send email through another mail
>>>> server.
>>>>
>>>> As far as Koha itself, a lot of the mail functionality runs from cron.
>>>> There are a couple perl scripts, most notably in this case the following:
>>>> overdue_notices.pl, advance_notices.pl, and process_message_queue.pl.
>>>> These are in the crontab.example.
>>>>
>>>> 2010/9/29 Nitesh Rijal
>>>>
>>>>> Hello all.
>>>>>
>>>>> I have been running koha in my server for about 10 libraries. Each of
>>>>> them have a different public IP for accesssing.
>>>>>
>>>>> What are the things that I need inorder to use the mail functionality
>>>>> for sending overdue notices and other related functions?
>>>>>
>>>>> Is there some step by step guide for it?
>>>>>
>>>>> We already have a mail server at some other IP address, so is it
>>>>> possible to use that server as mail server for koha as well or is it
>>>>> necessary that the machine that has koha server, should also have mail
>>>>> server configured in it?
>>>>>
>>>>> Please reply.
>>>>>
>>>>> Regards.
>>>>> >>>>> -- >>>>> Nitesh Rijal >>>>> BE IT >>>>> rijal.it at gmail.com >>>>> http://niteshrijal.com.np >>>>> http://facebook.com/openrijal >>>>> http://twitter.com/openrijal >>>>> +9779841458173 >>>>> >>>>> _______________________________________________ >>>>> Koha-devel mailing list >>>>> Koha-devel at lists.koha-community.org >>>>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >>>>> >>>> >>>> >>> >>> >>> -- >>> Nitesh Rijal >>> BE IT >>> rijal.it at gmail.com >>> http://niteshrijal.com.np >>> http://facebook.com/openrijal >>> http://twitter.com/openrijal >>> +9779841458173 >>> >>> _______________________________________________ >>> Koha-devel mailing list >>> Koha-devel at lists.koha-community.org >>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel >>> >> >> > > > -- > Nitesh Rijal > BE IT > rijal.it at gmail.com > http://niteshrijal.com.np > http://facebook.com/openrijal > http://twitter.com/openrijal > +9779841458173 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomascohen at gmail.com Mon Oct 4 15:54:34 2010 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Mon, 4 Oct 2010 10:54:34 -0300 Subject: [Koha-devel] [koha] mail configuraiton help In-Reply-To: References: Message-ID: 2010/10/1 Nitesh Rijal : > thanks... > I have installed postfix in my koha box... > its host name is kohamail.healthnet.org.np > when I use #mailx nitesh at healthnet.org.np > it has errors to send the mail, the log has some error messages, > moreover, how is the email address managed? > I guess it will be like "abc at kohamail.healthnet.org.np", is this kind of > email address allowed? > can some one please send me the postfix configuration that I need to change > in order to get mail working in my KOHA box? > that would really be a great help. I cannot find any good resource for > setting up the koha mail. Try nullmailer. Its really simple to setup. Runs in localhost, and uses an external server for delivery. To+ From ian.walls at bywatersolutions.com Mon Oct 4 15:56:33 2010 From: ian.walls at bywatersolutions.com (Ian Walls) Date: Mon, 4 Oct 2010 09:56:33 -0400 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CA9AEBE.2040505@alinto.com> References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA9AEBE.2040505@alinto.com> Message-ID: Wouldn't using Solr also give us the flexibility to use non-MARC metadata schemas, like MODS, METS, Dublin Core or EAD? Indexes could be defined for the whole system, with mappings for each supported metadata scheme on how to get the data into said indexes. The increased flexibility seems like it would be worth the higher system requirements, at least for folks who could afford a suitably powerful server. The installation process should provide options to help tune Koha to the available hardware (light and fast OR heavier and more powerful). Cheers, -Ian On Mon, Oct 4, 2010 at 6:38 AM, LAURENT Henri-Damien wrote: > Le 04/10/2010 11:04, Zeno Tajoli a ?crit : > > Hi to all, > > > >> Any feedback/idea welcome. > > > > the main problem that I see is that Zebra is much more light > > on RAM. > > Solrs is needs Java + an App server J2EE (Tomact, Jetty ?). > > If we select Solr, can we setup a library with 512 MB of RAM in the > > server ? 
> > > > Bye > > > > > Looking at most users we know, the problem of little RAM is the least of > their problems, the question about accurate facetting with correct > encoding and the ease of use, add and display index is more of their > concerns. > So yes, problem of RAM, but it could be overcome with a common web > service say on solr.koha-community.org for instance, people could > provide on an external server to index and get search results from. > > We chose to base our POC on Data::SearchEngine If you want to know more > details on dependencies. > > -- > Henri-Damien LAURENT > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > -- Ian Walls Lead Development Specialist ByWater Solutions Phone # (888) 900-8944 http://bywatersolutions.com ian.walls at bywatersolutions.com Twitter: @sekjal -------------- next part -------------- An HTML attachment was scrubbed... URL: From henridamien.laurent at gmail.com Mon Oct 4 16:04:14 2010 From: henridamien.laurent at gmail.com (LAURENT Henri-Damien) Date: Mon, 04 Oct 2010 16:04:14 +0200 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CA9AEBE.2040505@alinto.com> References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA9AEBE.2040505@alinto.com> Message-ID: <4CA9DEDE.1040106@gmail.com> Le 04/10/2010 12:38, LAURENT Henri-Damien a ?crit : > Le 04/10/2010 11:04, Zeno Tajoli a ?crit : >> Hi to all, >> >>> Any feedback/idea welcome. >> >> the main problem that I see is that Zebra is much more light >> on RAM. >> Solrs is needs Java + an App server J2EE (Tomact, Jetty ?). >> If we select Solr, can we setup a library with 512 MB of RAM in the >> server ? >> >> Bye >> >> > Looking at most users we know, the problem of little RAM is the least of > their problems, the question about accurate facetting with correct > encoding and the ease of use, add and display index is more of their > concerns. > So yes, problem of RAM, but it could be overcome with a common web > service say on solr.koha-community.org for instance, people could > provide on an external server to index and get search results from. > > We chose to base our POC on Data::SearchEngine If you want to know more > details on dependencies. > Just as a side note, zebra3 should be based on solr. So yet another point for solr. From lculber at mdah.state.ms.us Mon Oct 4 20:08:50 2010 From: lculber at mdah.state.ms.us (Linda Culberson) Date: Mon, 04 Oct 2010 13:08:50 -0500 Subject: [Koha-devel] Item barcode prefixes in 3.2 Message-ID: <4CAA1832.40901@mdah.state.ms.us> I apologize for another newbie question, this time regarding item barcode prefixes in 3.2: We are a single-branch library that has B in front of our item barcodes which are then auto-incremental. However, unlike the documentation there is not a set itembarcodelength. So that an item barcode could be B1 or it could be B1245351. There is nothing in front of the patron barcodes which are also auto-incremental, and the B in front of the item barcode just lets us know that it applies to an item and not a person. Do I need to use javascript to handle this or is there a setting(s) that I can use for a single branch system? What I have been reading seems to apply only to consortia, and I was unclear as to whether this was still a RFC or something that was already handled in 3.2. 
Thanks in advance -- Linda Culberson lculber at mdah.state.ms.us Archives and Records Services Division Ms. Dept. of Archives& History P. O. Box 571 Jackson, MS 39205-0571 Telephone: 601/576-6873 Facsimile: 601/576-6824 From robin at catalyst.net.nz Mon Oct 4 21:00:17 2010 From: robin at catalyst.net.nz (Robin Sheat) Date: Tue, 05 Oct 2010 08:00:17 +1300 Subject: [Koha-devel] Item barcode prefixes in 3.2 In-Reply-To: <4CAA1832.40901@mdah.state.ms.us> References: <4CAA1832.40901@mdah.state.ms.us> Message-ID: <201010050800.17745.robin@catalyst.net.nz> Op dinsdag 05 oktober 2010 07:08:50 schreef Linda Culberson: > I apologize for another newbie question, this time regarding item > barcode prefixes in 3.2: We are a single-branch library that has B in > front of our item barcodes which are then auto-incremental. However, > unlike the documentation there is not a set itembarcodelength. So that > an item barcode could be B1 or it could be B1245351. I don't think that'd be an issue at all. Really, the barcode is sent as "B12345", so the length is determined by you (or the barcode scanner) sending an 'enter' keypress. So basically, I see no reason your scheme wouldn't work just fine. -- Robin Sheat Catalyst IT Ltd. ? +64 4 803 2204 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From cfouts at liblime.com Mon Oct 4 22:16:43 2010 From: cfouts at liblime.com (Fouts, Clay) Date: Mon, 4 Oct 2010 13:16:43 -0700 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CA98C01.8080709@biblibre.com> References: <4CA98C01.8080709@biblibre.com> Message-ID: Adding to the aforementioned limitations and desired improvements, another issue with Zebra is that it index builds scale very poorly. Indexes must be built serially, and it's a really problematic limitation for large catalogs. A parallelized index creation process would be a huge benefit. Does SOLR have that capability? Also, what do you have in mind for continuing Z39.50 support? This is a must-have feature for many libraries. I've been investigating the possibility of using MongoDB or a similar dynamic indexer to replace Zebra, but the need to write a Z39.50 front end adds a great deal more work to the project. Clay On Mon, Oct 4, 2010 at 1:10 AM, LAURENT Henri-Damien < henridamien.laurent at biblibre.com> wrote: > Hi > As you already read in Paul previous message about > "BibLibre strategy for 3.4 and next version", we are growing, want be > involved in the community as previously. Paul promised some POCs, here > is one available. We also worked on Plack and support. We created a base > of script to search for Memoryleaks. We'll demonstrate that later. > > > zebra is fast and embeds native z3950 server. But it has also some major > drawbacks we have to cope with on our everyday life making it quite > difficult to maintain. > > 1. zebra config files are a nightmare. You can't drive the > configuration file easily. namely : Can't edit indexs via HTTP or > configuration. all is in files hardcoded on disk. => you can't list > indexes you can't change indexes, you can't edit indexes, you can't say > I want this index at OPAC, that in intranet. (Could be done with > scraping ccl.properties, and then record.abs and bib1.att.... But what a > HELL) So you cannot customize configuration defining the indexes you > want easily. 
And ppl donot get a translation of the indexes since all > the indexes are hardcoded in the ccl.properties and we donot have a > translation process so that ccl attributes could be translated into > different languages. > > 2. no real-time indexing : the use of a crontab is poor: when you > add an authority while creating a biblio, you have to wait some some > minutes to end your biblio (might be solved since zebra has some way to > index biblios via z3950 extended services, but hard and should be tested > and at the time community first tested that, a performance problem was > raised on indexing.) > > 3. no way to access/process/delete data easily. If you have indexes > in it or have some problems with your data, you have to reindex the > whole stuff and indexing errors are quite difficult to detect. > > 4. during index process of a file, if you have a problem in your > data, zebraidx just fails silently... And this is NOT secure. And you have > no way to know WHICH biblio made the process crash. We had a LOT of > trouble with Aix-Marseille universities that have some > arabic translitterated biblios that makes zebra/icu completly crash ! We > had to do some recursive script to find 14 biblios on 730 000 that makes > zebra crash (even is properly stored & displayed) > > 5. facets are not working properly : they are on the result displayed > because there are problems with diacritics & facets that can't be solved > as of today. And noone can provide a solution (we spoke about that with > indexdata and no clear solution was really provided. > > 6. zebra does not evolve anymore. There is no real community around > it, it's just an opensource indexdata software. We sent many questions > onlist and never got answers. We could pay for better support but the > fee required is quite deterrent and benefit is still questionable. > > 7. icu & zebra are colleagues, not really friends : right truncation > not working, fuzzy search not working and facets. > > 8. we use a deprecated way to define indexes for biblios (grs1) and > the tool developped by indexdata to change to DOM has many flaws. we > could manage and do with it. But is it worth the strive ? > > I think that every one agrees that we have to refactor C4::Search. > Indeed, query parser is not able to manage independantly all the > configuration options. And usage of usmarc as internal for biblio comes > with a serious limitation of 9999 bytes, which for big biblios with many > items, is not enough. > > BibLibre investigated in a catalogue based on solr. > A University in France contracted us for that development. > This University is in relation with all the community here in France and > solr will certainly be adopted by all the libraries France wide. > We are planning to release the code on our git early spring next year > and rebase on whatever Koha version will be released at that time 3.4 or > 3.6. > > > Why ? > > Solr indexes with data with HTTP. > It can provide fuzzy search, search on synonyms, suggestions > It can provide facet search, stemming. > utf8 support is embedded. > Community is really impressively reactive and numerous and efficient. > And documentation is very good and exhaustive. 
> > You can see the results on solr.biblibre.com and > catalogue.solr.biblibre.com > > http://catalogue.solr.biblibre.com/cgi-bin/koha/opac-search.pl?q=jean > http://solr.biblibre.com/cgi-bin/koha/admin/admin-home.pl > you can log there with demo/demo lgoin/password > > http://solr.biblibre.com/cgi-bin/koha/solr/indexes.pl > is the page where ppl can manage their indexes and links. > > a) Librarians can define their own indexes, and there is a plugin that > fetches data from rejected authorities and from authorised_values (that > could/should have been achieved with zebra but only with major work on > xslt). > > b) C4/Search.pm count lines of code could be shrinked ten times. > You can test from poc_solr branch on > git://git.biblibre.com/koha_biblibre.git > But you have to install solr. > > Any feedback/idea welcome. > -- > Henri-Damien LAURENT > BibLibre > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurenthdl at alinto.com Mon Oct 4 22:35:14 2010 From: laurenthdl at alinto.com (LAURENT Henri-Damien) Date: Mon, 04 Oct 2010 22:35:14 +0200 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> Message-ID: <4CAA3A82.7000104@alinto.com> Le 04/10/2010 22:16, Fouts, Clay a ?crit : > Adding to the aforementioned limitations and desired improvements, > another issue with Zebra is that it index builds scale very poorly. > Indexes must be built serially, and it's a really problematic limitation > for large catalogs. A parallelized index creation process would be a > huge benefit. Does SOLR have that capability? > > Also, what do you have in mind for continuing Z39.50 support? This is a > must-have feature for many libraries. I've been investigating the > possibility of using MongoDB or a similar dynamic indexer to replace > Zebra, but the need to write a Z39.50 front end adds a great deal more > work to the project. Hi Clay. Serious questions. About z3950 support, I investigated in JZ3950 Kit http://k-int.blogspot.com/2008/05/exposing-solr-services-as-z3950-server.html It seems a serious candidate. About parallelized index creation process, I think I read something about that for solr 1.4.1. But not sure if I can grab on that soon. At least we are investigating solr multicore in order to use solr at its best. -- Henri-Damien LAURENT From rijal.it at gmail.com Tue Oct 5 06:09:31 2010 From: rijal.it at gmail.com (Nitesh Rijal) Date: Tue, 5 Oct 2010 09:54:31 +0545 Subject: [Koha-devel] [koha] mail configuraiton help In-Reply-To: References: Message-ID: Thanks to all. I used postfix, and now I can send mail from commandline.... looking to koha mail services to work now. thanks to all for the response and help. cheers. On Mon, Oct 4, 2010 at 7:39 PM, Tomas Cohen Arazi wrote: > 2010/10/1 Nitesh Rijal : > > thanks... > > I have installed postfix in my koha box... > > its host name is kohamail.healthnet.org.np > > when I use #mailx nitesh at healthnet.org.np > > it has errors to send the mail, the log has some error messages, > > moreover, how is the email address managed? > > I guess it will be like "abc at kohamail.healthnet.org.np", is this kind of > > email address allowed? > > can some one please send me the postfix configuration that I need to > change > > in order to get mail working in my KOHA box? 
> > that would really be a great help. I cannot find any good resource for
> > setting up the koha mail.
>
> Try nullmailer. Its really simple to setup. Runs in localhost, and
> uses an external server for delivery.
>
> To+

--
Nitesh Rijal
BE IT
rijal.it at gmail.com
http://niteshrijal.com.np
http://facebook.com/openrijal
http://twitter.com/openrijal
+9779841458173
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From frederic at tamil.fr  Tue Oct  5 10:30:17 2010
From: frederic at tamil.fr (Frederic Demians)
Date: Tue, 05 Oct 2010 10:30:17 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CAA3A82.7000104@alinto.com>
References: <4CA98C01.8080709@biblibre.com> <4CAA3A82.7000104@alinto.com>
Message-ID: <4CAAE219.4070509@tamil.fr>

On this subject, some feedback would be welcome from members of the Koha community who are also members of the Evergreen community. As said by others, small and medium-sized libraries shouldn't be sacrificed to large libraries' needs. A Debian/Ubuntu Koha package, installable in one click, now seems an achievable goal. New search engine integration into Koha shouldn't distract from this goal.
--
Frédéric DEMIANS

From gmcharlt at gmail.com  Thu Oct  7 04:17:19 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Wed, 6 Oct 2010 22:17:19 -0400
Subject: [Koha-devel] 3.2.0 release candidate available
Message-ID: 

Hi,

I have uploaded the release candidate for Koha 3.2.0. Changes since the beta include:

* fixes to the upgrade of acquisitions data from 3.0
* various bugfixes, particularly to acquisitions, serials, and label creation
* improvements to test case coverage
* all required test cases now pass
* adding back the ability to save and manage local system preferences
* the rotating collections feature is now deferred to 3.4.x
* security fixes
* translations

Pending confirmation of successful installations and upgrades, this will become the general release of 3.2.0. At this point, string changes, enhancements, and non-super-critical bugfixes will not be accepted for 3.2.0; only truly critical bugfixes, improvements to upgrade and installation processes and documentation, and translations will be pushed.

The tarball can be retrieved from:

http://download.koha-community.org/koha-3.02.00-rc.tar.gz

Checksum and signature files for verifying the package can be retrieved from:

http://download.koha-community.org/koha-3.02.00-rc.tar.gz.MD5
http://download.koha-community.org/koha-3.02.00-rc.tar.gz.MD5.asc
http://download.koha-community.org/koha-3.02.00-rc.tar.gz.sig

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From gmcharlt at gmail.com  Thu Oct  7 04:43:04 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Wed, 6 Oct 2010 22:43:04 -0400
Subject: [Koha-devel] 3.2.0 release candidate available
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Wed, Oct 6, 2010 at 10:17 PM, Galen Charlton wrote:
> I have uploaded the release candidate for Koha 3.2.0. Changes since
> the beta include:
[snip]

Another change to note in particular is that there is a new Perl module dependency since the beta, Business::ISBN.

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From nengard at gmail.com  Thu Oct  7 13:24:15 2010
From: nengard at gmail.com (Nicole Engard)
Date: Thu, 7 Oct 2010 07:24:15 -0400
Subject: [Koha-devel] [Koha] 3.2.0 release candidate available
In-Reply-To: 
References: 
Message-ID: 

I don't think this is a 'super-critical bugfix' - but I do think it's pretty confusing to users of the software.
There is a preference for opacprivacy, but still no page where patrons can actually set their privacy options.

Nicole

On Wed, Oct 6, 2010 at 10:43 PM, Galen Charlton wrote:
> Hi,
>
>
> On Wed, Oct 6, 2010 at 10:17 PM, Galen Charlton wrote:
>> I have uploaded the release candidate for Koha 3.2.0. Changes since
>> the beta include:
> [snip]
>
> Another change to note in particular is that there is a new Perl
> module dependency since the beta, Business::ISBN.
>
> Regards,
>
> Galen
> --
> Galen Charlton
> gmcharlt at gmail.com
> _______________________________________________
> Koha mailing list
> Koha at lists.katipo.co.nz
> http://lists.katipo.co.nz/mailman/listinfo/koha
>

From magnus at enger.priv.no  Thu Oct  7 13:31:47 2010
From: magnus at enger.priv.no (Magnus Enger)
Date: Thu, 7 Oct 2010 13:31:47 +0200
Subject: [Koha-devel] [Koha] 3.2.0 release candidate available
In-Reply-To: 
References: 
Message-ID: 

I think that's a good point, Nicole. It's bug 3881, by the way:
http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3881

Regards,
Magnus Enger
libriotech.no

On 7 October 2010 13:24, Nicole Engard wrote:
> I don't think this is a 'super-critical bugfix' - but I do think it's
> pretty confusing to users of the software. There is a preference for
> opacprivacy, but still no page where patrons can actually set their
> privacy options.
>
> Nicole
>
> On Wed, Oct 6, 2010 at 10:43 PM, Galen Charlton wrote:
>> Hi,
>>
>>
>> On Wed, Oct 6, 2010 at 10:17 PM, Galen Charlton wrote:
>>> I have uploaded the release candidate for Koha 3.2.0. Changes since
>>> the beta include:
>> [snip]
>>
>> Another change to note in particular is that there is a new Perl
>> module dependency since the beta, Business::ISBN.
>>
>> Regards,
>>
>> Galen
>> --
>> Galen Charlton
>> gmcharlt at gmail.com
>> _______________________________________________
>> Koha mailing list
>> Koha at lists.katipo.co.nz
>> http://lists.katipo.co.nz/mailman/listinfo/koha
>>
> _______________________________________________
> Koha mailing list
> Koha at lists.katipo.co.nz
> http://lists.katipo.co.nz/mailman/listinfo/koha
>
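For readers tracing bug 3881: the gap described above is only the patron-facing page; the preference itself can already be read from any Koha script. A minimal sketch of the lookup, assuming the stock C4 API; the surrounding page script and the template parameter name are hypothetical:

    use C4::Context;

    # C4::Context->preference() is Koha's standard system preference
    # lookup; a true value here means the privacy feature is switched on.
    my $privacy_on = C4::Context->preference('OPACPrivacy');

    # $template would come from C4::Auth::get_template_and_user() in a
    # real OPAC page script; the parameter name below is invented.
    $template->param( show_privacy_tab => 1 ) if $privacy_on;

Until a page along the lines of opac-privacy.pl exists, there is nothing for the preference to drive, which is the confusion Nicole describes.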
From fridolyn.somers at gmail.com  Fri Oct  8 12:28:57 2010
From: fridolyn.somers at gmail.com (Fridolyn SOMERS)
Date: Fri, 8 Oct 2010 12:28:57 +0200
Subject: [Koha-devel] Authority merge
In-Reply-To: <809BE39CD64BFD4EB9036172EBCCFA3103D556@S-MAIL-1B.rijksmuseum.intra>
References: <809BE39CD64BFD4EB9036172EBCCFA3103D556@S-MAIL-1B.rijksmuseum.intra>
Message-ID: 

Hi,

Are you talking about the "merge" method in AuthoritiesMarc.pm? I agree: the behavior should be the same as when you use the "thesaurus" plugin while editing biblio fields, where subfields of the authority field are imported but existing subfields are not removed. You may open a bug.

Regards,

2010/9/27 Marcel de Rooy
> Hi developers,
>
> I have a question on the update of biblio records after changing an
> authority. The 3.0 / 3.2 code in authorities.pl/AuthoritiesMarc.pm
> apparently replaces the complete MARC field in the biblio record (say 700)
> with the report tag of the authority record (say 100 for PERSO_NAME).
>
> This means that if I have an additional subfield e.g. 4 (relator code) in
> the biblio record with such an authority, an update of the authority record
> (without such a relator code) simply discards such extra subfields on the
> biblio side.
>
> My question is: Are we misunderstanding MARC in my library and should we
> always put the relator code on the authority side, or is this code doing
> something unintentional?
>
> If the first is true and I would have two separate relator codes for one
> authority, should I then make one authority record per relator code? It
> seems somewhat odd.
>
> If the latter should be the case, I could write a bug report and submit a
> patch for it.
>
> Thanks for your time.
>
> Marcel
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel

--
Fridolyn SOMERS
ICT engineer
PROGILONE - Lyon - France
fridolyn.somers at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kohadevel at agogme.com  Fri Oct  8 21:02:54 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Fri, 8 Oct 2010 19:02:54 -0000 (UTC)
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA98C01.8080709@biblibre.com>
References: <4CA98C01.8080709@biblibre.com>
Message-ID: 

1. Z39.50/SRU SUPPORT.

If I have any specific concern about the prospect of new development complicating long-term development, it is the possibility of breaking or neglecting necessary Z39.50/SRU server support in the process of adding excessively generic Solr/Lucene indexing. Z39.50 and SRU are important library standards for record sharing, which is vital to the good functioning of the library community. I commend Henri-Damien Laurent for taking the issue of Z39.50/SRU support seriously and finding JZKit as a possible solution for Z39.50/SRU support using Solr/Lucene.

2. AVOIDING FEATURE REGRESSION OR BLOCKS TO FUTURE DEVELOPMENT.
Popular implementations of Solr/Lucene in library automation systems have made all the mistakes of sacrificing the precision needed for serious library research in return for the high recall with poor relevancy often found in Google, which may merely satisfy casual queries.

I share the concern that working with Zebra is too much like working with a black box into which one cannot peer. I make no claim that the existing Z39.50/SRU Zebra support in Koha is ideal, but merely that it should not be too easily sacrificed for something else with its own problems which are merely less familiar to us. I suggest that we retain the existing Z39.50/SRU Zebra support in Koha while adding other options which may improve local indexing.

The full use of Bib-1 position, structure, and completeness attributes for Z39.50, or the ordered prox CQL operator for SRU, would allow the precise queries needed for serious research. The lack of a completeness operator in CQL is a serious deficiency for SRU. Index Data may still need to develop support in Zebra for the ordered prox CQL operator, which will most likely require paying to support that effort, even though it would be appreciated in the Koha community.

Zebra certainly has bugs, as does all software. [See the end of the document for the Index Data promise about bugs.] Ultimately, I see no manageable way to have a free software library automation system without paying for some support for something from Index Data, even if that would merely be the Z39.50/SRU client programming libraries.

Solr/Lucene may now be a good choice for internal indexing in Koha. Lucene was not considered fairly during 2005 testing for Koha because the Perl bindings at that time were notoriously slow. Solr and Lucene have long had the mind share and development advantage of being Apache Foundation projects, which Zebra will never match, hence the forthcoming inclusion of Solr/Lucene indexing in the next major versions of Pazpar2 and Zebra. However, Solr/Lucene has had problems which should not go unconsidered in evaluating or actually implementing Solr/Lucene-based indexing in Koha.

I am not certain what point 8 from Henri-Damien's message is specifically meant to criticise. Is the complaint against indexing based on the DOM in general, or against the frustration of needing to migrate from an inefficient deprecated means of using the DOM to a more efficient means of using it?

On Mon, October 4, 2010 08:10, LAURENT Henri-Damien wrote:

[...]

> 8. We use a deprecated way to define indexes for biblios (GRS-1), and
> the tool developed by Index Data to change to DOM indexing has many
> flaws. We could manage and make do with it, but is it worth the effort?

[...]

I contend that although working with the DOM can be difficult at times, the DOM helps provide needed flexibility and precision in indexing.

2.1. HISTORICAL LACK OF PRECISION IN SOLR/LUCENE.

Solr/Lucene may have been a poor choice during the 2004-2006 period of sponsoring Perl ZOOM and developing Zebra in Koha. Lucene had originally been developed for full text indexing of unstructured documents. Solr had originally been merely an easy-to-configure front end to a subset of Lucene functionality. Solr became a popular choice for the simplest free software OPACs. I have always tried to subject choices taken in Koha to personal reconsideration, and made a modest investigation of the capabilities of Lucene and Solr/Lucene in 2007.
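Before continuing, the Bib-1 precision discussed in section 2 can be made concrete. A minimal sketch using the ZOOM Perl bindings; the target host, port, and database name are placeholders, and 1=21 (subject heading) is only one possible use attribute:

    use ZOOM;

    # Connect to a Z39.50 server (placeholder target).
    my $conn = ZOOM::Connection->new('localhost:9998/biblios');

    # PQF with Bib-1 attributes: 1=21 subject, 4=1 phrase structure,
    # 5=100 no truncation, 6=3 complete field -- this matches only
    # records whose whole subject heading is 'philosophy history', not
    # records that merely contain those words somewhere in a heading.
    my $rs = $conn->search_pqf(
        '@attr 1=21 @attr 4=1 @attr 5=100 @attr 6=3 "philosophy history"'
    );
    printf "%d hits\n", $rs->size();

The completeness attribute (6=3) is precisely what CQL lacks, per the deficiency noted above.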
I consulted widely and attended some conferences, asking questions of the most expert implementers of library automation systems who had been using Lucene or Solr/Lucene. I tried to consult with people working to solve real problems rather than merely relying upon possibly incomplete documentation. In 2007, Solr provided no support for indexing to serve important concepts used for obtaining precision in library systems.

2.1.1. ASPECTS OF PRECISION HISTORICALLY UNSUPPORTED BY SOLR/LUCENE.

Hierarchy, where some content is subsidiary to other content and derives meaning from its place in the hierarchy, had no support in Solr circa 2007. Field-to-subfield relationships are an example of hierarchy in MARC records. Namespace hierarchies are examples of hierarchy in XML records and are accessible by XPath queries. Hierarchy is a fundamental feature of classification and retrieval for easily including wanted record sets and excluding unwanted record sets.

Sequential order, where the order of separate record sub-elements is relevant to meaning, also had no support in Solr circa 2007. Philosophy - History, meaning 'history of philosophy', is an entirely different subject from History - Philosophy, meaning 'philosophy of history'. Note the inversion of word order between the individual controlled vocabulary elements and the corresponding English phrase with the same meaning. The sequential order of fields within a record, or of MARC subfields within a particular field, are examples of sequential order in MARC records. The sequential order of namespaces within a record, and the order of repeated elements within the same namespace, are examples of sequential order in XML records accessible by XPath queries. Sequential order is a fundamental feature of meaning in language and is not necessarily reducible to phrase strings, where intervening terms may or may not be present and word order may be inverted, as in the example given.

2.1.2. ALTERNATIVES FOR PRECISION USING LUCENE.

In 2005, work at Bibliothèque de l'Université Laval (originators of RAMEAU) had developed LIUS (Lucene Index Update and Search) to overcome some difficulties of Lucene, including fielded indexing of the very simplest flat field metadata found in some general purpose document types, and XPath indexing for XML documents, http://sourceforge.net/projects/lius/ . Laval now uses the Solr/Lucene-based Constellio, http://www.constellio.com/ .

In 2007, I was informed by a programmer of library automation systems working in the pharmaceutical industry, if I remember his job correctly, that hierarchical indexing and sequential indexing could be done in Lucene, but that there was no support for such indexing in Solr. Precision is very important for both scientific and business purposes in the pharmaceutical industry. Despite valid criticism of some business practises within the pharmaceutical industry, lives are often at stake in their work. We should treat the quality of information retrieval in library automation systems as if lives are at stake. Lives will sometimes be at stake in the research which people do.
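The hierarchy and sequential order described in 2.1.1 are exactly what XPath exposes, which is why XPath indexing in the LIUS manner matters. A minimal sketch of hierarchy-aware extraction from MARCXML, using XML::LibXML and the standard MARCXML namespace; the record file name is a placeholder:

    use XML::LibXML;

    # Parse a MARCXML record (placeholder file name).
    my $doc = XML::LibXML->new->parse_file('record.xml');
    my $xpc = XML::LibXML::XPathContext->new($doc);
    $xpc->registerNs( marc => 'http://www.loc.gov/MARC21/slim' );

    # Keep each 650 field's subfields together and in document order,
    # instead of flattening every subfield into one bag of keywords.
    for my $field ( $xpc->findnodes('//marc:datafield[@tag="650"]') ) {
        my @subfields = map { $_->textContent }
                        $xpc->findnodes( 'marc:subfield', $field );
        print join( ' -- ', @subfields ), "\n";
    }

An indexer fed per-field groups like these can distinguish Philosophy - History from History - Philosophy; an indexer fed a single pool of subfield keywords cannot.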
2.2. CONSEQUENCES OF LACK OF PRECISION.

Sadly, the concept of precision has not been one which registered in the minds of those developing the popular free software OPACs using Solr/Lucene, or some of their non-free equivalents. Examples of the consequences, from which Koha is not excluded, are: using only $a in faceting despite the presence of other important subfields; jumbling all the subfields from all similar fields independently; and returning irrelevant results because subfields have been treated as mere independent keywords devoid of contextual meaning, even in the context of a query using an authority controlled field.

Human nature, to which Koha is not immune, carries some impetus to oversimplify for an expected advantage. Oversimplification in the context of a library automation system could eliminate the ability of the user to access the real complexity and richness of relationships in bibliographic records in exchange for improved speed or robustness. Such oversimplification exists to a large extent in every actual library automation system. I may be raising a false alarm about the possibility that some feature advance may complicate or block better improvements in the future. Yet I prefer to take a vigilant stance rather than be sorry later for not having raised a concern.

2.3. CURRENT SUITABILITY OF SOLR/LUCENE.

I note significant improvements identified in the Solr/Lucene changelog from version 1.3 in 2008 and later. The DataImportHandler was added in version 1.3 and has options for XPath based indexing. Solr still seems to have no support for ordered proximity searches; perhaps XPath based indexing would address the problem. A possible workaround of modifying the Lucene code in SolrQueryParser to return SpanNearQuery instead of PhraseQuery may be a very undesirable remedy, breaking one feature to fix another. Whether the improvements in Solr/Lucene are sufficient to overcome the past limitations which I have identified would require experimentation.

3. SUPPORT MODELS FOR NEEDED PROGRAMMING LIBRARIES.

It is good that companies such as Knowledge Integration, http://www.k-int.com/ , developers of JZKit, http://www.k-int.com/jzkit , are providing some free software competition, and work complementary to what is available from Index Data. Note that the JZKit developer, Ian Ibbotson, is using Yaz as a Z39.50 client, http://k-int.blogspot.com/2008/05/exposing-solr-services-as-z3950-server.html , leaving a dependency on Index Data for client side Z39.50 services.

There is some equivocation at Index Data against fully embracing free software in everything they do. Inevitably, they need revenue to be sustainable. The following thought about a possible shortage of Index Data development time, and its consequences, is merely speculative but not uninformed. Index Data may have a problem of not having enough sufficiently experienced developers working for them to further the development of the underlying programming libraries which we use, relative to the amount of work which the library community hopes to have from them. Contracting for Index Data development in the absence of sufficient development time to go around might amount to bidding for the priority of the development which you need as much as to sharing the cost of development with others.

Would working with Knowledge Integration, which has even fewer developers, be significantly different in terms of development costs? Does Knowledge Integration need less money for a given amount of work than Index Data does to be sustainable? Consider that JZKit seems to have no documentation worthy of the name.
The source code repository contains about four pages of outlines for documentation with only one sentence of actual content, http://www.k-int.com/developer/downloads . There are some comments in the source code which I understand are used as documentation for JZKit. Yet the comments are too few and incomplete to be of sufficient use to me and, from what I have noted, to others as well. There are some virtually empty example configuration files which could be used as a basis for speculating about how configuration works. JZKit supposedly has a mailing list, but I have not found it.

Index Data does provide documentation, even if we have often found it inadequate for our needs in Koha development. I suspect that sufficient documentation at Knowledge Integration, as at Index Data, requires a support contract, and as we know that carries no guarantee of completeness. Writing clear and thorough documentation is hard work. Writing documentation is the last thing which programmers generally want to do. Lack of good documentation is a common characteristic of free software.

If JZKit also turned out to be missing some feature or to need better functionality, would the situation be any different for Knowledge Integration development than for Index Data development? See the unfortunate position of Knowledge Integration on GPL contributions or AGPL 3 contributions in the case of JZKit, http://www.k-int.com/developer/participate .

The library community needs to find the means of working more cooperatively to ensure a steady availability of development resources at companies such as Index Data and Knowledge Integration for sustainable shared development. I hope that Index Data may eventually be won over from their sometimes equivocal position towards free software; yet they need to be sustainable by some means. I do not find the position of Knowledge Integration to be any different, and I note that they do not have a link to the source code repository for OpenHarvest, http://www.k-int.com/developer/downloads . Index Data does have a long history of supporting free software for libraries. Index Data also makes an extraordinary, almost impossible to believe promise in their support contracts to fix any bug within a set number of days.

The issue of how to share the cost of support contracts for programming libraries provided by companies such as Index Data or Knowledge Integration across multiple Koha support companies, or even outside of the Koha community, needs to be considered.

[...]

Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY 10003
USA
http://www.agogme.com
+1 212-674-3783

From glawson at rhcl.org Mon Oct 11 00:26:27 2010
From: glawson at rhcl.org (glawson at rhcl.org)
Date: Sun, 10 Oct 2010 17:26:27 -0500
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To:
References: <4CA98C01.8080709@biblibre.com>
Message-ID: <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp>

I post this email while away from my work area and without access to most of my research materials; more specifically, on a netbook from a hospital room, although I personally am fine. I hope my question is not thus poorly presented.

Does the term "precision" have a meaning significantly different when applied to the indexing of databases than it does in general use?

Quote:

"The precision of a measurement system, also called reproducibility or repeatability, is the degree to which repeated measurements under unchanged conditions show the same results."
http://en.wikipedia.org/wiki/Accuracy_and_precision

I was not aware that precision was a problem with Zebra, although we have found, to our great dismay with our recent implementation of Koha, that relevancy, which I would call accuracy, is. More correctly and scientifically stated, perhaps: relevancy sucks. Do I incorrectly associate accuracy with relevancy?

Quote:

"...accuracy of a measurement system is the degree of closeness of measurements of a quantity to its actual (true) value."
IBID

Greg Lawson
Rolling Hills Consolidated Library

------------------------------

> 1. Z39.50/SRU SUPPORT....
> Koha community needs to be considered.
[...]

From chrisc at catalyst.net.nz Mon Oct 11 00:37:50 2010
From: chrisc at catalyst.net.nz (Chris Cormack)
Date: Mon, 11 Oct 2010 11:37:50 +1300
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp>
References: <4CA98C01.8080709@biblibre.com> <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp>
Message-ID: <20101010223750.GH20177@rorohiko>

* glawson at rhcl.org (glawson at rhcl.org) wrote:
> [...]
> I was not aware that precision was a problem with Zebra, although we have
> found, to our great dismay with our recent implementation of Koha, that
> relevancy, which I would call accuracy, is.
[...]

Greg,

I have noticed a bug with C4/Search such that relevancy isn't actually being used. Is your OPAC live? If you send me a URL, I can test my theory on your OPAC.

Chris

--
Chris Cormack
Catalyst IT Ltd.
+64 4 803 2238
PO Box 11-053, Manners St, Wellington 6142, New Zealand

From frederic at tamil.fr Mon Oct 11 08:49:01 2010
From: frederic at tamil.fr (Frederic Demians)
Date: Mon, 11 Oct 2010 08:49:01 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA9DEDE.1040106@gmail.com>
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA9AEBE.2040505@alinto.com> <4CA9DEDE.1040106@gmail.com>
Message-ID: <4CB2B35D.7090802@tamil.fr>

> Just as a side note, zebra3 should be based on solr. So yet another
> point for solr.

So, isn't it in the best interest of the Koha community to wait for Index Data to integrate Solr into Zebra, and in the meantime to switch the Koha-Zebra interface from GRS1 to DOM in order to gain in granularity and customizability? And, as suggested by Thomas Dukleth, Koha supporters could directly sponsor Index Data to speed up their work and give it a Koha 'direction'. Who better than Index Data has deep knowledge of search engine technology in the library arena, and who could better conciliate and accommodate Solr to libraries' needs and peculiarities?

--
Frédéric

From kohadevel at agogme.com Mon Oct 11 10:57:21 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Mon, 11 Oct 2010 08:57:21 -0000 (UTC)
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp>
References: <4CA98C01.8080709@biblibre.com> <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp>
Message-ID: <42dd7f5afc8de8695f74e227ad6ac9b8.squirrel@wmail.agogme.com>

Reply inline:

On Sun, October 10, 2010 22:26, glawson at rhcl.org wrote:

[...]

1. PRECISION AND BUGS IN KOHA ZEBRA IMPLEMENTATION.

> Does the term "precision" have a meaning significantly different when
> applied to the indexing of databases than it does in general use?

The ordinary language use of precision has basically the same meaning as the more specialist uses which measure precision mathematically.

> Quote:
>
> "The precision of a measurement system, also called reproducibility or
> repeatability, is the degree to which repeated measurements under
> unchanged conditions show the same results."
> http://en.wikipedia.org/wiki/Accuracy_and_precision

The measurement theory Wikipedia article is fine. The Wikipedia article treating the same concepts for information retrieval is http://en.wikipedia.org/wiki/Precision_and_recall . Precision has a disambiguation page, http://en.wikipedia.org/wiki/Precision .

> I was not aware that precision was a problem with Zebra, although we have
> found, to our great dismay with our recent implementation of Koha, that
> relevancy, which I would call accuracy, is. More correctly and
> scientifically stated perhaps, relevancy sucks.

I did not assert that precision is necessarily a problem with Zebra. Henri-Damien Laurent listed some Zebra bugs which could be construed as affecting precision by not returning a result set, or by returning a result set in which some multi-byte characters are mangled. Zebra bugs need to be fixed. What I did assert is that we are not yet using some options in Zebra Z39.50 support which allow for better precision matching of sets of controlled terms, especially useful for subject and classification based queries.
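For reference, the standard information retrieval definitions from the article cited above can be stated compactly:

    \[
    \mathrm{precision} = \frac{|\,\mathrm{relevant} \cap \mathrm{retrieved}\,|}{|\,\mathrm{retrieved}\,|}
    \qquad
    \mathrm{recall} = \frac{|\,\mathrm{relevant} \cap \mathrm{retrieved}\,|}{|\,\mathrm{relevant}\,|}
    \]

A system tuned for recall returns everything plausibly related at the cost of irrelevant results; a system tuned for precision returns only what genuinely matches, at the risk of missing some relevant records.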
There has not been sufficient programming time available to implement the options to which I referred. The underlying Koha implementation of authority control and support for classification needs improvement first.

I suspect that the relevancy issues to which you are referring are different, and relate to ranking the result set, possibly to adding indexes appropriate to your organisation, and to using appropriate fielded queries. You would need to state the problematic results which you have for particular queries for us to know the problem to which you are referring. Chris Cormack may have identified at least one of the problems for you in his reply, http://lists.koha-community.org/pipermail/koha-devel/2010-October/034470.html .

> Do I incorrectly
> associate accuracy with relevancy?

I think that you correctly associate accuracy with relevancy. I am not certain where recall would fit in comparing information retrieval terms with measurement theory terms.

> Quote:
>
> "...accuracy of a measurement system is the degree of closeness of
> measurements of a quantity to its actual (true) value."
> IBID

2. CONSIDERING OPTIONS CAREFULLY.

My concern is that we do not completely abandon the Zebra indexing system which we now have because of some bugs which could be fixed. We could have a sophisticated Z39.50/SRU server and Solr/Lucene indexing by working with Index Data, with bugs fixed under a moderate service contract. Given the undocumented but apparently unsophisticated feature set of JZKit, I suspect that it would be more expensive to have Knowledge Integration develop a sufficiently sophisticated Z39.50/SRU server implementation in JZKit.

The large Solr/Lucene community may support many of our interests well without needing to depend upon us financially, which would be a real advantage. However, such a non-library community, despite the presence of some library community members, is liable to take a very long time to appreciate the value of sophisticated implementations of precision to support all types of library queries well.

[...]

Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY 10003
USA
http://www.agogme.com
+1 212-674-3783

From paul.poulain at biblibre.com Mon Oct 11 11:23:22 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Mon, 11 Oct 2010 11:23:22 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <42dd7f5afc8de8695f74e227ad6ac9b8.squirrel@wmail.agogme.com>
References: <4CA98C01.8080709@biblibre.com> <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp> <42dd7f5afc8de8695f74e227ad6ac9b8.squirrel@wmail.agogme.com>
Message-ID: <4CB2D78A.9060302@biblibre.com>

On 11/10/2010 10:57, Thomas Dukleth wrote:
> My concern is that we do not completely abandon the Zebra indexing system
> which we now have because of some bugs which could be fixed.
[...]

Frankly, the search code in Koha is a nightmare. We try to construct the CCL query, build facets, and so on; with Lucene/Solr, all of this is done directly by the engine. In our POC, we removed 90% of the lines of Search.pm. I'm 100% sure it's much more work to fix all this stuff than to rewrite it from scratch.
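For illustration, a faceted Solr query from Perl can be only a few lines. This is a sketch, not the actual POC code; the WebService::Solr module is one option on CPAN, and the core URL and index names here are invented:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WebService::Solr;

    # Hypothetical Solr core; index names are invented for illustration.
    my $solr = WebService::Solr->new('http://localhost:8983/solr');

    my $response = $solr->search(
        'title:jean',
        {
            'rows'        => 20,
            'facet'       => 'true',
            'facet.field' => [ 'author', 'itemtype' ],
        }
    );

    # Result documents and facet counts come back in one HTTP response,
    # with no query re-parsing needed on the Koha side.
    for my $doc ( $response->docs ) {
        print $doc->value_for('title'), "\n";
    }
    my $facets = $response->facet_counts->{facet_fields};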
The only thing we must take great care over, IMO, will be how to migrate existing Zebra-Koha libraries to Solr-Koha. That will require a real effort, but we (BibLibre) agree to deal with it (and, as a reminder, it is already sponsored, thanks to a new French university switching to Koha!).

--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From tajoli at cilea.it Mon Oct 11 12:53:39 2010
From: tajoli at cilea.it (Zeno Tajoli)
Date: Mon, 11 Oct 2010 12:53:39 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CB2D78A.9060302@biblibre.com>
References: <4CA98C01.8080709@biblibre.com> <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp> <42dd7f5afc8de8695f74e227ad6ac9b8.squirrel@wmail.agogme.com> <4CB2D78A.9060302@biblibre.com>
Message-ID: <4CB2ECB3.2010409@cilea.it>

Hi,

I want to ask a specific question about this topic to Paul: are you using Solr with Tomcat or Jetty as the servlet container?

Bye

--
Zeno Tajoli
CILEA - Segrate (MI)
tajoliAT_SPAM_no_prendiATcilea.it
(Anti-spam masked address; replace the parts between the ATs with @)

From paul.poulain at biblibre.com Mon Oct 11 16:29:28 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Mon, 11 Oct 2010 16:29:28 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CB2ECB3.2010409@cilea.it>
References: <4CA98C01.8080709@biblibre.com> <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp> <42dd7f5afc8de8695f74e227ad6ac9b8.squirrel@wmail.agogme.com> <4CB2D78A.9060302@biblibre.com> <4CB2ECB3.2010409@cilea.it>
Message-ID: <4CB31F48.8020202@biblibre.com>

On 11/10/2010 12:53, Zeno Tajoli wrote:
> Hi,
> I want to ask a specific question about this topic to Paul:
> are you using Solr with Tomcat or Jetty as the servlet container?

This question is more for hdl than for me ;-)

I think it's Tomcat, but let hdl confirm or give more details!

--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From laurenthdl at alinto.com Mon Oct 11 16:35:11 2010
From: laurenthdl at alinto.com (LAURENT Henri-Damien)
Date: Mon, 11 Oct 2010 16:35:11 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CB31F48.8020202@biblibre.com>
References: <4CA98C01.8080709@biblibre.com> <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp> <42dd7f5afc8de8695f74e227ad6ac9b8.squirrel@wmail.agogme.com> <4CB2D78A.9060302@biblibre.com> <4CB2ECB3.2010409@cilea.it> <4CB31F48.8020202@biblibre.com>
Message-ID: <4CB3209F.40204@alinto.com>

On 11/10/2010 16:29, Paul Poulain wrote:
> On 11/10/2010 12:53, Zeno Tajoli wrote:
>> Hi,
>> I want to ask a specific question about this topic to Paul:
>> are you using Solr with Tomcat or Jetty as the servlet container?
> This question is more for hdl than for me ;-)
>
> I think it's Tomcat, but let hdl confirm or give more details!

Either Tomcat or Jetty could be suitable, depending on your requirements. Universities would rather use Tomcat.

--
Henri-Damien LAURENT

From kohadevel at agogme.com Mon Oct 11 16:49:44 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Mon, 11 Oct 2010 14:49:44 -0000 (UTC)
Subject: [Koha-devel] Search Engine Changes : let's get some solr
Message-ID:

1. REWRITING EASIER THAN BUG FIXING.

I agree that, as with many things in Koha, there would be more work in fixing Search.pm than in rewriting it.
I think that the CCL implementation had been partly a mistake, in which Joshua Ferraro took a shortcut to reduce the work of supporting Nelsonville Public Library on Zebra. Joshua had previously agreed to support PQF. Some previous searching features were unnecessarily broken by the implementation, which BibLibre later had to fix in subsequent versions of 3.0.X.

2. INNOVATION TO PRESERVE.

Yet there was one great achievement of Joshua's work. Searches for a title which is a stop word on many automation systems return that title sorted to the top of the result set. A search for the title 'it' would return "It" by Stephen King, a book which would be unfindable by its title in some library automation systems. We should not lose such innovations which set Koha apart from other library automation systems.

3. Z39.50/SRU IMPLEMENTATION.

Where I may disagree with Paul Poulain is over whether completely dropping support for Zebra would be cheaper than retaining it as an adjunct to Solr/Lucene based indexing. There is a need for a sophisticated Z39.50/SRU server for sharing records with the rest of the library community, although perhaps most libraries now using Koha are unconcerned about having their own Z39.50 server. I hope that I am mistaken, but my investigation thus far leads me to doubt that JZKit is a sufficient replacement for Zebra without much more work, which would cost something significant. See what I reported about JZKit in section 3 of my first post in this thread, http://lists.koha-community.org/pipermail/koha-devel/2010-October/034468.html .

My information about JZKit leads me to believe that we should be working with Index Data on using Zebra as a Z39.50 server alongside Solr/Lucene, as they are already working on Solr/Lucene support and certainly already provide a sophisticated Z39.50 server.

Furthermore, if we remove CCL support from Search.pm, we do not remove the need to support Z39.50/SRU queries. We still need to support Z39.50/SRU queries for returning result sets from remote systems, both in user queries for resources outside the local catalogue and in copy cataloguing (a minimal client sketch follows at the end of this message). I will be more complete about the continued importance of Z39.50/SRU later.

4. IMPLEMENTATION QUESTION.

Henri-Damien Laurent raised an issue with DOM based indexing in point 8 of his post starting this thread. Is there an objection to using XPath indexing in general, or to its use with Solr/Lucene? What is that objection, if any?

5. IMPLEMENTATION SUGGESTION.

I think that eventually we will need distinct record designs optimised specifically for the distinct functions of editing record content, indexing, and display. For the purpose of the indexing discussion, and so as not to create too many problems at once, we could confine consideration to record design optimised for indexing. The MARC record, whether in MARC format or MARCXML, is poorly suited for any purpose other than as a somewhat antiquated record exchange format which is accepted throughout the library community.
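As a postscript to point 3, here is a minimal sketch of the client side use which cannot be abandoned: fetching a record from a remote Z39.50 target for copy cataloguing, again with the ZOOM Perl bindings. The target address and ISBN are merely examples, not a recommendation:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use ZOOM;
    use MARC::Record;

    # Example target only; check the target's current address and port.
    my $conn = ZOOM::Connection->new('z3950.loc.gov:7090/voyager');
    $conn->option( preferredRecordSyntax => 'usmarc' );

    # Bib-1 use attribute 7 = ISBN; the ISBN itself is an example.
    my $rs = $conn->search_pqf('@attr 1=7 9780140449136');

    if ( $rs->size() ) {
        my $record = MARC::Record->new_from_usmarc( $rs->record(0)->raw() );
        print $record->title(), "\n";
    }

    $conn->destroy();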
Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY 10003
USA
http://www.agogme.com
+1 212-674-3783

From krichel at openlib.org Tue Oct 12 14:41:53 2010
From: krichel at openlib.org (Thomas Krichel)
Date: Tue, 12 Oct 2010 14:41:53 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CB2B35D.7090802@tamil.fr>
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA9AEBE.2040505@alinto.com> <4CA9DEDE.1040106@gmail.com> <4CB2B35D.7090802@tamil.fr>
Message-ID: <20101012124153.GA30122@openlib.org>

Frederic Demians writes:

>> Just as a side note, zebra3 should be based on solr. So yet
>> another point for solr.
>
> So, isn't it in the best interest of the Koha community to wait for
> Index Data to integrate Solr into Zebra, and in the meantime to switch
> the Koha-Zebra interface from GRS1 to DOM in order to gain in
> granularity and customizability?

Yes. In any case, the Zebra documentation notes that GRS1 has been improved upon and replaced by DOM.

> And, as suggested by Thomas Dukleth, Koha supporters could directly
> sponsor Index Data to speed up their work and give it a Koha
> 'direction'. Who better than Index Data has deep knowledge of
> search engine technology in the library arena?

And to appear credible to them, wouldn't it be best to adopt their recent technologies before asking them to develop new ones?

Cheers,

Thomas Krichel http://openlib.org/home/krichel
http://authorclaim.org/profile/pkr1
skype: thomaskrichel

From chris.nighswonger at gmail.com Tue Oct 12 14:47:27 2010
From: chris.nighswonger at gmail.com (Christopher Nighswonger)
Date: Tue, 12 Oct 2010 08:47:27 -0400
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <20101012124153.GA30122@openlib.org>
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA9AEBE.2040505@alinto.com> <4CA9DEDE.1040106@gmail.com> <4CB2B35D.7090802@tamil.fr> <20101012124153.GA30122@openlib.org>
Message-ID:

On Tue, Oct 12, 2010 at 8:41 AM, Thomas Krichel wrote:
>
> Yes. In any case, the Zebra documentation notes that GRS1
> has been improved upon and replaced by DOM.
>
>> And, as suggested by Thomas Dukleth, Koha supporters could directly
>> sponsor Index Data to speed up their work and give it a Koha
>> 'direction'. Who better than Index Data has deep knowledge of
>> search engine technology in the library arena?
>
> And to appear credible to them, wouldn't it be best to adopt
> their recent technologies before asking them to develop
> new ones?

When running Koha's Makefile.PL, DOM is the default choice for Zebra and has been for quite some time.

Kind Regards,
Chris

From kohadevel at agogme.com Tue Oct 12 14:48:39 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Tue, 12 Oct 2010 12:48:39 -0000 (UTC)
Subject: [Koha-devel] MARC record size limit
Message-ID: <01c73f7770978873e50aaa6d2996374f.squirrel@wmail.agogme.com>

Reply inline:

Original Subject: [Koha-devel] Search Engine Changes : let's get some solr

On Mon, October 4, 2010 08:10, LAURENT Henri-Damien wrote:

[...]

> zebra is fast and embeds a native z3950 server. But it also has some major
> drawbacks we have to cope with in our everyday life, making it quite
> difficult to maintain.

[...]

> I think that everyone agrees that we have to refactor C4::Search.
> Indeed, the query parser is not able to manage independently all the
> configuration options. And the usage of usmarc as the internal format for
> biblios comes with a serious limitation of 9999 bytes, which for big
> biblios with many items is not enough.
How do MARC limitations on record size relate to Solr/Lucene indexing, or to Zebra indexing, which lacks Solr/Lucene support in the current version?

How does BibLibre intend to fix the limitation on the size of bibliographic records, whether as part of its work on record indexing and retrieval in Koha or in some parallel work?

> BibLibre investigated a catalogue based on solr.
> A University in France contracted us for that development.
> This University is in relation with the whole community here in France, and
> solr will certainly be adopted by libraries France wide.
> We are planning to release the code on our git early next spring
> and rebase on whatever Koha version will be released at that time, 3.4 or
> 3.6.
>
> Why?
>
> Solr indexes data over HTTP.
> It can provide fuzzy search, search on synonyms, and suggestions.
> It can provide facet search and stemming.
> utf8 support is embedded.
> The community is impressively reactive, numerous, and efficient.
> And the documentation is very good and exhaustive.
>
> You can see the results on solr.biblibre.com and
> catalogue.solr.biblibre.com
>
> http://catalogue.solr.biblibre.com/cgi-bin/koha/opac-search.pl?q=jean
> http://solr.biblibre.com/cgi-bin/koha/admin/admin-home.pl
> you can log in there with the demo/demo login/password
>
> http://solr.biblibre.com/cgi-bin/koha/solr/indexes.pl
> is the page where people can manage their indexes and links.
>
> a) Librarians can define their own indexes, and there is a plugin that
> fetches data from rejected authorities and from authorised_values (that
> could/should have been achieved with zebra, but only with major work on
> xslt).
>
> b) The line count of C4/Search.pm could be shrunk ten times.
> You can test from the poc_solr branch on
> git://git.biblibre.com/koha_biblibre.git
> But you have to install solr.
>
> Any feedback/idea welcome.

[...]

Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY 10003
USA
http://www.agogme.com
+1 212-674-3783

From krichel at openlib.org Tue Oct 12 14:51:29 2010
From: krichel at openlib.org (Thomas Krichel)
Date: Tue, 12 Oct 2010 14:51:29 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To:
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA9AEBE.2040505@alinto.com> <4CA9DEDE.1040106@gmail.com> <4CB2B35D.7090802@tamil.fr> <20101012124153.GA30122@openlib.org>
Message-ID: <20101012125129.GD30122@openlib.org>

Christopher Nighswonger writes:

> When running Koha's Makefile.PL, DOM is the default choice for Zebra
> and has been for quite some time.

Oh good, because from reading the comments here I got the impression that it was not.

Cheers,

Thomas Krichel http://openlib.org/home/krichel
http://authorclaim.org/profile/pkr1
skype: thomaskrichel

From laurenthdl at alinto.com Tue Oct 12 15:11:18 2010
From: laurenthdl at alinto.com (LAURENT Henri-Damien)
Date: Tue, 12 Oct 2010 15:11:18 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To:
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA9AEBE.2040505@alinto.com> <4CA9DEDE.1040106@gmail.com> <4CB2B35D.7090802@tamil.fr> <20101012124153.GA30122@openlib.org>
Message-ID: <4CB45E76.9030007@alinto.com>

On 12/10/2010 14:47, Christopher Nighswonger wrote:
> On Tue, Oct 12, 2010 at 8:41 AM, Thomas Krichel wrote:
>>
>> Yes. In any case, the Zebra documentation notes that GRS1
>> has been improved upon and replaced by DOM.
>>
>>> And, as suggested by Thomas Dukleth, Koha supporters could directly
>>> sponsor Index Data to speed up their work and give it a Koha
>>> 'direction'. Who better than Index Data has deep knowledge of
>>> search engine technology in the library arena?
>>
>> And to appear credible to them, wouldn't it be best to adopt
>> their recent technologies before asking them to develop
>> new ones?
>
> When running Koha's Makefile.PL, DOM is the default choice for Zebra
> and has been for quite some time.
>
> Kind Regards,

Well, actually DOM has been the default choice for the Zebra authority server. For the biblio server, it is still GRS1 as far as I know.

Friendly.

--
Henri-Damien LAURENT

From frederic at tamil.fr Tue Oct 12 15:29:01 2010
From: frederic at tamil.fr (Frederic Demians)
Date: Tue, 12 Oct 2010 15:29:01 +0200
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To:
References: <4CA98C01.8080709@biblibre.com> <4CA998AF.7040204@cilea.it> <4CA9AEBE.2040505@alinto.com> <4CA9DEDE.1040106@gmail.com> <4CB2B35D.7090802@tamil.fr> <20101012124153.GA30122@openlib.org>
Message-ID: <4CB4629D.5030905@tamil.fr>

On 12/10/10 14:47, Christopher Nighswonger wrote:
> When running Koha's Makefile.PL, DOM is the default choice for Zebra
> and has been for quite some time.

It isn't available for UNIMARC, which uses GRS1 for biblio and authority records. For MARC21, DOM is used only for authority records; biblio records still use the GRS1 Record Model.

In theory--and I understand Henri-Damien Laurent's frustration at reconciling theory and practice with Zebra--the DOM Record Model should allow a lot of things required by Koha to be done via successive XSLT transformations: stripping out title leading articles marked by NSB/NSE characters for the title sort index; Unicode normalization; facets (?)...

It would be a great pity for Index Data to lose Koha. We need help from Index Data to see if it's possible to keep Zebra in Koha...

--
Frédéric

From glawson at rhcl.org Tue Oct 12 17:52:00 2010
From: glawson at rhcl.org (glawson)
Date: Tue, 12 Oct 2010 10:52:00 -0500
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <20101010223750.GH20177@rorohiko>
References: <4CA98C01.8080709@biblibre.com> <82b2169ea9cc89b507c717da40b07801.squirrel@zephyrus.sp> <20101010223750.GH20177@rorohiko>
Message-ID: <4CB48420.5090303@rhcl.org>

http://opac.rhcl.org

I have some really good test data to illustrate my point, but I don't have the data here with me.

Greg

On 10/10/2010 05:37 PM, Chris Cormack wrote:
> Greg,
>
> I have noticed a bug with C4/Search such that relevancy isn't
> actually being used. Is your OPAC live? If you send me a URL,
> I can test my theory on your OPAC.
>
> Chris
[...]

--
Greg Lawson
Rolling Hills Consolidated Library
1912 N. Belt Highway
St. Joseph, MO 64506

From kohadevel at agogme.com Tue Oct 12 18:11:09 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Tue, 12 Oct 2010 16:11:09 -0000 (UTC)
Subject: [Koha-devel] Search Engine Changes : let's get some solr
Message-ID:

Reply inline:

On Tue, October 12, 2010 12:41, Thomas Krichel wrote:
>
> Frederic Demians writes:
>>
>>> Just as a side note, zebra3 should be based on solr. So yet
>>> another point for solr.
>>
>> So, isn't it in the best interest of the Koha community to wait for
>> Index Data to integrate Solr into Zebra, and in the meantime to switch
>> the Koha-Zebra interface from GRS1 to DOM in order to gain in
>> granularity and customizability?
>
> Yes. In any case, the Zebra documentation notes that GRS1
> has been improved upon and replaced by DOM.

1. WORK WHICH INDEX DATA IS DOING ALREADY.

>> And, as suggested by Thomas Dukleth, Koha supporters could directly
>> sponsor Index Data to speed up their work and give it a Koha
>> 'direction'. Who better than Index Data has deep knowledge of
>> search engine technology in the library arena?
>
> And to appear credible to them, wouldn't it be best to adopt
> their recent technologies before asking them to develop
> new ones?

If the Koha community would work with Index Data to support Solr/Lucene, we would not necessarily be asking Index Data to develop any new technologies which they had not already been intending to develop themselves. Index Data has added Solr/Lucene support to several products for querying Solr/Lucene servers and integrating the result sets with Z39.50/SRU and other searches. See http://www.indexdata.com/blog/2010/09/solr-support-zoom-pazpar2-and-masterkey and http://www.indexdata.com/news/2010/10/yaz-411-available-now-solr-support . There may be some releases which have not yet been made public; I did not check all the release notes.

2. WORK WHICH INDEX DATA MIGHT DO.

I enquired with Index Data about the prospect of Solr/Lucene support in Zebra. Index Data has experimented with Solr/Lucene based indexing, as well as other options which might be used in a next generation server for Zebra 3. A next generation version of Zebra would need some commitment of funding for development from interested parties, which has apparently not been a priority for their customers most recently.
However, given the work already done for experimentation, the funding needed might be more modest than I would otherwise have expected. There may be other, more modest options for developing support for Zebra as a Z39.50/SRU server which might be independent of Zebra as a complete indexing server, if that is possible. What may be done with Zebra would depend upon the level of importance which the Koha community attaches to having a good Z39.50/SRU server. As I previously stated, I speculate that most libraries using Koha care very little about having their own Z39.50/SRU server. Again, I will explain later why supporting Z39.50/SRU remains a necessity for libraries.

3. WORK FROM KNOWLEDGE INTEGRATION.

I have not yet contacted Knowledge Integration to learn what is required to obtain any useful documentation for JZKit, and to learn whether its Z39.50/SRU feature set is as limited as it appears it may be from my examination of the source code and configuration files.

4. CLIENT VS SERVER SUPPORT FOR Z39.50/SRU.

I do not think that client side support for Z39.50/SRU is enough. However, there is a potential for regression of both client and server support for Z39.50 when rewriting Search.pm without working with Index Data, if they are the only company supporting the requisite software. Knowledge Integration seems to depend upon Index Data software on the client side, but perhaps Knowledge Integration has some additional options to provide. I will enquire.

Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY 10003
USA
http://www.agogme.com
+1 212-674-3783

From henridamien.laurent at biblibre.com Tue Oct 12 18:20:58 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Tue, 12 Oct 2010 18:20:58 +0200
Subject: [Koha-devel] MARC record size limit
In-Reply-To: <01c73f7770978873e50aaa6d2996374f.squirrel@wmail.agogme.com>
References: <01c73f7770978873e50aaa6d2996374f.squirrel@wmail.agogme.com>
Message-ID: <4CB48AEA.3050901@biblibre.com>

On 12/10/2010 14:48, Thomas Dukleth wrote:
> Reply inline:
>
> Original Subject: [Koha-devel] Search Engine Changes : let's get some solr
>
> On Mon, October 4, 2010 08:10, LAURENT Henri-Damien wrote:
>
> [...]
>
>> I think that everyone agrees that we have to refactor C4::Search.
>> Indeed, the query parser is not able to manage independently all the
>> configuration options. And the usage of usmarc as the internal format for
>> biblios comes with a serious limitation of 9999 bytes, which for big
>> biblios with many items is not enough.
>
> How do MARC limitations on record size relate to Solr/Lucene indexing, or to
> Zebra indexing, which lacks Solr/Lucene support in the current version?

Koha is now using the iso2709 returned from Zebra in order to display result lists. The problem is that if Zebra returns only part of the biblio, and/or MARC::Record is not able to parse the whole data, then the biblio is not displayed. We have biblio records which contain more than 1000 items, and MARC::Record/MARC::File::XML fails to parse them.

So this is a real issue.

> How does BibLibre intend to fix the limitation on the size of
> bibliographic records, whether as part of its work on record indexing and
> retrieval in Koha or in some parallel work?
Solr/Lucene can return indexed data, and that can be used to display the desired data. Or we could do the same as we do with Zebra:
- store the data record (the format could be iso2709, MARCXML, or YAML)
- use that for display.
Or we could use GetBiblio to get the data from the database. The problem then would be the fact that storing XML in the database is not really optimal for processing.

--
Henri-Damien LAURENT

From kohadevel at agogme.com Tue Oct 12 20:45:09 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Tue, 12 Oct 2010 18:45:09 -0000 (UTC)
Subject: [Koha-devel] MARC record size limit
In-Reply-To: <4CB48AEA.3050901@biblibre.com>
References: <01c73f7770978873e50aaa6d2996374f.squirrel@wmail.agogme.com> <4CB48AEA.3050901@biblibre.com>
Message-ID:

Reply inline:

On Tue, October 12, 2010 16:20, LAURENT Henri-Damien wrote:
> On 12/10/2010 14:48, Thomas Dukleth wrote:
[...]
> Koha is now using the iso2709 returned from Zebra in order to display
> result lists.

I recall that having Zebra return ISO2709, MARC communications format, records had the supposed advantage of a faster response time from Zebra.

> The problem is that if Zebra returns only part of the biblio, and/or
> MARC::Record is not able to parse the whole data, then the biblio is not
> displayed. We have biblio records which contain more than 1000 items,
> and MARC::Record/MARC::File::XML fails to parse them.
>
> So this is a real issue.

Ultimately, we need a specific solution to the various problems arising from storing holdings directly in the MARC bibliographic records.

>> How does BibLibre intend to fix the limitation on the size of
>> bibliographic records, whether as part of its work on record indexing and
>> retrieval in Koha or in some parallel work?
> Solr/Lucene can return indexed data, and that can be used to display
> the desired data. Or we could do the same as we do with Zebra:
> - store the data record (the format could be iso2709, MARCXML, or YAML)
> - use that for display.

If using ISO 2709, MARC communications format, how would the problem of excess record size be addressed?

> Or we could use GetBiblio to get the data from the database.
> The problem then would be the fact that storing XML in the database is
> not really optimal for processing.

I like the idea of using YAML for some purposes. As you state, previous testing showed that returning every record in a large result set from the SQL database was very inefficient compared to using the records returned as part of the response from the index server. Is there any practical way of sufficiently improving the efficiency of accessing a large set of records from the SQL database? How much might retrieving and parsing YAML records from the database help?
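On the size question itself, the structural limits are in ISO 2709: the leader reserves five digits for the total record length, and each directory entry reserves four digits for a field's length, so a biblio with very many embedded item fields can exceed what the format can express. A minimal sketch of testing whether a record survives a round trip (the file name is hypothetical, and the length test assumes an already encoded blob):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use MARC::Record;
    use MARC::File::XML;

    my $file   = MARC::File::XML->in('biblio.xml');   # hypothetical file
    my $record = $file->next();

    my $blob = $record->as_usmarc();

    # Five leader digits cap the whole record at 99999 bytes.
    warn "record exceeds the ISO 2709 length limit\n"
        if length($blob) > 99999;

    # Round-trip test: does the blob parse back unchanged?
    my $copy = eval { MARC::Record->new_from_usmarc($blob) };
    if ( $@ or !$copy ) {
        warn "record does not survive an ISO 2709 round trip: $@\n";
    }
    elsif ( $copy->as_formatted() ne $record->as_formatted() ) {
        warn "record was truncated or altered in the round trip\n";
    }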
I can imagine using XSLT to pre-process MARCXML records into an appropriate format, such as YAML with embedded HTML, pure HTML, or whatever is needed for a particular purpose, and storing the pre-processed records in appropriate special purpose columns. Real time parsing would be minimised. The OPAC result set display might use biblioitems.recordOPACDisplayBrief. The standard single record view might use biblioitems.recordOPACDisplayDetail. An ISBD card view might use biblioitems.recordOPACDisplayISBD.

[...]

Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY 10003
USA
http://www.agogme.com
+1 212-674-3783

From mjr at phonecoop.coop Wed Oct 13 02:48:56 2010
From: mjr at phonecoop.coop (MJ Ray)
Date: Wed, 13 Oct 2010 01:48:56 +0100 (BST)
Subject: [Koha-devel] Release Manager : how long (open question)
In-Reply-To: <4C61603E.8050803@biblibre.com>
Message-ID: <20101013004856.A4236501CF@nail.towers.org.uk>

Paul Poulain wrote back in August:
> On 24/07/2010 09:56, Chris Cormack wrote:
[...]
>> actually think it might be good to change the release manager often. I
>> think working on making the transition easy and making the RM's duties
>> less onerous is the way to go and then shift the duties around each 6
>> months would be good.
> yes and no.
> switching RM will probably need a few weeks, if not a few months: time
> to elect the RM, for the RM to propose new workflows if he wants, new
> coding rules, and all those things.
> So switching every 6 months looks too often to me. I think 12 months is
> a minimum.
>
> any other opinion ?

I think 6 months is fine as an ambition. Once we have a workflow that works for us and produces releases about that often, I think the community should be suspicious of RMs who want to change too much. But I won't be surprised if it takes a few loops to reach it.

Belatedly,
--
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Webmaster, Debian Developer, Past Koha RM, statistician, former lecturer.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for various work http://www.software.coop/products/

From ian.walls at bywatersolutions.com Wed Oct 13 18:42:32 2010
From: ian.walls at bywatersolutions.com (Ian Walls)
Date: Wed, 13 Oct 2010 12:42:32 -0400
Subject: [Koha-devel] ByWater Solutions and Software.coop Collaborating on Koha EDI Support
Message-ID:
URL: From reedwade at gmail.com Wed Oct 13 23:33:12 2010 From: reedwade at gmail.com (Reed Wade) Date: Thu, 14 Oct 2010 10:33:12 +1300 Subject: [Koha-devel] ByWater Solutions and Software.coop Collaborating on Koha EDI Support In-Reply-To: References: Message-ID: 2010/10/14 Ian Walls : > Dear Koha Community, > > It is our pleasure to announce that software.coop of Somerset, England and > ByWater > Solutions LLC of Santa Barbara CA, USA, are partnering to bring Electronic > Data > Exchange (EDI) to Koha. Very cool to see cooperative activities of this type. Makes it fun to be here. thanks, -reed From cnighswonger at foundations.edu Wed Oct 13 23:46:27 2010 From: cnighswonger at foundations.edu (Chris Nighswonger) Date: Wed, 13 Oct 2010 17:46:27 -0400 Subject: [Koha-devel] [Koha] ByWater Solutions and Software.coop Collaborating on Koha EDI Support In-Reply-To: References: Message-ID: On Wed, Oct 13, 2010 at 5:33 PM, Reed Wade wrote: > 2010/10/14 Ian Walls : > > Dear Koha Community, > > > > It is our pleasure to announce that software.coop of Somerset, England > and > > ByWater > > Solutions LLC of Santa Barbara CA, USA, are partnering to bring > Electronic > > Data > > Exchange (EDI) to Koha. > > > Very cool to see cooperative activities of this type. Makes it fun to be > here. > Very nice indeed! I hope others will pick up on this type of cooperation. There are a number of outstanding feature sets in various repos at the moment which could benefit from this sort of thing. Kind Regards, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From jransom at library.org.nz Wed Oct 13 23:53:44 2010 From: jransom at library.org.nz (Joann Ransom) Date: Thu, 14 Oct 2010 10:53:44 +1300 Subject: [Koha-devel] [Koha] ByWater Solutions and Software.coop Collaborating on Koha EDI Support In-Reply-To: References: Message-ID: This is brilliant news ! 2010/10/14 Ian Walls > Dear Koha Community, > > > It is our pleasure to announce that software.coop of Somerset, England and > ByWater > Solutions LLC of Santa Barbara CA, USA, are partnering to bring Electronic > Data > Exchange (EDI) to Koha. > > EDI will greatly increase the functionality of the acquisitions module of > Koha by > allowing a direct electronic exchange from one entity to another, making > processes such > as billing and the transmission of purchase orders seamless. It may > optionally accept > bibliographic records from a vendor, making cataloguing quicker. It was > described in the > journal "Program: electronic library and information systems" Vol. 44 No. > 3, 2010. > > The EDI code, which was written by software.coop, will be rebased by > ByWater > Solutions on the current head of the Koha project. The rebased code will be > submitted for > possible inclusion in the 3.4 release of Koha. > > > Cheers, > > -Ian > > -- > Ian Walls > Lead Development Specialist > ByWater Solutions > Phone # (888) 900-8944 > http://bywatersolutions.com > ian.walls at bywatersolutions.com > Twitter: @sekjal > > _______________________________________________ > Koha mailing list http://koha-community.org > Koha at lists.katipo.co.nz > http://lists.katipo.co.nz/mailman/listinfo/koha > > -- Joann Ransom RLIANZA Head of Libraries, Horowhenua Library Trust. *Q: Why is this email three sentences or less? A: http://three.sentenc.es* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steve at sentosatech.com Thu Oct 7 19:16:41 2010 From: steve at sentosatech.com (Steven Saunders) Date: Thu, 7 Oct 2010 11:16:41 -0600 Subject: [Koha-devel] KHOA developer needed (work from home) Message-ID: <001801cb6643$6989ed80$3c9dc880$@com> Greetings, I am looking for a strong Perl developer to work on a KOHA based book/document imaging and library management system. The customer is headquartered near the East coast and this is a work from home remote development position. Both contract or employment arrangements are possible. TECHNICAL QUALIFICATIONS - Strong Perl Developer, mySQL, and Linux environments - 3 to 5 years experience preferably with larger applications - Experience in the library (electronic book management) industry a plus, but not a requirement - Experience with KOHA a plus, but not a requirement - Remote OK, employee or contractor OK PERSONAL QUALIFICATIONS These are at least as important as your technical qualifications (and the basis my business success) - Complete integrity, no exceptions - Strong problem solver who is highly productive and who always delivers - Customer focused, easy to work, no drama If this sounds like you please let me know, I would love to work with you Steve Saunders Steven Saunders Founder and Principal Sentosa Technology Consultants Cell 303.809.8043 Fax 303.834.8853 Email steve at sentosatech.com Web www.sentosatech.com AIM SteveAtSentosa cid:image001.jpg at 01C9EFFC.ECB10130 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 2693 bytes Desc: not available URL: From dpavlin at rot13.org Thu Oct 14 17:48:40 2010 From: dpavlin at rot13.org (Dobrica Pavlinusic) Date: Thu, 14 Oct 2010 17:48:40 +0200 Subject: [Koha-devel] CloneSubfields and problems with it Message-ID: <20101014154840.GA30706@rot13.org> We are running latest git in our production, and we have stumbled upon variation of bug reported at: http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3264 We are having problems with cataloguing/addbiblio.tmpl which doesn't correctly clone subfields (it always clone first one) and after removing one subfield whole JavaScript breaks. After spending a day looking at current code, I decided to try to re-implement it. My motivation is following: - currently code is injected in html page. This makes page load slow, because browser has to stop everything until it parse JavaScript - there is variation of same methods in authorities/authorities.tmpl but it has it's own problems. Splitting code into separate, page-independent reusable code would benefit both of problems To do that I moved selected element into anchor of link, so it can be accessed (and updated) easily from JavaScript. 
The result is the following code:

    function _subfield_id(a) {
        console.debug( '_subfield_id', a );
        return a.href.substr( a.href.indexOf('#') + 1 );
    }

    function clone_subfield( from_a ) {
        console.debug( 'clone_subfield', from_a );
        var subfield_id = _subfield_id(from_a);
        var $original = $('#' + subfield_id);
        var $clone = $original.clone();
        var new_key = CreateKey();
        $clone
            .attr('id', subfield_id + new_key )
            .find('input,select,textarea').each( function() {
                $(this)
                    .attr('id',   function() { return this.id   + new_key })
                    .attr('name', function() { return this.name + new_key })
                ;
            })
            .end()
            // 'for' is a reserved word, so the DOM property is htmlFor
            .find('label').attr('for', function() { return this.htmlFor + new_key })
            .end()
            .find('.subfield_controls > a').each( function() {
                this.href = '#' + subfield_id + new_key;
                console.debug( 'fix href', this.href );
            })
        ;
        console.debug( 'clone', $clone );
        $clone.insertAfter( $original );
    }

    function remove_subfield( from_a ) {
        console.debug( 'remove_subfield', from_a );
        var subfield_id = _subfield_id(from_a);
        $('#' + subfield_id).remove();
    }

with this HTML change:

    <a href="#subfield_id" onclick="clone_subfield(this); return false;">Clone</a>
    <a href="#subfield_id" onclick="remove_subfield(this); return false;">Delete</a>

For comparison, take a look at the 77 lines of CloneSubfield alone in the current Koha code.

However, I'm very aware that my solution is very jQuery-esque, and might be too much for people who haven't used jQuery before. I also used the $-prefix notation in JavaScript for jQuery objects, and that might also be confusing.

I would also love to use event bubbling via .live instead of attaching all the click handlers, but I have to check how reliably it works with the jQuery 1.3.2 which Koha uses.

Would such a change be accepted into Koha, and does it make sense to pursue this refactoring?

If you want to follow along with this experiment, there is a branch for it in our git:

http://git.rot13.org/?p=koha.git;a=shortlog;h=refs/heads/clone-subfields-jquery

--
Dobrica Pavlinusic    2share!2flame    dpavlin at rot13.org
Unix addict. Internet consultant.      http://www.rot13.org/~dpavlin

From chris at bigballofwax.co.nz Thu Oct 14 21:18:16 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Fri, 15 Oct 2010 08:18:16 +1300
Subject: [Koha-devel] CloneSubfields and problems with it
In-Reply-To: <20101014154840.GA30706@rot13.org>
References: <20101014154840.GA30706@rot13.org>
Message-ID:

On 15 October 2010 04:48, Dobrica Pavlinusic wrote:
> We are running the latest git in our production, and we have stumbled upon
> a variation of the bug reported at:
>
> http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3264
[...]
> Would such a change be accepted into Koha, and does it make sense to
> pursue this refactoring?

There's nothing in it that sets off my alarm bells. I would of course ask people like Owen, who do far more front end work than me, to check it. But certainly it is worth pursuing.

Anything you can do to make that whole page load faster would be gratefully received also.

Chris
> http://www.rot13.org/~dpavlin
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>

From gmcharlt at gmail.com Fri Oct 15 00:15:12 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Thu, 14 Oct 2010 18:15:12 -0400
Subject: [Koha-devel] CloneSubfields and problems with it
In-Reply-To: <20101014154840.GA30706@rot13.org>
References: <20101014154840.GA30706@rot13.org>
Message-ID: 

Hi,

On Thu, Oct 14, 2010 at 11:48 AM, Dobrica Pavlinusic wrote:
> However, I'm very aware that my solution is very jQuery-esque, and might
> be too much for people who haven't used jQuery before.

Being "jQuery-esque" isn't a problem at all - Koha uses jQuery anyway, and using a framework is almost always better than dealing with hand-written JavaScript code to walk the DOM.

Regards,

Galen
--
Galen Charlton gmcharlt at gmail.com

From chris.nighswonger at gmail.com Fri Oct 15 00:20:03 2010
From: chris.nighswonger at gmail.com (Christopher Nighswonger)
Date: Thu, 14 Oct 2010 18:20:03 -0400
Subject: [Koha-devel] CloneSubfields and problems with it
In-Reply-To: 
References: <20101014154840.GA30706@rot13.org>
Message-ID: 

On Thu, Oct 14, 2010 at 3:18 PM, Chris Cormack wrote:
>
>>
>> Would such a change be accepted into Koha, and does it make sense to
>> pursue this refactoring?
>
> There's nothing in it that sets off my alarm bells. I would of course
> ask people like Owen, who do far more front end work than me, to check
> it. But certainly it is worth pursuing.
>
> Anything you can do to make that whole page load faster would be
> gratefully received also

I have to agree with Chris here; anything which improves the speed of this page would be a welcome improvement.

Kind Regards,
Chris

From mjr at phonecoop.coop Fri Oct 15 11:05:27 2010
From: mjr at phonecoop.coop (MJ Ray)
Date: Fri, 15 Oct 2010 10:05:27 +0100 (BST)
Subject: [Koha-devel] [Koha] 3.2.0 release candidate available
In-Reply-To: 
Message-ID: <20101015090527.67D45F72F6@nail.towers.org.uk>

Galen Charlton wrote:
> Pending confirmation of successful installations and upgrades, this
> will become the general release of 3.2.0. At this point, string
> changes, enhancements, and non-super-critical bugfixes will not be
> accepted for 3.2.0; only truly critical bugfixes, improvements to
> upgrade and installation processes and documentation, and translations
> will be pushed.

If you'd like to see what to fix to help bring about a general release, there's a bug progress chart link in the last-but-one block of http://wiki.koha-community.org/wiki/Roadmap_to_3.2

At the moment, there are six blockers. This one needs someone comfortable with the translations:

4472 img tags in xslt broken after automatic translation

These three are database-related:

4141 reconcile 3.0.x and HEAD database updates for 3.2.0
4310 No Migration for budgets from 3.0 to 3.2
4972 updatedatabase fails to create aqbasketgroups and aqcontract

And the following two are ASSIGNED:

3536 Checked In item requiring transfer does not consistently ...
3756 new sys prefs - no way to add a new local use preference

A quote from the BTS: "Communicate! It can't make things any worse."

Hope that helps,
--
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Past Koha Release Manager (2.0), LMS programmer, statistician, webmaster.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for Koha work http://www.software.coop/products/koha

From oleonard at myacpl.org Fri Oct 15 16:09:23 2010
From: oleonard at myacpl.org (Owen Leonard)
Date: Fri, 15 Oct 2010 10:09:23 -0400
Subject: [Koha-devel] CloneSubfields and problems with it
In-Reply-To: <20101014154840.GA30706@rot13.org>
References: <20101014154840.GA30706@rot13.org>
Message-ID: 

> - currently the code is injected into the HTML page. This makes the page
> load slow, because the browser has to stop everything until it parses
> the JavaScript

Agreed. However, much of the embedded script is reliant on TMPL variables, so it's not possible to eliminate it from the template.

> However, I'm very aware that my solution is very jQuery-esque, and might
> be too much for people who haven't used jQuery before.

As others have said, we use jQuery in many places in Koha, so there's no problem here.

> I would also love to use event bubbling using .live instead of attaching
> all the click handlers, but I have to check how reliably it works with
> jQuery 1.3.2, which Koha uses.

Upgrading the version of jQuery Koha uses should be a to-do item for 3.4.

> Would such a change be accepted into Koha, and does it make sense to
> pursue this refactoring?

Anything that will improve the performance of this page will be very much welcomed.

Thanks for sharing,

-- Owen
--
Web Developer, Athens County Public Libraries, http://www.myacpl.org

From bibliwho at gmail.com Fri Oct 15 16:37:52 2010
From: bibliwho at gmail.com (Cab Vinton)
Date: Fri, 15 Oct 2010 10:37:52 -0400
Subject: [Koha-devel] Batch processes on records?
In-Reply-To: 
References: 
Message-ID: 

I've just seen a list of new features for 3.2 -- http://bywatersolutions.com/?p=686 -- & I'm curious whether there's any provision for batch edits or deletion of Records.

I see that there's batch edit/deletion of Items, but would love to have the same ability for Records. We're hosted & don't currently have command-line access.

Thanks in advance,

Cab Vinton, Director
Sanbornton Public Library
Sanbornton, NH

From paul.poulain at biblibre.com Fri Oct 15 17:25:02 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Fri, 15 Oct 2010 17:25:02 +0200
Subject: [Koha-devel] Batch processes on records?
In-Reply-To: 
References: 
Message-ID: <4CB8724E.8050207@biblibre.com>

On 15/10/2010 16:37, Cab Vinton wrote:
> I've just seen a list of new features for 3.2 --
> http://bywatersolutions.com/?p=686 -- & I'm curious whether there's
> any provision for batch edits or deletion of Records.
>
> I see that there's batch edit/deletion of Items, but would love to
> have the same ability for Records.
>
Hi Cab,

This feature is in BibLibre's portfolio. It's already a work in progress on our git. See http://wiki.koha-community.org/wiki/Batch_Modification_Biblio_Record_Level (not really much information there, I agree ;-) )

--
Paul POULAIN http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From fridolyn.somers at gmail.com Fri Oct 15 17:38:53 2010
From: fridolyn.somers at gmail.com (Fridolyn SOMERS)
Date: Fri, 15 Oct 2010 17:38:53 +0200
Subject: [Koha-devel] CloneSubfields and problems with it
In-Reply-To: <20101014154840.GA30706@rot13.org>
References: <20101014154840.GA30706@rot13.org>
Message-ID: 

Hi,

Note that I proposed a patch for bug 3264.

Indeed, a jQuery-like solution is far better; I use it for 99% of my JavaScript as well.

Don't forget to also change the Authorities MARC editor. It is nearly the same code as for Biblios.
I don't know why there are differences.

Regards,
--
Fridolyn SOMERS
ICT engineer
PROGILONE SAS - Lyon - France
fridolyn.somers at gmail.com

On Thu, Oct 14, 2010 at 5:48 PM, Dobrica Pavlinusic wrote:
> We are running the latest git in our production, and we have stumbled upon
> a variation of the bug reported at:
>
> http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3264
>
> We are having problems with cataloguing/addbiblio.tmpl, which doesn't
> correctly clone subfields (it always clones the first one), and after
> removing one subfield the whole JavaScript breaks.
>
> After spending a day looking at the current code, I decided to try to
> re-implement it. My motivation is the following:
>
> - currently the code is injected into the HTML page. This makes the page
>   load slow, because the browser has to stop everything until it parses
>   the JavaScript
> - there is a variation of the same methods in authorities/authorities.tmpl,
>   but it has its own problems. Splitting the code out into separate,
>   page-independent reusable code would address both problems
>
> To do that I moved the subfield id into the anchor of the link, so it can
> be accessed (and updated) easily from JavaScript. The result is the
> following code:
>
> function _subfield_id(a) {
>     console.debug( '_subfield_id', a );
>     return a.href.substr( a.href.indexOf('#') + 1 );
> }
>
> function clone_subfield( from_a ) {
>     console.debug( 'clone_subfield', from_a );
>     var subfield_id = _subfield_id(from_a);
>     var $original = $('#' + subfield_id);
>     var $clone = $original.clone();
>     var new_key = CreateKey();
>
>     $clone
>         .attr('id', subfield_id + new_key )
>         .find('input,select,textarea').each( function() {
>             $(this)
>                 .attr('id',   function() { return this.id   + new_key })
>                 .attr('name', function() { return this.name + new_key })
>             ;
>         })
>         .end()
>         .find('label').attr('for', function() { return this.htmlFor + new_key })
>         .end()
>         .find('.subfield_controls > a').each( function() {
>             this.href = '#' + subfield_id + new_key;
>             console.debug( 'fix href', this.href );
>         })
>     ;
>
>     console.debug( 'clone', $clone );
>
>     $clone.insertAfter( $original );
> }
>
> function remove_subfield( from_a ) {
>     console.debug( 'remove_subfield', from_a );
>     var subfield_id = _subfield_id(from_a);
>     $('#' + subfield_id).remove();
> }
>
> with this HTML change:
>
> <a href="#subfield_..." onclick="clone_subfield(this); return
> false;">Clone <img src="..." title="Clone this subfield" /></a>
> <a href="#subfield_..." onclick="remove_subfield(this); return
> false;">Delete <img src="..." title="Delete this subfield" /></a>
>
> For comparison, take a look at the 77 lines of CloneSubfield alone in the
> current Koha code.
>
> However, I'm very aware that my solution is very jQuery-esque, and might
> be too much for people who haven't used jQuery before. I also used $o
> notation in JavaScript for jQuery objects, and that might also be
> confusing.
>
> I would also love to use event bubbling using .live instead of attaching
> all the click handlers, but I have to check how reliably it works with
> jQuery 1.3.2, which Koha uses.
>
> Would such a change be accepted into Koha, and does it make sense to
> pursue this refactoring?
>
> If you want to follow along with this experiment, there is a
> branch for it in our git:
>
> http://git.rot13.org/?p=koha.git;a=shortlog;h=refs/heads/clone-subfields-jquery
>
> --
> Dobrica Pavlinusic 2share!2flame dpavlin at rot13.org
> Unix addict. Internet consultant.
> http://www.rot13.org/~dpavlin
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bibliwho at gmail.com Fri Oct 15 17:40:28 2010
From: bibliwho at gmail.com (Cab Vinton)
Date: Fri, 15 Oct 2010 11:40:28 -0400
Subject: [Koha-devel] Batch processes on records?
In-Reply-To: <4CB8724E.8050207@biblibre.com>
References: <4CB8724E.8050207@biblibre.com>
Message-ID: 

Thank you, Paul.

Does this mean it's scheduled for the next release? 3.6?

Cab Vinton, Director
Sanbornton Public Library
Sanbornton, NH

On Fri, Oct 15, 2010 at 11:25 AM, Paul Poulain wrote:
> On 15/10/2010 16:37, Cab Vinton wrote:
>
> This feature is in BibLibre's portfolio. It's already a work in progress
> on our git.
> See
> http://wiki.koha-community.org/wiki/Batch_Modification_Biblio_Record_Level
> (not really much information there, I agree ;-) )
>
> --
> Paul POULAIN
> http://www.biblibre.com
> Expert en Logiciels Libres pour l'info-doc
> Tel : (33) 4 91 81 35 08
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>

From paul.poulain at biblibre.com Fri Oct 15 17:51:40 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Fri, 15 Oct 2010 17:51:40 +0200
Subject: [Koha-devel] Batch processes on records?
In-Reply-To: 
References: <4CB8724E.8050207@biblibre.com>
Message-ID: <4CB8788C.6080109@biblibre.com>

On 15/10/2010 17:40, Cab Vinton wrote:
> Thank you, Paul.
>
> Does this mean it's scheduled for the next release? 3.6?
>
Good question; that is related to our (community) workflow for including patches. We will work on this during KohaCon. As I said some weeks ago, we (BibLibre) have 600+ patches waiting for inclusion into the official version, due to the 3.2 feature freeze & late release (hdl & I are staying a few days in NZ after the conf to work on this with chris)

[ In fact, our git.biblibre.com is a kind of 3.4 (it's already live for 4 of our customers) ]

HTH
--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From paul.poulain at biblibre.com Fri Oct 15 18:10:33 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Fri, 15 Oct 2010 18:10:33 +0200
Subject: [Koha-devel] Batch processes on records?
In-Reply-To: <4CB8788C.6080109@biblibre.com>
References: <4CB8724E.8050207@biblibre.com> <4CB8788C.6080109@biblibre.com>
Message-ID: <4CB87CF9.3000507@biblibre.com>

On 15/10/2010 17:51, Paul Poulain wrote:
> [ In fact, our git.biblibre.com is a kind of 3.4 (it's already live for
> 4 of our customers) ]
>
Rewrite of this sentence: [ In fact, all the features on git.biblibre.com are quite stable (they're already live for 4 of our customers) ]

--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From gmcharlt at gmail.com Sun Oct 17 22:04:42 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Sun, 17 Oct 2010 16:04:42 -0400
Subject: [Koha-devel] Koha documentation patches list?
Message-ID: 

Hi,

Now that we're starting to get patches to the Koha manual, I propose that we create a separate documentation patches list (or perhaps a general Koha documentation mailing list that also accepts patches).
That way documentation patches don't get hidden in the general flow of Koha software patches, but remain available for community review and discussion when they are submitted.

Regards,

Galen
--
Galen Charlton gmcharlt at gmail.com

From laurenthdl at alinto.com Sun Oct 17 22:20:10 2010
From: laurenthdl at alinto.com (LAURENT Henri-Damien)
Date: Sun, 17 Oct 2010 22:20:10 +0200
Subject: [Koha-devel] Koha documentation patches list?
In-Reply-To: 
References: 
Message-ID: <4CBB5A7A.5010900@alinto.com>

On 17/10/2010 22:04, Galen Charlton wrote:
> Hi,
>
> Now that we're starting to get patches to the Koha manual, I propose
> that we create a separate documentation patches list (or perhaps a
> general Koha documentation mailing list that also accepts patches).
> That way documentation patches don't get hidden in the general flow of
> Koha software patches, but remain available for community review and
> discussion when they are submitted.
>
> Regards,
>
> Galen

Maybe this would be worth doing.

Any suggestions on a name for it? Koha-Documentation-patch? Koha-Documentation?

However, I wonder whether this list should contain only patches, or could have any thread related to documentation. Should this list send a monthly reminder about how to send a patch, how to contribute to the doc, how to post new stuff? In that case, Nicole, would you mind writing something up?

Or should we stick to one list with patches only, and have different tags? For instance:
[3.0.x]
[3.2]
[New/FT/test]
[DOC]

Since DOC is related to devs too, should we have different channels? This is my concern.

--
Henri-Damien LAURENT

From nengard at gmail.com Sun Oct 17 22:23:35 2010
From: nengard at gmail.com (Nicole Engard)
Date: Sun, 17 Oct 2010 16:23:35 -0400
Subject: [Koha-devel] Koha documentation patches list?
In-Reply-To: <4CBB5A7A.5010900@alinto.com>
References: <4CBB5A7A.5010900@alinto.com>
Message-ID: 

As for the list name, I'd say koha-docs (short and sweet). As for content, I think it should be only patches, and the regular list can be for documentation discussion among all other things.

I can surely write something up on how to submit patches and such to have sent to the Koha community on a regular basis - or just once in a while.

Nicole

On Sun, Oct 17, 2010 at 4:20 PM, LAURENT Henri-Damien wrote:
> On 17/10/2010 22:04, Galen Charlton wrote:
>> Hi,
>>
>> Now that we're starting to get patches to the Koha manual, I propose
>> that we create a separate documentation patches list (or perhaps a
>> general Koha documentation mailing list that also accepts patches).
>> That way documentation patches don't get hidden in the general flow of
>> Koha software patches, but remain available for community review and
>> discussion when they are submitted.
>>
>> Regards,
>>
>> Galen
>
> Maybe this would be worth doing.
> Any suggestions on a name for it?
> Koha-Documentation-patch?
> Koha-Documentation?
> However, I wonder whether this list should contain only patches, or could
> have any thread related to documentation.
> Should this list send a monthly reminder about how to send a patch, how
> to contribute to the doc, how to post new stuff?
> In that case, Nicole, would you mind writing something up?
>
> Or should we stick to one list with patches only, and have different tags?
> For instance:
> [3.0.x]
> [3.2]
> [New/FT/test]
> [DOC]
>
> Since DOC is related to devs too, should we have different channels?
> This is my concern.
>
> --
> Henri-Damien LAURENT
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>

From rick at praxis.com.au Sun Oct 17 22:30:37 2010
From: rick at praxis.com.au (Rick Welykochy)
Date: Mon, 18 Oct 2010 06:30:37 +1000
Subject: [Koha-devel] Koha documentation patches list?
In-Reply-To: <4CBB5A7A.5010900@alinto.com>
References: <4CBB5A7A.5010900@alinto.com>
Message-ID: <4CBB5CED.1050402@praxis.com.au>

LAURENT Henri-Damien wrote:
> Or should we stick to one list with patches only, and have different tags?
> For instance:
> [3.0.x]
> [3.2]
> [New/FT/test]
> [DOC]
>
> Since DOC is related to devs too, should we have different channels?
> This is my concern.

If the documentation source is in the same git repository as the rest of Koha, it makes sense to include DOC. Otherwise, if the patches apply to some other source, i.e. the CMS for the web site, a separate patch list makes more sense.

cheers
rickw

--
Rick Welykochy || Praxis Services

The great advantage of being in a rut is that when one is in a rut, one knows exactly where one is. -- Arnold Bennett

From gmcharlt at gmail.com Sun Oct 17 22:45:07 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Sun, 17 Oct 2010 16:45:07 -0400
Subject: [Koha-devel] Koha documentation patches list?
In-Reply-To: <4CBB5CED.1050402@praxis.com.au>
References: <4CBB5A7A.5010900@alinto.com> <4CBB5CED.1050402@praxis.com.au>
Message-ID: 

Hi,

On Sun, Oct 17, 2010 at 4:30 PM, Rick Welykochy wrote:
> If the documentation source is in the same git repository as the rest
> of Koha, it makes sense to include DOC.

The Koha manual resides in a separate repository. See:

http://git.koha-community.org/gitweb/?p=kohadocs.git;a=summary

Regards,

Galen
--
Galen Charlton gmcharlt at gmail.com

From robin at catalyst.net.nz Sun Oct 17 23:31:57 2010
From: robin at catalyst.net.nz (Robin Sheat)
Date: Mon, 18 Oct 2010 10:31:57 +1300
Subject: [Koha-devel] CloneSubfields and problems with it
In-Reply-To: 
References: <20101014154840.GA30706@rot13.org>
Message-ID: <1287351117.20544.6.camel@zarathud>

Owen Leonard wrote on Fri 15-10-2010 at 10:09 [-0400]:
> Upgrading the version of jQuery Koha uses should be a to-do item for 3.4.

See http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=5184 :)

--
Robin Sheat
Catalyst IT Ltd. +64 4 803 2204
GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part
URL: 

From nengard at gmail.com Thu Oct 21 02:21:46 2010
From: nengard at gmail.com (Nicole Engard)
Date: Wed, 20 Oct 2010 17:21:46 -0700
Subject: [Koha-devel] November Newsletter
Message-ID: 

Hello all,

I'm thinking for the next Koha Newsletter we'll do a conference sum-up. So between the start of the conference and the 12th of November, please send me links to posts you may have written about conference sessions, links to pictures from KohaCon, and anything else conference related.

Thanks
Nicole C. Engard
From fridolyn.somers at gmail.com Thu Oct 21 10:25:16 2010
From: fridolyn.somers at gmail.com (Fridolyn SOMERS)
Date: Thu, 21 Oct 2010 10:25:16 +0200
Subject: [Koha-devel] Facets performance
Message-ID: 

Hi,

I have posted a proposed patch for Bug 3154:

http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3154

It's about the fact that facets computation is limited to the records on the search results page. I think I've found a good way to improve the facets extraction performance.

Any comment or modification is welcome.

Regards,
--
Fridolyn SOMERS
ICT engineer
PROGILONE - Lyon - France
fridolyn.somers at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris at bigballofwax.co.nz Fri Oct 22 00:47:25 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Fri, 22 Oct 2010 11:47:25 +1300
Subject: [Koha-devel] Facets performance
In-Reply-To: 
References: 
Message-ID: 

2010/10/21 Fridolyn SOMERS :
> Hi,

Hi Fridolyn

>
> I have posted a proposed patch for Bug 3154:
> http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3154
>
> It's about the fact that facets computation is limited to the records on
> the search results page.
> I think I've found a good way to improve the facets extraction performance.
>
> Any comment or modification is welcome.
>

This looks really promising. I probably won't get a chance to try it out until after Kohacon, but I'm looking forward to giving it some testing.

Thank you
Chris

From gmcharlt at gmail.com Fri Oct 22 08:49:31 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Fri, 22 Oct 2010 02:49:31 -0400
Subject: [Koha-devel] Koha 3.2.0 released
Message-ID: 

Hi,

I am pleased to announce the release of Koha 3.2.0. The package can be retrieved from:

http://download.koha-community.org/koha-3.02.00.tar.gz

You can use the following checksum and signature files to verify the download:

http://download.koha-community.org/koha-3.02.00.tar.gz.MD5
http://download.koha-community.org/koha-3.02.00.tar.gz.MD5.asc
http://download.koha-community.org/koha-3.02.00.tar.gz.sig

Here are the release notes:

RELEASE NOTES FOR KOHA 3.2.0 - 22 October 2010
========================================================================

Koha is the first free and open source software library automation package (ILS). Development is sponsored by libraries of varying types and sizes, volunteers, and support companies from around the world.

The website for the Koha project is http://koha-community.org/

Koha 3.2.0 can be downloaded from: http://download.koha-community.org/koha-3.02.00.tar.gz

Installation instructions can be found at: http://wiki.koha-community.org/wiki/Installation_Documentation

Koha 3.2.0 is a major feature release.
New features in 3.2.0
======================

ACQUISITIONS

* The acquisitions module is significantly revamped:
* Support for hierarchical funds and budgets
* Budget planning by calendar and item type
* Vendor contract periods
* Generation of PDF purchase orders
* Ability to place orders on a batch of bibliographic records imported into the catalog from a file or Z39.50 search

ADMINISTRATION

* Significant usability enhancements to the system preferences editor
* Granular permissions are now always on; the GranularPermissions system preference is consequently removed
* Many additional granular permissions are added

CATALOGING

* Bulk item editing
* Revamped inventory/stock-taking
* Ability to export bibliographic information in CSV format from the staff cart
* New quick spine label print button
* Support for temporary location and in-process item statuses
* Usability enhancements to cataloging workflow:
* Can now choose whether to edit items after saving a bib record
* Option to move an item from one bib to another
* Option to delete all items attached to a bib
* Ability to clone an item
* View bib in OPAC link from the staff interface
* Ability to merge duplicate bibliographic records from the staff lists interface

CIRCULATION

* Ability to define library transfer limits
* Email checkout slips
* Option to enable alert sounds during checkin and checkout
* Improvements in Koha's ability to express circulation policies
* Option to charge fines using suspension days instead of money
* Hold policies are now on the library/itemtype/categorycode level
* Renewal policies are now on the library/itemtype/categorycode level
* Ability to specify an expiration date for a hold request when placing it via the staff interface or OPAC
* Daily batch job to cancel expired holds
* Improvements to the interface to change the priority of hold requests for a bib in the staff interface
* New messaging system for patron records, allowing an unlimited number of patron notes to be stored and managed
* Changes to web-based self checkout
* Ability to log in automatically to self-check, allowing for unattended self-check stations
* Ability to display the patron image in self-check

OPAC

* Numerous enhancements to the bib display XML templates
* Per-patron OPAC search history, with ability for patrons to manage the retention of their search history
* Support for Syndetics, LibraryThing, and Babeltheque enhanced content
* Support for RIS and BibTeX export
* Bib details page includes which lists a bib belongs to
* Can now customize the 'search for this title in' links
* Preference to control whether patrons can change their details in the OPAC
* OPAC icon set provided by vokal

REPORTS

* Guided reports can now take runtime parameters
* Can now edit SQL reports

SERIALS

* Can now specify the subscription end date, library location, and grace periods
* Option to automatically place hold requests for members of a serials routing list
* Numerous bugfixes

STAFF INTERFACE

* The cart has been added to the staff interface
* Staff can add items to lists in bulk from search results
* Enhanced patron card and item label creator
* Support for XSLT templates in the staff bib details display
* Bib details page includes which lists a bib belongs to

WEB SERVICES AND INTERFACE

* Integration with SOPAC, including support for various web services defined by the ILS-DI recommendation
* Support for CAS single sign-on
* Improvements to OAI-PMH support

INTERNATIONALIZATION

* New initialization SQL files for German, Italian, and Polish
* Revamped UNIMARC framework for English

INTERNALS AND PACKAGING

* Koha is now packaged for Debian Squeeze; installation of Koha can now be as simple as apt-get install koha
* Improvements to the management of required Perl modules
* Improvements to test case coverage
* Substantial progress on enabling the warnings pragma in all of Koha's Perl code

BUGFIXES

* Approximately 1,050 tracked bugs and enhancement requests are addressed in this release

System Preferences
======================

The following system preferences are new in 3.2.0:

* AcqCreateItem
* AllowAllMessageDeletion
* AllowHoldDateInFuture
* AllowHoldPolicyOverride
* AutoSelfCheckAllowed
* AutoSelfCheckID
* AutoSelfCheckPass
* BranchTransferLimitsType
* Babeltheque
* casAuthentication
* casLogout
* casServerUrl
* ceilingDueDate
* CurrencyFormat
* DisplayClearScreenButton
* DisplayMultiPlaceHold
* DisplayOPACiconsXSLT
* EnableOpacSearchHistory
* FilterBeforeOverdueReport
* HidePatronName
* ILS-DI
* ILS-DI:AuthorizedIPs
* ImageLimit
* InProcessingToShelvingCart
* intranetbookbag
* LibraryThingForLibrariesEnabled
* LibraryThingForLibrariesID
* LibraryThingForLibrariesTabbedView
* NewItemsDefaultLocation
* numReturnedItemsToShow
* OAI-PMH:ConfFile
* OpacAddMastheadLibraryPulldown
* OPACAllowHoldDateInFuture
* OPACAmazonReviews
* OPACDisplayRequestPriority
* OPACFineNoRenewals
* OPACFinesTab
* OPACPatronDetails
* OPACSearchForTitleIn
* opacSerialDefaultTab
* OPACSerialIssueDisplayCount
* OPACShowCheckoutName
* OrderPdfFormat
* OverdueNoticeBcc
* OverduesBlockCirc
* PrintNoticesMaxLines
* ReturnToShelvingCart
* RoutingListAddReserves
* ShowPatronImageInWebBasedSelfCheck
* soundon
* SpineLabelAutoPrint
* SpineLabelFormat
* SpineLabelShowPrintOnBibDetails
* StaffSerialIssueDisplayCount
* SyndeticsAuthorNotes
* SyndeticsAwards
* SyndeticsClientCode
* SyndeticsCoverImages
* SyndeticsCoverImageSize
* SyndeticsEditions
* SyndeticsEnabled
* SyndeticsExcerpt
* SyndeticsReviews
* SyndeticsSeries
* SyndeticsSummary
* SyndeticsTOC
* UseBranchTransferLimits

System requirements
======================

Changes since 3.0:

* The minimum version of Perl required is now 5.8.8.
* There are a number of new Perl module dependencies. Run ./koha_perl_deps.pl -u -m to get a list of any new modules to install during upgrade.

Upgrades
======================

The structure of the acquisitions tables has changed significantly from 3.0.x. In particular, the budget hierarchy is quite different. During an upgrade, a new database table called fundmapping is created that contains a record of how budgets were mapped. It is strongly recommended that users of Koha 3.0.x acquisitions carefully review the results of the upgrade before resuming ordering in Koha 3.2.0.

Documentation
======================

As of Koha 3.2, the Koha manual is maintained in DocBook.
The home page for Koha documentation is http://koha-community.org/documentation/

As of the date of these release notes, several translations of the Koha manual are available:

English: http://koha-community.org/documentation/3-2-manual/
Spanish: http://koha-community.org/documentation/3-2-manual-es/
French: http://koha-community.org/documentation/3-2-manual-fr/

The Git repository for the Koha manual can be found at http://git.koha-community.org/gitweb/?p=kohadocs.git;a=summary

Translations
======================

Complete or near-complete translations of the OPAC and staff interface are available in this release for the following languages:

* Chinese
* Danish
* English (New Zealand)
* English (USA)
* French (France)
* French (Canada)
* German
* Greek
* Hindi
* Italian
* Norwegian
* Portuguese
* Spanish
* Turkish

Partial translations are available for various other languages. The Koha team welcomes additional translations; please see http://www.kohadocs.org/usersguide/apb.html for information about translating Koha, and join the koha-translate list to volunteer: http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-translate

The most up-to-date translations can be found at: http://translate.koha.org/

Release Team
======================

The release team for Koha 3.2 is:

Release Manager: Galen Charlton
Documentation Manager: Nicole Engard
Translation Manager: Chris Cormack
Release Maintainer (3.0.x): Henri-Damien Laurent
Release Maintainer (3.2.x): Chris Nighswonger

Credits
======================

We thank the following libraries who are known to have sponsored new features in Koha 3.2:

* Aix-Marseille Universities, France
* BrailleNet (http://www.braillenet.org/)
* BULAC, France (www.bulac.fr)
* East Brunswick Public Library, East Brunswick, New Jersey, USA
* Foundations Bible College & Seminary, Dunn, North Carolina, USA
* Hochschule für Jüdische Studien, Heidelberg, Germany (www.hfjs.eu) - XSLT changes to display 880 fields
* Howard County Library, Maryland, USA (http://www.hclibrary.org/)
* MassCat, Massachusetts, USA
* Middletown Township Public Library, Middletown, New Jersey, USA
* New York University Health Sciences Library, New York, USA
* Northeast Kansas Library System, Kansas, USA
* Plano Independent School District, Plano, Texas, USA
* SAN Ouest Provence, France
* vokal (Vermont Association of Koha Automated Libraries), Vermont, USA
* www.digital-loom.com

We thank the following individuals who contributed patches to Koha 3.2.0:

* Alex Arnaud
* Allen Reinmeyer
* Amit Gupta
* Andrei V. Toutoukine
* Andrew Chilton
* Andrew Elwell
* Andrew Moore
* Brendan A. Gallagher
* Brian Harrington
* Chris Catalfo
* Chris Cormack
* Chris Nighswonger
* Christopher Hyde
* Cindy Murdock Ames
* Clay Fouts
* Colin Campbell
* Cory Jaeger
* Daniel Sweeney
* Danny Bouman
* Darrell Ulm
* David Birmingham
* David Goldfein
* Donovan Jones
* D. Ruth Bavousett
* Eric Olsen
* Frédéric Demians
* Galen Charlton
* Garry Collum
* Henri-Damien Laurent
* Ian Walls
* James Winter
* Jane Wagner
* Jared Camins-Esakov
* Jean-André Santoni
* Jesse Weaver
* Joe Atzberger
* John Beppu
* John Soros
* Joshua Ferraro
* Katrin Fischer
* Koustubha Kale
* Kyle M Hall
* Lars Wirzenius
* Liz Rea
* Magnus Enger
* Marc Chantreux
* Marcel de Rooy
* Mason James
* Matthew Hunt
* Matthias Meusburger
* Michael Hafen
* MJ Ray
* Nahuel Angelinetti
* Nicolas Morin
* Nicole Engard
* Owen Leonard
* Paul Poulain
* Piotr Wejman
* Ricardo Dias Marques
* Rick Welykochy
* Robin Sheat
* Ryan Higgins
* Savitra Sirohi
* Sébastien Hinderer
* Srdjan Jankovic
* Stan Brinkerhoff
* Stephen Edwards
* Vincent Danjean
* Will Stokes
* Wolfgang Heymans
* Zeno Tajoli

We regret any omissions. If a contributor has been inadvertently missed, please send a patch against these release notes to koha-patches at lists.koha-community.org.

Revision control notes
======================

The Koha project uses Git for version control. The current development version of Koha can be retrieved by checking out the master branch of git://git.koha-community.org/koha.git

The branch for Koha 3.2.x (i.e., this version of Koha and future bugfix releases) is 3.2.x.

The next major feature release of Koha will be Koha 3.4.0.

Bugs and feature requests
======================

Bug reports and feature requests can be filed at the Koha bug tracker at http://bugs.koha-community.org/

Naku te rourou, nau te rourou, ka ora ai te iwi.

Regards,

Galen
--
Galen Charlton gmcharlt at gmail.com

From gmcharlt at gmail.com Fri Oct 22 09:08:29 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Fri, 22 Oct 2010 03:08:29 -0400
Subject: [Koha-devel] 3.2.x branched
Message-ID: 

Hi,

I have created a 3.2.x branch in the public Git repository. All commits for the 3.2.x series will land there, and master is now available for 3.4. Chris Cormack, the 3.4 release manager, now has push access to the master branch, while Chris Nighswonger, the incoming release maintainer for 3.2.x, now has push access to the 3.2.x branch.

I am retaining push access for 3.2.x and master for the next few days to help Chris C. with some of the backlog, and to deal with things in case a quick 3.2.1 is needed. After that, I'll revert back to being a common developer.

For those of you who intend to start following 3.2.x right away, remember that when you rebase, you should rebase against origin/3.2.x.

Regards,

Galen
--
Galen Charlton gmcharlt at gmail.com

From chris at bigballofwax.co.nz Fri Oct 22 09:10:32 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Fri, 22 Oct 2010 20:10:32 +1300
Subject: [Koha-devel] Koha 3.2.0 released
In-Reply-To: 
References: 
Message-ID: 

On 22 October 2010 19:49, Galen Charlton wrote:
> Hi,
>
> I am pleased to announce the release of Koha 3.2.0. The package can
> be retrieved from:
>
> http://download.koha-community.org/koha-3.02.00.tar.gz
>
> You can use the following checksum and signature files to verify the download:
>
> http://download.koha-community.org/koha-3.02.00.tar.gz.MD5
> http://download.koha-community.org/koha-3.02.00.tar.gz.MD5.asc
> http://download.koha-community.org/koha-3.02.00.tar.gz.sig

I'd like to be the first to congratulate and thank Galen for all his hard work bringing this release to fruition. And of course huge thanks to all the developers .. 73 different individuals I think .. who contributed to this release. Again, huge thanks to the libraries and librarians who drive Koha.

Of course, as translation manager I would be remiss in not thanking all those who provide translations.

Koha really is a team effort, and the proverb Galen used at the end of his release notes is very apt.
By sharing the knowledge and effort, we create something wonderful.

Chris

From gmcharlt at gmail.com Fri Oct 22 09:57:28 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Fri, 22 Oct 2010 03:57:28 -0400
Subject: [Koha-devel] Koha 3.2.0 released
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Fri, Oct 22, 2010 at 3:10 AM, Chris Cormack wrote:
> Koha really is a team effort, and the proverb Galen used at the end of
> his release notes is very apt. By sharing the knowledge and effort, we
> create something wonderful.

Hear, hear! Of course, I would be remiss if I didn't thank Chris for supplying the proverb. So, thanks!

Regards,

Galen
--
Galen Charlton gmcharlt at gmail.com

From tomascohen at gmail.com Fri Oct 22 13:55:07 2010
From: tomascohen at gmail.com (Tomas Cohen Arazi)
Date: Fri, 22 Oct 2010 08:55:07 -0300
Subject: [Koha-devel] Koha 3.2.0 released
In-Reply-To: 
References: 
Message-ID: 

On Fri, Oct 22, 2010 at 3:49 AM, Galen Charlton wrote:
> Hi,
>
> I am pleased to announce the release of Koha 3.2.0.

Galen, thanks and congratulations for this great release!

To+

From robin at catalyst.net.nz Fri Oct 22 14:30:42 2010
From: robin at catalyst.net.nz (Robin Sheat)
Date: Sat, 23 Oct 2010 01:30:42 +1300
Subject: [Koha-devel] Package details (was: Re: 3.2.x branched)
In-Reply-To: 
References: 
Message-ID: <201010230130.43413.robin@catalyst.net.nz>

On Friday 22 October 2010 20:08:29, Galen Charlton wrote:
> I have created a 3.2.x branch in the public Git repository. All
> commits for the 3.2.x series will land there, and master is now

Just a note for those interested in the packages: Galen graciously let me slip a patch in at the last minute that allows a 3.2.0 .deb to be built directly from the released source. So, when I get back to work (or possibly during Kohacon if I get some downtime) I'll make a stable package that'll only get updated as Koha gets updated, and also continue tracking the development stream in a different part of the repo.

Details and wiki updates to follow when it actually happens.

--
Robin Sheat
Catalyst IT Ltd. +64 4 803 2204
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part.
URL: 

From gmcharlt at gmail.com Fri Oct 22 17:15:30 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Fri, 22 Oct 2010 11:15:30 -0400
Subject: [Koha-devel] Koha 3.2.0 package with all translations enabled now available
Message-ID: 

Hi,

A tarball of Koha 3.2.0 with all translations enabled is now available for download from:

http://download.koha-community.org/koha-3.02.00-all-translations.tar.gz

You can use the following checksum and signature files to verify the download:

http://download.koha-community.org/koha-3.02.00-all-translations.tar.gz.sig
http://download.koha-community.org/koha-3.02.00-all-translations.tar.gz.MD5
http://download.koha-community.org/koha-3.02.00-all-translations.tar.gz.MD5.asc

Given the size of the all-translations tarball, please download it only if you really need to install every translation. Otherwise, please use the regular tarball and install just the languages you need.
Instructions for installing a set of translated templates for a given language can be found at:

http://wiki.koha-community.org/wiki/Installation_of_additional_languages_for_OPAC_and_INTRANET_staff_client

Regards,

Galen
--
Galen Charlton gmcharlt at gmail.com

From dpavlin at gmail.com Mon Oct 25 15:59:52 2010
From: dpavlin at gmail.com (Dobrica Pavlinušić)
Date: Mon, 25 Oct 2010 15:59:52 +0200
Subject: [Koha-devel] Scraping online web catalogues to provide Z39.50 server
Message-ID: 

I wrote a simple Z39.50 server which uses WWW::Mechanize to scrape web pages and produce MARC records which can then be imported in Koha.

A short announcement is at:

http://blog.rot13.org/2010/10/z3950_server_which_scrapes_web_catalogues_for_information.html

and the source code is on github:

http://github.com/dpavlin/Biblio-Z3950

The source is important, because you will have to modify it, even to use another Aleph instance: my instance uses the Croatian language, and there are regexes inside which include a few words of that language to find the number of results.

But I hope this will motivate someone to provide a data source or two over Z39.50 :-)

--
...2share!2flame... http://blog.rot13.org
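For a sense of the shape of such a server, a minimal sketch using Net::Z3950::SimpleServer, where scrape_catalogue() is a hypothetical stand-in for the WWW::Mechanize scraping logic (the real code lives in the Biblio-Z3950 repository above):

#!/usr/bin/perl
# Sketch only: scrape_catalogue() is hypothetical; the real scraping
# (WWW::Mechanize plus language-specific regexes) is in Biblio-Z3950.
use strict;
use warnings;
use Net::Z3950::SimpleServer;

my @results;   # MARC::Record objects for the current result set

sub search_handler {
    my $args = shift;
    @results = scrape_catalogue( $args->{QUERY} );   # hypothetical helper
    $args->{HITS} = scalar @results;
}

sub fetch_handler {
    my $args = shift;
    my $marc = $results[ $args->{OFFSET} - 1 ];      # OFFSET is 1-based
    $args->{RECORD} = $marc->as_usmarc;              # hand back ISO2709
}

Net::Z3950::SimpleServer->new(
    SEARCH => \&search_handler,
    FETCH  => \&fetch_handler,
)->launch_server( 'scraper.pl', @ARGV );

A Koha Z39.50 target pointed at this server would then see the scraped catalogue as an ordinary copy-cataloguing source.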
From gmcharlt at gmail.com Mon Oct 25 16:50:55 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Mon, 25 Oct 2010 10:50:55 -0400
Subject: [Koha-devel] Scraping online web catalogues to provide Z39.50 server
In-Reply-To: 
References: 
Message-ID: 

Hi,

2010/10/25 Dobrica Pavlinušić :
> I wrote a simple Z39.50 server which uses WWW::Mechanize to scrape web pages
> and produce MARC records which can then be imported in Koha.
>
> A short announcement is at:
>
> http://blog.rot13.org/2010/10/z3950_server_which_scrapes_web_catalogues_for_information.html

Thanks, this looks interesting. Pity that the catalogs in question don't provide Z39.50 service directly, but whatever works to get the data moving...

Regards,

Galen
--
Galen Charlton gmcharlt at gmail.com

From dpavlin at rot13.org Tue Oct 26 19:17:35 2010
From: dpavlin at rot13.org (Dobrica Pavlinusic)
Date: Tue, 26 Oct 2010 19:17:35 +0200
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
Message-ID: <20101026171735.GA32451@rot13.org>

I just figured out today that kohacon10 is in progress, and one of the topics seems to be the DBIx::Class migration.

Let me start by saying that I'm all for it, but probably not for the same reasons as everybody else.

I was trying to profile the Koha search page. On the first run under Devel::NYTProf it showed 2220 invocations of DBI::st::execute. That's more than 2000 calls which wait for the database to return an answer. This might explain the extremely high usage of the mysql query cache which I'm seeing in production:

http://mjesec.ffzg.hr/munin/koha/koha-mysql_queries.html

So, I decided to experiment a bit, and a few patches later I was able to reduce the number of queries issued on each page load down to 876.

If you are interested in what I changed, my code is at:

http://git.rot13.org/?p=koha.git;a=shortlog;h=refs/heads/nytprof-cache

It's basically using either a global hash to cache values per request, or a simple memoize around a function.

Which brings me back to DBIx::Class. If we can cache values using dbic, this will be a huge WIN for Koha performance as opposed to a loss :-)

I never used dbic in that context, but google seems to think it's possible, and I'm willing to take a try at it.

--
Dobrica Pavlinusic 2share!2flame dpavlin at rot13.org
Unix addict. Internet consultant. http://www.rot13.org/~dpavlin
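A minimal sketch of the two caching flavors described above, using a made-up branch-name lookup (illustrative only, not Koha's actual implementation; only the value's dependence on a rarely-changing table makes it safe to cache):

use strict;
use warnings;
use Memoize;
use C4::Context;

# Illustrative lookup -- not Koha's real API. Safe to cache because it
# depends only on its argument and on a table that is effectively
# read-only during a single page load.
sub branch_name {
    my ($branchcode) = @_;
    return scalar C4::Context->dbh->selectrow_array(
        'SELECT branchname FROM branches WHERE branchcode = ?',
        undef, $branchcode,
    );
}

# The "simple memoize around a function" flavor: one line, and every
# repeated call within this process becomes a hash lookup. Under plain
# CGI the cache dies with the request, which bounds staleness.
memoize('branch_name');

# The "global hash" flavor is the same idea spelled out by hand:
#   my %cache;
#   $cache{$branchcode} = branch_name($branchcode)
#       unless exists $cache{$branchcode};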
From cfouts at liblime.com Tue Oct 26 23:24:24 2010
From: cfouts at liblime.com (Fouts, Clay)
Date: Tue, 26 Oct 2010 14:24:24 -0700
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: <20101026171735.GA32451@rot13.org>
References: <20101026171735.GA32451@rot13.org>
Message-ID: 

Hello, Dobrica.

Memoize and local caching is handy, and Koha has functions that benefit from its use. However, because Koha's functions' return values usually depend on database contents rather than just the parameters, you'd have to be very clear about unmemoizing functions if the data on which they depend gets changed. This is not always easy to determine, since there's not a clear or enforceable domain defined for which parts of the code can read and write directly to various parts of the database contents.

Using a cache in such a way as to avoid these staleness problems typically demands consistent use of a limited set of accessor and mutator functions for the database contents, isolating the cache seeding and checking code within those. That way, by defining those functions for, for example, the 'branches' table, you can avoid caching the whole branch detail in one cache and then the branch name in another. From within GetBranchName you'd simply call GetBranchDetail (which in turn would call the private _get_branch_row or something), then return only the name field.

I imagine you're already aware of much of this, and that this is a proof of concept testbed you're publishing. I only bring it up because it's important that others understand the subtlety of some of the issues that caching can introduce, particularly when the code doesn't naturally support its use.

Caching DB contents has an even more pronounced impact when latency is introduced by hosting the database on a different machine than the CGI server. When you have 2k+ queries per script, even a few milliseconds RTT adds up to a rather large number. Consequently, for many of Koha's smaller tables, it often ends up being faster to seed the cache by fetching the entire table on the first call rather than retrieving the rows as-needed. In most Koha setups I've seen, bandwidth and memory are cheaper than latency + MySQL query overhead. Over Unix sockets the 65kB datagram size allows the contents of many tables to be delivered in a single packet. Ethernet is smaller, of course, but the 1500 byte standard MTU is still large enough for several tables to fit in a single packet, plus TCP windowing greatly reduces the impact of subsequent packet latencies.

My experience is that the startup overhead introduced by DBIx::Class is not sufficiently offset by its caching features in a CGI environment. Running under Plack or something similar would easily recoup that overhead, of course.

Cheers,
Clay

On Tue, Oct 26, 2010 at 10:17 AM, Dobrica Pavlinusic wrote:
> I just figured out today that kohacon10 is in progress, and one of the
> topics seems to be the DBIx::Class migration.
>
> Let me start by saying that I'm all for it, but probably not for the same
> reasons as everybody else.
>
> I was trying to profile the Koha search page. On the first run under
> Devel::NYTProf it showed 2220 invocations of DBI::st::execute.
>
> That's more than 2000 calls which wait for the database to return an answer.
> This might explain the extremely high usage of the mysql query cache which
> I'm seeing in production:
>
> http://mjesec.ffzg.hr/munin/koha/koha-mysql_queries.html
>
> So, I decided to experiment a bit, and a few patches later I was able
> to reduce the number of queries issued on each page load down to 876.
>
> If you are interested in what I changed, my code is at:
>
> http://git.rot13.org/?p=koha.git;a=shortlog;h=refs/heads/nytprof-cache
>
> It's basically using either a global hash to cache values per request, or a
> simple memoize around a function.
>
> Which brings me back to DBIx::Class. If we can cache values using dbic,
> this will be a huge WIN for Koha performance as opposed to a loss :-)
>
> I never used dbic in that context, but google seems to think it's
> possible, and I'm willing to take a try at it.
>
> --
> Dobrica Pavlinusic 2share!2flame dpavlin at rot13.org
> Unix addict. Internet consultant. http://www.rot13.org/~dpavlin
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
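The whole-table seeding Clay describes might look roughly like this; the function name is illustrative, though itemtypes is a real (and small) Koha table:

use strict;
use warnings;
use C4::Context;

my $itemtypes;  # one cached copy of the whole (small) table

# Illustrative only. The first call costs one round trip for the whole
# table; every later lookup in this process is a plain hash access.
sub itemtype_description {
    my ($itemtype) = @_;
    $itemtypes ||= C4::Context->dbh->selectall_hashref(
        'SELECT itemtype, description FROM itemtypes', 'itemtype'
    );
    return $itemtypes->{$itemtype}{description};
}

The trade-off is exactly the one described above: a little extra bandwidth and memory on the first call in exchange for removing per-row latency and MySQL query overhead from the rest of the request.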
From cfouts at liblime.com Wed Oct 27 00:01:12 2010
From: cfouts at liblime.com (Fouts, Clay)
Date: Tue, 26 Oct 2010 15:01:12 -0700
Subject: [Koha-devel] MARC record size limit
In-Reply-To: 
References: <01c73f7770978873e50aaa6d2996374f.squirrel@wmail.agogme.com> <4CB48AEA.3050901@biblibre.com>
Message-ID: 

I did some (very limited) testing on storing and retrieving MARC in YAML. The results were not encouraging. IIRC, I just did a direct conversion of the MARC::Record object into YAML and back. Perhaps there's a way to optimize the formatting that would improve performance, but my testing showed sometimes even worse performance than XML.

MARCXML is a performance killer at this point, but there's no other apparent way to handle large bib records. The parsing is the issue, not the data transfer load. Perhaps cached BSON-formatted MARC::Record objects are a way out of this.

Clay

On Tue, Oct 12, 2010 at 11:45 AM, Thomas Dukleth wrote:
> Reply inline:
>
> On Tue, October 12, 2010 16:20, LAURENT Henri-Damien wrote:
>> On 12/10/2010 14:48, Thomas Dukleth wrote:
>>> Reply inline:
>>>
>>> Original Subject: [Koha-devel] Search Engine Changes : let's get some solr
>>>
>>> On Mon, October 4, 2010 08:10, LAURENT Henri-Damien wrote:
> [...]
>>>> I think that everyone agrees that we have to refactor C4::Search.
>>>> Indeed, the query parser is not able to manage all the configuration
>>>> options independently. And the usage of usmarc as the internal format
>>>> for biblios comes with a serious limitation of 9999 bytes, which for
>>>> big biblios with many items is not enough.
>>>
>>> How do MARC limitations on record size relate to Solr indexing or Zebra
>>> indexing, which lacks Solr/Lucene support in the current version?
>> Koha is now using iso2709 returned from zebra in order to display result
>> lists.
>
> I recall that having Zebra return ISO2709, MARC communications format,
> records had the supposed advantage of faster response time from Zebra.
>
>> The problem is that if zebra is returning only part of the biblio and/or
>> MARC::Record is not able to parse the whole data, then the biblio is not
>> displayed. We have biblio records which contain more than 1000 items.
>> And MARC::Record/MARC::File::XML fails to parse that.
>>
>> So this is a real issue.
>
> Ultimately, we need a specific solution to various problems arising from
> storing holdings directly in the MARC bibliographic records.
>
>>> How does BibLibre intend to fix the limitation on the size of
>>> bibliographic records as part of its work on record indexing and
>>> retrieval in Koha, or in some parallel work?
>> Solr/Lucene can return indexes, and those can be used in order to display
>> the desired data, or we could also do the same as we do with zebra:
>> - store the data record (the format could be iso2709, marcxml, or YAML)
>> - use that for display.
>
> If using ISO 2709, MARC communications format, how would the problem of
> excess record size be addressed?
>
>> Or we could use GetBiblio in order to get the data from the database.
>> The problem then would be the fact that storing xml in the database is
>> not really optimal to process.
>
> I like the idea of using YAML for some purposes.
>
> As you state, previous testing showed that returning every record in a
> large result set from the SQL database was very inefficient as compared to
> using the records as part of the response from the index server.
>
> Is there any practical way of sufficiently improving the efficiency of
> accessing a large set of records from the SQL database? How much might
> retrieving and parsing YAML records from the database help?
>
> I can imagine using XSLT to pre-process MARCXML records into an
> appropriate format, such as YAML with embedded HTML, pure HTML, or whatever
> is needed embedded for a particular purpose, and storing the pre-processed
> records in appropriate special-purpose columns. Real-time parsing would
> be minimised. The OPAC result set display might use
> biblioitems.recordOPACDisplayBrief. The standard single record view might
> use biblioitems.recordOPACDisplayDetail. An ISBD card view might use
> biblioitems.recordOPACDisplayISBD.
>
> [...]
>
> Thomas Dukleth
> Agogme
> 109 E 9th Street, 3D
> New York, NY 10003
> USA
> http://www.agogme.com
> +1 212-674-3783
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From henridamien.laurent at biblibre.com Wed Oct 27 00:12:55 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Wed, 27 Oct 2010 11:12:55 +1300
Subject: [Koha-devel] MARC record size limit
In-Reply-To: 
References: <01c73f7770978873e50aaa6d2996374f.squirrel@wmail.agogme.com> <4CB48AEA.3050901@biblibre.com>
Message-ID: <4CC75267.9090204@biblibre.com>

On 13/10/2010 07:45, Thomas Dukleth wrote:
> XSLT to pre-process MARCXML records into an
> appropriate format, such as YAML with embedded HTML, pure HTML, or whatever
> is needed embedded for a particular purpose, and storing the pre-processed
> records in appropriate special-purpose columns. Real-time parsing would
> be minimised. The OPAC result set display might use
> biblioitems.recordOPACDisplayBrief. The standard single record view might
> use biblioitems.recordOPACDisplayDetail. An ISBD card view might use
> biblioitems.recordOPACDisplayISBD.
>
Adding more fields in the database is a no-go, I think, since it would make the DB grow and would just be a kind of caching of results. Maybe we could just use functions and cache their results. Maybe we could use disk space for that. The synching and update process is resource-demanding enough. So I definitely think that we should use Memcached for that, provided the mighty cautions Clay gave in the thread on DBIx::Class.

--
Henri-Damien LAURENT
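To make the size limitation concrete: a guard along these lines (a sketch, not Koha's actual code) would fall back to MARCXML whenever the ISO2709 form is unsafe:

use strict;
use warnings;
use MARC::Record;
use MARC::File::XML;   # makes as_xml() available on MARC::Record

# Sketch only. ISO2709 stores the total record length in five ASCII
# digits (and each field's length in four), so a record past those
# limits is silently truncated or corrupted on the way through.
sub serialize_biblio {
    my ($record) = @_;
    my $iso2709 = $record->as_usmarc;
    return ( length($iso2709) <= 99999 )
        ? ( usmarc  => $iso2709 )
        : ( marcxml => $record->as_xml );   # oversized: fall back to XML
}

This keeps the fast binary path for the vast majority of records while letting the item-heavy outliers survive the round trip.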
From frederic at tamil.fr Wed Oct 27 08:15:41 2010
From: frederic at tamil.fr (Frederic Demians)
Date: Wed, 27 Oct 2010 08:15:41 +0200
Subject: [Koha-devel] MARC record size limit
In-Reply-To: 
References: <01c73f7770978873e50aaa6d2996374f.squirrel@wmail.agogme.com> <4CB48AEA.3050901@biblibre.com>
Message-ID: <4CC7C38D.7080607@tamil.fr>

> I did some (very limited) testing on storing and retrieving MARC in
> YAML. The results were not encouraging. IIRC, I just did a direct
> conversion of the MARC::Record object into YAML and back. Perhaps
> there's a way to optimize the formatting that would improve
> performance, but my testing showed sometimes even worse performance
> than XML.

Did you use YAML or YAML::XS? My tests with YAML::XS show a very significant improvement over YAML: see the attached file. Of course, we should define a serialization format independent from the MARC::Record object if we don't want to break the process when MARC::Record's internal data structure ever changes.

> MARCXML is a performance killer at this point, but there's no other
> apparent way to handle large bib records. The parsing is the issue,
> not the data transfer load. Perhaps cached BSON-formatted MARC::Record
> objects are a way out of this.

Benchmarks should be done with all available serialization formats.

We also could implement the serialization/deserialization logic directly into the MARC::Record library, as with the ISO2709 and XML formats, in order to gain control.
--
Frédéric
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: test-marc-deserial
URL: 
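A benchmark along these lines (a sketch, presumably similar in spirit to the scrubbed test-marc-deserial attachment) reproduces the comparison; note that round-tripping blessed MARC::Record objects depends on each YAML implementation's handling of perl object tags:

use strict;
use warnings;
use Benchmark qw( cmpthese );
use MARC::Record;
use YAML     ();
use YAML::XS ();

# Read one ISO2709 record from a file named on the command line.
my $blob = do { local $/; open my $fh, '<', $ARGV[0] or die $!; <$fh> };
my $record = MARC::Record->new_from_usmarc($blob);

my $pure = YAML::Dump($record);      # pure-Perl serializer
my $xs   = YAML::XS::Dump($record);  # libyaml-based serializer

# Compare deserialization speed, the hot path when records are cached;
# -3 means "run each for about three CPU seconds".
cmpthese( -3, {
    'YAML::Load'     => sub { YAML::Load($pure) },
    'YAML::XS::Load' => sub { YAML::XS::Load($xs) },
} );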
From frederic at tamil.fr Wed Oct 27 08:27:15 2010
From: frederic at tamil.fr (Frederic Demians)
Date: Wed, 27 Oct 2010 08:27:15 +0200
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: 
References: <20101026171735.GA32451@rot13.org>
Message-ID: <4CC7C643.2070906@tamil.fr>

> My experience is that the startup overhead introduced by DBIx::Class
> is not sufficiently offset by its caching features in a CGI
> environment. Running under Plack or something similar would easily
> recoup that overhead, of course.

Yes, this is the solution. We need, in 2010, a persistent environment to execute Koha in: an execution environment in which there are application-level, session, and page objects. Memcaching everything isn't the solution.
--
Frédéric

From dpavlin at rot13.org Wed Oct 27 13:15:18 2010
From: dpavlin at rot13.org (Dobrica Pavlinusic)
Date: Wed, 27 Oct 2010 13:15:18 +0200
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: 
References: <20101026171735.GA32451@rot13.org>
Message-ID: <20101027111518.GA12795@rot13.org>

On Tue, Oct 26, 2010 at 02:24:24PM -0700, Fouts, Clay wrote:
> I imagine you're already aware of much of this, and that this is a proof of
> concept testbed you're publishing. I only bring it up because it's important
> that others understand the subtlety of some of the issues that caching can
> introduce, particularly when the code doesn't naturally support its use.

I am aware of this, so I implemented only caching of values which aren't dependent on other data or user info. However, data dependent on user info could be cached in the user session (like lists and similar values).

You are correct that invalidation of caches is a hard problem, but I assume that caching over DBIx::Class would take care of invalidation; I might be wrong, though.

> My experience is that the startup overhead introduced by DBIx::Class is not
> sufficiently offset by its caching features in a CGI environment. Running
> under Plack or something similar would easily recoup that overhead, of
> course.

I did try to run it under Plack, using the following simple plack app which supports just opac-search:

http://git.rot13.org/?p=koha.git;a=blob;f=app.psgi;h=7ff4c251236fb37a9d88bf84b59cfa9499e283cf;hb=658d404bf80d83f5a2875a25579f2080e6eddf7e

(sorry about the long URL). However, it's not without problems: CGI::Compile doesn't like some parts of Koha code and can't compile it (but that would be rather easy to fix); however, to make it a real solution we should first move Koha to use strict everywhere.

Another interesting bit is the change at:

http://git.rot13.org/?p=koha.git;a=commit;h=8b27e87c24b8efa5f4ffc9b21b8d1a97d51f2e77

which takes marc records directly from marc instead of XML. This bypasses XML parsing and transformation and reduces the number of statements significantly:

            old       new
stmts   1112898    786000
subs     295048    228382

(and it does show in real-life performance also, not just in profiling ;-)

[...] which we can pick in Koha code using caching. However, I would love to know if this moment is the right one to tackle this, or should I wait for the DBIx::Class changes to land into the code.

--
Dobrica Pavlinusic 2share!2flame dpavlin at rot13.org
Unix addict. Internet consultant. http://www.rot13.org/~dpavlin
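The app.psgi linked above is the authoritative version; for the general shape of the thing, a minimal PSGI wrapper can be as small as this (the path to the CGI scripts is an illustrative assumption, adjust for your install):

# app.psgi -- run with `plackup app.psgi`
use strict;
use warnings;
use Plack::Builder;
use Plack::App::CGIBin;

builder {
    # Plack::App::CGIBin uses CGI::Compile underneath, so each script
    # is compiled once per worker instead of forking a fresh perl for
    # every request -- which is also why scripts that CGI::Compile
    # cannot handle show up immediately, as noted above.
    mount '/cgi-bin/koha' => Plack::App::CGIBin->new(
        root => '/usr/share/koha/opac/cgi-bin/opac',
    )->to_app;
};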
>
> Benchmarks should be done with all the available serialization formats.
>
> We could also implement the serialization/deserialization logic directly
> in the MARC::Record library, as is done for the ISO2709 and XML formats,
> in order to gain control over it.
> --
> Frédéric
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cfouts at liblime.com Wed Oct 27 16:51:46 2010
From: cfouts at liblime.com (Fouts, Clay)
Date: Wed, 27 Oct 2010 07:51:46 -0700
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: <20101027111518.GA12795@rot13.org>
References: <20101026171735.GA32451@rot13.org> <20101027111518.GA12795@rot13.org>
Message-ID: 

On Wed, Oct 27, 2010 at 4:15 AM, Dobrica Pavlinusic <dpavlin at rot13.org> wrote:
>
> You are correct that cache invalidation is a hard problem, but I assume
> that caching done through DBIx::Class would take care of invalidation;
> I might be wrong about that, though.

DBIx::Class handles cache invalidation very well as long as it is used
uniformly, as the only means of accessing the database. My concern is
more about the challenge of getting all of Koha's use of a given set of
tables to go through the dbic interface.

> Another interesting bit is the change at:
>
> http://git.rot13.org/?p=koha.git;a=commit;h=8b27e87c24b8efa5f4ffc9b21b8d1a97d51f2e77
>
> which takes MARC records directly from the ISO2709 marc blob instead of
> from the XML. This bypasses the XML parsing and transformation and
> reduces the number of executed statements significantly:

While this is much faster, the ISO format has hard-coded limitations that
Koha needs to surpass.

> From my experience so far, there are quite a few more low-hanging fruits
> which we can pick in the Koha code using caching. However, I would love
> to know whether this is the right moment to tackle this, or whether I
> should wait for the DBIx::Class changes to land in the code.

Getting Koha to run persistently (and to do so safely!) seems like a
higher priority in my estimation than the conversion to DBIC, if for no
other reason than that pervasive use of DBIC is going to seriously impede
CGI performance. In the meantime, I find that applying small, judicious
bits of local caching often gives a high return on very little time and
additional code while not interfering with the API.

Clay
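(As a sketch of the kind of record cache this points at: it assumes
YAML::XS and Cache::Memcached, and the key embeds $MARC::Record::VERSION
so that upgrading the module simply misses the old entries instead of
loading stale structures. The helper names are invented; this is not
Koha code.)

    use MARC::Record;
    use YAML::XS qw( Dump Load );
    use Cache::Memcached;

    my $cache = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    sub get_cached_record {
        my ($biblionumber, $fetch_record) = @_;  # $fetch_record builds a MARC::Record
        my $key = join ':', 'marc', $MARC::Record::VERSION, $biblionumber;
        if ( my $yaml = $cache->get($key) ) {
            # Load() re-blesses the object via the !!perl tags Dump() wrote.
            return Load($yaml);
        }
        my $record = $fetch_record->($biblionumber);
        $cache->set( $key, Dump($record) );
        return $record;
    }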
From kolibrie at graystudios.org Wed Oct 27 18:20:18 2010
From: kolibrie at graystudios.org (Nathan Gray)
Date: Wed, 27 Oct 2010 12:20:18 -0400
Subject: [Koha-devel] persistence via Plack - was: DBIx::Class vs current Koha's DBI performance
In-Reply-To: 
References: <20101026171735.GA32451@rot13.org> <20101027111518.GA12795@rot13.org>
Message-ID: <20101027162018.GA10474@vs1.graystudios.org>

On Wed, Oct 27, 2010 at 07:51:46AM -0700, Fouts, Clay wrote:
> Getting Koha to run persistently (and to do so safely!) seems like a
> higher priority in my estimation than the conversion to DBIC, if for no
> other reason than that pervasive use of DBIC is going to seriously impede
> CGI performance. In the meantime, I find that applying small, judicious
> bits of local caching often gives a high return on very little time and
> additional code while not interfering with the API.

I have been playing with Plack recently. It makes use of PSGI, an
interface, like CGI or FastCGI or mod_perl, that sits between an
application and the webserver. The benefit of PSGI is that once an
application targets it, it opens up a lot of possibilities with regard to
webservers and persistence.

http://plackperl.org

One of the most informative talks listed is:

Plack: Superglue for Perl web frameworks and servers
http://www.slideshare.net/miyagawa/plack-at-oscon-2010

As a quick test, there are some Plack modules that allow you to test out
CGI scripts in a Plack environment:

http://search.cpan.org/dist/Plack/lib/Plack/App/WrapCGI.pm
http://search.cpan.org/dist/Plack/lib/Plack/App/CGIBin.pm

-kolibrie
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 197 bytes
Desc: Digital signature
URL: 

From chris at bigballofwax.co.nz Wed Oct 27 21:04:23 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Thu, 28 Oct 2010 08:04:23 +1300
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: 
References: <20101026171735.GA32451@rot13.org> <20101027111518.GA12795@rot13.org>
Message-ID: 

Well, there is no aim for widespread DBIx::Class usage in 3.4; we only
have 6 months. All we are proposing to do is make use of
DBIx::Class::Schema.

On 28 Oct 2010 03:52, "Fouts, Clay" wrote:

On Wed, Oct 27, 2010 at 4:15 AM, Dobrica Pavlinusic wrote:
>
> You are correct t...

DBIx::Class handles cache invalidation very well as long as it is used
uniformly, as the only means of accessing the database. My concern is
more about the challenge of getting all of Koha's use of a given set of
tables to go through the dbic interface.

> Another interesting bit is the change at:
>
> http://git.rot13.org/?p=koha.git;a=commit;h=8b27e87c2...

While this is much faster, the ISO format has hard-coded limitations that
Koha needs to surpass.

> From my experience so far, there are quite a few more low-hanging fruits
> which we can pick in...

Getting Koha to run persistently (and to do so safely!) seems like a
higher priority in my estimation than the conversion to DBIC, if for no
other reason than that pervasive use of DBIC is going to seriously impede
CGI performance. In the meantime, I find that applying small, judicious
bits of local caching often gives a high return on very little time and
additional code while not interfering with the API.
Clay

_______________________________________________
Koha-devel mailing list
Koha-devel at lists.koha-community.org
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris at bigballofwax.co.nz Wed Oct 27 21:06:47 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Thu, 28 Oct 2010 08:06:47 +1300
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: 
References: <20101026171735.GA32451@rot13.org> <20101027111518.GA12795@rot13.org>
Message-ID: 

Gah, bumped by people on the bus. What I was trying to say was that for
3.4 we plan to use the schema tools to move us one step away from the
MySQL dependency.

We are talking about Plack at the dev conf tomorrow; please try and join
us on IRC.

Chris

On 28 Oct 2010 03:52, "Fouts, Clay" wrote:

On Wed, Oct 27, 2010 at 4:15 AM, Dobrica Pavlinusic wrote:
>
> You are correct t...

DBIx::Class handles cache invalidation very well as long as it is used
uniformly, as the only means of accessing the database. My concern is
more about the challenge of getting all of Koha's use of a given set of
tables to go through the dbic interface.

> Another interesting bit is the change at:
>
> http://git.rot13.org/?p=koha.git;a=commit;h=8b27e87c2...

While this is much faster, the ISO format has hard-coded limitations that
Koha needs to surpass.

> From my experience so far, there are quite a few more low-hanging fruits
> which we can pick in...

Getting Koha to run persistently (and to do so safely!) seems like a
higher priority in my estimation than the conversion to DBIC, if for no
other reason than that pervasive use of DBIC is going to seriously impede
CGI performance. In the meantime, I find that applying small, judicious
bits of local caching often gives a high return on very little time and
additional code while not interfering with the API.

Clay

_______________________________________________
Koha-devel mailing list
Koha-devel at lists.koha-community.org
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cnighswonger at foundations.edu Thu Oct 28 21:22:21 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Thu, 28 Oct 2010 15:22:21 -0400
Subject: [Koha-devel] [Koha] Koha cataloging Error
In-Reply-To: 
References: 
Message-ID: 

Please 'reply-to-all' to keep this on the list.

On Thu, Oct 28, 2010 at 11:50 AM, Geovana de Paula Santos Tarricone <
geovanatarricone at gmail.com> wrote:

> Hi Chris!
> Here is an example:
>
> 020 $a - 85-326-1263-6
> 245 $a - Avaliação Psicopedagógica da Criança de Zero a Seis Anos
>
> When I save this book, the error appears....

Any chance you can send along the entire record?

Kind Regards,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vinodkumarbl at gmail.com Wed Oct 27 07:57:16 2010
From: vinodkumarbl at gmail.com (VinodKumar BL)
Date: Wed, 27 Oct 2010 11:27:16 +0530
Subject: [Koha-devel] Koha - Patron card creator
Message-ID: 

Dear Sir/Madam,

Please send us the details for generating patron cards in Koha.

--
Regards
*VinodKumar BL*
Dy. Librarian
Saraswati Central Library, Prashanti Kutiram,
#19, Eknath Bhavan, Gavipuram, KG Nagar,
Bangalore - 560 019
Mobile - 9986928980
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nengard at gmail.com Fri Oct 29 11:13:25 2010
From: nengard at gmail.com (Nicole Engard)
Date: Fri, 29 Oct 2010 22:13:25 +1300
Subject: [Koha-devel] November Newsletter
In-Reply-To: 
References: 
Message-ID: 

As many of you saw, I covered the conference in blog posts, and Ian is
covering the hackfest in blog posts. Did anyone else blog about the
conference? Or do you plan to? If so, make sure you send me those links
so that I can put them into our conference edition of the newsletter.

Also remember to put your pics online and tag them kohacon10, and put
your slides in the Slideshare event:
http://www.slideshare.net/event/kohacon10

All these links will be put into the newsletter in 15 days or so.

Nicole

On Thu, Oct 21, 2010 at 1:21 PM, Nicole Engard wrote:
> Hello all,
>
> I'm thinking for the next Koha Newsletter we'll do a conference
> sum-up. So between the start of the conference and the 12th of
> November, please send me links to posts you may have written about
> conference sessions, links to pictures from KohaCon, and anything else
> conference related.
>
> Thanks
> Nicole C. Engard

From mjr at phonecoop.coop Fri Oct 29 11:20:03 2010
From: mjr at phonecoop.coop (MJ Ray)
Date: Fri, 29 Oct 2010 10:20:03 +0100
Subject: [Koha-devel] November Newsletter
Message-ID: 

Nicole wrote:
> Also remember to put your pics online and tag them kohacon10 and put
> your slides in the Slideshare event:
> http://www.slideshare.net/event/kohacon10
>
> All these links will be put into the newsletter in 15 days or so.

Slideshare stinks because it uses Flash and requires registration before
you can download, I think? So can we put slides on the conference site or
a wiki page before linking them from our FOSS LMS's newsletter, please?

Ta,
--
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Webmaster, Debian Developer, Past Koha RM, statistician, former lecturer.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for various work http://www.software.coop/products/

From nengard at gmail.com Sun Oct 31 00:57:43 2010
From: nengard at gmail.com (Nicole Engard)
Date: Sun, 31 Oct 2010 11:57:43 +1300
Subject: [Koha-devel] Adding events to the website
Message-ID: 

A discussion today at Hackfest was about the Events calendar on the Koha
site (http://koha-community.org). For those who don't know how to use the
events calendar on our site, I created a video tutorial for this for
another group I was working with.

http://www.youtube.com/watch?v=sxj2N92priE

Hopefully that will help people add all Koha-related events to the Koha
website for us all to keep track of. The events calendar can then be
subscribed to.

Nicole

From nengard at gmail.com Sun Oct 31 01:03:42 2010
From: nengard at gmail.com (Nicole Engard)
Date: Sun, 31 Oct 2010 12:03:42 +1300
Subject: [Koha-devel] Adding events to the website
In-Reply-To: 
References: 
Message-ID: 

FYI - events can be subscribed to with this feed:

http://koha-community.org/category/events/feed/

or it can be added as an iCal with this URL:

webcal://koha-community.org/?ec3_ical

Nicole

On Sun, Oct 31, 2010 at 11:57 AM, Nicole Engard wrote:
> A discussion today at Hackfest was about the Events calendar on the
> Koha site (http://koha-community.org). For those who don't know how
> to use the events calendar on our site, I created a video tutorial for
> this for another group I was working with.
>
> http://www.youtube.com/watch?v=sxj2N92priE
>
> Hopefully that will help people add all Koha-related events to the
> Koha website for us all to keep track of. The events calendar can
> then be subscribed to.
>
> Nicole

From arosa at tginet.com Sun Oct 31 02:22:28 2010
From: arosa at tginet.com (Toni Rosa)
Date: Sun, 31 Oct 2010 02:22:28 +0200 (Hora de verano romance)
Subject: [Koha-devel] opac-search.pl
Message-ID: <.77.224.23.61.1288484548.squirrel@webmail.tgi.es>

Hello,

We are using Koha 3.0.5. I noticed that the OPAC advanced search
(opac-search.pl), when running on a NoZebra installation, doesn't honor
"publication period" intervals (e.g. entering the 1800-1900 interval, the
search doesn't work). I followed the code to the "C4/Search.pm" file,
method "NZanalyse", and it seems that it doesn't even try to build the
SQL query taking the intervals into account.

Did anyone find the same behavior? (I'd guess everybody should, as it
seems a bug/omission to me!)

Thanks,
Toni

From cnighswonger at foundations.edu Sun Oct 31 03:23:57 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Sat, 30 Oct 2010 22:23:57 -0400
Subject: [Koha-devel] opac-search.pl
In-Reply-To: <.77.224.23.61.1288484548.squirrel@webmail.tgi.es>
References: <.77.224.23.61.1288484548.squirrel@webmail.tgi.es>
Message-ID: 

Hi Toni,

On Sat, Oct 30, 2010 at 8:22 PM, Toni Rosa wrote:
> Hello,
>
> We are using Koha 3.0.5. I noticed that the OPAC advanced search
> (opac-search.pl), when running on a NoZebra installation, doesn't honor
> "publication period" intervals (e.g. entering the 1800-1900 interval,
> the search doesn't work). I followed the code to the "C4/Search.pm"
> file, method "NZanalyse", and it seems that it doesn't even try to
> build the SQL query taking the intervals into account.
>
> Did anyone find the same behavior? (I'd guess everybody should, as it
> seems a bug/omission to me!)

NoZebra is basically deprecated and for the most part unsupported. I
would highly recommend switching over to Zebra, even for a small
collection.

Kind Regards,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From paul.poulain at biblibre.com Sun Oct 31 05:10:25 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Sun, 31 Oct 2010 05:10:25 +0100
Subject: [Koha-devel] BibLibre branches for 3.4
Message-ID: <4CCCEC31.7000403@biblibre.com>

Hello,

As we've said previously, we have a lot of patches to submit for 3.4
(637, exactly).

We've rebased them against the official 3.2 and split them into "small"
chunks. There are 10 different branches. You can pull them from
http://git.biblibre.com/?p=koha;a=heads
The 3.4/BibLibre- branches are the ones related to this work.

I've added a page on the wiki describing what each branch contains. Some
smart people at the KohaCon hackfest have started some testing and added
their comments:

http://wiki.koha-community.org/wiki/BibLibre_patches_to_be_integrated_for_3.4

We have also set up a "sandbox" install for each branch. It contains just
the default data you can load from the web installer, marc21/english.

Note that all the tests have been made against a fresh installation, not
against an updatedatabase.

Colin/Chris/Anyone, it's up to you! Feel free to ask questions, report
problems, ...

Enjoy !
--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From cnighswonger at foundations.edu Sun Oct 31 20:32:16 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Sun, 31 Oct 2010 15:32:16 -0400
Subject: [Koha-devel] BibLibre branches for 3.4
In-Reply-To: <4CCCEC31.7000403@biblibre.com>
References: <4CCCEC31.7000403@biblibre.com>
Message-ID: 

On Sun, Oct 31, 2010 at 12:10 AM, Paul Poulain wrote:
> Hello,
>
> As we've said previously, we have a lot of patches to submit for 3.4
> (637, exactly).
>
> We've rebased them against the official 3.2 and split them into "small"
> chunks. There are 10 different branches. You can pull them from
> http://git.biblibre.com/?p=koha;a=heads
> The 3.4/BibLibre- branches are the ones related to this work.
>
> I've added a page on the wiki describing what each branch contains.
> Some smart people at the KohaCon hackfest have started some testing and
> added their comments:
>
> http://wiki.koha-community.org/wiki/BibLibre_patches_to_be_integrated_for_3.4
>
> We have also set up a "sandbox" install for each branch. It contains
> just the default data you can load from the web installer,
> marc21/english.
>
> Note that all the tests have been made against a fresh installation,
> not against an updatedatabase.
>
> Colin/Chris/Anyone, it's up to you! Feel free to ask questions, report
> problems, ...

This is very nice and sets a good example for other companies with forks,
unintentional or otherwise. I would encourage everyone doing development
work to look over this wiki page carefully. If there is duplicate
development going on, this may save time and money which can then be
invested in other Koha goodies. Where there are conflicts, set up a
thread on this list and hash them out.

Let's capitalize on this "investment."

Kind Regards,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

"
Perl's garbage collection has one big problem: Circular references
can't get cleaned up. A circular reference can be as simple as two
references that refer to each other:

    my $mom = {
        name => "Marilyn Lester",
    };
    my $me = {
        name   => "Andy Lester",
        mother => $mom,    # $me holds a reference to $mom ...
    };
    $mom->{son} = $me;     # ... and $mom holds one back to $me
"

> Other question: do you plan to make Moose a Koha requirement and
> recommendation?

Not for 3.4. I think the above list, plus all the RFCs to get done in 6
months, is plenty ambitious enough. And until we can run Koha in a
persistent manner, adding Moose would just kill us.

Chris

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

email me the link to the wiki page and I'll add it to the following
newsletter.

Nicole C. Engard
Documentation Manager

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 
All patches should refer to a bug number Chris 2010/11/11 Ian Walls : > Everyone, > > While there can be no guarantees as to whether a patch will be committed > into the Koha codebase, I think in practice there are several requirement= s. > =A0This email is an attempt to identify a few of them, and hopefully star= t a > discussion about whether they are truly requirements, and what others cou= ld > possibly be added. > 1. =A0The patch must do what it claims to do, in all commonly-supported K= oha > environments > 2. =A0The patch must not break existing functionality > 3. =A0The patch must apply to the current HEAD of the master branch of th= e > code > 4. =A0The patch must follow the Coding Style Guidelines > 5. =A0The patch must be MARC-flavour agnostic > 6. =A0The patch must contain appropriate copyright information > 7. =A0If a database update is require, the patch must handle the update b= oth > for new installs and upgrades > 8. =A0If a new feature is added, the patch must include appropriate Help > documentation > What do people think of these requirements? =A0Are they reasonable? =A0Sh= ould > there be more? =A0I understand that there may not be any set of requireme= nts > that's completely sufficient, but if we can identify as many as possible,= it > would make developers lives a bit easier, since we'd all have a better id= ea > what is needed for our patches to be committable. > Cheers, > > -Ian > -- > Ian Walls > Lead Development Specialist > ByWater Solutions > Phone # (888) 900-8944 > http://bywatersolutions.com > ian.walls at bywatersolutions.com > Twitter: @sekjal > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010 From: bogus@does.not.exist.com () Date: Sat, 16 Oct 2010 02:25:36 -0000 Subject: No subject Message-ID:
Question for Mr. 3.4 RM:

Is the procedure for dealing with DB revision numbers still the same? As
far as I remember from the 3.2 development days, the procedure was to
patch kohastructure.sql (or sysprefs.sql, or whatever), then add the
update to the end of updatedatabase.pl with a generic version number,
like 3.01.00.XXX. Patching kohastructure.pl was left to the RM when they
applied the patch.

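For reference, the pattern at the end of updatedatabase.pl looks roughly
like this (quoting from memory, so treat the helper names and the example
ALTER statement as approximate rather than exact):

    $DBversion = '3.01.00.XXX';  # RM substitutes the real number on push
    if ( C4::Context->preference('Version') < TransformToNum($DBversion) ) {
        $dbh->do("ALTER TABLE borrowers ADD COLUMN privacy INT(11) NOT NULL DEFAULT 0");
        print "Upgrade to $DBversion done (added borrowers.privacy)\n";
        SetVersion($DBversion);
    }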
I had a crazy table on the wiki for a bit, but this seemed to work better.
Is that still the consensus?

--
Jesse Weaver
From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

Koha, in search results, deserializes MARC records from their ISO2709
representation rather than from their MARCXML. If we were able to use
MARCXML, the 99,999-byte limitation on MARC record size could be
exceeded. And we would have one less reason to move to SolR,
notwithstanding the other reasons to move to it.
--
Frédéric

_______________________________________________
Koha-devel mailing list
Koha-devel at lists.koha-comm...
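(To make the limit concrete: ISO2709 stores the record length as the
first 5 ASCII digits of the leader, hence the 99,999-byte ceiling. A
guard in the serializing code could look like the sketch below; the
as_usmarc()/as_xml_record() calls are real MARC::Record methods, while
the fall-back helpers are invented for illustration.)

    use MARC::File::XML;   # adds as_xml_record() to MARC::Record

    my $iso2709 = $record->as_usmarc();
    if ( length($iso2709) > 99999 ) {
        # Too big for ISO2709's 5-digit length field: keep XML only.
        store_marcxml( $record->as_xml_record() );   # invented helper
    }
    else {
        store_iso2709($iso2709);                     # invented helper
    }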

Another reason is that deserializing from XML rather than from ISO2709 is
WAY slower and processor intensive. It would not be a problem if every
setup would CORRECTLY and SENSIBLY use XSLT. But since XSLT.pm is what it
is, i.e. it takes the MARC record, edits it, and transforms it to XML
before processing the XSLT, this process would only be slower if we used
XML.

Moreover, the main reason why records are bigger than 9999 bytes is
items. It is proven that it would really be HEALTHY to remove them. So
the problem would not exist any longer.
My 2 cts.
--
Henri-Damien Laurent

On 12 Nov 2010, 7:55 AM, "Frédéric Demians" <frederic at tamil.fr> wrote:

> Here we go
>
> http://www.nntp.perl.org/group/perl.perl4lib/2006/05/msg2369.html
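(A sketch of the two-hop pipeline described above, assuming XML::LibXSLT;
the stylesheet path is illustrative, and re-parsing the stylesheet per
record, as shown here for clarity, is the obvious first thing to cache:)

    use XML::LibXML;
    use XML::LibXSLT;
    use MARC::File::XML;   # adds as_xml_record() to MARC::Record

    # MARC::Record -> MARCXML string -> DOM -> XSLT output: the costly path.
    my $marcxml = $record->as_xml_record();
    my $dom     = XML::LibXML->load_xml( string => $marcxml );
    my $xslt    = XML::LibXSLT->new;
    my $sheet   = $xslt->parse_stylesheet_file('MARC21slim2OPACResults.xsl');
    my $html    = $sheet->output_as_chars( $sheet->transform($dom) );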

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

talk, git stores the files in its database, since it is a DAG-oriented
SCM. It may send the diffs, but it actually stores the files.

> >> ADSL is coping well with that... But there are still some places in
> >> the world which do not have access to wide bandwidth.
>
> True. But a Git clone (of the public repo) is a once-and-done
> operation. Anybody installing Koha for production use could use the
> tarball or (even better) the Debian package. Particularly because of
> the Debian package, we're getting past the point where dev mode would
> be recommended for use by single-library production installations.
>
> I've been doing some measurements. A PO-only repository would be
> about 50M in size, and creating such a thing is the easy part. But if
> we move misc/translator/po to a separate repository, we would have to
> also remove that directory from the main repository in order to
> realize the repository size savings motivating your proposal - a 'git
> rm misc/translator/po' wouldn't reduce the size of the repo. My test
> run is not quite finished yet (it takes a long time for
> git-filter-branch to handle almost 13,000 commits), but even assuming
> that 50M could be pared from the main repository, actually doing that
> would come at a significant cost: every commit would be rewritten by
> the git-filter-branch operation. Rewriting history like that could
> mean that every single person who clones against the public repo could
> have to deal with forced branch updates, to say nothing of
> invalidating all of the release tags.

I don't think all the release tags would be broken. And it would allow
releasing localisation at a different pace, when there is a need.

> That prospect doesn't hearten me. I'll report back once my test
> finishes.

Thanks for your update.

> Regards,
>
> Galen

Regards.
--
Henri-Damien LAURENT

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

From the koha-commits mailing list:

This mailing list is an informational list to which commit descriptions
for patches pushed to the public Koha Git repository (currently
git://git.koha-community.org/koha.git) are posted.

This list should not be used to discuss individual patches, as not all
contributors will be subscribed to koha-commits. Instead, please use
koha-devel, koha-patches, or bugs.koha.org for that purpose.

Is this no longer the case?

-----Original message-----
From: koha-devel-bounces at lists.koha-community.org
[mailto:koha-devel-bounces@lists.koha-community.org] On behalf of MJ Ray
Sent: Monday, 15 November 2010 11:10
To: Galen Charlton
CC: koha-devel at lists.koha-community.org; koha-patches at lists.koha-community.org
Subject: [Koha-devel] Discussion of development topics, was: Bug 5401: WYSIWYG for Koha News

Galen Charlton wrote:
> There is a long-standing practice of discussion of individual patches
> on koha-commits, however, so while koha-devel is certainly a valid
> choice for discussing this issue, I do want to point out and remind
> people that some relevant discussion does take place on the
> koha-commits list.

I don't like that practice because:

1. the list description "Patches submitted to Koha" gives no indication
it occurs, and it's different from other projects' -commit mailing
lists;

2.
developers have to subscribe to receive all patches, even if they're
only interested in their fields of activity and the occasional
discussions, contributing to information overload;

3. I think the patches are carried in only one of the web forum
versions of our discussions, http://koha-community.org/support/forums/ ;

4. usually the patches relate to a bug, and email discussion isn't
automatically attached to the bug report - I have started work to
address this, but it's still going to have some delay between the
discussion and posting to the bug report, which may mean the discussion
is irrelevant by the time the bug is told;

5. most importantly, many topics are wider than a single patch, such as
this one about what rich editor to adopt.

Can we overcome these problems, or state clearly that
wider-than-one-patch discussions should be on -devel?

Thanks,
--
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Past Koha Release Manager (2.0), LMS programmer, statistician, webmaster.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for Koha work http://www.software.coop/products/koha

_______________________________________________
Koha-devel mailing list
Koha-devel at lists.koha-community.org
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

with a copy (a normal book with many chapters). What we (I, Amit and
Savitra) are writing does ONLY this link. It is not a generic tool to
link records.

Bye
--
Zeno Tajoli
CILEA - Segrate (MI)
tajoliAT_SPAM_no_prendiATcilea.it
(Anti-spam masked address; replace the parts between AT with @)

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

string changes will be pushed, with the exception of blocker bugs and/or
security bugs.

On December 20, 2010, 3.2.x will enter a translation freeze, and .po
files will be pulled on December 22, 2010, just prior to the release of
3.2.2. Immediately following the release of 3.2.2, the 3.2.x branch will
be "thawed" and accept all patches.

To date there are some 87 commits to be included in the 3.2.2 release.

Keep up the great work!

Kind Regards,
Chris Nighswonger
3.2 Release Maintainer

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

string changes will be pushed, with the exception of blocker bugs and/or
security bugs.

On January 20, 2011, 3.2.x will enter a translation freeze, and .po files
will be pulled on January 22, 2011, just prior to the release of 3.2.3.
Immediately following the release of 3.2.3, the 3.2.x branch will be
"thawed" and accept all patches.

To date there are some 75 commits to be included in the 3.2.3 release.

Keep up the great work!

Kind Regards,
Chris Nighswonger
3.2.x Release Maintainer

On Tue, Jan 11, 2011 at 5:12 PM, Chris Nighswonger wrote:
> Hmm.... cut-n-paste strikes again... ;-)
>
> 3.2.3 is on track to be released on January 22, 2011, rather than a
> year ago.
>
> On Tue, Jan 11, 2011 at 5:11 PM, Chris Nighswonger wrote:
>>
>> Hi all,
>>
>> 3.2.3 is on track to be released on January 22, 2010, however, there
>> is another important date between now and then which I wanted to point
>> out.
>>
>> On January 15, 2011, 3.2.3 will enter a string freeze to facilitate
>> finalization of translation updates. After this date, the only patches
>> introducing string changes which will be pushed for 3.2.3 will be
>> those addressing blocker bugs or security issues. All others will be
>> pushed to 3.2.4.
>>
>> Patches not introducing string changes will be pushed up to the time
>> of the release.
>>
>> Keep up the good work!
>>
>> Kind Regards,
>> Chris Nighswonger
>> 3.2 Release Maintainer

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

rendering is pretty different from everyone else's. That may create
issues which are a problem, or it may not. We have tried to correct
JavaScript problems with Internet Explorer whenever we find them, but
it's likely you'll find more.

As for over-committing, I think this depends partly on how much they want
Koha on IE7 to look exactly like Koha on Firefox or Chrome. I'll happily
test and sign off on patches to address IE7 compatibility fixes.
--
Owen
--
Web Developer
Athens County Public Libraries
http://www.myacpl.org

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

Just an idea, but what about having the hackfest at KohaCon happen right
before a release! That way the release manager would be there to work
with everyone on blockers and possibly get the blockers out of the way.
Then 2 weeks or so after KohaCon there would be a "release"...

From my experience the Release Manager does a talk about what to expect
out of the pending release anyway...

Just talking off the top of my head, but that would provide for only 2
travel occasions, and what a bang for your buck on KohaCon!

On Mon, Apr 11, 2011 at 6:24 AM, Magnus Enger <magnus at enger.priv.no> wrote:
> On 11 April 2011 13:14, Chris Cormack <chris at bigballofwax.co.nz> wrote:
> > On 11 April 2011 23:12, Magnus Enger <magnus at enger.priv.no> wrote:
> >> On 8 April 2011 18:22, Paul Poulain <paul.poulain at biblibre.com> wrote:
> >>> Just a special thanks to Magnus, that came from Norway, and to Katrin,
> >>> that came from Germany. We should send some pictures we've taken very
> >>> soon !
> >>
> >> Just a special thanks to BibLibre for organizing and hosting the
> >> hackfest in the best way possible! Being the only Koha hacker in
> >> Scandinavia, getting to spend some time with people with similar
> >> interests every now and then is just great! Maybe we could make a
> >> European hackfest a yearly tradition?
> >>
> > How about 6 monthly ... a month ahead of each release :)
>
> That would be absolutely fine by me, it's just that I'm afraid I would
> not be able to travel to 2 hackfests and 1 KohaCon every year... ;-)
>
> Best regards,
> Magnus
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

--
David Schuster
Plano ISD
Library Technology Coordinator

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

option "b" and "c". Where should I start tinkering around to add that
feature... any idea? label-item-search.pl, or search.tmpl, or simply via
IntranetUserJS using a bit of jQuery?

FWIW, I'm on 3.2.7, and upgrading to 3.4 is not an option at the moment.

cheers :)
--
Indranil Das Gupta

Phone : +91-98300-20971
Blog  : http://indradg.randomink.org/blog
IRC   : indradg on irc://irc.freenode.net
Twitter : indradg
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Please exchange editable Office documents only in ODF Format. No other
format is acceptable. Support Open Standards.

For a free editor supporting ODF, please visit LibreOffice -
http://www.documentfoundation.org

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

reversible. It might be easier to do this if the changes were in a
separate directory, with an individual file for each change and a script
that pulled the relevant ones in. It might also make it easier to isolate
where a stage failed, rather than having everything embedded in one
script. (I'm thinking along the lines of Rails, which I seem to recall
handled database schema changes rather neatly.) There's no single
solution out there, but I think we could definitely do better if we put
our minds to it.

Colin
--
Colin Campbell
Chief Software Engineer, PTFS Europe Limited
Content Management and Library Solutions
+44 (0) 845 557 5634 (phone) +44 (0) 7759 633626 (mobile)
colin.campbell at ptfs-europe.com
skype: colin_campbell2
http://www.ptfs-europe.com

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

on a machine where you install Koha as a package, with a standard
install, with USMARC only.

Hope that helps.
--
Henri-Damien LAURENT

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

ZEBRA_MARC_FORMAT and ZEBRA_LANGUAGE. All it would have to do is set
those parameters or ask for confirmation... The same could be done with
INSTALL_MODE (dev mode is the most appropriate in many structures, even
for "little libraries", when you want some support), INSTALL_SRU, and
AUTH_INDEX_MODE (grs1 would be mandatory for a UNIMARC installation).
And the other environment variables enclosed in
install_misc/environment_MakeFile.PL could be displayed and asked for
confirmation before getting to the make; make install.

my 2 cents.
--
Henri-Damien LAURENT

From bogus@does.not.exist.com Sat Oct 16 04:25:36 2010
From: bogus@does.not.exist.com ()
Date: Sat, 16 Oct 2010 02:25:36 -0000
Subject: No subject
Message-ID: 

bin/maintenance/remove_items_from_biblioitems.pl is a migration tool. Or
have I understood it wrongly? (INSTALL wrongly mentions its location as
misc/maintenance/; the correct location ought to be bin/maintenance/.)

--
Mahesh T. Pai ||
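(A sketch of the one-file-per-change update runner Colin describes above.
The directory layout, table name, and naive statement splitting are all
invented for illustration; this is not how Koha's updatedatabase works
today:)

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:mysql:koha', 'kohaadmin', 'secret',
        { RaiseError => 1 } );

    # Remember which change files have already been applied.
    $dbh->do('CREATE TABLE IF NOT EXISTS applied_updates
              (name VARCHAR(255) PRIMARY KEY)');
    my %done = map { $_->[0] => 1 }
        @{ $dbh->selectall_arrayref('SELECT name FROM applied_updates') };

    # Apply pending changes in sorted (version) order, one file each,
    # so a failure is isolated to the file that caused it.
    for my $file ( sort glob 'installer/data/mysql/db_update/*.sql' ) {
        next if $done{$file};
        open my $fh, '<', $file or die "$file: $!";
        my $sql = do { local $/; <$fh> };
        # Naive split; fine for simple ALTER/INSERT change files.
        $dbh->do($_) for grep { /\S/ } split /;\s*\n/, $sql;
        $dbh->do( 'INSERT INTO applied_updates (name) VALUES (?)',
                  undef, $file );
        print "applied $file\n";
    }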