From paul.poulain at biblibre.com Mon Nov 1 05:04:06 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Mon, 01 Nov 2010 05:04:06 +0100
Subject: [Koha-devel] wiki [important] changes & request for reviewing
Message-ID: <4CCE3C36.1040407@biblibre.com>

Hello,

Those of you with an RSS feed on the wiki may have seen a lot of changes. It was a task I had planned to do during the hackfest; I couldn't then, but today I could!

I've done a lot of cleanup on all the RFCs, and I hope (and think) they are much clearer now!

I've created 3 new categories: rfc_for3.2, rfc_for_3.4, abandoned_rfc

I've checked *all* (109!) RFCs and added or corrected their categories. So, now:
- http://wiki.koha-community.org/wiki/Category:RFCs:for_3.2 => contains RFCs for 3.2
- http://wiki.koha-community.org/wiki/Category:RFCs_for3.4 => contains RFCs that have been announced as expected for 3.4
- http://wiki.koha-community.org/wiki/Category:Abandoned_RFC => contains abandoned RFCs (because they are no longer relevant or we have lost contact with the developer)

On the page http://wiki.koha-community.org/wiki/Category:RFCs there are still 19 pages that I don't know what to do with.

FOLLOW-UP => ANYONE who has entered an RFC on the wiki should go and double-check that it is in the right place. On some RFCs the author was displayed in the RFC itself (BibLibre or Liblime most of the time, but a few others), but the edit history was lost when we moved the wiki. So, for many RFCs, there is no information in the RFC itself, and it appears to have been filed by nengard, which is wrong.

I have also modified the "Version" tags. I propose that the "version X.Y" tag be checked only once the feature has been integrated into the master branch (when it is certain it will be part of release X.Y). Thus I've removed all "3.4" tags; they will be re-checked later (and RFCs_for3.4 is clear).

Other points:
* the page http://wiki.koha-community.org/wiki/Category:RFCs describes the workflow
* please note I've created a template for RFCs (I haven't adapted existing RFCs). If something is missing from this template, let me know (or update it, it's quite easy!). We (BibLibre) will use this template for the coming-soon RFCs!

I will now work on BibLibre's RFCs for 3.4 (there may be 40 of them, mostly for acquisitions & serials).

HTH
-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From mjr at phonecoop.coop Mon Nov 1 12:54:04 2010
From: mjr at phonecoop.coop (MJ Ray)
Date: Mon, 01 Nov 2010 11:54:04 +0000
Subject: [Koha-devel] wiki [important] changes & request for reviewing
Message-ID: 

Paul Poulain wrote:
> - http://wiki.koha-community.org/wiki/Category:Abandoned_RFC => contains
> abandoned RFCs (because they are no longer relevant or we have lost
> contact with the developer)

I've rescued two of them (multi-level biblios and requiring items) which I'd still like to see in 3.4, although I have no funding for them at the moment.

> On the page http://wiki.koha-community.org/wiki/Category:RFCs there are
> still 19 pages that I don't know what to do with.
>
> FOLLOW-UP => ANYONE who has entered an RFC on the wiki should go and
> double-check that it is in the right place. On some RFCs the author was
> displayed in the RFC itself (BibLibre or Liblime most of the time, but a
> few others), but the edit history was lost when we moved the wiki. So,
> for many RFCs, there is no information in the RFC itself, and it appears
> to have been filed by nengard, which is wrong.

Why didn't you ask?
I have a copy of what I think is the edit history, although the move also disrupted it by renaming many pages. Here is a list of RFC creator usernames from the old wiki:

rfcs3.2:rfc32_hourly_circ gmc created
rfcs3.2:rfc32_email_checkout_slips gmc created
rfcs3.2:rfc32_fine_thresholds gmc created
rfcs3.2:rfc32_opening_hours gmc created
rfcs3.2:rfc32_recalls gmc created
rfcs3.2:rfc32_holdability_flag gmc created
rfcs3.2:rfc32_notifications_and_opac_messaging gmc created
rfcs3.2:rfc32_system_groups gmc created
rfcs3.2:rfc32_marcstructure_translation eric.begin created
rfcs3.2:rfc32_description_translation eric.begin created
rfcs3.2:rfc32_readingrecordhistory tipaul created : reading record management by patron
rfcs3.2:acq-permissions nicomo created
rfcs3.2:create-item-when-ordering nicomo created
rfcs3.2:enhance-closing-basket nicomo created
rfcs3.2:refining-budgets nicomo created
rfcs3.2:better-stats-about-acq nicomo created
rfcs3.2:refining-budgets-management nicomo created
rfcs3.2:bib-search-acq nicomo created
rfcs3.2:searches-on-orders nicomo created
rfcs3.2:suggestion-opac-searchtest nicomo created
rfcs3.2:suggestion-opac-hold nicomo created
rfcs3.2:suggestion-staff-workflow nicomo created
rfcs3.2:pqf_search tipaul created
rfcs3.2:display_item tipaul created : RFC for displaying items in large networks
rfcs3.2:rfc32_koha_api_uml_diagrams_and_e-r_diagrams tajoli created
rfcs3.2:rfc32_guided_reports tipaul created : guided reports improved
rfcs3.2:rfc32_turn_on_warnings gmc created
rfcs3.2:rfc32_general_holds_improvement gmc created
rfcs3.2:rfc32_hold_request_targeting gmc created
rfcs3.2:rfc32_ajax_in_the_staff_interface pianohacker created
rfcs3.2:rfc32_advancedpatronsearch danny.bouman created : RFC for advanced patron search
rfcs3.2:buying-analysis tipaul created
rfcs3.2:rfc32_cas_compatibility nicomo created
rfcs3.2:rfc32_unsaved_changes_notification danny.bouman created
rfcs3.2:advanced_cataloging_search danny.bouman created
rfcs3.2:rfc32_fast_add danny.bouman created
rfcs3.2:rfc32_syndetics_support danny.bouman created
rfcs3.2:rfc32_place_hold_multiple_items danny.bouman created
rfcs3.2:rfc32_tag_multiple_items danny.bouman created
rfcs3.2:rfc32_holdings_structure gmc created
rfcs3.2:authority_control_improvements gmc created
rfcs3.2:rfc32_brief_records gmc created
rfcs3.2:improved_batch_capabilities danny.bouman created
rfcs3.2:rfc32_improved_permissions danny.bouman created
rfcs3.2:rfc32_staff_place_hold_opac danny.bouman created
rfcs3.2:rfc32_import_export_system_preferences danny.bouman created
rfcs3.2:rfc32_barcode_prefixes rch created
rfcs3.2:rfc32_multiple_opac rch created
rfcs3.2:rfc32_take_items_out_of_bib gmc created
rfcs3.2:storequeries hdl created
rfcs3.2:rfc32_improved_security pustake created
rfcs3.2:rfc32_item_level_itemtypes rch created
rfcs3.2:blocking_non_blocking_rules tipaul created
rfcs3.2:fines_in_days tipaul created
rfcs3.2:dbix-class amoore created
rfcs3.2:rfc32_biblio_table rch created
rfcs3.2:rfc32_subtitle_field mdhafen created
rfcs3.2:requiring_items mjr created
rfcs3.2:rfc32_circ_policies rch created
rfcs3.2:rfc32_patronselfprivacy tipaul created : RFC for patron dealing with their Reading History at OPAC

If any appear multiple times, I think the later ones are translations.

Perhaps someone could recover any RFCs that they think are/should not be abandoned, based on that list?

Regards,
-- 
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Webmaster, Debian Developer, Past Koha RM, statistician, former lecturer.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for various work http://www.software.coop/products/
From M.de.Rooy at rijksmuseum.nl Mon Nov 1 14:56:08 2010
From: M.de.Rooy at rijksmuseum.nl (Marcel de Rooy)
Date: Mon, 1 Nov 2010 13:56:08 +0000
Subject: [Koha-devel] Adding items in Acquisition 3.2
Message-ID: <809BE39CD64BFD4EB9036172EBCCFA3119A907@S-MAIL-1B.rijksmuseum.intra>

Hi,

The new feature (in Koha 3.2) of adding items in Acquisitions with the framework display is quite promising. It appears, however, that the framework plugins cannot be activated there. Am I overlooking something? Or is anyone still working on this functionality?

Regards,
Marcel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lculber at mdah.state.ms.us Mon Nov 1 20:38:19 2010
From: lculber at mdah.state.ms.us (Linda Culberson)
Date: Mon, 01 Nov 2010 14:38:19 -0500
Subject: [Koha-devel] Koha 3.2 : bulkmarcimport.pl/bulkauthimport.pl
Message-ID: <4CCF172B.40700@mdah.state.ms.us>

Everyone,
I apologize if this is the wrong list for this question. There is considerable change in the bulkmarcimport.pl script, and it seems to be written to handle authorities as well as bibliographic records. I loaded my authority records using bulkauthimport.pl as I did in 3.1, and it worked fine, but I was wondering if I should be using bulkmarcimport.pl for both authorities and biblios (in other words, is bulkauthimport.pl being deprecated?)

I also still need to retain our system identification numbers from the 001 as biblio.biblionumber, because they are used to tie our analytics to the host record and in links to non-MARC databases. I see that there is still /usr/share/koha/bin/batchImportMARCWithBiblionumbers.pl, and 3.2's bulkmarcimport.pl does seem to allow for a "-k" "keepids" option (although the script does state that it is "for authorities, where 001 contains the authid for Koha, that can contain a very valuable info for authorities coming from LOC or BNF. useless for biblios probably"). I tried unsuccessfully to use that option for biblios, and wondered if anyone else was working on a similar problem.
Thanks in advance.

-- 
Linda Culberson lculber at mdah.state.ms.us
Archives and Records Services Division
Ms. Dept. of Archives & History
P. O. Box 571
Jackson, MS 39205-0571
Telephone: 601/576-6873
Facsimile: 601/576-6824
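[Sketch added for reference: the usual approach to Linda's problem is to pre-process the MARC file so each record carries its legacy 001 where Koha expects the biblionumber, before running the bulk import. The script below is illustrative only and not part of Koha; it assumes the stock MARC21 setup, where biblio.biblionumber maps to 999$c and biblioitems.biblioitemnumber to 999$d. Whether a given import script honors a pre-set 999$c varies by version, so test against a throwaway database first.]

    #!/usr/bin/perl
    # Illustrative sketch, not an official Koha script: copy each record's
    # 001 into 999$c/$d (the default MARC21 home of biblionumber and
    # biblioitemnumber) before importing.
    use strict;
    use warnings;
    use MARC::Batch;
    use MARC::Field;

    my ($infile, $outfile) = @ARGV;
    my $batch = MARC::Batch->new('USMARC', $infile);
    open my $out, '>', $outfile or die "cannot open $outfile: $!";

    while (my $record = $batch->next()) {
        my $f001 = $record->field('001');
        next unless $f001;                    # skip records with no control number
        (my $id = $f001->data()) =~ s/\D//g;  # biblionumber must be numeric
        next unless length $id;
        # replace any existing 999 so old and new ids cannot conflict
        $record->delete_field($_) for $record->field('999');
        $record->append_fields(
            MARC::Field->new('999', ' ', ' ', c => $id, d => $id)
        );
        print {$out} $record->as_usmarc();
    }
    close $out;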
From robin at catalyst.net.nz Mon Nov 1 22:54:28 2010
From: robin at catalyst.net.nz (Robin Sheat)
Date: Tue, 02 Nov 2010 10:54:28 +1300
Subject: [Koha-devel] Changing default koha mapping 440$a -> 490$a
Message-ID: <1288648468.2328.34.camel@zarathud>

In 2008, the 440 MARC field was obsoleted[0], and 490$a is the equivalent. Default Koha installs (unless it was updated relatively recently) map 'seriestitle' to 440$a. I'm planning to update this to 490$a for a client's install, but was wondering:
a) is this likely to make things blow up in unexpected ways?, and
b) is this worth putting into the default Koha mapping so that new installs have it set up correctly?

For obvious reasons of not messing with people's data, adding this as a database update is probably a bad idea, but is there any issue with having it as the new default?

[0] http://www.loc.gov/marc/bibliographic/bd4xx.html
-- 
Robin Sheat
Catalyst IT Ltd.
+64 4 803 2204
GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part
URL: 

From nengard at gmail.com Mon Nov 1 23:01:32 2010
From: nengard at gmail.com (Nicole Engard)
Date: Tue, 2 Nov 2010 11:01:32 +1300
Subject: [Koha-devel] Changing default koha mapping 440$a -> 490$a
In-Reply-To: <1288648468.2328.34.camel@zarathud>
References: <1288648468.2328.34.camel@zarathud>
Message-ID: 

I think it makes perfect sense for it to be the new default. We have many libraries asking for the same thing. I'd even recommend some way for libraries to choose which field the series is pulled from, so that an update is possible.

Nicole

2010/11/2 Robin Sheat :
> In 2008, the 440 MARC field was obsoleted[0], and 490$a is the
> equivalent. Default Koha installs (unless it was updated relatively
> recently) map 'seriestitle' to 440$a. I'm planning to update this to
> 490$a for a client's install, but was wondering:
> a) is this likely to make things blow up in unexpected ways?, and
> b) is this worth putting into the default Koha mapping so that new
> installs have it set up correctly?
>
> For obvious reasons of not messing with people's data, adding this as a
> database update is probably a bad idea, but is there any issue with
> having it as the new default?
>
> [0] http://www.loc.gov/marc/bibliographic/bd4xx.html
> --
> Robin Sheat
> Catalyst IT Ltd.
> +64 4 803 2204
> GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/
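[Sketch added for reference: concretely, the change under discussion comes down to repointing the Koha-to-MARC mapping, which the "Koha to MARC mapping" administration page stores in the marc_subfield_structure table in the 3.x schema. The snippet below is a rough sketch of the one-time change on an existing install, not tested code; verify the table and column names against your version, back up first, and note that it only changes the mapping used from then on. Already-catalogued records would need something like misc/batchRebuildBiblioTables.pl to repopulate biblio.seriestitle.]

    # Rough sketch only: what the admin UI change amounts to in SQL terms,
    # expressed through Koha's own database handle.
    use strict;
    use warnings;
    use C4::Context;

    my $dbh = C4::Context->dbh;

    # detach seriestitle from 440$a in all frameworks...
    $dbh->do(q{
        UPDATE marc_subfield_structure
           SET kohafield = NULL
         WHERE kohafield = 'biblio.seriestitle' AND tagfield = '440'
    });

    # ...and attach it to 490$a instead
    $dbh->do(q{
        UPDATE marc_subfield_structure
           SET kohafield = 'biblio.seriestitle'
         WHERE tagfield = '490' AND tagsubfield = 'a'
    });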
From paul.poulain at biblibre.com Tue Nov 2 04:07:49 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Tue, 02 Nov 2010 04:07:49 +0100
Subject: [Koha-devel] Getting rid of YUI ?
Message-ID: <4CCF8085.7070602@biblibre.com>

Hi world,

As you know, we are using both the YUI and jQuery JavaScript libraries in Koha. That's one too many! I think jQuery UI (http://jqueryui.com) is good enough that we can decide to get rid of YUI.

Pro:
* jQuery is much more actively developed
* YUI & webdev have problems working together
* a lot of information is available for jQuery, including tutorials and books
* http://jqueryui.com/docs/Theming/API seems much clearer & easier to use to me than the YUI CSS (is it just me?)
* jQuery UI comes with a complete (themable) set of icons

Cons:
? (Anyone want to argue?)

So, I propose the following roadmap:
* new templates should be developed using jQuery, not YUI
* 3.4 would still have both
* in 3.6, we get rid of YUI

Another question: should we ship the jQuery code inside Koha (and thus have to take care of the updates) or should we use an external source? (Does this exist? For YUI it does; see the yuipath syspref in Koha.)

-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From robin at catalyst.net.nz Tue Nov 2 04:14:09 2010
From: robin at catalyst.net.nz (Robin Sheat)
Date: Tue, 02 Nov 2010 16:14:09 +1300
Subject: [Koha-devel] Getting rid of YUI ?
In-Reply-To: <4CCF8085.7070602@biblibre.com>
References: <4CCF8085.7070602@biblibre.com>
Message-ID: <1288667649.2328.58.camel@zarathud>

Paul Poulain wrote on Tue, 02-11-2010 at 04:07 [+0100]:
> Another question: should we ship the jQuery code inside Koha (and thus
> have to take care of the updates) or should we use an external source?
> (Does this exist? For YUI it does; see the yuipath syspref in Koha.)

It would definitely have to be optional if it were made possible to use the hosted version, for people on non-internet-connected machines. However, this would encourage bringing the support up to modern jQuery levels. There are currently issues in places when you use a current version (and this affects the packages, which originally tried to use the Debian-packaged version).

-- 
Robin Sheat
Catalyst IT Ltd.
+64 4 803 2204
GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part
URL: 
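[Sketch added for reference: the yuipath preference Paul mentions is the existing switch between the copy shipped with Koha and an external URL, and a jQuery equivalent would presumably take the same shape. Everything below is hypothetical: the "jquerypath" preference and the local path shown do not exist; only C4::Context->preference is real Koha API.]

    # Hypothetical sketch of a "jquerypath" syspref modeled on yuipath.
    # "local" (a safe default) means the copy shipped with Koha; any other
    # value is treated as an external base URL. Templates would build their
    # <script src="..."> tags from the returned base.
    use C4::Context;

    sub jquery_base {
        my $pref = C4::Context->preference('jquerypath') || 'local';
        return '/intranet-tmpl/lib/jquery' if $pref eq 'local';  # illustrative path
        return $pref;   # e.g. an external CDN base URL
    }

[A failover of the kind Chris Nighswonger suggests further down the thread would additionally need the page itself to detect that the external copy failed to load and fall back to the local one; that part cannot be done server-side.]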
From paul.poulain at biblibre.com Tue Nov 2 05:24:21 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Tue, 02 Nov 2010 05:24:21 +0100
Subject: [Koha-devel] RFCs for 3.4 from BibLibre (serials & acquisitions)
Message-ID: <4CCF9275.3010307@biblibre.com>

Hello,

I have filed RFCs for all the enhancements you can expect from BibLibre in the next few months. All of them are sponsored (by Saint-Etienne University) and will have to be delivered to the library for a "go live" in May 2011.

* You can find them on the wiki (search biblibre rfc for3.4):
http://wiki.koha-community.org/w/index.php?title=Special:Search&ns0=1&redirs=1&search=biblibre+rfc+for3.4&limit=50&offset=0
* You can also find them on Bugzilla: I've added a saved search, shared with everybody, called "Koha 3.4 enhancements".

They are only related to serials and acquisitions.

What's the next step?
* read the RFCs
* comment on them (on the wiki or on Bugzilla)
* we will probably propose/organise an IRC meeting to discuss & get feedback on all these incoming developments. None of them (except the Solr one) has started, so if you want to give us advice, add something, ... it's now or it may be too late ;-)

-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From paul.poulain at biblibre.com Tue Nov 2 05:27:26 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Tue, 02 Nov 2010 05:27:26 +0100
Subject: [Koha-devel] Template for RFCs updated
Message-ID: <4CCF932E.8060703@biblibre.com>

Hello,

This is a follow-up to my mail yesterday about the wiki RFC template: I've updated the template for RFCs. Now you can/must also enter the Bugzilla number, like this: |bug=
The number will be transformed into a nice URL to the bug itself; just enter the number (|bug=5420).

-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From tajoli at cilea.it Tue Nov 2 10:13:47 2010
From: tajoli at cilea.it (Zeno Tajoli)
Date: Tue, 02 Nov 2010 10:13:47 +0100
Subject: [Koha-devel] wiki [important] changes & request for reviewing
In-Reply-To: 
References: 
Message-ID: <4CCFD64B.5040400@cilea.it>

Hi all,

On 01/11/2010 12:54, MJ Ray wrote:
> Paul Poulain wrote:
>> - http://wiki.koha-community.org/wiki/Category:Abandoned_RFC => contains
>> abandoned RFCs (because they are no longer relevant or we have lost
>> contact with the developer)
> I've rescued two of them (multi-level biblios and requiring items) which
> I'd still like to see in 3.4, although I have no funding for them at the
> moment.

MJ, can you post the link to the "multi-level biblios" RFC? I understand that the "requiring items" RFC is http://wiki.koha-community.org/wiki/Requiring_items_RFC but I can't identify the first one. Probably the "analytic record support" RFC, http://wiki.koha-community.org/wiki/Analytic_Record_support, is related to the "multi-level biblios" RFC.

Bye
-- 
Zeno Tajoli
CILEA - Segrate (MI)
tajoliAT_SPAM_no_prendiATcilea.it
(Anti-spam masked address; replace the AT parts with @)

From fridolyn.somers at gmail.com Tue Nov 2 10:21:49 2010
From: fridolyn.somers at gmail.com (Fridolyn SOMERS)
Date: Tue, 2 Nov 2010 10:21:49 +0100
Subject: [Koha-devel] opac-search.pl
In-Reply-To: 
References: <.77.224.23.61.1288484548.squirrel@webmail.tgi.es>
Message-ID: 

Hi,

Can we think about removing the "No Zebra" mode? It would make the code clearer.

Regards,

2010/10/31 Chris Nighswonger 
> Hi Toni
>
> On Sat, Oct 30, 2010 at 8:22 PM, Toni Rosa wrote:
>> Hello,
>>
>> We are using Koha 3.0.5.
>> I noticed that the OPAC advanced search (opac-search.pl), when running
>> in a NoZebra installation, doesn't honor "publication period" intervals
>> (e.g. entering the 1800-1900 interval, the search doesn't work).
>> I followed the code to the "C4/Search.pm" file, method "NZanalyse", and
>> it seems that it actually doesn't even try to build the SQL query taking
>> the intervals into account.
>> Did anyone find the same behavior? (I'd guess everybody should, as it
>> seems a bug/omission to me!)
>>
> NoZebra is basically deprecated and for the most part unsupported. I would
> highly recommend switching over to Zebra, even for a small collection.
>
> Kind Regards,
> Chris
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel

-- 
Fridolyn SOMERS
ICT engineer
PROGILONE - Lyon - France
fridolyn.somers at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cnighswonger at foundations.edu Tue Nov 2 13:46:32 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Tue, 2 Nov 2010 08:46:32 -0400
Subject: [Koha-devel] Getting rid of YUI ?
In-Reply-To: <4CCF8085.7070602@biblibre.com>
References: <4CCF8085.7070602@biblibre.com>
Message-ID: 

On Mon, Nov 1, 2010 at 11:07 PM, Paul Poulain wrote:
> Another question: should we ship the jQuery code inside Koha (and thus
> have to take care of the updates) or should we use an external source?
> (Does this exist? For YUI it does; see the yuipath syspref in Koha.)

I have no problem with changing to something better than YUI.

Regarding the location of the source: I think that if we use an external source, we *must* include an internal version/option for those who are bandwidth-challenged, as well as a preemptive protection against data-link outages on local installations. Koha tends to bork if the YUI source is external and the link to the internet dies. In this regard, we should also consider some failover mechanism if we continue to use an external source, so that when the external source is unavailable for whatever reason, Koha falls back to the internal source.

my $0.02 worth.

Kind Regards,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From oleonard at myacpl.org Tue Nov 2 13:59:16 2010
From: oleonard at myacpl.org (Owen Leonard)
Date: Tue, 2 Nov 2010 08:59:16 -0400
Subject: [Koha-devel] Getting rid of YUI ?
In-Reply-To: <4CCF8085.7070602@biblibre.com>
References: <4CCF8085.7070602@biblibre.com>
Message-ID: 

> I think jQuery UI (http://jqueryui.com) is good enough that we can
> decide to get rid of YUI.

I think this is a good goal. I agree that jQuery UI is mature enough to start using.

> Cons:
> ? (Anyone want to argue?)

Not a con, but a difficulty: we need to choose a good jQuery-based menu system to replace the YUI one. This is something jQuery UI doesn't offer.

-- Owen

-- 
Web Developer
Athens County Public Libraries
http://www.myacpl.org

From kyle.m.hall at gmail.com Tue Nov 2 14:14:16 2010
From: kyle.m.hall at gmail.com (Kyle Hall)
Date: Tue, 2 Nov 2010 09:14:16 -0400
Subject: [Koha-devel] Getting rid of YUI ?
In-Reply-To: 
References: <4CCF8085.7070602@biblibre.com>
Message-ID: 

Another upvote for jQuery. I'm using it for Libki now, and I find it far more elegant to work with than YUI.

Kyle
http://www.kylehall.info
Mill Run Technology Solutions ( http://millruntech.com )
Crawford County Federated Library System ( http://www.ccfls.org )
Meadville Public Library ( http://www.meadvillelibrary.org )

On Tue, Nov 2, 2010 at 8:59 AM, Owen Leonard wrote:
>> I think jQuery UI (http://jqueryui.com) is good enough that we can
>> decide to get rid of YUI.
>
> I think this is a good goal. I agree that jQuery UI is mature enough to
> start using.
>
>> Cons:
>> ? (Anyone want to argue?)
>
> Not a con, but a difficulty: we need to choose a good jQuery-based
> menu system to replace the YUI one. This is something jQuery UI doesn't
> offer.
>
> -- Owen
>
> --
> Web Developer
> Athens County Public Libraries
> http://www.myacpl.org
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmcharlt at gmail.com Tue Nov 2 14:24:58 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Tue, 2 Nov 2010 09:24:58 -0400
Subject: [Koha-devel] Getting rid of YUI ?
In-Reply-To: <4CCF8085.7070602@biblibre.com>
References: <4CCF8085.7070602@biblibre.com>
Message-ID: 

Hi,

On Mon, Nov 1, 2010 at 11:07 PM, Paul Poulain wrote:
> I think jQuery UI (http://jqueryui.com) is good enough that we can
> decide to get rid of YUI.

I am in favor of this.

> Another question: should we ship the jQuery code inside Koha (and thus
> have to take care of the updates) or should we use an external source?
> (Does this exist? For YUI it does; see the yuipath syspref in Koha.)

I agree with Chris that we need to provide options for both an external and internal source.

Regards,

Galen
-- 
Galen Charlton
gmcharlt at gmail.com
From kmkale at anantcorp.com Tue Nov 2 14:31:20 2010
From: kmkale at anantcorp.com (Koustubha Kale)
Date: Tue, 2 Nov 2010 19:01:20 +0530
Subject: [Koha-devel] itemBarcodeInputFilter or Client side JS?
Message-ID: 

Hi all,
We are implementing Koha at a college. They have existing library software, and they have barcoded their whole collection of about 150,000 books with it. The peculiar thing about the software and the barcodes it generates is this:

The barcode-related data stored in the database of the software's backend (MS SQL) is the item's accession number & item type. The actual barcode is not stored anywhere. The printed barcodes look like this: B011364, B011436, AR011346, AR011436, AR018100, AR018103, AR018104, AR018105. (Note the duplicate accession numbers.) The leading letters correspond to the itemtype code stored in one table. The remaining digits are the accession number, stored in another table without the leading zeros. This causes a problem while migrating this data to Koha, due to the missing leading zeros, the arbitrary length of the string, and the fact that accession numbers are the same across different item types.

While generating a barcode for a new item, the software takes the itemtype and the accession number and prints a barcode label in the required format. When the barcode is read back in the software's circulation module by a barcode reader, an ASP script on the server separates the item type and the accession number, strips the leading zeros from the numeric part, and searches the database by means of a join on the two tables.

As the college has stuck these barcode labels on all items, we have to figure out how to emulate this functionality in Koha. (Post-migration we will generate & print barcodes from Koha, so we don't have to emulate this logic in Koha's barcode generation.) I was thinking we could store the barcode in Koha as itemtype-accession_number (without the leading zeros, e.g. AR-18100), retrieved from the old software's database, and when reading it in circulation in Koha, convert the scanned barcode from, for example, AR018100 to AR-18100.

I see two options to achieve this:
1) Create another filter for itemBarcodeInputFilter and modify the sub barcodedecode in C4/Circulation.pm
2) Write client-side JavaScript to do it.

I would like to know:
1) Which is the better way to achieve this?
2) Is there a way for us to avoid a patch to Koha which will probably be useless to anybody not migrating from this particular software (by using opacuserjs, perhaps)?

Regards,
Koustubha Kale
Anant Corporation

Contact Details :
Address : 103, Armaan Residency, R. W Sawant Road, Nr. Golden Dyes Naka, Thane (w), Maharashtra, India, Pin : 400601.
TeleFax : +91-22-21720108, +91-22-21720109
Mobile : +919820715876
Website : http://www.anantcorp.com
Blog : http://www.anantcorp.com/blog/?author=2
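[Sketch added for reference: to make option 1 concrete, here is a minimal sketch of the transformation as an extra case for C4::Circulation::barcodedecode, the sub behind the itemBarcodeInputFilter preference. The sub name is made up for illustration and this is not code from Koha; only the transformation described above is taken from the thread.]

    # Hypothetical extra itemBarcodeInputFilter case: turn a scanned legacy
    # barcode like "AR018100" into the "AR-18100" form proposed for storage.
    sub legacy_prefix_filter {
        my ($barcode) = @_;
        # letters = itemtype code, then leading zeros, then the accession number
        if ($barcode =~ /^([A-Za-z]+)0*([1-9]\d*)$/) {
            return "$1-$2";
        }
        return $barcode;   # leave anything that doesn't match untouched
    }

    # legacy_prefix_filter("AR018100") returns "AR-18100"
    # legacy_prefix_filter("B011364")  returns "B-11364"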
From tomascohen at gmail.com Tue Nov 2 14:32:10 2010
From: tomascohen at gmail.com (Tomas Cohen Arazi)
Date: Tue, 2 Nov 2010 10:32:10 -0300
Subject: [Koha-devel] Getting rid of YUI ?
In-Reply-To: 
References: <4CCF8085.7070602@biblibre.com>
Message-ID: 

On Tue, Nov 2, 2010 at 10:24 AM, Galen Charlton wrote:
>
>> Another question: should we ship the jQuery code inside Koha (and thus
>> have to take care of the updates) or should we use an external source?
>> (Does this exist? For YUI it does; see the yuipath syspref in Koha.)
>
> I agree with Chris that we need to provide options for both an
> external and internal source.

I think Koha shouldn't depend on external libraries' release cycles and promised API stability. We should depend on an included and tested version of the libraries and backport security fixes from upstream.

To+

From vdanjean.ml at free.fr Tue Nov 2 15:02:35 2010
From: vdanjean.ml at free.fr (Vincent Danjean)
Date: Tue, 02 Nov 2010 15:02:35 +0100
Subject: [Koha-devel] Getting rid of YUI ?
In-Reply-To: 
References: <4CCF8085.7070602@biblibre.com>
Message-ID: <4CD019FB.8090306@free.fr>

On 02/11/2010 14:32, Tomas Cohen Arazi wrote:
> On Tue, Nov 2, 2010 at 10:24 AM, Galen Charlton wrote:
>>
>>> Another question: should we ship the jQuery code inside Koha (and thus
>>> have to take care of the updates) or should we use an external source?
>>> (Does this exist? For YUI it does; see the yuipath syspref in Koha.)
>>
>> I agree with Chris that we need to provide options for both an
>> external and internal source.
>
> I think Koha shouldn't depend on external libraries' release cycles and
> promised API stability. We should depend on an included and tested
> version of the libraries and backport security fixes from upstream.

When Koha is packaged into a distribution (Debian, for example), embedding copies of external libraries is always a bad point. So you should prepare the code to use another (local) copy of this library.

IMHO, the best way is to provide a copy of the external libraries with the Koha code (so that users that download directly Koha can easily use it) but to put it in a separate directory so that packagers can remove it and replace it with symlinks to the system-wide library (such as the contents of the libjs-jquery-ui package in Debian for example)

Regards,
Vincent

> To+

-- 
Vincent Danjean
Adresse: Laboratoire d'Informatique de Grenoble
Téléphone: +33 4 76 61 20 11
ENSIMAG - antenne de Montbonnot
Fax: +33 4 76 61 20 99
ZIRST 51, avenue Jean Kuntzmann
Email: Vincent.Danjean at imag.fr
38330 Montbonnot Saint Martin

From cnighswonger at foundations.edu Tue Nov 2 16:24:30 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Tue, 2 Nov 2010 11:24:30 -0400
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
Message-ID: 

Starting a new thread because this is indirectly related to Paul's request.

2010/11/2 Paul Poulain 
> Hello,
>
> I have filed RFCs for all the enhancements you can expect from BibLibre in
> the next few months. All of them are sponsored (by Saint-Etienne University)
> and will have to be delivered to the library for a "go live" in May 2011.
>
> * You can find them on the wiki (search biblibre rfc for3.4):
> http://wiki.koha-community.org/w/index.php?title=Special:Search&ns0=1&redirs=1&search=biblibre+rfc+for3.4&limit=50&offset=0
> * You can also find them on Bugzilla: I've added a saved search, shared
> with everybody, called "Koha 3.4 enhancements".
>
> They are only related to serials and acquisitions.
>
> What's the next step?
> * read the RFCs
> * comment on them (on the wiki or on Bugzilla)
> * we will probably propose/organise an IRC meeting to discuss & get
> feedback on all these incoming developments. None of them (except the Solr
> one) has started, so if you want to give us advice, add something, ... it's
> now or it may be too late ;-)
>

As a vendor-neutral voice, I would like to encourage everyone who has a vested interest in these areas and the best interests of the Koha project at heart to actively participate in and respond to these RFCs. It seems that there is often little discussion on RFCs in the community. And even when there is discussion, it is often unclear whether a consensus is reached (at least publicly).
Furthermore, I would encourage vendors and others who post RFCs to do so with a willingness to adapt, adjust, bend, compromise, and/or defer to positions on those RFCs which may be different from their own but are clearly the consensus of the community at large. Vendors may and often do have the resources to implement "what they want"; however, this is not in the spirit of cooperation which this project so greatly depends upon for its success. Clients of vendors should be educated during the RFQ process as to this aspect of open source, and their expectations managed accordingly, imho.

I would also suggest that we implement a policy stating, in some agreeable way, that code/features will not be pushed to master unless they have passed through a review and consensus process by the community and the RM (as the head of development elected by the community). No one, excepting possibly the RM, should presuppose that their code is guaranteed inclusion by default.

Secondly, I would suggest that we implement a strong recommendation that larger shops submit timely RFCs *prior to* beginning work on code and then promote discussion on those RFCs. This recommendation should also suggest, with somewhat less force, that everyone submit timely RFCs to maximize the productivity and usefulness of the resources of all concerned.

Thirdly, I would suggest a stated policy (and such a policy is practically in place at present) which requires all submissions to pass through a QA branch and receive at a minimum one sign-off prior to being pushed into master. This policy should also assign a certain amount of responsibility to the one signing off, to avoid "frivolous" sign-offs. It should also, perhaps, include a restriction that the required sign-off for pushing to master come from a disinterested developer, perhaps from another vendor or the community at large.

This is a discussion we need to have. I would encourage everyone to invest time (the operative term here is 'invest') in this discussion.

Kind Regards,
Chris Nighswonger
Koha 3.2.x Release Maintainer
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tomascohen at gmail.com Tue Nov 2 16:32:31 2010
From: tomascohen at gmail.com (Tomas Cohen Arazi)
Date: Tue, 2 Nov 2010 12:32:31 -0300
Subject: [Koha-devel] Getting rid of YUI ?
In-Reply-To: <4CD019FB.8090306@free.fr>
References: <4CCF8085.7070602@biblibre.com> <4CD019FB.8090306@free.fr>
Message-ID: 

On Tue, Nov 2, 2010 at 11:02 AM, Vincent Danjean wrote:
> On 02/11/2010 14:32, Tomas Cohen Arazi wrote:
>> On Tue, Nov 2, 2010 at 10:24 AM, Galen Charlton wrote:
>>>
>>>> Another question: should we ship the jQuery code inside Koha (and thus
>>>> have to take care of the updates) or should we use an external source?
>>>> (Does this exist? For YUI it does; see the yuipath syspref in Koha.)
>>>
>>> I agree with Chris that we need to provide options for both an
>>> external and internal source.
>>
>> I think Koha shouldn't depend on external libraries' release cycles and
>> promised API stability. We should depend on an included and tested
>> version of the libraries and backport security fixes from upstream.
>
> When Koha is packaged into a distribution (Debian, for example), embedding
> copies of external libraries is always a bad point. So you should
> prepare the code to use another (local) copy of this library.
> > IMHO, the best way is to provide a copy of the external libraries with > the Koha code (so that users that download directly Koha can easily use > it) but to put it in a separate directory so that packagers can > remove it and replace it with symlinks to the system-wide library > (such as the contents of the libjs-jquery-ui package in Debian for > example) IMHO, it will always depend on the API stability of the libraries in use and, in the case you use as an example, the politics of the Linux distribution in question. As a maintainer of debian packages for Koha I'm sure you're confident about that in this specific distribution. Also, I'm not a jquery expert so I'm not aware of any non-stability on its API, so take my comments as mere questions. To+ From mdhafen at tech.washk12.org Tue Nov 2 16:56:56 2010 From: mdhafen at tech.washk12.org (Mike Hafen) Date: Tue, 2 Nov 2010 09:56:56 -0600 Subject: [Koha-devel] itemBarcodeInputFilter or Client side JS? In-Reply-To: References: Message-ID: Why can't you correct the barcodes during the migration? If you are going from MS-SQL to MySQL you should be able to add another step in the middle to populate the barcode from the available information in the database. If you are going from MARC into Koha you should be able to populate the barcode once the data is in Koha and update the MARC and Zebra from there. Is it not possible to do one of these two? On Tue, Nov 2, 2010 at 7:31 AM, Koustubha Kale wrote: > Hi All, > We are implementing Koha at a college. They have a existing software > for library. They have barcoded all their collection of about 150000 > books with this software. > The peculiar thing about the software and the barcode it generates is this > : > > The barcode related data stored in the database of the software > backend ( MS SQL) is the item's accession number & item type. The > actual barcode is not stored anywhere. > The printed barcodes are like this B011364, B011436, AR011346, > AR011436,AR018100,AR018103,AR018104,AR018105. ( Note the duplicate > accession numbers) > The first alphabets correspond to the itemtype code stored in a table. > The remaining numbers are the accession number stored in a table > without the leading zeros. This causes a problem while migrating this > data to Koha due to the missing leading zeros and arbitrary length of > the string and the fact that accession numbers are same for different > item types. > While generating a barcode for a new item the software takes the > itemtype and the accession number and prints out a barcode label in > the required format. > When the barcode is read back in the softwares circulation module by a > barcode reader, a asp script on the server separates the item type and > the accession number, strips the leading zeros from the numeric part > and does a search on the database by means of a join on the two > tables. > > As the college has stuck these barcode labels on all items, we have to > figure out how to emulate this functionality in Koha. (Post migration > we will generate & print barcodes from Koha; so we dont have to > emulate this logic in the barcode generation for Koha). I was thinking > we could store the barcode in Koha as itemtype-accession_number ( > without the leading zeros e.g. AR-18100) retrieved from the old > softwares database and when reading it in circulation in Koha we > convert the barcode from for example AR018100 to AR-18100. > > I see two options to acchieve this.. 
> options : > 1) Create another filter for itemBarcodeInputFilter and modify the sub > barcodedecode in C4/Circulation.pm > 2) write a client side javascript to do it. > > I would like to know is > 1) What is the better way to acchive this? > 2) Is there way for us to avoid a patch to Koha which will probably be > useless to anybody not migrating from this particular software. By > using opacuserjs perhaps.. > > > Regards, > Koustubha Kale > Anant Corporation > > Contact Details : > Address : 103, Armaan Residency, R. W Sawant Road, Nr. Golden Dyes > Naka, Thane (w), > Maharashtra, India, Pin : 400601. > TeleFax : +91-22-21720108, +91-22-21720109 > Mobile : +919820715876 > Website : http://www.anantcorp.com > Blog : http://www.anantcorp.com/blog/?author=2 > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ohiocore at gmail.com Tue Nov 2 19:10:10 2010 From: ohiocore at gmail.com (Joe Atzberger) Date: Tue, 2 Nov 2010 14:10:10 -0400 Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)] In-Reply-To: References: Message-ID: Chris, I disagree that the first sign-off on a major vendor's patches should be external. The first sign-off from a major vendor should be *internal* to their quality control process. This was at one time the standing policy amongst LibLime and BibLibre for major changes. I think encouraging abstraction and RFC-aware flexibility is fine, but I think it is unwise to suggest that we should block *working code* from getting in just because a bigger, different or more-deluxe RFC exists. RFCs and the widespread desire for a feature flavor X are really quite removed from a working implementation ready for action, testing and revision now. Also, I think if you develop your features "in the open" (e.g., posted w/ gitweb, or on github), the burden of synthesizing multiple RFCs and general "feature consensus" sentiment isn't on you in quite the same way as when changes are delivered en bloc. A vendor has what their clients are paying for, and if other devs have an unfunded desire for extra feature X, they can follow the branch right along add the last bit themselves, all while still in development. Whether X is pulled in by the vendor, or separately submitted by the dev to the RM doesn't really matter. --joe 2010/11/2 Chris Nighswonger > Starting a new thread because this is indirectly related to Paul's request. > > 2010/11/2 Paul Poulain > >> Hello, >> >> I have filed RFCs for all enhancements you can expect from BibLibre in the >> next few months. All of them are sponsored (by Saint-Etienne University), >> and will have to be delivered to the library for a "go live" in May-2011 >> >> * You can find them on the wiki (search biblibre rfc for3.4) : >> http://wiki.koha-community.org/w/index.php?title=Special:Search&ns0=1&redirs=1&search=biblibre+rfc+for3.4&limit=50&offset=0 >> * You can also find them on bugzilla : i've added a saved search, shared >> with everybody, called "Koha 3.4 enhancements". >> >> They are only related to serials and acquisitions. >> >> What's the next step ? 
>> * read the RFCs >> * comment them (on the wiki or on bugzilla) >> * we will probably propose/organise an IRC meeting to discuss & get any >> feedback for all those incoming developments. None of them (except the solR >> one) has started, so if you want to give us an advice, add something, ... >> it's now or it may be too late ;-) >> >> > As a vendor-neutral voice, I would like to encourage everyone who has an > vested interest in these areas and the best interests of the Koha project at > heart to actively participate and respond to these RFC's. It seems that > often there is little dicussion, etc. on RFCs in the community. And even > when there is discussion, etc. it is often unclear if a consensus is reached > (at least publicly). > > Furthermore, I would encourage vendors and others who post RFC's to do so > with a willingness to adapt, adjust, bend, compromise, and/or > to positions on those RFC's which may be > different but are clearly the consensus of the community at large. Vendors > may and often do have the resources to implement "what they want," however, > this is not in the spirit of cooperation which this project so greatly > depends upon for its success. Clients of vendors should be educated during > the RFQ process as to this aspect of open source, and their expectations > managed accordingly, imho. > > I would also suggest that we implement a policy that states in some > agreeable way that code/features will not be pushed to master which have not > passed through a review and consensus process by the community and the RM > (as the elected head of development by the community). No one excepting > possibly the RM should presuppose that their code is guaranteed inclusion by > default. > > Secondly, I would suggest that we implement a strong recommendation that > larger shops submit timely RFC's *prior to* beginning work on code and then > promote discussion on those RFC's. This recommendation should with some > lesser strength suggest that everyone submit timely RFC's to maximize > productivity and usefulness of the resources of all concerned. > > Thirdly, I would suggest a stated policy (and such a policy is presently in > place practically) which requires all submissions to pass through a QA > branch and receive at a minimum one sign-off prior to being pushed into > master. This policy should also assign a certain amount of responsibility to > the one signing off to avoid "frivolous" sign-offs. It should also, perhaps, > include a restriction that the required sign-off for pushing to master be a > disinterested developer perhaps from another vendor or the community at > large. > > This is a discussion we need to have. I would encourage everyone to invest > time (the operative term here is 'invest') in this discussion. > > Kind Regards, > Chris Nighswonger > Koha 3.2.x Release Maintainer > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ohiocore at gmail.com Tue Nov 2 19:45:23 2010 From: ohiocore at gmail.com (Joe Atzberger) Date: Tue, 2 Nov 2010 14:45:23 -0400 Subject: [Koha-devel] itemBarcodeInputFilter or Client side JS? In-Reply-To: References: Message-ID: I agree, neither itemBarcodeInputFilter nor javascript are necessary. 
Just make the data loaded in Koha match the printed barcodes. If you are worried about further generation of barcodes, then you might have to insert an artificial max barcode value under the scheme you intend to use, in order to provide a valid base value to increment. --Joe 2010/11/2 Mike Hafen > Why can't you correct the barcodes during the migration? > > If you are going from MS-SQL to MySQL you should be able to add another > step in the middle to populate the barcode from the available information in > the database. > > If you are going from MARC into Koha you should be able to populate the > barcode once the data is in Koha and update the MARC and Zebra from there. > > Is it not possible to do one of these two? > > > On Tue, Nov 2, 2010 at 7:31 AM, Koustubha Kale wrote: > >> Hi All, >> We are implementing Koha at a college. They have a existing software >> for library. They have barcoded all their collection of about 150000 >> books with this software. >> The peculiar thing about the software and the barcode it generates is this >> : >> >> The barcode related data stored in the database of the software >> backend ( MS SQL) is the item's accession number & item type. The >> actual barcode is not stored anywhere. >> The printed barcodes are like this B011364, B011436, AR011346, >> AR011436,AR018100,AR018103,AR018104,AR018105. ( Note the duplicate >> accession numbers) >> The first alphabets correspond to the itemtype code stored in a table. >> The remaining numbers are the accession number stored in a table >> without the leading zeros. This causes a problem while migrating this >> data to Koha due to the missing leading zeros and arbitrary length of >> the string and the fact that accession numbers are same for different >> item types. >> While generating a barcode for a new item the software takes the >> itemtype and the accession number and prints out a barcode label in >> the required format. >> When the barcode is read back in the softwares circulation module by a >> barcode reader, a asp script on the server separates the item type and >> the accession number, strips the leading zeros from the numeric part >> and does a search on the database by means of a join on the two >> tables. >> >> As the college has stuck these barcode labels on all items, we have to >> figure out how to emulate this functionality in Koha. (Post migration >> we will generate & print barcodes from Koha; so we dont have to >> emulate this logic in the barcode generation for Koha). I was thinking >> we could store the barcode in Koha as itemtype-accession_number ( >> without the leading zeros e.g. AR-18100) retrieved from the old >> softwares database and when reading it in circulation in Koha we >> convert the barcode from for example AR018100 to AR-18100. >> >> I see two options to acchieve this.. >> options : >> 1) Create another filter for itemBarcodeInputFilter and modify the sub >> barcodedecode in C4/Circulation.pm >> 2) write a client side javascript to do it. >> >> I would like to know is >> 1) What is the better way to acchive this? >> 2) Is there way for us to avoid a patch to Koha which will probably be >> useless to anybody not migrating from this particular software. By >> using opacuserjs perhaps.. >> >> >> Regards, >> Koustubha Kale >> Anant Corporation >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cnighswonger at foundations.edu Tue Nov 2 21:12:36 2010 From: cnighswonger at foundations.edu (Chris Nighswonger) Date: Tue, 2 Nov 2010 16:12:36 -0400 Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)] In-Reply-To: References: Message-ID: Hi Joe, On Tue, Nov 2, 2010 at 2:10 PM, Joe Atzberger wrote: > Chris, I disagree that the first sign-off on a major vendor's patches > should be external. The first sign-off from a major vendor should be > *internal* to their quality control process. This was at one time the > standing policy amongst LibLime and BibLibre for major changes. > > My recommendation is that we require a *minimum* of one sign-off by a disinterested party. This in no way excludes a company having their own QA internally and signing off. Furthermore, it really provides no impedance to the entire process when one considers how simple it should be to obtain a sign-off on good working code. > I think encouraging abstraction and RFC-aware flexibility is fine, but I > think it is unwise to suggest that we should block *working code* from > getting in just because a bigger, different or more-deluxe RFC exists. RFCs > and the widespread desire for a feature flavor X are really quite removed > from a working implementation ready for action, testing and revision now. > > Ahh... nothing in my recommendation suggests that "should block *working code* from getting in just because a bigger, different or more-deluxe RFC exists." It simply suggests we have a policy in place which will actively promote some sort of community collaboration particularly on the part of large shops who "should" know better than to clam up especially on large feature development. That is patently bad behavior in the light of community participation which is the foundation of this project. It is certainly within the rights of anyone to take the source and run with it in whatever way they like. They may even take it and never contribute back. However, it is not within their right to do large, unilateral development and then expect it to be pushed into the main codebase. > Also, I think if you develop your features "in the open" (e.g., posted w/ > gitweb, or on github), the burden of synthesizing multiple RFCs and general > "feature consensus" sentiment isn't on you in quite the same way as when > changes are delivered en bloc. A vendor has what their clients are paying > for, and if other devs have an unfunded desire for extra feature X, they can > follow the branch right along add the last bit themselves, all while still > in development. Whether X is pulled in by the vendor, > or separately submitted by the dev to the RM doesn't really matter. > > A couple of points addressing this scenario as a whole (none of these is against the principle of open development): 1. That scenario may work for simpler features/code. But consider the current press to switch from zebra to 'foo' (any one of several suggestions recently). If a vendor develops an entire replacement for C4::Search and friends which centers around 'foo' in a unilateral fashion, the de facto assertion is that the community must either take what they have done or leave it. That assertion is within their prerogative as a vendor, however, it is equally possible that the community consensus may be to use 'foobar' rather than 'foo' and so away we go on a different path leaving said vendor to sink or swim on their own fork. 2. 
Given the problems we already have with a lack of development cooperation, this scenario at best does nothing to address those problems.

3. This scenario appears a bit "vendor-centric." I am of the opinion that Koha should be "community-centric," with individuals first and vendors second in order of relationship. This may not be the view of all involved. However, if it were not for Koha, Koha support vendors would be out of some amount of business. Yes, I realize there are other FOSS ILSs available. However, the point is that many Koha support vendors (especially the names you mention earlier) came into existence because of Koha. I think it is in their best interest to help assure the survival of the community by putting in as they take out, and it is the community as a whole that decides the "rules" (if you will) for putting in.

4. Regarding the statement "A vendor has what their clients are paying for...": true, vendors are client-driven. However, as I said in my initial post, vendors must educate their clients as to the nature of open source. Client expectations should be set based on known, published community procedures. If this were properly done, many problems would be resolved. As it is, I think vendors have a very hard time managing their own growth once it reaches the ballooning point. Unmanageable growth will kill you... as we have seen.

In the final analysis, if each vendor pursues their own direction, we will end up with a Baskin-Robbins of Koha. Make the job of RM hard enough and no one will want it. At some point the project dies due to leanness and overextension. The strength of the project lies in the *two-way* cooperation of its members. The "we have developed it: take it or leave it" approach is a one-way, dead-end street for the community.

Kind Regards,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From paul.poulain at biblibre.com Tue Nov 2 22:27:25 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Tue, 02 Nov 2010 22:27:25 +0100
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: 
References: 
Message-ID: <4CD0823D.6070400@biblibre.com>

On 02/11/2010 16:24, Chris Nighswonger wrote:
> As a vendor-neutral voice, I would like to encourage everyone who has
> a vested interest in these areas and the best interests of the Koha
> project at heart to actively participate in and respond to these RFCs.
> It seems that there is often little discussion on RFCs in the
> community. And even when there is discussion, it is often unclear
> whether a consensus is reached (at least publicly).

++, ++, and more ++!

That was one of the main topics we discussed during the last morning of the hackfest (the other being long-term management of the project). Irma & Bob took notes and should send the minutes here soon. This meeting was, I hope, the start of better management of our project.

> Furthermore, I would encourage vendors and others who post RFCs to do
> so with a willingness to adapt, adjust, bend, compromise, and/or defer
> to positions on those RFCs which may be different from their own but
> are clearly the consensus of the community at large. Vendors may and
> often do have the resources to implement "what they want"; however,
> this is not in the spirit of cooperation which this
> project so greatly depends upon for its success.
> Clients of vendors should be educated during the RFQ process as to this
> aspect of open source, and their expectations managed accordingly, imho.

I must add that (at least for BibLibre) our customers want to be part of an open source community. So if one of our customers asks for something that gets amended (or even rejected), we (BibLibre) are completely OK with going back to them and explaining: "well, your idea seems a wrong one, because of this and that. So either you confirm your request, knowing the feature won't be part of 'official Koha' and you'll have your own fork, or you update your request into something useful for everybody". And I'm 95% sure that our customer(s) will answer: "well, OK, let's stay community oriented".

The problem occurs for everybody if the "no-go" appears AFTER the development has been done! Pain for the developer, pain for the library that has sponsored something that is not accepted into Koha (& pain for the community, because maybe an amended RFC would have been OK).

Sometimes people think/argue that it's dangerous to trust "a community". But I always remember the FLOSS motto: given enough eyes, all bugs will be detected. Here it's not a matter of bugs, but what may seem a good idea to one library may in fact not be one, and the community has enough eyes to see & argue that it's a bad idea (& thus convince the original library to reconsider its request).

Hint: in the RFCs I posted yesterday, I see at least one thing that could/should be amended. Very easy to amend & it will definitely be a better idea. Let's see if someone finds it ;-)

> I would also suggest that we implement a policy stating, in some
> agreeable way, that code/features will not be pushed to master unless
> they have passed through a review and consensus process by the
> community and the RM (as the head of development elected by the
> community). No one, excepting possibly the RM, should presuppose that
> their code is guaranteed inclusion by default.

I think it's better to have a review even for the RM (except for very small patches/obvious mistakes).

> Secondly, I would suggest that we implement a strong recommendation
> that larger shops submit timely RFCs *prior to* beginning work on
> code and then promote discussion on those RFCs. This recommendation
> should also suggest, with somewhat less force, that everyone submit
> timely RFCs to maximize the productivity and usefulness of the
> resources of all concerned.

++

We haven't started working on any of those RFCs (except Solr, to have a proof of concept).
What has really been a problem for us is that we published RFCs for the Lyon 3 university a long time ago (mail from Nicolas on koha-devel, Oct 12, 2009), and there was strictly no reaction/feedback to those RFCs. Now they are done, and we have rebased them against HEAD (huge work, huge QA to do, and probably a lot of time lost).
Could they be rejected by the community? I hope not, but I frankly don't know what we (BibLibre) could do if they were :-((( (because the customers are live now!)
I think we (all) failed because Koha 3.2 was 9 months late. Well, in fact, I think the mistake was not to branch 3.4 immediately at feature freeze.
> This policy should also assign a certain amount of responsibility to
> the one signing off, to avoid "frivolous" sign-offs. It should also,
> perhaps, include a restriction that the required sign-off for pushing
> to master be a disinterested developer, perhaps from another vendor or
> the community at large.

OK, except for obvious bugfixes/patches.

Another question: some librarians, like Liz, have started to test our
branches, mainly the biggest one, and she finds the features "awesome".
How could we get librarians more involved in QA from a functional point
of view? (suggestions below)

> This is a discussion we need to have. I would encourage everyone to
> invest time (the operative term here is 'invest') in this discussion.

SUGGESTIONS TO DISCUSS:
* branch the next version when the RM declares feature freeze for a
given version
* have a website rebuilt every night (week?) (from which branch? a
waiting_librarian_feedback one?), with all the MARC21 default values
fitted in (with maybe a few biblios added), the librarians being
requested to test from a functional point of view after the techies' QA
validation

-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From paul.poulain at biblibre.com  Tue Nov  2 22:57:14 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Tue, 02 Nov 2010 22:57:14 +0100
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: <4CD0823D.6070400@biblibre.com>
References: <4CD0823D.6070400@biblibre.com>
Message-ID: <4CD0893A.7070703@biblibre.com>

Le 02/11/2010 22:27, Paul Poulain a écrit :
> SUGGESTIONS TO DISCUSS:
> * branch the next version when the RM declares feature freeze for a
> given version
> * have a website rebuilt every night (week?) (from which branch? a
> waiting_librarian_feedback one?), with all the MARC21 default values
> fitted in (with maybe a few biblios added), the librarians being
> requested to test from a functional point of view after the techies'
> QA validation

I've added those topics, as well as a few others (like jQuery UI), to
the agenda of the next meeting (Nov 10th).

-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From chris at bigballofwax.co.nz  Tue Nov  2 23:05:11 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Wed, 3 Nov 2010 11:05:11 +1300
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: <4CD0823D.6070400@biblibre.com>
References: <4CD0823D.6070400@biblibre.com>
Message-ID: 

On 3 November 2010 10:27, Paul Poulain wrote:
> We haven't started working on any of those RFCs (except Solr, to have a
> proof of concept).
> What has really been a problem for us is that we published RFCs for the
> Lyon 3 university a long time ago (mail from Nicolas on koha-devel, Oct
> 12, 2009), and there was strictly no reaction/feedback to those RFCs.
> Now they are done, and we have rebased them against head (huge work,
> huge QA to do, and probably a lot of time lost).
> Could they be rejected by the community? I hope not, but I frankly
> don't know what we (BibLibre) could do if they were :-((( (because the
> customers are live now!)
> I think we (all) failed because Koha 3.2 was 9 months late. Well, in
> fact, I think the mistake was not to branch 3.4 immediately on feature
> freeze.
> That would have been much less pain for us (who are customer-planning
> driven) (suggestion below).

What would have caused much, much less pain for you was to develop your
features in small branches, rather than one monolithic branch, which
makes rebasing much harder than it needs to be.

This is a lesson that cannot be overstated: topic/bug/feature branches
make everyone's lives much easier. And they mean that if one feature is
rejected ... then the whole stack doesn't need to be.

I don't think branching sooner or an earlier release would have helped
anywhere near as much as developing in smaller branches, not one huge
one.

Chris

From paul.poulain at biblibre.com  Tue Nov  2 23:10:28 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Tue, 02 Nov 2010 23:10:28 +0100
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc.
In-Reply-To: 
References: <4CD0823D.6070400@biblibre.com>
Message-ID: <4CD08C54.50402@biblibre.com>

Le 02/11/2010 23:05, Chris Cormack a écrit :
>> I think we (all) failed because Koha 3.2 was 9 months late. Well, in
>> fact, I think the mistake was not to branch 3.4 immediately on feature
>> freeze. That would have been much less pain for us (who are
>> customer-planning driven) (suggestion below).
>
> What would have caused much, much less pain for you was to develop your
> features in small branches, rather than one monolithic branch, which
> makes rebasing much harder than it needs to be.
>
> This is a lesson that cannot be overstated: topic/bug/feature branches
> make everyone's lives much easier. And they mean that if one feature is
> rejected ... then the whole stack doesn't need to be.

agreed: we made a mistake here (and don't plan to do it again!).
> I don't think branching sooner or an earlier release would have helped
> anywhere near as much as developing in smaller branches, not one huge
> one.

Partially agreeing; I'd like to discuss this topic at the next IRC
meeting. I'm not fully sure I see the best path clearly (branching too
early means that bugfix patches would have to be done on both branches).

-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From chris at bigballofwax.co.nz  Tue Nov  2 23:14:27 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Wed, 3 Nov 2010 11:14:27 +1300
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc.
In-Reply-To: <4CD08C54.50402@biblibre.com>
References: <4CD0823D.6070400@biblibre.com> <4CD08C54.50402@biblibre.com>
Message-ID: 

On 3 November 2010 11:10, Paul Poulain wrote:
> Le 02/11/2010 23:05, Chris Cormack a écrit :
>>> I think we (all) failed because Koha 3.2 was 9 months late. Well, in
>>> fact, I think the mistake was not to branch 3.4 immediately on
>>> feature freeze. That would have been much less pain for us (who are
>>> customer-planning driven) (suggestion below).
>>
>> What would have caused much, much less pain for you was to develop
>> your features in small branches, rather than one monolithic branch,
>> which makes rebasing much harder than it needs to be.
>>
>> This is a lesson that cannot be overstated: topic/bug/feature branches
>> make everyone's lives much easier. And they mean that if one feature
>> is rejected ... then the whole stack doesn't need to be.
>
> agreed: we made a mistake here (and don't plan to do it again!).
>
>> I don't think branching sooner or an earlier release would have helped
>> anywhere near as much as developing in smaller branches, not one huge
>> one.
>
> Partially agreeing; I'd like to discuss this topic at the next IRC
> meeting. I'm not fully sure I see the best path clearly (branching too
> early means that bugfix patches would have to be done on both
> branches).

Branching 3.2 earlier in no way takes away the need to rebase, and so
having smaller branches to rebase is still a much bigger win than a big
branch to rebase.

Branching earlier would not have meant the patches were any more likely
to go into master than they are now; they still have to go through QA
etc., and having them in small feature-set branches makes the chances of
them passing much more likely.

We can talk about this more at the meeting, but I am of the firm opinion
that branching earlier would have been of very little help in your
situation. That's not to say it's a bad idea, I just don't think it
would have solved your problem.

Chris

From cnighswonger at foundations.edu  Tue Nov  2 23:29:25 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Tue, 2 Nov 2010 18:29:25 -0400
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: <4CD0893A.7070703@biblibre.com>
References: <4CD0823D.6070400@biblibre.com> <4CD0893A.7070703@biblibre.com>
Message-ID: 

On Tue, Nov 2, 2010 at 5:57 PM, Paul Poulain wrote:
> Le 02/11/2010 22:27, Paul Poulain a écrit :
>> SUGGESTIONS TO DISCUSS:
>> * branch the next version when the RM declares feature freeze for a
>> given version

I'll let this one alone for now.

>> * have a website rebuilt every night (week?) (from which branch? a
>> waiting_librarian_feedback one?), with all the MARC21 default values
>> fitted in (with maybe a few biblios added), the librarians being
>> requested to test from a functional point of view after the techies'
>> QA validation

I think this is a great idea. If Hudson can do auto builds, surely we
can set up a server with auto-built test installs for various branches.
This would have to be limited to some reasonable number, though.

Kind Regards,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ian.walls at bywatersolutions.com  Wed Nov  3 00:03:39 2010
From: ian.walls at bywatersolutions.com (Ian Walls)
Date: Tue, 2 Nov 2010 19:03:39 -0400
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: 
References: <4CD0823D.6070400@biblibre.com> <4CD0893A.7070703@biblibre.com>
Message-ID: 

Just to throw in on this thread for ByWater Solutions:

It is company policy to obtain at least one signoff from another staff
member before submitting a patch to the community. We do not anticipate
that this will be sufficient for inclusion into master (except perhaps on
very simple patches), and hope other members of the community will sign
off on our patches. We will do the same for others in the community as
often as we can.

The end goal of all our development is inclusion into master. We are
willing to do whatever the Quality Assurance manager and Release Manager
need to make our developments worthy of inclusion (provided we can do so
and still meet our clients' needs). If this means recoding in a more
generic way, so be it.
All our work is done on separate branches, based off the current HEAD of
master, and rebased/merged as needed to keep it applicable. You can see
it at http://git.bywatersolutions.com.

It would be helpful to have some kind of general guidelines for inclusion
(i.e. follow the coding style rules, have working unit tests, include
relevant documentation, etc.), even if, in the end, it's up to the
Release Manager's expert opinion. Perhaps this is something we can
discuss in an IRC meeting?

We've started (and must continue) to post all our feature ideas on the
wiki. I'd like those to be public, working "open specs" that anyone is
free to edit and enhance until someone can sponsor all or part of the
feature(s). Even if no library has the money to pay for the creation of
these features right now, we can all figure out what they'd need to be in
order to meet our needs, worldwide. Perhaps the unsponsored RFCs can
serve as a rough roadmap for the long term of Koha (beyond the next
release).

I think running a demo site with code that's partially through the
Quality Assurance process, and soliciting librarian feedback, would be a
good idea, provided we could actually get librarians to take time to
test a system that's not theirs.

That's all for now. It's been great meeting so many of you at KohaCon and
the following hackfest. I look forward to working together with you all
to make 3.4 something great. Now, off to explore some more of New Zealand
before I have to head home.

Cheers,

-Ian

2010/11/2 Chris Nighswonger

> On Tue, Nov 2, 2010 at 5:57 PM, Paul Poulain wrote:
>
>> Le 02/11/2010 22:27, Paul Poulain a écrit :
>>> SUGGESTIONS TO DISCUSS:
>>> * branch the next version when the RM declares feature freeze for a
>>> given version
>
> I'll let this one alone for now.
>
>>> * have a website rebuilt every night (week?) (from which branch? a
>>> waiting_librarian_feedback one?), with all the MARC21 default values
>>> fitted in (with maybe a few biblios added), the librarians being
>>> requested to test from a functional point of view after the techies'
>>> QA validation
>
> I think this is a great idea. If Hudson can do auto builds, surely we
> can set up a server with auto-built test installs for various branches.
> This would have to be limited to some reasonable number, though.
>
> Kind Regards,
> Chris
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

-- 
Ian Walls
Lead Development Specialist
ByWater Solutions
Phone # (888) 900-8944
http://bywatersolutions.com
ian.walls at bywatersolutions.com
Twitter: @sekjal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tomascohen at gmail.com  Wed Nov  3 01:28:29 2010
From: tomascohen at gmail.com (Tomas Cohen Arazi)
Date: Tue, 2 Nov 2010 21:28:29 -0300
Subject: [Koha-devel] itemBarcodeInputFilter or Client side JS?
In-Reply-To: 
References: 
Message-ID: 

2010/11/2 Mike Hafen :
> Why can't you correct the barcodes during the migration?
>
> If you are going from MS-SQL to MySQL you should be able to add another
> step in the middle to populate the barcode from the available
> information in the database.
>
> If you are going from MARC into Koha you should be able to populate the
> barcode once the data is in Koha and update the MARC and Zebra from
> there.
> Is it not possible to do one of these two?

I think they are asking for the proper way of modifying the script that
generates barcodes in Koha for NEW items, preserving the semantics of
the current system they're using.

To+

> On Tue, Nov 2, 2010 at 7:31 AM, Koustubha Kale wrote:
>> Hi All,
>> We are implementing Koha at a college. They have an existing software
>> package for the library. They have barcoded their entire collection of
>> about 150000 books with this software.
>> The peculiar thing about the software and the barcodes it generates is
>> this:
>>
>> The barcode-related data stored in the database of the software
>> backend (MS SQL) is the item's accession number & item type. The
>> actual barcode is not stored anywhere.
>> The printed barcodes are like this: B011364, B011436, AR011346,
>> AR011436, AR018100, AR018103, AR018104, AR018105. (Note the duplicate
>> accession numbers.)
>> The first alphabets correspond to the itemtype code stored in a table.
>> The remaining numbers are the accession number, stored in a table
>> without the leading zeros. This causes a problem while migrating this
>> data to Koha, due to the missing leading zeros, the arbitrary length
>> of the string, and the fact that accession numbers are the same for
>> different item types.
>> While generating a barcode for a new item, the software takes the
>> itemtype and the accession number and prints out a barcode label in
>> the required format.
>> When the barcode is read back in the software's circulation module by
>> a barcode reader, an ASP script on the server separates the item type
>> and the accession number, strips the leading zeros from the numeric
>> part, and does a search on the database by means of a join on the two
>> tables.
>>
>> As the college has stuck these barcode labels on all items, we have to
>> figure out how to emulate this functionality in Koha. (Post-migration
>> we will generate & print barcodes from Koha, so we don't have to
>> emulate this logic in barcode generation for Koha.) I was thinking we
>> could store the barcode in Koha as itemtype-accession_number (without
>> the leading zeros, e.g. AR-18100) retrieved from the old software's
>> database, and when reading it in circulation in Koha we convert the
>> barcode from, for example, AR018100 to AR-18100.
>>
>> I see two options to achieve this:
>> 1) Create another filter for itemBarcodeInputFilter and modify the sub
>> barcodedecode in C4/Circulation.pm
>> 2) Write client-side JavaScript to do it.
>>
>> I would like to know:
>> 1) What is the better way to achieve this?
>> 2) Is there a way for us to avoid a patch to Koha which will probably
>> be useless to anybody not migrating from this particular software? By
>> using opacuserjs perhaps..
>>
>> Regards,
>> Koustubha Kale
>> Anant Corporation
>>
>> Contact Details :
>> Address : 103, Armaan Residency, R. W Sawant Road, Nr. Golden Dyes
>> Naka, Thane (w), Maharashtra, India, Pin : 400601.
>> TeleFax : +91-22-21720108, +91-22-21720109
>> Mobile : +919820715876
>> Website : http://www.anantcorp.com
>> Blog : http://www.anantcorp.com/blog/?author=2
>> _______________________________________________
>> Koha-devel mailing list
>> Koha-devel at lists.koha-community.org
>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>> website : http://www.koha-community.org/
>> git : http://git.koha-community.org/
>> bugs : http://bugs.koha-community.org/
>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/
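The transformation behind option 1 above is small enough to sketch. The
following standalone Perl is illustrative only: the AR-18100 stored form
is Koustubha's proposal, the function name is invented, and none of this
is existing Koha code, just the rewrite that sub barcodedecode would
need to apply.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Printed form: itemtype code + zero-padded accession number (AR018100).
    # Proposed stored form: itemtype-accession, zeros stripped (AR-18100).
    sub printed_to_stored {
        my ($barcode) = @_;
        if ( $barcode =~ /^([A-Za-z]+)0*([0-9]+)$/ ) {
            return "$1-$2";
        }
        return $barcode;    # leave anything unrecognised untouched
    }

    print printed_to_stored('AR018100'), "\n";    # AR-18100
    print printed_to_stored('B011364'),  "\n";    # B-11364

Wired into sub barcodedecode as a new itemBarcodeInputFilter value, this
keeps the rewrite server-side, which would presumably also cover
non-browser entry points, unlike the JavaScript option.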
From ohiocore at gmail.com  Wed Nov  3 04:08:14 2010
From: ohiocore at gmail.com (Joe Atzberger)
Date: Tue, 2 Nov 2010 23:08:14 -0400
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: 
References: 
Message-ID: 

On Tue, Nov 2, 2010 at 4:12 PM, Chris Nighswonger
<cnighswonger at foundations.edu> wrote:

> On Tue, Nov 2, 2010 at 2:10 PM, Joe Atzberger wrote:
>> Chris, I disagree that the first sign-off on a major vendor's patches
>> should be external. The first sign-off from a major vendor should be
>> *internal* to their quality control process. This was at one time the
>> standing policy amongst LibLime and BibLibre for major changes.
>
> My recommendation is that we require a *minimum* of one sign-off by a
> disinterested party. This in no way excludes a company having their own
> QA internally and signing off. Furthermore, it really provides no
> impedance to the entire process when one considers how simple it should
> be to obtain a sign-off on good working code.

Ok, I read that as amending "that *the* required sign-off for pushing to
master be a disinterested developer" to "that *a* ...".

Regarding the simplicity of signing off, I take some issue. It is
*severely* non-trivial to test major integration features. Consider SIP2
and LDAP, or something like EDI. It can depend not just on accurate test
data, but entire servers, network environments, remote accounts granted
by a supplier, foreign domain/language knowledge, etc. Sure, I'd love it
for everybody to have a dozen signoffs. I just think blocking code while
waiting on a 3rd party (who by design is disinterested) to come around
and dedicate some resources is a questionable policy. Right now we can't
even get comments on RFCs, let alone dedicated VMs and manpower.

> Ahh... nothing in my recommendation suggests that "should block
> *working code* from getting in just because a bigger, different or
> more-deluxe RFC exists." It simply suggests we have a policy in place
> which will actively promote some sort of community collaboration,
> particularly on the part of large shops who "should" know better than
> to clam up, especially on large feature development. That is patently
> bad behavior in the light of community participation, which is the
> foundation of this project. It is certainly within the rights of anyone
> to take the source and run with it in whatever way they like. They may
> even take it and never contribute back. However, it is not within their
> right to do large, unilateral development and then expect it to be
> pushed into the main codebase.

Forgive me if I'm off the pulse a bit, but do these expectations exist
today?
The release process establishes when new features are accepted or not,
and it has been pretty explicit and clear. The problem used to be big
unilateral changes that weren't getting submitted (including some code
that I wrote). Now the problem is they're getting submitted?

>> Also, I think if you develop your features "in the open" (e.g., posted
>> w/ gitweb, or on github), the burden of synthesizing multiple RFCs and
>> general "feature consensus" sentiment isn't on you in quite the same
>> way as when changes are delivered en bloc. A vendor has what their
>> clients are paying for, and if other devs have an unfunded desire for
>> extra feature X, they can follow the branch right along and add the
>> last bit themselves, all while still in development. Whether X is
>> pulled in by the vendor, or separately submitted by the dev to the RM
>> doesn't really matter.
>
> A couple of points addressing this scenario as a whole (none of these
> is against the principle of open development):
>
> 1. That scenario may work for simpler features/code. But consider the
> current press to switch from zebra to 'foo' (any one of several
> suggestions recently). If a vendor develops an entire replacement for
> C4::Search and friends which centers around 'foo' in a unilateral
> fashion, the de facto assertion is that the community must either take
> what they have done or leave it.

Or patch it, or extend it, or put it on a feature branch, or break it
into discrete elements, or defer acceptance until sufficiently
abstracted, etc. If the vendor cares about it hitting mainline, then
they'll follow up to do what it takes, within reason. If they don't,
then I don't think new policy requirements will affect them in the
least.

> 2. Given the problems we already have with a lack of development
> cooperation, this scenario at best does nothing to address those
> problems.

This comes back to the question of whether you can force cooperation. I
don't think we can, effectively. I support codifying expectations and
best practices, but "requiring" disinterested, competing or downright
hostile parties to cooperate or pretend to cooperate seems destined to
fail.

> 3. This scenario appears a bit "vendor-centric."

That's because, as has been pointed out previously, Koha's authorship is
de facto vendor-centric. The vast majority of Koha code historically
originated from vendors, and the majority of new code still comes from
vendors. Every RM worked for a vendor (or two). I'm happy to see Koha
institutions and cooperatives still able to contribute code back to the
project on their own (some for longer than I've been associated with
Koha), and I'd love to see more of it. Right now, though, the code being
affected by proposed policies is still *mostly* in vendors' houses.

> I am of the opinion that Koha should be "community-centric" with
> individuals first and vendors second in order of relationship.

OK, I know what a vendor is. But in this context I don't know what you
mean by "community-centric" or "individual" or "order of relationship".
If you just mean that community comes first, then yay, community. I
suspect we can escape talking about whether vendors are first, second or
other-class citizens in the community, because it doesn't really matter
for the proposed policy.

> This may not be the view of all involved. However, if it were not for
> Koha, Koha support vendors would be out of some amount of business.

And without vendors, *no* version of Koha would ever have been written
or released.
It's sorta funny that I'm the one saying this stuff, as I am not
currently affiliated with any vendor.

> 4. Regarding the statement "A vendor has what their clients are paying
> for..." True, vendors are client driven. However, as I said in my
> initial post, vendors must educate their clients as to the nature of
> open source. Client expectations should be set based on known,
> published community procedures. If this were properly done, many
> problems would be resolved.

This is a good point, and I was interested to read Paul's and Ian's
replies to it. Building community submission and acceptance management
into the quotes and development timelines is an important measure to
take, and doing some education up front really will help. Open source is
different from other software development an institution may have
contracted, so experienced parties may need more education than
inexperienced ones.

> As it is, I think vendors have a very hard time managing their own
> growth once it reaches the ballooning point. Unmanageable growth will
> kill you... as we have seen.

Yes, I've died a couple times already!

> In the final analysis, if each vendor pursues their own direction, we
> will end up with a Baskin-Robbins of Koha. Make the job of RM hard
> enough and no one will want it. At some point the project dies due to
> leanness and overextension. The strength of the project lies in the
> *two-way* cooperation of its members. The "we have developed it: take
> it or leave it" approach is a one-way, dead-end street for the
> community.

Thankfully, it doesn't ever actually work like that. No piece of code is
accepted under the condition that it never be modified. We don't have a
license like that. Say a submitter puts up a good bunch of code
representing a considerable amount of work, but we are going to require
that some hardcoded configuration X is abstracted into a syspref. If you
asked them, "well, do you want to do it and resubmit, or do you want to
wait an indefinite period until some random other person gets around to
it, at which point you might have rebase issues?" I think 9 times out of
10, they'll do it themselves. (Again, within reason. It's not going to
happen if the hangup is something huge like "Make it work with UNIMARC
in Turkish".)

In short, extra signoff(s) are a fine recommendation and best practice,
but I wouldn't support them as a strict requirement. Also, I think
concrete technical details about how the "right kind" of development can
be done will do more good than just the policy preferences by
themselves.

--Joe
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ohiocore at gmail.com  Wed Nov  3 04:45:23 2010
From: ohiocore at gmail.com (Joe Atzberger)
Date: Tue, 2 Nov 2010 23:45:23 -0400
Subject: [Koha-devel] itemBarcodeInputFilter or Client side JS?
In-Reply-To: 
References: 
Message-ID: 

>> Is it not possible to do one of these two?
>
> I think they are asking for the proper way of modifying the script that
> generates barcodes in Koha for NEW items, preserving the semantics of
> the current system they're using.

I don't think so, based on:

> Post-migration we will generate & print barcodes from Koha, so we don't
> have to emulate this logic in barcode generation for Koha

Though I did mention they may want to seed a valid new max value for
their intended barcode scheme once everything is loaded. I.e., their
values are not numerical, so they can't use the incremental option, and
they'll need to use something else.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kmkale at anantcorp.com  Wed Nov  3 05:18:46 2010
From: kmkale at anantcorp.com (Koustubha Kale)
Date: Wed, 3 Nov 2010 09:48:46 +0530
Subject: [Koha-devel] itemBarcodeInputFilter or Client side JS?
In-Reply-To: 
References: 
Message-ID: 

> just make the data loaded in Koha match the printed barcodes.

I would just love to be able to do that. But it's not possible to exactly
match the barcodes in Koha with the printed barcodes, because:

1) No report from the old software has barcodes in it. (The barcode
printing utility they have provided is a standalone application.)
2) The accession number stored in the MS SQL db has no leading zeros.
3) The printed barcode length is arbitrary, so I can't predict the number
of leading zeros.

> If you are going from MARC into Koha you should be able to populate the
> barcode once the data is in Koha and update the MARC and Zebra from
> there.

The old software has no notion of MARC..

Regards,
Koustubha Kale
Anant Corporation

Contact Details :
Address : 103, Armaan Residency, R. W Sawant Road, Nr. Golden Dyes Naka,
Thane (w), Maharashtra, India, Pin : 400601.
TeleFax : +91-22-21720108, +91-22-21720109
Mobile : +919820715876
Website : http://www.anantcorp.com
Blog : http://www.anantcorp.com/blog/?author=2

From cybermon at gmail.com  Wed Nov  3 11:27:52 2010
From: cybermon at gmail.com (Cybermon)
Date: Wed, 3 Nov 2010 18:27:52 +0800
Subject: [Koha-devel] Classification Sources
Message-ID: 

Dear Koha team,

Can I use two classification sources for cataloguing in Koha, for example
both DDC and LCC? Sorry, I could not find the Russian BBK classification
in Koha. Please, someone help with this.

Gana
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arosa at tginet.com  Wed Nov  3 13:07:34 2010
From: arosa at tginet.com (Toni Rosa)
Date: Wed, 3 Nov 2010 13:07:34 +0100 (Hora estándar romance)
Subject: [Koha-devel] opac-search.pl
In-Reply-To: 
References: <.77.224.23.61.1288484548.squirrel@webmail.tgi.es>
Message-ID: <.88.23.141.155.1288786054.squirrel@webmail.tgi.es>

Thanks for your answer.
I'll try to find documentation about how to migrate a NoZebra
installation to use Zebra.

Kind Regards,
Toni

> Hi,
>
> Can we think about removing "No Zebra" mode?
> It would make the code clearer.
>
> Regards,
>
> 2010/10/31 Chris Nighswonger
>
>> Hi Toni
>>
>> On Sat, Oct 30, 2010 at 8:22 PM, Toni Rosa wrote:
>>
>>> Hello,
>>>
>>> We are using Koha 3.0.5
>>> I noticed that the opac advanced search in it (opac-search.pl), when
>>> running in a NoZebra installation, doesn't honor "publication period"
>>> intervals (e.g. entering the 1800-1900 interval, the search doesn't
>>> work).
>>> I followed the code to the "C4/Search.pm" file, method "NZanalyse",
>>> and it seems that it actually doesn't even try to build the SQL query
>>> taking the intervals into account.
>>> Did anyone find the same behavior? (I'd guess everybody should, as it
>>> seems a bug/omission to me!)
>>
>> NoZebra is basically deprecated and for the most part unsupported. I
>> would highly recommend switching over to Zebra even for a small
>> collection.
>>
>> Kind Regards,
>> Chris
>>
>> _______________________________________________
>> Koha-devel mailing list
>> Koha-devel at lists.koha-community.org
>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
>
> --
> Fridolyn SOMERS
> ICT engineer
> PROGILONE - Lyon - France
> fridolyn.somers at gmail.com

From cnighswonger at foundations.edu  Wed Nov  3 13:41:15 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Wed, 3 Nov 2010 08:41:15 -0400
Subject: [Koha-devel] opac-search.pl
In-Reply-To: <.88.23.141.155.1288786054.squirrel@webmail.tgi.es>
References: <.77.224.23.61.1288484548.squirrel@webmail.tgi.es> <.88.23.141.155.1288786054.squirrel@webmail.tgi.es>
Message-ID: 

On Wed, Nov 3, 2010 at 8:07 AM, Toni Rosa wrote:
> Thanks for your answer.
> I'll try to find documentation about how to migrate a NoZebra
> installation to use Zebra.

1. Set the 'NoZebra' syspref (located in the 'Search' tab of the system
preference editor) to 'Use' or 'On.'
2. Follow the Zebra server setup found here:
http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=INSTALL.ubuntu.lucid;h=0ee096ed244a0b25f7b6be893723c3a5a2602bad;hb=HEAD#l242
3. Enjoy

Kind Regards,
Chris

From gmcharlt at gmail.com  Wed Nov  3 13:42:29 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Wed, 3 Nov 2010 08:42:29 -0400
Subject: [Koha-devel] Classification Sources
In-Reply-To: 
References: 
Message-ID: 

Hi,

2010/11/3 Cybermon :
> Can I use two classification sources for cataloguing in Koha, for
> example both DDC and LCC? Sorry, I could not find the Russian BBK
> classification in Koha. Please, someone help with this.

Yes, you can. When you create an item record, you can choose which
classification type to set. You can also add BBK as a new classification
source from the Administration page, although in order to get BBK call
numbers to sort correctly, some programming may be required.

Regards,

Galen
-- 
Galen Charlton
gmcharlt at gmail.com

From mdhafen at tech.washk12.org  Wed Nov  3 15:43:10 2010
From: mdhafen at tech.washk12.org (Mike Hafen)
Date: Wed, 3 Nov 2010 08:43:10 -0600
Subject: [Koha-devel] itemBarcodeInputFilter or Client side JS?
In-Reply-To: 
References: 
Message-ID: 

On Tue, Nov 2, 2010 at 10:18 PM, Koustubha Kale wrote:
>> just make the data loaded in Koha match the printed barcodes.
>
> I would just love to be able to do that.
>
> But it's not possible to exactly match the barcodes in Koha with the
> printed barcodes, because:
>
> 1) No report from the old software has barcodes in it. (The barcode
> printing utility they have provided is a standalone application.)
> 2) The accession number stored in the MS SQL db has no leading zeros.
> 3) The printed barcode length is arbitrary, so I can't predict the
> number of leading zeros.

So there is no way to predict what the printed barcode will be. That
does complicate things.

>> If you are going from MARC into Koha you should be able to populate
>> the barcode once the data is in Koha and update the MARC and Zebra
>> from there.
>
> The old software has no notion of MARC..

That may be a good thing in this case. It will give you some time to
play with barcodes before committing them to MARC in the database.

I think the stand-alone barcode application must have some algorithm for
generating the barcodes. Given sufficient time, that algorithm could be
understood and SQL created to match it. The question then is whether you
have enough time to understand the algorithm. Not having a barcode
report complicates this too.
It means you have no easy access to the barcodes. Without easy access to
the results of the algorithm, it will take much more time to figure it
out, unless there is someone with inside knowledge. Perhaps you can
contact the vendor of the barcode printing application, or find a
librarian who is familiar enough with the barcodes to help you quickly
figure out the algorithm.

If that fails, then you have to do something else to fix the problem. At
first glance I would say the way to go is to amend the
barcodeInputFilter. It seems the variable part of the barcode is the
number of zeros; the rest of the information is present. Perhaps you
could create barcodes during the migration to approximate the printed
barcodes, using the greatest number of zeros. Then have the
barcodeInputFilter take the scanned barcode and try to find it, adding
zeros until it does. Once it's found, the barcodeInputFilter can take
some action from there: put the missed and found barcodes in a table for
later reporting, email a dev or librarian to update the item's barcode
to match the printed one, or even modify the item itself with the
correct barcode.

The drawback of this method is that you will have to support that code
for some time, because it will take a long time before every barcode in
the library has been scanned.

Either way it is a lot of work. Good luck.

> Regards,
> Koustubha Kale
> Anant Corporation
>
> Contact Details :
> Address : 103, Armaan Residency, R. W Sawant Road, Nr. Golden Dyes
> Naka, Thane (w), Maharashtra, India, Pin : 400601.
> TeleFax : +91-22-21720108, +91-22-21720109
> Mobile : +919820715876
> Website : http://www.anantcorp.com
> Blog : http://www.anantcorp.com/blog/?author=2
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
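To make Mike's "add zeros until it matches" idea concrete, here is one
possible shape for that lookup. It is an untested sketch: the padded
stored format is his suggestion, the function name is invented, and only
the items table with its itemnumber/barcode columns is a real Koha name.

    use strict;
    use warnings;
    use DBI;

    # Given a scanned barcode like AR18100 or AR018100, try progressively
    # wider zero-padding of the numeric part until an item matches.
    sub find_itemnumber_by_scan {
        my ( $dbh, $scanned ) = @_;
        my ( $type, $num ) = $scanned =~ /^([A-Za-z]+)0*([0-9]+)$/
            or return;
        for my $width ( length($num) .. 10 ) {
            my $candidate = sprintf '%s%0*d', $type, $width, $num;
            my ($itemnumber) = $dbh->selectrow_array(
                'SELECT itemnumber FROM items WHERE barcode = ?',
                undef, $candidate,
            );
            return $itemnumber if defined $itemnumber;
        }
        return;    # a miss; log or report it, as Mike describes
    }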
From dschust1 at tx.rr.com  Wed Nov  3 21:35:28 2010
From: dschust1 at tx.rr.com (dschust1 at tx.rr.com)
Date: Wed, 3 Nov 2010 16:35:28 -0400
Subject: [Koha-devel] RDA and Koha - where we are and HOW do we get ready
Message-ID: <20101103203529.JUS76.67815.root@cdptpa-web03-z01>

The final report for RDA will be out in the spring, and RDA records that
Koha libraries are trying to use are already being created and loaded
into OCLC. The Library of Congress will most likely adopt RDA and start
cataloging in that format next year - my personal suspicion.

FRBR has been out for 10 years, and WE (librarians) have not made much of
a concerted effort to make that work, I believe because we (librarians)
didn't know what RDA was going to do.

This might be a project where we all (librarians and programmers) need to
figure out what needs to be done - create a committee, spec out the work,
and we all pay to have the work done. What I hear from vendors is: until
LIBRARIANS can tell the vendor what the LIBRARIANS/USERS WANT/NEED,
vendors don't know where to begin. There is a LOT of coding that needs to
be done, not only for bibliographic items but for authorities to work
right, not to mention the RDA changes to authorities. I've seen lots of
discussion around authority work, and I would hate to see a library do
authority programming NOT based on RDA specs at this point.

Joy M. Banks, Catalog Librarian, came up with these starting points for
Koha and RDA:

- The system needs to be able to uniformly display both the GMDs that are
in current AACR2 records and the new 3xx fields that RDA would implement
to replace GMDs (this is a big one since, I don't know about anyone else,
but we have no intention of going back to update all of our AACR2 records
to RDA)
- The system needs to take into account a new FRBR model that will both
serve the needs of patrons who wish to find every manifestation of a
particular work and also those who wish to see specific manifestations
- The system needs to be able to perform global updating for authorities,
so that changes made to authority records will be automatically applied
to the bibliographic records to which they are attached
- Any development with authorities and display also needs to take into
account what will soon be a proliferation of $e's in 1xx and 7xx fields
(this one is fuzzier for me... I'm not quite sure I've captured this
change). These subfields should only affect the bibliographic records,
not the authorized forms of the headings.

She also suggested this: Mac Elrod has developed a kind of cheat sheet
(http://slc.bc.ca/cheats/aacr22rda.htm) that highlights some of the major
RDA changes. It's from September, but at least it's something that we can
consider.

So where do we start and begin working on this? I believe Ian touched the
tip of the iceberg during KohaCon10 in this regard. I'd love to start the
dialog and see where it NEEDS to go.

David Schuster
Plano ISD

From kyle.m.hall at gmail.com  Thu Nov  4 11:14:25 2010
From: kyle.m.hall at gmail.com (Kyle Hall)
Date: Thu, 4 Nov 2010 06:14:25 -0400
Subject: [Koha-devel] Oracle nerfs MySQL, calls it MySQL classic, and charges $2,000+ for features like InnoDB
Message-ID: 

http://www.mysql.com/products/

I imagine this is going to impact Koha quite a bit, as Koha has been
using InnoDB for quite a while now. Switch to Postgres?

Kyle

http://www.kylehall.info
Mill Run Technology Solutions ( http://millruntech.com )
Crawford County Federated Library System ( http://www.ccfls.org )
Meadville Public Library ( http://www.meadvillelibrary.org )
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From cnighswonger at foundations.edu Thu Nov 4 13:31:20 2010 From: cnighswonger at foundations.edu (Chris Nighswonger) Date: Thu, 4 Nov 2010 08:31:20 -0400 Subject: [Koha-devel] Oracle nerfs MySQL, calls it MySQL classic, and charges $2, 000+ for features like InnoDB In-Reply-To: References: Message-ID: 2010/11/4 Kyle Hall : > It looks like I jumped the gun. This looks like it's about commercial > support, not features, except where marked on the linked page. That's not how I read it. It appears to me that your first assessment was correct. The first row of the chart shows support availability. Classic has no support available. The subsequent rows compare features. Clearly InnoDB is *not* checked in the MySQL Classic column. PG support is a goal for 3.4 IIRC, however, MariaDB (http://mariadb.org/) bills itself as a drop-in replacement for MySQL and developed and maintained by some of the folks who originally wrote MySQL. Kind Regards, Chris PS: This is yet *another* reason for the Koha community to refuse to allow the project to become vendor dominated/controlled. Let's be sure we are awake and smelling the coffee. From gmcharlt at gmail.com Thu Nov 4 13:34:44 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Thu, 4 Nov 2010 08:34:44 -0400 Subject: [Koha-devel] Oracle nerfs MySQL, calls it MySQL classic, and charges $2, 000+ for features like InnoDB In-Reply-To: References: Message-ID: Hi, 2010/11/4 Kyle Hall : > http://www.mysql.com/products/ > I imagine this is going to impact Koha quite a bit, as Koha has been using > InnoDB for quite a while now. Switch to Postgres? In the short term, since MySQL and InnoDB are GPL, Oracle can't make the current code just disappear, but obviously the long-term prospects are not clear. In the medium term, adding support for MariaDB could help -- it gives us an upgrade to 5.1, is supposed to be a drop-in replacement for MySQL, and uses a replacement for InnoDB called XtraDB that looks like an improvement. On the other hand, packaging is a bit lacking at present. Drizzle also looks interesting, but it would require more work to remove the MySQLisms from Koha that Drizzle doesn't use. As far as Postgres support is concerned, I'm all for it. Of course, Chris' DBIx::Class RFC [1] is the current candidate for starting to get there. However, I would point out that a full implementation of DBIx::Class does not guarantee that we'll also get full Postgres support for 3.4; there is a lot of testing that would need to be done. [1] http://wiki.koha-community.org/wiki/RFC_for_using_DBIx_Class_in_Koha Regards, Galen -- Galen Charlton gmcharlt at gmail.com From gmcharlt at gmail.com Thu Nov 4 14:14:46 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Thu, 4 Nov 2010 09:14:46 -0400 Subject: [Koha-devel] Oracle nerfs MySQL, calls it MySQL classic, and charges $2, 000+ for features like InnoDB In-Reply-To: References: Message-ID: Hi, On Thu, Nov 4, 2010 at 8:31 AM, Chris Nighswonger wrote: > 2010/11/4 Kyle Hall : >> It looks like I jumped the gun. This looks like it's about commercial >> support, not features, except where marked on the linked page. > > That's not how I read it. It appears to me that your first assessment > was correct. > > The first row of the chart shows support availability. Classic has no > support available. > > The subsequent rows compare features. Clearly InnoDB is *not* checked > in the MySQL Classic column. InnoDB and the InnoDB plugin are still present in the MySQL "community edition" source download. 
I imagine that very few Koha users actually purchase "MySQL Classic" or
any other version with a support contract from Oracle. (In fact,
individual libraries *can't* purchase the Classic Edition, as it is only
available to ISVs, OEMs, and VARs. The cheapest supported version that
an ordinary user could purchase is Standard Edition.) Nothing that
Oracle does could cause InnoDB support to be, say, yanked from the
Debian packages of MySQL. I don't think that Oracle's price changes for
MySQL support yesterday have any direct bearing on Koha users right now
-- we can still get MySQL with InnoDB.

Of course, Oracle can easily do things that would make MySQL very
unattractive in the future, for example, by not releasing updates to the
community edition, and I'm not defending their handling of their
purchase of MySQL AB and Innobase Oy and their treatment of the MySQL
project. We need to be prepared for all possibilities, and I will be
porting my test database to MariaDB this weekend and seeing how that
goes.

However, I think we need to be clear on one point -- nothing changed
yesterday that would prevent a new Koha library from being able to get
MySQL with InnoDB. I can't speak about tomorrow.

Regards,

Galen
-- 
Galen Charlton
gmcharlt at gmail.com

From paul.poulain at biblibre.com  Thu Nov  4 00:57:15 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Thu, 04 Nov 2010 00:57:15 +0100
Subject: [Koha-devel] Mails when patch pushed to git
Message-ID: <4CD1F6DB.6010505@biblibre.com>

Hello,

In the mail we receive when the RM/RMaint pushes a patch to
git.koha-community.org, I'd be happy to have the details of the patch.
For instance, the mail contains:

> This is an automated email from the git hooks/post-receive script. It
> was generated because a ref change was pushed to the repository
> containing the project "main Koha release repository".

OK (even if it could be shorter, maybe).

> The branch, master has been updated
>        via  e5bdc1e9ab9465c618968eb32f2adbfeee56c445 (commit)
>        via  4e11a5ade37485a2822d7ce9682fbd84e4d6637b (commit)
>        via  20dce153b1f785a758d6924103e51a6cef173fd3 (commit)
>        via  653259e6216efa9359c6c8022de42a342e4a14d1 (commit)
>       from  5145eb59b786bd2010eb324197034dd210504543 (commit)

What can I do with the previous information? It seems cryptic to me
unless someone explains (note: I know these are git commit ids, but as
is, in the mail, I don't see how they are useful).

> Those revisions listed above that are new to this repository have
> not appeared on any other notification email; so we list those
> revisions in full, below.

Is the previous paragraph needed?

> - Log -----------------------------------------------------------------
> commit e5bdc1e9ab9465c618968eb32f2adbfeee56c445
> Merge: 5145eb59b786bd2010eb324197034dd210504543 4e11a5ade37485a2822d7ce9682fbd84e4d6637b
> Author: Chris Cormack
> Date:   Wed Nov 3 16:52:21 2010 +1300
>
>     Merge remote branch 'kc/new/bug_5308' into kcmaster
>
> -----------------------------------------------------------------------
>
> Summary of changes:
>  installer/data/mysql/kohastructure.sql |   11 ++++++++---
>  installer/data/mysql/updatedatabase.pl |   27 +++++++++++++++++++++++++++
>  2 files changed, 35 insertions(+), 3 deletions(-)

Well, I would be VERY interested in the detailed content of the changes!
For at least 2 reasons:
- if I have a patch waiting, it's unclear whether that's what I've
submitted or not. Plus, as Chris (3.4 RM) pushed the patch through, we
can't know who the original author of the patch is.
- I could do a post-push review if needed.

Is it possible to improve the hook?

-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
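Paul's request looks feasible: a git post-receive hook is fed the old
and new revisions on standard input, so adding per-commit authors and
diffstats is mostly a matter of calling git log from the hook. A rough,
untested Perl sketch follows; the actual hook running on
git.koha-community.org may be organised quite differently.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # post-receive reads "<old-sha> <new-sha> <refname>" lines on STDIN.
    while ( my $line = <STDIN> ) {
        chomp $line;
        my ( $old, $new, $ref ) = split / /, $line;
        print "Changes on $ref:\n";
        # Author, date and subject of each new commit, plus a diffstat:
        # enough to answer "is this my patch?" and "who wrote it?".
        system( 'git', 'log', '--stat', '--reverse',
            '--pretty=format:%h %an (%ad) %s', "$old..$new" );
    }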
From M.de.Rooy at rijksmuseum.nl  Thu Nov  4 14:32:31 2010
From: M.de.Rooy at rijksmuseum.nl (Marcel de Rooy)
Date: Thu, 4 Nov 2010 13:32:31 +0000
Subject: [Koha-devel] bugzilla question
Message-ID: <809BE39CD64BFD4EB9036172EBCCFA3119B673@S-MAIL-1B.rijksmuseum.intra>

Hi,

Couldn't find this on the wiki. Should I leave a bug on resolved/fixed,
resolved/invalid, etc., or should they always go to closed in the end?

Just some numbers: resolved+fixed = 2636 bugs; closed/fixed = only 78.

Regards,
Marcel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gmcharlt at gmail.com  Thu Nov  4 14:45:20 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Thu, 4 Nov 2010 09:45:20 -0400
Subject: [Koha-devel] bugzilla question
In-Reply-To: <809BE39CD64BFD4EB9036172EBCCFA3119B673@S-MAIL-1B.rijksmuseum.intra>
References: <809BE39CD64BFD4EB9036172EBCCFA3119B673@S-MAIL-1B.rijksmuseum.intra>
Message-ID: 

Hi,

2010/11/4 Marcel de Rooy :
> Couldn't find this on the wiki. Should I leave a bug on resolved/fixed,
> resolved/invalid, etc., or should they always go to closed in the end?
>
> Just some numbers: resolved+fixed = 2636 bugs; closed/fixed = only 78.

See http://bugs.koha-community.org/bugzilla3/page.cgi?id=fields.html#status
for an explanation of Bugzilla's recommended use of the status field.

In principle, once a bug has been marked as resolved (i.e., a fix has
been pushed to the master or appropriate maintenance branch in the
public repository), the next step would be for somebody such as the
original submitter or a QA volunteer to verify the fix and mark the bug
either verified or reopened. A bug wouldn't be marked closed until the
fix was available in a packaged release. This is a good idea, actually,
but for it to work, there would need to be more people actively
wrangling bugs than seem to be available at the moment.
Switch to Postgres? > > In the short term, since MySQL and InnoDB are GPL, Oracle can't make > the current code just disappear, but obviously the long-term prospects > are not clear. In the medium term, adding support for MariaDB could > help -- it gives us an upgrade to 5.1, is supposed to be a drop-in > replacement for MySQL, and uses a replacement for InnoDB called XtraDB > that looks like an improvement. On the other hand, packaging is a bit > lacking at present. Drizzle also looks interesting, but it would > require more work to remove the MySQLisms from Koha that Drizzle > doesn't use. > > As far as Postgres support is concerned, I'm all for it. Of course, > Chris' DBIx::Class RFC [1] is the current candidate for starting to > get there. However, I would point out that a full implementation of > DBIx::Class does not guarantee that we'll also get full Postgres > support for 3.4; there is a lot of testing that would need to be done. > > [1] http://wiki.koha-community.org/wiki/RFC_for_using_DBIx_Class_in_Koha > > Regards, > > Galen > -- > Galen Charlton > gmcharlt at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cnighswonger at foundations.edu Thu Nov 4 16:30:46 2010 From: cnighswonger at foundations.edu (Chris Nighswonger) Date: Thu, 4 Nov 2010 11:30:46 -0400 Subject: [Koha-devel] Oracle nerfs MySQL, calls it MySQL classic, and charges $2, 000+ for features like InnoDB In-Reply-To: References: Message-ID: 2010/11/4 Kyle Hall : > I also think DBIx::Class is definitely the way to go. Here's a question: > > Is there any way to store the database schema in an intermediate format that > can be used to create the database in MySQL, Postgres, or what-have-you? > Perhaps there is a way to use DBIx::Class or another module to generate an > agnostic schema from the current MySQL one? Caveat: This just popped into my > head, I have not done any research of any kind. I believe that Chris C has already done this to the point of creating both MySQL and PG db's from the same DBIx::Class schema. Kind Regards, Chris From tomascohen at gmail.com Thu Nov 4 17:34:05 2010 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Thu, 4 Nov 2010 13:34:05 -0300 Subject: [Koha-devel] Oracle nerfs MySQL, calls it MySQL classic, and charges $2, 000+ for features like InnoDB In-Reply-To: References: Message-ID: Is it possible to create db-agnostic row-level locks using DBIx::Class? To+ On Thu, Nov 4, 2010 at 12:30 PM, Chris Nighswonger wrote: > 2010/11/4 Kyle Hall : >> I also think DBIx::Class is definitely the way to go. Here's a question: >> >> Is there any way to store the database schema in an intermediate format that >> can be used to create the database in MySQL, Postgres, or what-have-you? >> Perhaps there is a way to use DBIx::Class or another module to generate an >> agnostic schema from the current MySQL one? Caveat: This just popped into my >> head, I have not done any research of any kind. > > I believe that Chris C has already done this to the point of creating > both MySQL and PG db's from the same DBIx::Class schema. 
> > Kind Regards, > Chris > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > From gmcharlt at gmail.com Thu Nov 4 17:50:20 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Thu, 4 Nov 2010 12:50:20 -0400 Subject: [Koha-devel] Oracle nerfs MySQL, calls it MySQL classic, and charges $2, 000+ for features like InnoDB In-Reply-To: References: Message-ID: Hi, On Thu, Nov 4, 2010 at 12:34 PM, Tomas Cohen Arazi wrote: > Is it possible to create db-agnostic row-level locks using DBIx::Class? True row-level locking is purely a function of the DBMS; DBIx::Class can't create such locks if the backend database doesn't support them. Regards, Galen -- Galen Charlton gmcharlt at gmail.com From tomascohen at gmail.com Thu Nov 4 18:02:50 2010 From: tomascohen at gmail.com (Tomas Cohen Arazi) Date: Thu, 4 Nov 2010 14:02:50 -0300 Subject: [Koha-devel] Oracle nerfs MySQL, calls it MySQL classic, and charges $2, 000+ for features like InnoDB In-Reply-To: References: Message-ID: On Thu, Nov 4, 2010 at 1:50 PM, Galen Charlton wrote: > Hi, > > On Thu, Nov 4, 2010 at 12:34 PM, Tomas Cohen Arazi wrote: >> Is it possible to create db-agnostic row-level locks using DBIx::Class? > > True row-level locking is purely a function of the DBMS; DBIx::Class > can't create such locks if the backend database doesn't support them. Yes, what I meant to ask was if switching to DBIx::Class could prevent us from using those kind of locks that could help on several areas where concurrency might be seen as an issue. I don't know postgres's syntax for START TRANSACTION or if the semantics of mysql's 'SELECT... FOR UPDATE' can me mapped by that library into that postgres uses. To+ From cfouts at liblime.com Thu Nov 4 18:16:06 2010 From: cfouts at liblime.com (Clay Fouts) Date: Thu, 4 Nov 2010 10:16:06 -0700 Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance In-Reply-To: References: <20101026171735.GA32451@rot13.org> <20101027111518.GA12795@rot13.org> Message-ID: Speaking of DBIC, has anyone looked into other ORM layers as potential candidates for integration? The only other one I've looked at and experimented with is Rose::DB::Object. It's considerably faster than DBIC, both during startup and ongoing execution. The API makes more sense to me, too, but that's a personal preference. However, it lacks some of DBIC's schema management utilities and the ResultSet-style of subquerying (which also has the advantage of not having deal with a ResultSet in the first place). Thoughts? Clay On Wed, Oct 27, 2010 at 12:06 PM, Chris Cormack wrote: > Gah bumped by people on the bus, what I was trying to say was that for 3.4 > we plan to use the schema tools to move us one step away from mysql > dependency. > > We are talking about plack at the dev conf tomorrow, please try and join us > on irc. > > Chris > > On 28 Oct 2010 03:52, "Fouts, Clay" wrote: > > On Wed, Oct 27, 2010 at 4:15 AM, Dobrica Pavlinusic > wrote: > > > > You are correct t... > > DBIx::Class handles cache invalidation very well as long as it is used > uniformly as the only means of accessing the database. My concern is more in > regards to the challenges with getting all of Koha's use of a given set of > tables to go through the dbic interface. 
>> Another interesting bit is the change at:
>> http://git.rot13.org/?p=koha.git;a=commit;h=8b27e87c2...

While this is much faster, the ISO format has hard-coded limitations
that Koha needs to get past.

>> From my experience so far, there are quite a few more low-hanging
>> fruit which we can pick in...

Getting Koha to run persistently (and to do so safely!) seems like a
higher priority in my estimation than the conversion to DBIC, if for no
other reason than that pervasive use of DBIC is going to seriously
impede CGI performance. In the meantime, I find that small, judicious
use of local caching often gives a high return on very little time and
additional code, while not interfering with the API.

Clay

From kyle.m.hall at gmail.com Thu Nov 4 20:13:18 2010
From: kyle.m.hall at gmail.com (Kyle Hall)
Date: Thu, 4 Nov 2010 15:13:18 -0400
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: <4CC7C643.2070906@tamil.fr> References: <20101026171735.GA32451@rot13.org> <4CC7C643.2070906@tamil.fr> Message-ID: 

How much work would it be to move Koha to the Catalyst framework?

Kyle
http://www.kylehall.info
Mill Run Technology Solutions ( http://millruntech.com )
Crawford County Federated Library System ( http://www.ccfls.org )
Meadville Public Library ( http://www.meadvillelibrary.org )

On Wed, Oct 27, 2010 at 2:27 AM, Frederic Demians wrote:
>> My experience is that the startup overhead introduced by DBIx::Class
>> is not sufficiently offset by its caching features in a CGI
>> environment. Running under Plack or something similar would easily
>> recoup that overhead, of course.
>
> Yes, this is the solution. We need in 2010 a persistent environment to
> execute Koha in: an execution environment in which there are
> application-level, session, and page objects. Memcaching everything
> isn't the solution.
> --
> Frédéric

From chrisc at catalyst.net.nz Thu Nov 4 20:26:18 2010
From: chrisc at catalyst.net.nz (Chris Cormack)
Date: Fri, 5 Nov 2010 08:26:18 +1300
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: References: <20101026171735.GA32451@rot13.org> <4CC7C643.2070906@tamil.fr> Message-ID: <20101104192618.GB4325@rorohiko>

* Kyle Hall (kyle.m.hall at gmail.com) wrote:
> How much work would it be to move Koha to the Catalyst framework?
> Kyle

Only a complete rewrite.

The work we are doing with Plack and Starman will give us a persistent
environment that will, in all likelihood, outperform any framework
unless it was also running under Plack.

Back to DBIx::Class: I have no plans to use it for anything other than
schema abstraction for 3.4. April 22 isn't far away. However, if
someone sends in patches for a module that uses DBIx::Class and we
don't take a big performance hit (and it passes QA and the rest of the
tests), it would go in.
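For anyone who wants to try the Plack/Starman setup being described, the
glue needed is small: Plack::App::CGIBin compiles each CGI script once
and keeps it resident between requests. A minimal, untested app.psgi --
the filesystem paths here are illustrative, not a real Koha install
layout:

    # app.psgi -- serve Koha's unmodified CGI scripts persistently
    use strict;
    use warnings;
    use Plack::Builder;
    use Plack::App::CGIBin;

    my $opac = Plack::App::CGIBin->new(
        root => '/usr/share/koha/opac/cgi-bin/opac',    # assumed path
    )->to_app;

    my $staff = Plack::App::CGIBin->new(
        root => '/usr/share/koha/intranet/cgi-bin',     # assumed path
    )->to_app;

    builder {
        mount '/cgi-bin/koha' => $opac;
        mount '/intranet'     => $staff;
    };

    # Run it persistently with, say:
    #   starman --workers 4 --port 5000 app.psgi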
Small steps, we have a six-month release: getting Template::Toolkit,
schema abstraction, C4/Search fixes, and all the RFCs in is going to be
enough. Let's work on small, manageable improvements. Anyone who wanted
to work on finding the circular object references we have, and breaking
them, would move us a lot closer to being able to run the whole of Koha
under a persistent tool.

Currently people are testing with the OPAC and circulation, with good
results.

Chris
--
Chris Cormack
Catalyst IT Ltd.
+64 4 803 2238
PO Box 11-053, Manners St, Wellington 6142, New Zealand

From cfouts at liblime.com Thu Nov 4 20:53:28 2010
From: cfouts at liblime.com (Clay Fouts)
Date: Thu, 4 Nov 2010 12:53:28 -0700
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: <20101104192618.GB4325@rorohiko> References: <20101026171735.GA32451@rot13.org> <4CC7C643.2070906@tamil.fr> <20101104192618.GB4325@rorohiko> Message-ID: 

Do you have a roadmap/plan for the transition to TT? This process also
seems like a good time to start separating out the monolithic
"get_template_and_user" function. It would be very handy, and would
allow for more elegant design, if we were able to specify the template
just prior to rendering it rather than at the very beginning.

One method of breaking up the circularity that appears in many areas of
Koha is to split functions out in an MVC-style arrangement. If you're
finding yourself tempted to introduce a circularity in your data
schema, it's often the case that that relationship can be expressed
through a View or Controller class instead.

Having worked with Catalyst: it's a fine framework, but it's cumbersome
and not worth porting existing code to. If writing an app from the
ground up, it would be a consideration.

Clay
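For readers who don't live in the Koha code, the monolith Clay is
referring to is the call that opens nearly every page script. The first
half of this sketch is the real current idiom; the commented second half
only gestures at the split he proposes, and those helper names are
invented, not existing API:

    use strict;
    use warnings;
    use CGI;
    use C4::Auth;      # exports get_template_and_user
    use C4::Output;    # exports output_html_with_http_headers

    my $query = CGI->new;

    # Today: authentication, permission checks, and template selection
    # all happen in one call, before the script has done any real work.
    my ( $template, $loggedinuser, $cookie ) = get_template_and_user( {
        template_name   => 'opac-detail.tmpl',
        query           => $query,
        type            => 'opac',
        authnotrequired => 1,
        flagsrequired   => {},
    } );

    $template->param( biblionumber => scalar $query->param('biblionumber') );
    output_html_with_http_headers( $query, $cookie, $template->output );

    # The split being proposed might read more like (names invented):
    #   my ($loggedinuser, $cookie) = check_opac_auth($query);
    #   ... do the work, then decide which template applies ...
    #   my $template = load_template('opac-detail.tmpl');
    #   output_html_with_http_headers($query, $cookie, $template->output);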
From chrisc at catalyst.net.nz Thu Nov 4 21:07:36 2010
From: chrisc at catalyst.net.nz (Chris Cormack)
Date: Fri, 5 Nov 2010 09:07:36 +1300
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: References: <20101026171735.GA32451@rot13.org> <4CC7C643.2070906@tamil.fr> <20101104192618.GB4325@rorohiko> Message-ID: <20101104200736.GD4325@rorohiko>

* Clay Fouts (cfouts at liblime.com) wrote:
> Do you have a roadmap/plan for the transition to TT?
> This process also seems like a good time to start separating out the
> monolithic "get_template_and_user" function. It would be very handy
> and allow for more elegant design if we were able to specify the
> template just prior to rendering it rather than at the very beginning.

Hi Clay

We have a summer-of-tech student who has lots of experience with machine
translation. The roadmap for the 3.4 TT work looks like this:

1/ Exact translation of the templates from HTML::Template::Pro to TT,
   scripted and repeatable.
2/ Lots and lots of testing.
3/ Call a template freeze.
4/ Run the conversion a final time.
5/ Start accepting template patches again, but only TT ones.
6/ Maybe start changing things.

But the main aim is to get TT in there, so that we can start changing
things like get_template_and_user, caching rendered fragments, etc.

> One method of breaking up the circularity that appears in many areas
> of Koha is to split functions out in an MVC-style arrangement. If
> you're finding yourself tempted to introduce a circularity in your
> data schema, it's often the case that that relationship can be
> expressed through a View or Controller class instead.

Yep, and if you come across one, patches to fix them would be gratefully
received; that's the kind of refactoring we should encourage.

> Having worked with Catalyst: it's a fine framework, but it's
> cumbersome and not worth porting existing code to. If writing an app
> from the ground up, it would be a consideration.

I agree.

Chris
--
Chris Cormack
Catalyst IT Ltd.
+64 4 803 2238
PO Box 11-053, Manners St, Wellington 6142, New Zealand
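To illustrate step 1/ of that roadmap, the mechanical core of an
HTML::Template::Pro-to-TT translation is mostly a tag-for-tag mapping. A
toy, untested version follows; the real scripted conversion would want a
proper parser rather than regexes, and TT scopes loop variables
differently than TMPL_LOOP does:

    #!/usr/bin/perl
    # Toy regex version of the H::T::P -> TT mapping in step 1/ above.
    # A real converter needs a parser: regexes miss nesting, ESCAPE
    # attributes, and the fact that TT wants row.field inside a loop
    # where H::T::P exposed bare field names.
    use strict;
    use warnings;

    while ( my $line = <> ) {
        $line =~ s{<TMPL_VAR\s+(?:NAME=)?"?(\w+)"?\s*>}{[% $1 %]}gi;
        $line =~ s{<TMPL_IF\s+(?:NAME=)?"?(\w+)"?\s*>}{[% IF $1 %]}gi;
        $line =~ s{<TMPL_UNLESS\s+(?:NAME=)?"?(\w+)"?\s*>}{[% UNLESS $1 %]}gi;
        $line =~ s{<TMPL_ELSE\s*>}{[% ELSE %]}gi;
        $line =~ s{</TMPL_(?:IF|UNLESS)\s*>}{[% END %]}gi;
        $line =~ s{<TMPL_LOOP\s+(?:NAME=)?"?(\w+)"?\s*>}{[% FOREACH row IN $1 %]}gi;
        $line =~ s{</TMPL_LOOP\s*>}{[% END %]}gi;
        print $line;
    }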
From robin at catalyst.net.nz Thu Nov 4 22:52:17 2010
From: robin at catalyst.net.nz (Robin Sheat)
Date: Fri, 05 Nov 2010 10:52:17 +1300
Subject: [Koha-devel] Mails when patch pushed to git
In-Reply-To: <4CD1F6DB.6010505@biblibre.com> References: <4CD1F6DB.6010505@biblibre.com> Message-ID: <1288907537.2328.198.camel@zarathud>

Paul Poulain wrote on Thu 04-11-2010 at 00:57 [+0100]:
> In the mail we received when RM/RMaint pushes a patch to
> git.koha-community.org I'd be happy to have the details of the patch.

Is it perhaps easier to use the RSS feed? It gives me entries like this
(just pasting from my reader):

  Bug 5363 - Removing unused module (C4::Cache::FastMemcached)
  Author: Chris Cormack - chrisc at catalyst.net.nz
  Bug 5363 - Removing unused module (C4::Cache::FastMemcached)
  Signed-off-by: Chris Nighswonger
  Signed-off-by: Chris Cormack
  * [DH] C4/Cache.pm
  * [DH] C4/Cache/FastMemcached.pm
  * [DH] t/Cache_FastMemcached.t

Where the [DH] stuff are links to the diff and history for that file.
The RSS feeds can be found all over the gitweb site, e.g. at
http://git.koha-community.org/gitweb/?p=koha.git;a=summary

--
Robin Sheat
Catalyst IT Ltd.
+64 4 803 2204
GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D

From chrisc at catalyst.net.nz Thu Nov 4 23:37:18 2010
From: chrisc at catalyst.net.nz (Chris Cormack)
Date: Fri, 5 Nov 2010 11:37:18 +1300
Subject: [Koha-devel] Mails when patch pushed to git
In-Reply-To: <1288907537.2328.198.camel@zarathud> References: <4CD1F6DB.6010505@biblibre.com> <1288907537.2328.198.camel@zarathud> Message-ID: <20101104223718.GF4325@rorohiko>

* Robin Sheat (robin at catalyst.net.nz) wrote:
> Paul Poulain wrote on Thu 04-11-2010 at 00:57 [+0100]:
> > In the mail we received when RM/RMaint pushes a patch to
> > git.koha-community.org I'd be happy to have the details of the patch.
And because I do pointless things like this, here's a pretty graph
showing the merges of the last week.

http://blog.bigballofwax.co.nz/2010/11/05/pretty-graph-showing-the-last-weeks-worth-of-merges/

Chris
--
Chris Cormack
Catalyst IT Ltd.
+64 4 803 2238
PO Box 11-053, Manners St, Wellington 6142, New Zealand

From tomascohen at gmail.com Thu Nov 4 23:48:50 2010
From: tomascohen at gmail.com (Tomas Cohen Arazi)
Date: Thu, 4 Nov 2010 19:48:50 -0300
Subject: [Koha-devel] Mails when patch pushed to git
In-Reply-To: <20101104223718.GF4325@rorohiko> References: <4CD1F6DB.6010505@biblibre.com> <1288907537.2328.198.camel@zarathud> <20101104223718.GF4325@rorohiko> Message-ID: 

2010/11/4 Chris Cormack :
> And because I do pointless things like this, here's a pretty graph
> showing the merges of the last week.

:-D

to+

From chris at bigballofwax.co.nz Fri Nov 5 08:14:33 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Fri, 5 Nov 2010 20:14:33 +1300
Subject: [Koha-devel] RDA and Koha - where we are and HOW do we get ready
In-Reply-To: <20101103203529.JUS76.67815.root@cdptpa-web03-z01> References: <20101103203529.JUS76.67815.root@cdptpa-web03-z01> Message-ID: 

On 4 November 2010 09:35, wrote:
> The final report for RDA will be out in the spring, and RDA records
> are already being created and loaded into OCLC, which Koha libraries
> are trying to use. Library of Congress will most likely adopt and
> start cataloging next year in RDA format - my personal suspicion.

http://wiki.koha-community.org/wiki/RDA

Chris

From fridolyn.somers at gmail.com Fri Nov 5 15:23:31 2010
From: fridolyn.somers at gmail.com (Fridolyn SOMERS)
Date: Fri, 5 Nov 2010 15:23:31 +0100
Subject: [Koha-devel] opac-search.pl
In-Reply-To: References: <.77.224.23.61.1288484548.squirrel@webmail.tgi.es> <.88.23.141.155.1288786054.squirrel@webmail.tgi.es> Message-ID: 

You can find documentation about Zebra here:
http://www.indexdata.com/yaz/doc/index.html

Good luck

On Wed, Nov 3, 2010 at 1:41 PM, Chris Nighswonger
<cnighswonger at foundations.edu> wrote:
> On Wed, Nov 3, 2010 at 8:07 AM, Toni Rosa wrote:
> > Thanks for your answer.
> > I'll try to find documentation about how to migrate a NoZebra
> > installation to use Zebra.
>
> 1. Set the 'NoZebra' syspref (located in the 'Search' tab of the
>    system preference editor) to 'Use' or 'On.'
> 2. Follow the Zebra server setup found here:
>    http://git.koha-community.org/gitweb/?p=koha.git;a=blob;f=INSTALL.ubuntu.lucid;h=0ee096ed244a0b25f7b6be893723c3a5a2602bad;hb=HEAD#l242
> 3. Enjoy
>
> Kind Regards,
> Chris

--
Fridolyn SOMERS
ICT engineer
PROGILONE - Lyon - France
fridolyn.somers at gmail.com

From charl at prograbiz.com Fri Nov 5 20:52:39 2010
From: charl at prograbiz.com (Charl)
Date: Fri, 5 Nov 2010 21:52:39 +0200
Subject: [Koha-devel] Webcam Barcode reader
Message-ID: <4DAFF910A5424728A11F1AFE2601431E@CharlPC>

Has anyone tried the ZBar open source webcam barcode reader? Is it
practical to use in a library environment?
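A starting point for anyone who wants to experiment with Charl's
question: ZBar ships Perl bindings (Barcode::ZBar on CPAN), and decoding
a still image takes only a handful of lines. The following is adapted
from memory of the binding's bundled example and is untested here; it
also needs Image::Magick:

    #!/usr/bin/perl
    # Scan barcodes out of an image file given on the command line.
    use strict;
    use warnings;
    use Barcode::ZBar;
    use Image::Magick;

    my $scanner = Barcode::ZBar::ImageScanner->new();
    $scanner->parse_config('enable');   # enable all symbologies

    # Load the photo and convert it to 8-bit greyscale raw data
    my $magick = Image::Magick->new();
    my $err = $magick->Read( $ARGV[0] );
    die $err if $err;
    my $raw = $magick->ImageToBlob( magick => 'GRAY', depth => 8 );

    my $image = Barcode::ZBar::Image->new();
    $image->set_format('Y800');
    $image->set_size( $magick->Get(qw(columns rows)) );
    $image->set_data($raw);

    $scanner->scan_image($image);

    foreach my $symbol ( $image->get_symbols() ) {
        print 'decoded ', $symbol->get_type(),
              ' symbol "', $symbol->get_data(), "\"\n";
    }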
From mason at kohaaloha.com Sat Nov 6 04:52:21 2010
From: mason at kohaaloha.com (Mason JAMES)
Date: Sat, 6 Nov 2010 16:52:21 +1300
Subject: [Koha-devel] Webcam Barcode reader
In-Reply-To: <4DAFF910A5424728A11F1AFE2601431E@CharlPC> References: <4DAFF910A5424728A11F1AFE2601431E@CharlPC> Message-ID: <9045EE54-A89D-4523-93B1-73D0A1912785@KohaAloha.com>

On 2010-11-6, at 8:52 AM, Charl wrote:
> Has anyone tried the ZBar open source webcam barcode reader?
> Is it practical to use in a library environment?

No, I haven't...

... but it's got its own CPAN module! ;)

cheers, Mason

From dpavlin at rot13.org Fri Nov 5 22:35:51 2010
From: dpavlin at rot13.org (Dobrica Pavlinusic)
Date: Fri, 5 Nov 2010 22:35:51 +0100
Subject: [Koha-devel] Webcam Barcode reader
In-Reply-To: <4DAFF910A5424728A11F1AFE2601431E@CharlPC> References: <4DAFF910A5424728A11F1AFE2601431E@CharlPC> Message-ID: <20101105213551.GA371@rot13.org>

On Fri, Nov 05, 2010 at 09:52:39PM +0200, Charl wrote:
> Has anyone tried the ZBar open source webcam barcode reader?
> Is it practical to use in a library environment?

ZBar is great, but you will need optics on the webcam that can make the
barcode sharp enough to be recognizable. In my experience, webcams
rarely have focus adjustment if they are built into laptops, and even
external ones usually can't make a sharp picture of anything closer
than 30 cm or so, which makes the picture too small.

I even put up a small bounty among my friends to try different webcams,
and so far no one has reported a successful barcode scan with a webcam.
YMMV.

I had better luck feeding zbar pictures from a digital camera, but my
best experience is with zxing on cell phones, so you might take a look
at that option:

http://code.google.com/p/zxing/

p.s. an additional benefit of zxing is that it has support for Codabar,
which we have on our books, but that is specific to our library.

--
Dobrica Pavlinusic               2share!2flame            dpavlin at rot13.org
Unix addict. Internet consultant.             http://www.rot13.org/~dpavlin

From frederic at tamil.fr Mon Nov 8 08:23:00 2010
From: frederic at tamil.fr (Frédéric Demians)
Date: Mon, 08 Nov 2010 08:23:00 +0100
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: <20101104192618.GB4325@rorohiko> References: <20101026171735.GA32451@rot13.org> <4CC7C643.2070906@tamil.fr> <20101104192618.GB4325@rorohiko> Message-ID: <4CD7A554.6030607@tamil.fr>

> Small steps, we have a six-month release: getting Template::Toolkit,
> schema abstraction, C4/Search fixes, and all the RFCs in is going to
> be enough. Let's work on small, manageable improvements. Anyone who
> wanted to work on finding the circular object references we have, and
> breaking them, would move us a lot closer to being able to run the
> whole of Koha under a persistent tool.

How exactly do you identify these circular object references?

Another question: do you plan to make Moose a Koha requirement and
recommendation?
--
Frédéric

From chris at bigballofwax.co.nz Mon Nov 8 08:31:43 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Mon, 8 Nov 2010 20:31:43 +1300
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: <4CD7A554.6030607@tamil.fr> References: <20101026171735.GA32451@rot13.org> <4CC7C643.2070906@tamil.fr> <20101104192618.GB4325@rorohiko> <4CD7A554.6030607@tamil.fr> Message-ID: 

2010/11/8 Frédéric Demians :
> How exactly do you identify these circular object references?

They are quite hard to find. Test::Memory::Cycle and Devel::Cycle are
two ways to find them, but you have to have some suspicion of where to
look.

From frederic at tamil.fr Mon Nov 8 08:44:31 2010
From: frederic at tamil.fr (Frédéric Demians)
Date: Mon, 08 Nov 2010 08:44:31 +0100
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: References: <20101026171735.GA32451@rot13.org> <4CC7C643.2070906@tamil.fr> <20101104192618.GB4325@rorohiko> <4CD7A554.6030607@tamil.fr> Message-ID: <4CD7AA5F.5010501@tamil.fr>

> They are quite hard to find. Test::Memory::Cycle and Devel::Cycle are
> two ways to find them, but you have to have some suspicion of where to
> look.

Thanks. I will explore those modules. Could you point to an existing,
known circular reference in HEAD that I could test this technique on?

>> Other question: do you plan to make Moose a Koha requirement and
>> recommendation?
>
> Not for 3.4. I think the above list plus all the RFCs to get done in 6
> months is plenty ambitious enough.

Yes, ambitious and awesome.

> And until we can run Koha in a persistent manner, adding Moose would
> just kill us.

From chris at bigballofwax.co.nz Mon Nov 8 08:57:20 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Mon, 8 Nov 2010 20:57:20 +1300
Subject: [Koha-devel] DBIx::Class vs current Koha's DBI performance
In-Reply-To: <4CD7AA5F.5010501@tamil.fr> References: <20101026171735.GA32451@rot13.org> <4CC7C643.2070906@tamil.fr> <20101104192618.GB4325@rorohiko> <4CD7A554.6030607@tamil.fr> <4CD7AA5F.5010501@tamil.fr> Message-ID: 

2010/11/8 Frédéric Demians :
>> They are quite hard to find. Test::Memory::Cycle and Devel::Cycle are
>> two ways to find them, but you have to have some suspicion of where
>> to look.
>
> Thanks. I will explore those modules. Could you point to an existing,
> known circular reference in HEAD that I could test this technique on?

Not off the top of my head. I suspect we might have issues in the
search results; that seems to leak the most. That's the problem with
them: it's hard to track them down. I'm wondering if there is a biblio
that references an item which references the biblio, or something
similar to that.
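For the record, hunting a cycle with the two modules Chris names is
short enough to show inline. Everything below is illustrative: the
biblio/item shape is hand-made for the demonstration, not real Koha
code:

    #!/usr/bin/perl
    # A deliberately leaky structure, and both cycle-finders in action.
    use strict;
    use warnings;
    use Devel::Cycle;
    use Scalar::Util qw(weaken);

    my $biblio = { title   => 'Example record' };
    my $item   = { barcode => '0001', biblio => $biblio };
    $biblio->{items} = [$item];    # biblio -> item -> biblio: a cycle

    find_cycle($biblio);           # prints the reference loop it finds

    # In a test script the same check reads:
    #   use Test::More tests => 1;
    #   use Test::Memory::Cycle;
    #   memory_cycle_ok($biblio, 'biblio structure leaks nothing');
    # and the usual fix is to weaken one side of the loop:
    weaken( $item->{biblio} );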
>>> Other question: do you plan to make Moose a Koha requirement and
>>> recommendation?
>>
>> Not for 3.4. I think the above list plus all the RFCs to get done in
>> 6 months is plenty ambitious enough.
>
> Yes, ambitious and awesome.
>
>> And until we can run Koha in a persistent manner, adding Moose would
>> just kill us.
>
> And using Moose outside Koha, in the /misc/* maintenance scripts? It
> would be a first step toward Moose integration.

I wouldn't reject that out of hand :) It would add another dependency,
but it might be worth it. Do you have a sample script?

Chris

From mjr at phonecoop.coop Tue Nov 9 03:01:48 2010
From: mjr at phonecoop.coop (MJ Ray)
Date: Tue, 9 Nov 2010 02:01:48 +0000 (GMT)
Subject: [Koha-devel] Mails when patch pushed to git
In-Reply-To: <4CD1F6DB.6010505@biblibre.com> Message-ID: <20101109020148.CE84DF7311@nail.towers.org.uk>

Paul Poulain wrote:
> In the mail we received when RM/RMaint pushes a patch to
> git.koha-community.org I'd be happy to have the details of the patch.

I think the email is a good idea, but the current format is maybe a bit
confusing. It would be nice to see why you were sent it, for example.
Is there a hooks/post-receive script someone could suggest?

Hope that helps,
--
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Past Koha Release Manager (2.0), LMS programmer, statistician, webmaster.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for Koha work http://www.software.coop/products/koha
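On MJ's hooks/post-receive question: no one has posted a script yet, but
the rough shape of one is below. It is an untested sketch; the list
address, the sendmail path, and the one-mail-per-commit policy are all
assumptions rather than the project's actual configuration (and it
ignores the branch-creation case where the old value is all zeros):

    #!/usr/bin/perl
    # hooks/post-receive sketch: mail the full patch for each commit
    # pushed to master. Assumes a local sendmail binary.
    use strict;
    use warnings;

    my $list = 'koha-devel@lists.koha-community.org';   # assumed recipient

    while ( my $line = <STDIN> ) {        # one "<old> <new> <ref>" per ref
        my ( $old, $new, $ref ) = split ' ', $line;
        next unless $ref eq 'refs/heads/master';

        # One mail per new commit, oldest first
        my @shas = reverse split /\n/, `git rev-list $old..$new`;
        for my $sha (@shas) {
            my $subject = `git log -1 --pretty=format:%s $sha`;
            my $patch   = `git show --stat -p $sha`;

            open my $mail, '|-', '/usr/sbin/sendmail', '-t' or die $!;
            print {$mail} "To: $list\n",
                          "Subject: [koha-git] $subject\n",
                          "\n", $patch;
            close $mail;
        }
    }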
From cnighswonger at foundations.edu Tue Nov 9 03:45:09 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Mon, 8 Nov 2010 21:45:09 -0500
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: References: Message-ID: 

On Tue, Nov 2, 2010 at 11:08 PM, Joe Atzberger wrote:
> On Tue, Nov 2, 2010 at 4:12 PM, Chris Nighswonger wrote:
>> On Tue, Nov 2, 2010 at 2:10 PM, Joe Atzberger wrote:
>>> Chris, I disagree that the first sign-off on a major vendor's
>>> patches should be external. The first sign-off from a major vendor
>>> should be *internal* to their quality control process. This was at
>>> one time the standing policy amongst LibLime and BibLibre for major
>>> changes.
>>
>> My recommendation is that we require a *minimum* of one sign-off by a
>> disinterested party. This in no way excludes a company having their
>> own QA internally and signing off. Furthermore, it really provides no
>> impedance to the entire process when one considers how simple it
>> should be to obtain a sign-off on good working code.
>
> Ok, I read that as amending "that the required sign-off for pushing to
> master be a disinterested developer" to "that a ...".

No, I think you may be reading it with an emphasis which I did not
intend.

> Regarding the simplicity of signing off, I take some issue. It is
> *severely* non-trivial to test major integration features. Consider
> SIP2 and LDAP, or something like EDI. It can depend not just on
> accurate test data, but entire servers, network environments, remote
> accounts granted by a supplier, foreign domain/language knowledge,
> etc. Sure, I'd love it for everybody to have a dozen signoffs. I just
> think blocking code while waiting on a 3rd party (who by design is
> disinterested) to come around and dedicate some resources is a
> questionable policy.

I'm sure there are any number of features which could be tested in very
complex environments, perhaps even more complex than those used or
anticipated by the original developers themselves. I wonder if the
original supplier of the SIP2 and LDAP features actually went to the
level of testing you describe prior to committing those features.

However, I do agree that testing should be defined in a way that
ensures it is both effective and yet not a de facto "block" on the
acceptance of code.

> Right now we can't even get comment on RFCs, let alone dedicated VMs
> and manpower.

The scant participation in this thread by others is very much proof of
the problem pointed out here! I find it hard to believe how few have
chimed in, for as much "noise" as there is about these issues.

>> Ahh... nothing in my recommendation suggests that we "should block
>> *working code* from getting in just because a bigger, different or
>> more-deluxe RFC exists." It simply suggests we have a policy in place
>> which will actively promote some sort of community collaboration,
>> particularly on the part of large shops who "should" know better than
>> to clam up, especially on large feature development. That is patently
>> bad behavior in the light of community participation, which is the
>> foundation of this project. It is certainly within the rights of
>> anyone to take the source and run with it in whatever way they like.
>> They may even take it and never contribute back. However, it is not
>> within their right to do large, unilateral development and then
>> expect it to be pushed into the main codebase.
>
> Forgive me if I'm off the pulse a bit, but do these expectations exist
> today? The release process establishes when new features are accepted
> or not, and it has been pretty explicit and clear. The problem used to
> be big unilateral changes that weren't getting submitted (including
> some code that I wrote). Now the problem is they're getting submitted?

Perhaps this point bears greater clarification. By "expect it to be
pushed" I meant particularly without any prior discussion, RFC, or
community participation. What was/is going on with LibLime/PTFS LEK is
a classic example of the sort of thing which needs to be discouraged. I
refer particularly to the process. PTFS has stated in a number of
forums that once their "contract" obligations are met, they will submit
this code to the community. However, the job is done at that point and
clients will have implemented the product. There will be a certain
level of "expectation" that "new" features "should" be pushed to the
main code base. This sort of thing will probably be more prevalent as
vendors become larger.

This behavior may also be driven somewhat by the lack of a clear policy
of inclusion. Simply having a policy of "whatever you dump on us by the
(largely undefined) date that version X.Y.Z is to be released will be
included" is a recipe for disaster in the long run.

>>> Also, I think if you develop your features "in the open" (e.g.,
>>> posted w/ gitweb, or on github), the burden of synthesizing multiple
>>> RFCs and general "feature consensus" sentiment isn't on you in quite
>>> the same way as when changes are delivered en bloc. A vendor has
>>> what their clients are paying for, and if other devs have an
>>> unfunded desire for extra feature X, they can follow the branch
>>> right along and add the last bit themselves, all while still in
>>> development. Whether X is pulled in by the vendor, or separately
>>> submitted by the dev to the RM, doesn't really matter.
>>
>> A couple of points addressing this scenario as a whole (none of these
>> is against the principle of open development):
>>
>> 1. That scenario may work for simpler features/code. But consider the
>> current press to switch from zebra to 'foo' (any one of several
>> suggestions recently).
>> If a vendor develops an entire replacement for C4::Search and
>> friends which centers around 'foo' in a unilateral fashion, the de
>> facto assertion is that the community must either take what they
>> have done or leave it.
>
> Or patch it, or extend it, or put it on a feature branch, or break it
> into discrete elements, or defer acceptance until sufficiently
> abstracted, etc. If the vendor cares about it hitting mainline, then
> they'll follow up to do what it takes, within reason. If they don't,
> then I don't think new policy requirements will affect them in the
> least.

I agree that ultimately a vendor's desire to participate will be the
deciding factor. But the fact that a policy won't affect a vendor is
not necessarily a reason not to have a policy.

>> 2. Given the problems we already have with a lack of development
>> cooperation, this scenario at best does nothing to address those
>> problems.
>
> This comes back to the question of whether you can force cooperation.
> I don't think we can, effectively. I support codifying expectations
> and best practices, but "requiring" disinterested, competing or
> downright hostile parties to cooperate or pretend to cooperate seems
> destined to fail.

By the very nature of things, a project such as this entails a
practical requirement of cooperation among competing parties. Just look
at all of the vendors participating. While the vast majority are kind
and well mannered toward each other, they are, in fact, in competition
with each other in some sense of the word. And I'll not revisit my
discussion of the fact that vendors and customers are in the business
of "requiring" and "forcing cooperation" with each other all of the
time in those little pieces of paper we call contracts. Now this is not
to suggest that we begin any formal contractual relationships or that
we attempt to "force" cooperation. But the recent work by ByWater and
Software.coop on the EDI code, and by Catalyst and BibLibre on
BibLibre's work, illustrates that it is not beyond the realm of reason
to expect and even strongly encourage competing parties to cooperate
for the benefit of the thing that earns their bread and butter. And as
for downright hostile parties, they should go elsewhere until they can
leave off some of their hostility, imho.

>> 3. This scenario appears a bit "vendor-centric."
>
> That's because, as has been pointed out previously, Koha's authorship
> is de facto vendor-centric. The vast majority of Koha code
> historically originated from vendors, and the majority of new code
> still comes from vendors. Every RM worked for a vendor (or two). I'm
> happy to see Koha institutions and cooperatives still able to
> contribute code back to the project on their own (some for longer
> than I've been associated with Koha), and I'd love to see more of it.
> Right now, though, the code being affected by proposed policies is
> still *mostly* in vendors' houses.

In spite of this, I hope we can avoid becoming vendor-centric.
"Vendor-centricity" ultimately leads to vendor dominance and control. I
am not personally aware of a vendor-controlled FOSS project that does
not lean heavily in that vendor's favor. I furthermore hope that in the
future we will see more individuals in my own position of vendor
neutrality being elected to lead developer roles in the community.
There are several I can think of who would be very capable of filling
these positions and would help balance the trend you point out here.
>> I am of the opinion that Koha should be "community-centric", with
>> individuals first and vendors second in order of relationship.
>
> OK, I know what a vendor is. But in this context I don't know what you
> mean by "community-centric" or "individual" or "order of
> relationship". If you just mean that community comes first, then yay,
> community. I suspect we can escape talking about whether vendors are
> first, second or other class citizens in the community because it
> doesn't really matter for the proposed policy.

The problem is not one of where to "relegate" vendors in the "social"
structure of the community. Rather, it is all about keeping vendors who
have relatively limitless resources from holding the controlling
interest in the community, in whatever form that may take, and at the
same time clearly defining the obligations of those individuals who
*elect* to participate in the community in such a way that vendors can
transact business with some amount of certainty.

>> This may not be the view of all involved. However, if it were not for
>> Koha, Koha support vendors would be out of some amount of business.
>
> And without vendors *no* version of Koha would ever have been written
> or released. It's sorta funny that I'm the one saying this stuff, as I
> am not currently affiliated with any vendor.

I think that 1.0 was written by coders for hire (aka Katipo) and was
released by, not a vendor, but HLT, a library. So, in fact, no vendor
in the technical understanding was involved. Chris C. can correct me if
I'm wrong here.

>> 4. Regarding the statement "A vendor has what their clients are
>> paying for..." True, vendors are client driven. However, as I said in
>> my initial post, vendors must educate their clients as to the nature
>> of open source. Client expectations should be set based on known,
>> published community procedures. If this were properly done, many
>> problems would be resolved.
>
> This is a good point, and I was interested to read Paul's and Ian's
> replies to it. Building community submission and acceptance management
> into the quotes and development timelines is an important measure to
> take, and doing some education up front really will help. Open source
> is different than other software development institutions may have
> contracted, so experienced parties may need more education than
> inexperienced ones.

I wish other vendor types would jump in. There sure has been enough....
complaining (/me ducks) that I would have thought this thread would be
quite hot by now. As it is, I think you and I are just getting a
workout in debate... :-)

>> As it is, I think vendors have a very hard time managing their own
>> growth once it reaches the ballooning point. Unmanageable growth will
>> kill you... as we have seen.
>
> Yes, I've died a couple times already!

I hope you keep coming back!

>> In the final analysis, if each vendor pursues their own direction, we
>> will end up with a Baskin-Robbins of Koha. Make the job of RM hard
>> enough and no one will want it. At some point the project dies due to
>> leanness and overextension. The strength of the project lies in the
>> *two-way* cooperation of its members. The "we have developed it: take
>> it or leave it" approach is a one-way, dead-end street for the
>> community.
>
> Thankfully, it doesn't ever actually work like that. No piece of code
> is accepted under the condition that it never be modified. We don't
> have a license like that.
I understand the license and the ability to modify, but if you have a
zillion lines of code dumped on you which you will have to somehow pick
through in piecemeal fashion, extracting what you want and then
integrating it with what you have, it is highly probable (as you
mention, based on the fact that there has been little to no response to
RFCs and a small response to this thread) that that code will drop by
the wayside and be left behind. That is not "two-way cooperation" in
any definition I'm aware of. It's death by overdose.

> Say a submitter puts up a good bunch of code representing a
> considerable amount of work, but we are going to require that some
> hardcoded configuration X is abstracted into a syspref. If you asked
> them, "well, do you want to do it and resubmit, or do you want to wait
> an indefinite period until some random other person gets around to it,
> at which point you might have rebase issues?" I think 9 times out of
> 10, they'll do it themselves.

On this small scale, I agree.

> (Again, within reason. It's not going to happen if the hangup is
> something huge like "Make it work with UNIMARC in Turkish".)

See the above paragraph.

> In short, extra signoff(s) are a fine recommendation and best
> practice, but I wouldn't support them as a strict requirement. Also I
> think concrete technical details about how the "right kind" of
> development can be done will do more good than just the policy
> preferences by themselves.

I agree with you that we need concrete technical details about how the
right kind of development can be done. That is the presupposition of
this thread. However, I think they need to be formalized in written
form (call that by whatever name you like best).

Kind Regards,
Chris

From mjr at phonecoop.coop Tue Nov 9 04:45:47 2010
From: mjr at phonecoop.coop (MJ Ray)
Date: Tue, 9 Nov 2010 03:45:47 +0000 (GMT)
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: Message-ID: <20101109034547.D9BE8F7311@nail.towers.org.uk>

Chris Nighswonger wrote:
> As a vendor-neutral voice, I would like to encourage everyone who has
> a vested interest in these areas and the best interests of the Koha
> project at heart to actively participate in and respond to these RFCs.
> It seems that often there is little discussion, etc. on RFCs in the
> community. And even when there is discussion, etc. it is often unclear
> whether a consensus is reached (at least publicly).

Why is there little discussion? I think low-comment RFCs are often
posted in large batches. That is easier for the requestor, but it means
that the same weekly average of commenter time gets spread between them
all, resulting in a low level of discussion on each one. The current
wiki isn't the easiest thing to track or edit, and wikis are generally
bad places for lengthy discussion.

How might we remedy this? "RFC Corner" in the newsletter and meetings?
How else?

> [...] Clients of vendors should be educated during the RFQ process as
> to this aspect of open source, and their expectations managed
> accordingly, imho.

Well, we try, but free software vendors can't do this too much, because
some suppliers of "open source" disagree that these communities are
good or even necessary. Community-friendly vendors would probably lose
orders to them if we pushed the point harder, because it would make us
look slower.
It needs to be done by vendor-neutral advisory services like
www.oss-watch.ac.uk - anyone want to back one for Koha-Community.org?

[...]

> Secondly, I would suggest that we implement a strong recommendation
> that larger shops submit timely RFCs *prior to* beginning work on code
> and then promote discussion on those RFCs. This recommendation should,
> with some lesser strength, suggest that everyone submit timely RFCs to
> maximize the productivity and usefulness of the resources of all
> concerned.

This will almost certainly not happen in some cases, such as where the
ordering librarian wants the feature yesterday or as near as, so work
is expected to start immediately when the order is placed, or in
situations where the ordering librarian is new to FOSS and things like
RFC processes. Although we don't do it and I hope no-one does, I also
suggest there's a commercial incentive to hold back details in the hope
of being paid twice for the same feature.

If the RFCs ever helped gather orders (such as I'd like
http://wiki.koha-community.org/wiki/NISO_CORE_protocol to do), then
maybe some commercial incentive would push the other way, but how many
examples of RFC-first development work have there been? Even when the
RFC appears before coding, the order has already been placed. The new
Template:RFC doesn't even have an option for a not-yet-ordered RFC.

So while I feel the sentiment is good, I think it would be unrealistic
to make this recommendation strong. I'd be delighted if we could work
like that, but it's not what clients usually request (or pay for).

> Thirdly, I would suggest a stated policy (and such a policy is
> presently in place practically) which requires all submissions to pass
> through a QA branch and receive at a minimum one sign-off prior to
> being pushed into master. [...]

I feel that this and the "review and consensus process" are both up to
the RMs and should be part of their manifestoes (or not, as the case
may be). It's a release management matter, not a general policy one. Do
as you will.

Hope that helps,
--
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Past Koha Release Manager (2.0), LMS programmer, statistician, webmaster.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for Koha work http://www.software.coop/products/koha

From magnus at enger.priv.no Tue Nov 9 09:00:09 2010
From: magnus at enger.priv.no (Magnus Enger)
Date: Tue, 9 Nov 2010 09:00:09 +0100
Subject: [Koha-devel] General meeting on IRC: Wednesday, 10 November 2010 at 19:00 UTC+0
Message-ID: 

Hi All

Our next IRC meeting is on Wednesday, 10 November 2010 at 19:00 UTC+0
on the #koha irc channel on irc.katipo.co.nz. The agenda is here:
http://wiki.koha-community.org/wiki/General_IRC_Meeting,_10_November_2010

Currently the agenda looks like this:

* Update on Roadmap to 3.2
* Update on Roadmap to 3.0
* Update on Roadmap to 3.4
  1. BibLibre branches submitted to QA (roadmap & what's still expected
     from BibLibre; feedback from librarians)
* Community building:
  1. RFCs agreement workflow (discussion about the workflow for RFCs to
     be accepted/validated)
  2. git management (branches)
  3. jqueryui adoption instead of YUI
  4. proposal to form a Technical Committee
* Follow-up on actions from General IRC Meeting, 6 October 2010.
* Agree times of next meetings.

Please feel free to edit the agenda to add any issues you would like to
talk about, and feel free to attend the meeting using your favourite
irc client.
Best regards,
Magnus

From david at lang.hm Tue Nov 9 09:32:16 2010
From: david at lang.hm (david at lang.hm)
Date: Tue, 9 Nov 2010 00:32:16 -0800 (PST)
Subject: [Koha-devel] [Koha] General meeting on IRC: Wednesday, 10 November 2010 at 19:00 UTC+0
In-Reply-To: References: Message-ID: 

On Tue, 9 Nov 2010, Magnus Enger wrote:
> 1. RFCs agreement workflow (discussion about the workflow for
>    RFCs to be accepted/validated)

An RFC is a courtesy. Having an RFC 'accepted' doesn't mean that the
code that results will be accepted; conversely, having an RFC
'rejected' doesn't mean that the code created will never make it in.

In closed source development, RFCs have a lot of weight, and approving
or rejecting one is committing to future acceptance or rejection of a
bunch of code to implement that feature.

In open source development, an RFC is an outline of your intent and
design, which may or may not get a lot of review by people. It's a good
thing to do so that you can explain to others what you are working on,
and the general approach that you are taking. It's useful for others
who are interested in that feature (or the things that you will affect
in implementing that feature), but even if people object, you may still
be able to convince others that the benefits you are providing are
worth the cost that people are objecting to. On the other hand, even if
people like your design, they may not like the resulting code.

An e-mail to this list is either 1. a response to another message, 2. a
request for help, or 3. a proposed change (either in code or
otherwise). All messages that fall in category #3 are RFCs, even if
they don't have that tag in the subject.

David Lang

From paul.poulain at biblibre.com Tue Nov 9 10:35:25 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Tue, 09 Nov 2010 10:35:25 +0100
Subject: [Koha-devel] opac-search.pl
In-Reply-To: References: <.77.224.23.61.1288484548.squirrel@webmail.tgi.es> Message-ID: <4CD915DD.206@biblibre.com>

On 02/11/2010 10:21, Fridolyn SOMERS wrote:
> Hi,
>
> Can we think about removing the "No Zebra" mode?

Yep.

> It will make the code more clear.

Definitely! BibLibre will take/is taking care of this as part of the
solR work (it's already removed in search.pl, not yet in Search.pm, but
can be now:
http://git.biblibre.com/?p=koha;a=blob;f=opac/opac-search.pl;h=6b3a3c59cb596d7d4e9e1186c3bf325b65279da9;hb=refs/heads/poc_solr)

HTH
--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From dpavlin at rot13.org Tue Nov 9 13:24:13 2010
From: dpavlin at rot13.org (Dobrica Pavlinusic)
Date: Tue, 9 Nov 2010 13:24:13 +0100
Subject: [Koha-devel] Webcam Barcode reader
In-Reply-To: References: <4DAFF910A5424728A11F1AFE2601431E@CharlPC> <20101105213551.GA371@rot13.org> Message-ID: <20101109122413.GA11140@rot13.org>

On Mon, Nov 08, 2010 at 09:28:04AM -0500, Galen Charlton wrote:
> Hi,
>
> On Fri, Nov 5, 2010 at 5:35 PM, Dobrica Pavlinusic wrote:
> > p.s. an additional benefit of zxing is that it has support for
> > Codabar, which we have on our books, but that is specific to our
> > library.
>
> Actually, quite a few libraries use Codabar, so zxing would be of
> broader use.

Good point. However, I should note for future reference that zxing
doesn't have Codabar enabled by default, so it has to be compiled using
the following patch:

http://code.google.com/p/zxing/issues/detail?id=538
--
Dobrica Pavlinusic               2share!2flame            dpavlin at rot13.org
Unix addict. Internet consultant.             http://www.rot13.org/~dpavlin

From nengard at gmail.com Tue Nov 9 13:30:19 2010
From: nengard at gmail.com (Nicole Engard)
Date: Tue, 9 Nov 2010 07:30:19 -0500
Subject: [Koha-devel] Webcam Barcode reader
In-Reply-To: <20101109122413.GA11140@rot13.org> References: <4DAFF910A5424728A11F1AFE2601431E@CharlPC> <20101105213551.GA371@rot13.org> <20101109122413.GA11140@rot13.org> Message-ID: 

Just FYI - there is an enhancement request for this that I reported
ages ago:

http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=1937

If we add the discussion there it will be tracked better, and if anyone
implements this development they can update the bug report.

Nicole

On Tue, Nov 9, 2010 at 7:24 AM, Dobrica Pavlinusic wrote:
> Good point. However, I should note for future reference that zxing
> doesn't have Codabar enabled by default [...]

From nengard at gmail.com Tue Nov 9 13:39:09 2010
From: nengard at gmail.com (Nicole Engard)
Date: Tue, 9 Nov 2010 07:39:09 -0500
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: References: Message-ID: 

On Mon, Nov 8, 2010 at 9:45 PM, Chris Nighswonger wrote:
> By the very nature of things, a project such as this entails a
> practical requirement of cooperation among competing parties. [...]
> Now this is not to suggest that we begin any formal contractual
> relationships or that we attempt to "force" cooperation. [...]
This reply fits with a lot of what has been discussed in this thread,
and it's not even my own words. At one point during the hackfest we
talked about vendors and their customers' expectations. Someone (I
can't remember who) mentioned that we have to educate our customers
about the world they have just entered: they can request features from
us, but those features are not guaranteed to go into the final product
exactly the way they spec them out. Educating our customers that they
have entered a new way of working and collaborating might make it a bit
less likely that vendors run off on their own and work in a silo.

Also, Paul mentioned (and I love this) that he doesn't think he has any
customers who would fight this process - that if he told them he can
give them the feature they want, but it needs to be changed a bit in
this way or that, they would probably agree to his suggestions, because
his customers are in it for the open source aspects, and I hope
everyone else's are.

So, it doesn't really 'force' cooperation, but it does make it easier
for us to cooperate if our customers understand the process of getting
their development ideas into the final releases of Koha.

Nicole C. Engard
ByWater Solutions

From nengard at gmail.com Tue Nov 9 13:40:51 2010
From: nengard at gmail.com (Nicole Engard)
Date: Tue, 9 Nov 2010 07:40:51 -0500
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: <20101109034547.D9BE8F7311@nail.towers.org.uk> References: <20101109034547.D9BE8F7311@nail.towers.org.uk> Message-ID: 

On Mon, Nov 8, 2010 at 10:45 PM, MJ Ray wrote:
> How might we remedy this? "RFC Corner" in the newsletter and meetings?
> How else?

I will gladly add a section of links to RFCs to the newsletter! I'm not
sure how to start - should the November newsletter include links to all
RFCs? What do you all want to see? I can probably just put in a link to
the RFC page for this newsletter...

From mjr at phonecoop.coop Tue Nov 9 15:50:25 2010
From: mjr at phonecoop.coop (MJ Ray)
Date: Tue, 9 Nov 2010 14:50:25 +0000 (GMT)
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: Message-ID: <20101109145025.63883F7311@nail.towers.org.uk>

Chris Nighswonger wrote:
> On Tue, Nov 2, 2010 at 11:08 PM, Joe Atzberger wrote:
> > Right now we can't even get comment on RFCs, let alone dedicated VMs
> > and manpower.
>
> The scant participation in this thread by others is very much proof of
> the problem pointed out here! I find it hard to believe how few have
> chimed in, for as much "noise" as there is about these issues.

!!!

You hide this discussion in the bottom of a locked filing cabinet stuck
in a disused lavatory with a sign on the door saying 'Beware of the
Leopard' in an unlit cellar with broken stairs (or at least a subject
line which shows only "A Discussion on A Policy Setting..."
in my mailbox overview, which isn't really the sort of fun and attractive thread I rush to open, especially when others have already replied) and then grumble about low participation? Please excuse me while I pick my jaw off the floor. ;-)

As you know, I only saw this thread now after you asked me to look for it... and even then, only at the second search attempt!

I think this shares something with the low-comment RFCs. We authors should acknowledge how we're contributing to failure too: vague titles, fragmented discussion and information overload, to name three. Overload is a hard one to deal with, especially as people love Koha, which maybe makes us too verbose; and summarising discussion is something Nicole can probably explain far better than me (judging by the great kohacon10 blog posts), so I'll take the vague titles one.

Here's my tip on subject lines: http://mjr.towers.org.uk/email.html#subject

"Good email has a good subject line. The subject line is your way to promote your message as one I should read first. Make it a short (max 10 words?) summary of what the email is about. Sometimes I look at "(no subject)" but not often if I don't know the sender's name. Stuff that looks like spam or viruses also gets mostly left unopened. Stuff with words like "URGENT" on the subject line often gets left until last, just to spite them [...] I know your email is important, but if I can't tell that it's important from the subject, I may delete it by mistake. So, if you don't hear back after a while, try resending with "RESEND" in the subject."

Looking at this subject line with those glasses on:

"A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]"

deters in four ways: "Discussion" is redundant (this is a mailing list), "Policy" and "Standards" are unfun words, "etc." suggests a lack of focus, and the "WAS:..." suggests there's context that I probably don't remember. I feel a better title might be "RFC Code Submission Standards", with a link to the previous thread in the first paragraph explaining if/how it's relevant. Some other advice may be in guides on how to write newspaper headlines or press release titles.

Other than that, I simply re-emphasise my previous points that some recommendations seem unrealistic and I feel others are within the RM's power to decide and should be left there.

Hope that helps,
--
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Past Koha Release Manager (2.0), LMS programmer, statistician, webmaster.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for Koha work http://www.software.coop/products/koha

From lculber at mdah.state.ms.us Tue Nov 9 16:17:14 2010
From: lculber at mdah.state.ms.us (Linda Culberson)
Date: Tue, 09 Nov 2010 09:17:14 -0600
Subject: [Koha-devel] Koha-dev: Discussion on A Policy Setting Forth Standards of Code Submission, etc.
In-Reply-To: <20101109145025.63883F7311@nail.towers.org.uk>
References: <20101109145025.63883F7311@nail.towers.org.uk>
Message-ID: <4CD965FA.4000309@mdah.state.ms.us>

I apologize for the subject line, but I was uncertain how to continue the thread when what I really want to respond to is the comment about the "scant participation" issue.

As a non-vendor, I have been unclear as to:
1. what is the "correct" way to respond to an RFC - is there a protocol?
2. where is the "correct" place to respond - on the list or on the wiki or where?
3. to whom do I respond?
The community at large or the vendor responsible for the RFC?
4. Ok, this is a weird one, I'll admit, but I've wondered at times whether my response would be welcomed or is this a "vendors-only" issue?

--
Linda Culberson     lculber at mdah.state.ms.us
Archives and Records Services Division
Ms. Dept. of Archives & History
P. O. Box 571
Jackson, MS 39205-0571
Telephone: 601/576-6873
Facsimile: 601/576-6824

From jcamins at cpbibliography.com Tue Nov 9 16:23:27 2010
From: jcamins at cpbibliography.com (Jared Camins-Esakov)
Date: Tue, 9 Nov 2010 10:23:27 -0500
Subject: [Koha-devel] Koha-dev: Discussion on A Policy Setting Forth Standards of Code Submission, etc.
In-Reply-To: <4CD965FA.4000309@mdah.state.ms.us>
References: <20101109145025.63883F7311@nail.towers.org.uk> <4CD965FA.4000309@mdah.state.ms.us>
Message-ID:

Linda,

As a non-vendor, I have been unclear as to:
> 1. what is the "correct" way to respond to an RFC - is there a protocol?

I have been wondering this myself for a while now. I generally just comment via IRC, since I never know what else to do. That's definitely *not* the best way to do things, though.

> 2. where is the "correct" place to respond - on the list or on the wiki or where?

See above.

> 3. to whom do I respond? The community at large or the vendor responsible for the RFC?

Errr... see above again.

> 4. Ok, this is a weird one, I'll admit, but I've wondered at times whether my response would be welcomed or is this a "vendors-only" issue?

Speaking for myself as a semi-vendor, I would welcome any and all comments, *particularly* those from non-vendor-type people, on my RFCs.

Regards,
Jared

--
Jared Camins-Esakov
Freelance bibliographer, C & P Bibliography Services, LLC
(phone) +1 (917) 727-3445
(e-mail) jcamins at cpbibliography.com
(web) http://www.cpbibliography.com/

From nengard at gmail.com Tue Nov 9 17:43:52 2010
From: nengard at gmail.com (Nicole Engard)
Date: Tue, 9 Nov 2010 11:43:52 -0500
Subject: [Koha-devel] Koha-dev: Discussion on A Policy Setting Forth Standards of Code Submission, etc.
In-Reply-To:
References: <20101109145025.63883F7311@nail.towers.org.uk> <4CD965FA.4000309@mdah.state.ms.us>
Message-ID:

2010/11/9 Jared Camins-Esakov :
>> 1. what is the "correct" way to respond to an RFC - is there a protocol?
>
> I have been wondering this myself for a while now. I generally just comment
> via IRC, since I never know what else to do. That's definitely *not* the
> best way to do things, though.

I don't know if there is an official policy, but I've seen it done in two ways. The first is to reply to the email sent to the developers list. The other is to use the comments function on the wiki, where you can discuss a specific page (aka RFC). While IRC works, it's not logged in the same place as the RFC, so if others have comments and aren't on IRC at the same time as you, things can get lost - so I'd stick to one of the above, and if someone has more official knowledge they can let us know.

Nicole C. Engard
From cnighswonger at foundations.edu Tue Nov 9 17:45:06 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Tue, 9 Nov 2010 11:45:06 -0500
Subject: [Koha-devel] [Koha] Koha - Trouble loading patron files NOT solved
In-Reply-To:
References: <4CD82F06.8040408@mdah.state.ms.us> <20101109025438.73956F7311@nail.towers.org.uk>
Message-ID:

On Mon, Nov 8, 2010 at 10:06 PM, Chris Nighswonger wrote:
> On Mon, Nov 8, 2010 at 9:54 PM, MJ Ray wrote:
>> So I think that error means that either @columns contains an undefined
>> value, or that $csvkeycol{$key} isn't defined, but we test that it is,
>> so that shouldn't be the problem. How can @columns contain an
>> undefined value? Maybe if a line doesn't have enough fields?
>>
>> Right now I'd be checking that CSV file very very closely.
>
> One of the problems *is* the CSV, but that's because the instructions
> are not clear. Several fields in the borrowers table are NOT NULL,
> which means that several columns in the CSV which advertise themselves
> as *not* mandatory are, indeed, mandatory. I'm in the process of
> correcting this along with making the patron import tool more
> informative as to errors which occur during attempted imports. I hope
> to have this work finished and submitted the first part of this week.

This patch should resolve these issues:

http://git.koha-community.org/gitweb/?p=wip/koha-fbc.git;a=shortlog;h=refs/heads/k_bug_5379

It applies against the current HEAD, but should apply against 3.2.x as well. Once it is pushed to HEAD, I'll back-port to 3.2.x. In the meantime, testing and sign-off would be appreciated. (CC'ing the devel list to get more coverage.)

Kind Regards,
Chris

From cfouts at liblime.com Tue Nov 9 18:16:18 2010
From: cfouts at liblime.com (Clay Fouts)
Date: Tue, 9 Nov 2010 09:16:18 -0800
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To:
References: <4CD0823D.6070400@biblibre.com>
Message-ID:

Releasing earlier is absolutely a factor in this, and I'm heartened that you've made it a priority for 3.4. Because Koha's API and schema lack consistency, abstraction, and isolation of concerns, adding nearly anything substantial demands that those elements change in ways that affect other areas radically. The amount of resources required to rebase dozens of individual feature branches, when half of them require meddling with the key internals in ways that will affect others, increases in a non-linear fashion with the passage of time.

Until Koha has a stable and more sophisticated API, short release cycles of working code are a necessity if developers who are creating lots of features are to stand a chance of cooperating with each other in the long term. As it is, it's very expensive for a developer to maintain a slew of feature branches compared to keeping a unified development trunk of their own.

Clay

On Tue, Nov 2, 2010 at 3:05 PM, Chris Cormack wrote:
> On 3 November 2010 10:27, Paul Poulain wrote:
>
> > We haven't started working on any of those RFCs (except solR, to have a
> > proof of concept).
> > What has really been a problem for us is that we published RFCs for Lyon3
> > university a long time ago (mail from Nicolas on koha-devel oct, 12,
> > 2009), and there has been strictly no reaction/feedback to those RFCs.
> > Now they are done, and we have rebased them vs head (huge work, and huge
> > QA to do, and probably a lot of time lost)
> > Could they be rejected by the community ?
> > Hopefully no, but I
> > frankly don't know what we (BibLibre) could do if it were :-((( (because
> > the customers are live now !)
> > I think we (all) failed because Koha 3.2 was 9 months late. Well, in
> > fact, I think the mistake was not to branch 3.4 immediately on feature
> > freeze. That would have been much less pain for us (that are
> > customer-planning driven) (suggestion below).
>
> What would have caused much much much less pain for you was to
> develop your features in small branches, rather than one monolithic
> branch which makes rebasing much harder than it needs to be.
>
> This is a lesson that cannot be overstated: topic/bug/feature branches
> make everyone's lives much easier. And they mean that if one feature is
> rejected ... then the whole stack doesn't need to be.
>
> I don't think branching sooner or an earlier release would have helped
> anywhere near as much as developing in smaller branches, not one huge
> one.
>
> Chris

From ohiocore at gmail.com Tue Nov 9 20:39:12 2010
From: ohiocore at gmail.com (Joe Atzberger)
Date: Tue, 9 Nov 2010 14:39:12 -0500
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To:
References:
Message-ID:

> > Regarding the simplicity of signing off, I take some issue. It is
> > *severely* non-trivial to test major integration features. Consider SIP2
> > and LDAP, or something like EDI. It can depend not just on accurate test
> > data, but entire servers, network environments, remote accounts granted by a
> > supplier, foreign domain/language knowledge, etc. Sure, I'd love it for
> > everybody to have a dozen signoffs. I just think blocking code while
> > waiting on a 3rd party (who by design is disinterested) to come around and
> > dedicate some resources is a questionable policy.
>
> I'm sure there are any number of features which could be tested in
> very complex environments, and perhaps even more complex than those
> used or anticipated by the original developers themselves. I wonder if
> the original supplier of the SIP2 and LDAP features actually went to
> the level of testing you describe prior to committing those features.

Since that was me in both cases, I can tell you: yes for SIP2, not so much for LDAP. Initially LDAP was written and tested against openldap, without any access to an Active Directory server, and it was a bitch. (Openldap has this horrible behavior of crashing the daemon completely if you give it a malformed command-line query or attempt to insert/edit a record unsuccessfully. It also simultaneously corrupts the data that is stored in compiled B-trees. Apparently performance was key, not reliability.) Only later did I test between VMs using Sun's OpenDirectory and remotely to ActiveDirectory.

SIP2 testing was fairly robust, but required an extreme amount of data tuning to make it possible. I.e., a test requires that a patron have a $22 fine and an overdue item, so you have to make a user have a $22 fine and an overdue item. And then you cannot run overdue fines again. Ever. So basically that requires a dedicated instance.
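For the curious, a raw SIP2 session can be driven by hand from a terminal. A minimal sketch, assuming a running SIPServer; the port and credentials here are made up:

    # connect to the SIP2 server (the port is site-specific)
    telnet localhost 6001
    # 93 = Login; the two zeros are the UID/PWD algorithm flags,
    # CN and CO carry the SIP account user and password
    9300CNsip_user|COsip_pass|
    # the server answers 941 on success, 940 on failure

Only after a successful login do the patron and checkout messages make sense, which is why the test data has to be groomed so carefully beforehand.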
> > Forgive me if I'm off the pulse a bit, but do these expectations exist
> > today? The release process establishes when new features are accepted or
> > not, and it has been pretty explicit and clear. The problem used to be big
> > unilateral changes that weren't getting submitted (including some code that
> > I wrote). Now the problem is they're getting submitted?
>
> Perhaps this point bears greater clarification. By "expect it to be pushed"
> I meant particularly without any prior discussion/RFC/community
> participation. What was/is going on with LibLime/PTFS LEK is a classic
> example of the sort of thing which needs to be discouraged. I refer
> particularly to the process. PTFS has stated in a number of forums that once
> their "contract" obligations are met, they will submit this code to the
> community. However, the job is done at that point and clients will have
> implemented the product. There will be a certain level of "expectation" that
> "new" features "should" be pushed to the main code base.

On the plus side, this expectation is what should drive the vendor to submit the code in its most-likely-to-be-accepted form.

In reality, the code that gets kept in the closet for a year or more *will* lose out. A vendor or their client may feel like they get a competitive advantage by deferring submission, but this is illusory. It just backloads and complicates the otherwise critical work of getting patches into mainline. While the patches are sitting around getting stale, if the feature is at all desirable, other people are working on similar or competing versions, and then extending the published version, bugfixing and documenting it. The work you do to extend or revise the withheld feature is quite possibly wasted effort. Resolving the competing implementations is often more work than it was to write either one of them, and could have been avoided entirely with earlier publication. Getting your patches in master is a *defensive* position that establishes your data model, API and presentation as accepted. But we can preach all day on this. Let's try not to.

> In spite of this, I hope we can avoid becoming vendor-centric.
> "Vendor-centricity" leads to vendor dominance and control ultimately.
> I am not personally aware of a vendor-controlled FOSS project that
> does not lean heavily in that vendor's favor.

I think you conflate the fact that vendors are primary players with the outcome of single-vendor control. A healthy FOSS project has many players. Whether they come from different commercial interests, or different user bases, it doesn't really matter.

> The problem is not one of where to "relegate" vendors in the "social"
> structure of the community. Rather it is all about keeping vendors who
> have relatively limitless resources from holding the controlling
> interest in the community ...

Your impression of a vendor's available resources is a bit fantastical. Having worked at both LibLime and Equinox, I can tell you neither company had more than 40 employees, with only a minority in development. If LibLime had such boundless resources, they would not have had to sell to PTFS. And PTFS, in terms of resources relevant to Koha, is not applying a team even half that size. Compared to clients like King County (WA), WALDO or GPLS, the operational capacity and physical resources of vendors (even PTFS) are scant.

> >> This may not be the view of all involved.
> >> However, if it were not for
> >> Koha, Koha support vendors would be out of some amount of business.
> >
> > And without vendors *no* version of Koha would ever have been written or
> > released. It's sorta funny that I'm the one saying this stuff, as I am not
> > currently affiliated with any vendor.
>
> I think that 1.0 was written by coders for hire (aka Katipo) and was
> released by, not a vendor, but HLT, a library. So, in fact, no vendor
> in the technical understanding was involved. Chris C. can correct me
> if I'm wrong here.

Katipo was and is a vendor. They were not users of the software themselves. Instead they sold software development and services to HLT. This is the technical and practical definition of a vendor, and it is exactly the same as what other Koha development shops do, except Katipo was starting from scratch instead of an existing version. I'm not sure what additional indicators of vendorness you might be looking for. Perhaps an evil moustache? : })

--Joe

From frederic at tamil.fr Wed Nov 10 07:11:18 2010
From: frederic at tamil.fr (Frédéric Demians)
Date: Wed, 10 Nov 2010 07:11:18 +0100
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To:
References: <4CD0823D.6070400@biblibre.com>
Message-ID: <4CDA3786.9040800@tamil.fr>

> Because Koha's API and schema lack consistency, abstraction, and
> isolation of concerns, adding nearly anything substantial demands that
> those elements change in ways that affect other areas radically. The
> amount of resources required to rebase dozens of individual feature
> branches when half of them require meddling with the key internals in
> ways that will affect others increases in a non-linear fashion with the
> passage of time.

I agree. Rather than forming committees, the Koha community has to deal with software engineering challenges. A dumb (and informal) rule should require any entity adding a large new feature to Koha to also do substantial code rationalization and cleanup. (I don't say it's easy...)

From henridamien.laurent at gmail.com Wed Nov 10 09:41:23 2010
From: henridamien.laurent at gmail.com (LAURENT Henri-Damien)
Date: Wed, 10 Nov 2010 09:41:23 +0100
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: <4CDA3786.9040800@tamil.fr>
References: <4CD0823D.6070400@biblibre.com> <4CDA3786.9040800@tamil.fr>
Message-ID: <4CDA5AB3.70509@gmail.com>

On 10/11/2010 07:11, Frédéric Demians wrote:
>> Because Koha's API and schema lack consistency, abstraction, and
>> isolation of concerns, adding nearly anything substantial demands that
>> those elements change in ways that affect other areas radically. [...]
>
> I agree. Rather than forming committees, the Koha community has to deal
> with software engineering challenges. A dumb (and informal) rule should
> require any entity adding a large new feature to Koha to also do
> substantial code rationalization and cleanup. (I don't say it's easy...)
Doing so without sharing with community members, or even working in pairs or groups with some of the interested parties on the chosen approach, is a waste of time for everyone. API changes would have to be done by consensus and could eventually be rejected, and then the work done would be nullified.

Committee was maybe a misleading term, or might not be what people think it is. What was finally proposed in the informal meeting on the last day was organising meetings on some technical issues in order to share work. Work on refactoring code, data persistence, Plack usage, circular dependencies or DBIx::Class is a shared concern and should be done in the open; it could have consequences on the data structure, and tasks could quite easily be assigned to more than one company. Then the cost could also be shared across the community and become sustainable for everyone.

a) multiple signoffs would become more natural, so integration into the code would be easier, and therefore sustainability would be more achievable.
b) the development cost would be shared by companies. The drawback: it would require more time than working alone.

Any library hiring a developer to work on Koha because they believe in free software, or any company (BibLibre, but I think also Tamil or Xercode, or even Catalyst), wants their investment not to be lost. Even HLT have long "cried" over features lost from 1.0. Should we consider this is not an issue?

Friendly.
--
Henri-Damien LAURENT

From henridamien.laurent at biblibre.com Wed Nov 10 17:11:24 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Wed, 10 Nov 2010 17:11:24 +0100
Subject: [Koha-devel] externalising translator and etc
Message-ID: <4CDAC42C.3090103@biblibre.com>

Hi

I propose to get etc and translator as external git repositories.

Why? It would allow the po files to live in their own repository and limit the weight of the Koha git archive. For etc, it would allow us to change the installer so that every site can keep track of its local changes and therefore synchronize with the community version properly.

HOW? The operation is simple:

git filter-branch --subdirectory-filter misc/translator -- --all

and then git push on an external repository, for instance etc.git and translator.git on git.koha-community. This would allow more than one branch for etc, and for instance create etc/apache, etc/nginx, etc/plack and so on...

WHEN? Whenever you think the operation should be done, if you think it could be interesting.

We could then have git submodules in the koha.git repository in order to keep in sync and synchronise versions when we want:

http://www.kernel.org/pub/software/scm/git/docs/git-submodule.html

It is just a proposition.
--
Henri-Damien LAURENT
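To make the proposal concrete, the whole operation might look roughly like this; the repository names and URLs below are illustrative, not real:

    # work on a throwaway clone: filter-branch rewrites history
    git clone git://git.koha-community.org/koha.git koha-translator
    cd koha-translator
    # keep only misc/translator, with its history, as the new root
    git filter-branch --subdirectory-filter misc/translator -- --all
    git push git@git.koha-community.org:translator.git master

    # back in koha.git, pull it in as a submodule
    cd ../koha
    git submodule add git://git.koha-community.org/translator.git misc/translator
    git commit -m "Track translator in its own repository"

Anyone cloning koha.git afterwards would run "git submodule update --init" to fetch the translator tree at the recorded revision.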
From cfouts at liblime.com Wed Nov 10 17:19:19 2010
From: cfouts at liblime.com (Clay Fouts)
Date: Wed, 10 Nov 2010 08:19:19 -0800
Subject: [Koha-devel] A Discussion on A Policy Setting Forth Standards of Code Submission, etc. [WAS: RFCs for 3.4 from BibLibre (serials & acquisitions)]
In-Reply-To: <4CDA5AB3.70509@gmail.com>
References: <4CD0823D.6070400@biblibre.com> <4CDA3786.9040800@tamil.fr> <4CDA5AB3.70509@gmail.com>
Message-ID:

"Committee" in the sense that I'm reading you has the connotation of setting aside a space and time where a group of people can focus on a specific issue to discuss it and work toward building consensus. It seems some people have a knee-jerk response to the word, as if it's an effort to establish a hegemonic regime bent on undercutting the release manager's role and authority.

Frankly, Koha's long-term viability depends on creating an architectural vision more expansive than "the next release", and a working group focussed specifically on giving that vision some formal attention is a perfectly good way to approach that problem. Without an obvious candidate of eminence to establish a Linus-style authority, a working group/committee is probably the only viable approach to it. Yes, it will generally take longer to iron out differences and to code with an eye toward established best practices, but the extra short-term effort is necessary for long-term strategy.

Clay

On Wed, Nov 10, 2010 at 12:41 AM, LAURENT Henri-Damien <henridamien.laurent at gmail.com> wrote:
> Doing so without sharing with community members, or even working in pairs
> or groups with some of the interested parties on the chosen approach, is a
> waste of time for everyone. [...]
> Friendly.
> --
> Henri-Damien LAURENT

From frederic at tamil.fr Wed Nov 10 17:55:10 2010
From: frederic at tamil.fr (Frédéric Demians)
Date: Wed, 10 Nov 2010 17:55:10 +0100
Subject: [Koha-devel] externalising translator and etc
In-Reply-To: <4CDAC42C.3090103@biblibre.com>
References: <4CDAC42C.3090103@biblibre.com>
Message-ID: <4CDACE6E.2040104@tamil.fr>

> I propose to get etc and translator as external git repositories.
>
> It would allow the po files to live in their own repository and limit the
> weight of the Koha git archive. [...]

Good idea. I don't know if the size of the Koha git repository is an issue, but for organizational purposes, dividing Koha into smaller parts seems great.

Another issue is the fact that we have two downloadable Koha versions, one with translated templates and one without. With translated templates, the Koha archive is 241 MB, while the English-only version is 30 MB. For 3.4, when Template Toolkit is integrated, I plan to modify the web installer to add the ability to pick a language and translate Koha on the fly. This way we would have a single Koha archive. But we could also have multiple translation packages containing .po files (and Debian packages) installable only when needed. Imagine:

apt-get install koha koha-locale-fr

A tar gzip of the current po directory produces a 21 MB archive. So I suppose that a Koha archive without any translation at all (no .po files) would weigh only 9 MB.

--
Frédéric
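As a rough illustration of the size argument, the split archives could be built along these lines; the file names and po path are indicative only:

    # English-only tarball: leave all the .po files out
    tar czf koha-3.02.00.tar.gz --exclude='*.po' koha-3.02.00/
    # one add-on archive per language, e.g. French
    tar czf koha-locale-fr.tar.gz koha-3.02.00/misc/translator/po/fr-FR*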
From gmcharlt at gmail.com Wed Nov 10 19:10:20 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Wed, 10 Nov 2010 13:10:20 -0500
Subject: [Koha-devel] externalising translator and etc
In-Reply-To: <4CDAC42C.3090103@biblibre.com>
References: <4CDAC42C.3090103@biblibre.com>
Message-ID:

Hi,

On Wed, Nov 10, 2010 at 11:11 AM, LAURENT Henri-Damien wrote:
> I propose to get etc and translator as external git repositories.
> [...]

-1 for moving etc to a separate Git project. The default configuration files are an integral part of Koha. For variant configurations such as a Plack or nginx config, it would be clearer to have the relevant files in subdirectories or identified by file name instead of putting them in branches. That way, somebody installing Koha for the first time would see (say) both an etc/koha-apache.conf and an etc/koha-nginx.conf; i.e., all of the options would be directly in front of them. Furthermore, changes made to files in etc are often dependent on changes made to code or templates; separating etc and the rest of Koha into submodules would make it harder to prepare patches and manage topic branches.

0 for moving the PO files to a separate Git project. The size of the repository doesn't really strike me as a big deal; the Git protocol is pretty efficient. That said, while I don't see a great deal of benefit to splitting the translations off into a separate repository, I don't see much harm either.

One thing to keep in mind is that whether the release tarballs include all languages expanded or not has nothing to do with how the Git repositories are structured. Looking at the download statistics for koha-3.02.00.tar.gz versus koha-3.02.00-all-translations.tar.gz, very few people use or need the all-translations version.

Regards,
Galen
--
Galen Charlton
gmcharlt at gmail.com

From cnighswonger at foundations.edu Wed Nov 10 22:30:34 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Wed, 10 Nov 2010 16:30:34 -0500
Subject: [Koha-devel] 3.2.x Release Maintenance Procedures
Message-ID:

During the general IRC meeting today, a question came up concerning how I plan to push patches to 3.2.x. Here is the general procedure I intend to follow, though I certainly reserve the right to change it as necessary.

1. The general policy will be to cherry-pick/merge patches from HEAD as they appear and are applicable to 3.2.x. Besides the fact that bugs in 3.2.x are also bugs in HEAD, these patches have already passed QA and are considered stable, so this greatly reduces the workload.

2. When a merge conflict occurs due to the eventually inevitable divergence of 3.2.x and HEAD, *the submitter* of a patch will be responsible for ensuring that the patch applies to both branches (HEAD and 3.2.x) *before* submission; if the patch does not apply to the 3.2.x branch, *the submitter* will be responsible for taking whatever action is necessary to make it apply before submission. This will most likely mean submitting two versions of the patch: one for HEAD and one for 3.2.x. This responsibility is an ethical one: it is not reasonable to expect the release maintainer, who is one person, to make many patches apply when each submitter merely has to make one patch apply.

One thing which will greatly reduce the burden of #2 is the RM's planned use of topic branches. Merge conflicts are much easier to manage and far less overwhelming with the use of topic branches.

Suggestions and opinions are welcome.

Kind Regards,
Chris Nighswonger
Koha 3.2 Release Maintainer
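In git terms, the day-to-day of point 1 would look something like this; the branch name and the placeholder sha are illustrative:

    # on the maintenance branch, pick up a bugfix that has reached master
    git checkout 3.2.x
    git fetch origin
    # -x appends "(cherry picked from commit ...)" to the new commit message
    git cherry-pick -x <sha-of-fix-on-master>

When the pick stops with a conflict, the branches have diverged for that patch, and point 2 puts the ball back in the submitter's court.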
From lrea at nekls.org Wed Nov 10 22:42:36 2010
From: lrea at nekls.org (Liz Rea)
Date: Wed, 10 Nov 2010 15:42:36 -0600
Subject: [Koha-devel] 3.2.x Release Maintenance Procedures
In-Reply-To:
References:
Message-ID:

How would you like patches labeled that apply to 3.2.x only (when the branches inevitably diverge)?

[3.2.x bug xxxx] [Signed Off] Commit message? Some other way?

I wish to do your bidding! ;)

Liz

On Nov 10, 2010, at 3:30 PM, Chris Nighswonger wrote:
> During the general IRC meeting today, a question came up concerning
> how I plan to push patches to 3.2.x. Here is the general procedure I
> intend to follow [...]

From cnighswonger at foundations.edu Wed Nov 10 23:07:14 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Wed, 10 Nov 2010 17:07:14 -0500
Subject: [Koha-devel] 3.2.x Release Maintenance Procedures
In-Reply-To:
References:
Message-ID:

On Wed, Nov 10, 2010 at 4:42 PM, Liz Rea wrote:
> How would you like patches labeled that apply to 3.2.x only (when the
> branches inevitably diverge)?
>
> [3.2.x bug xxxx] [Signed Off] Commit message? Some other way?

That's fine.

> I wish to do your bidding! ;)

I'm glad to see we've come to a consensus... ;-)

Kind Regards,
Chris

From robin at catalyst.net.nz Wed Nov 10 23:17:07 2010
From: robin at catalyst.net.nz (Robin Sheat)
Date: Thu, 11 Nov 2010 11:17:07 +1300
Subject: [Koha-devel] externalising translator and etc
In-Reply-To: <4CDACE6E.2040104@tamil.fr>
References: <4CDAC42C.3090103@biblibre.com> <4CDACE6E.2040104@tamil.fr>
Message-ID: <1289427427.11078.46.camel@zarathud>

Frédéric Demians wrote on Wed 10-11-2010 at 17:55 [+0100]:
> But we can also have multiple translation packages containing .po
> files
> (and Debian packages) installable only when needed. Imagine:
>
> apt-get install koha koha-locale-fr

fwiw, this is exactly my eventual plan for dealing with translations in the packages.

--
Robin Sheat
Catalyst IT Ltd.
+64 4 803 2204
GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D

From cfouts at liblime.com Wed Nov 10 23:42:34 2010
From: cfouts at liblime.com (Clay Fouts)
Date: Wed, 10 Nov 2010 14:42:34 -0800
Subject: [Koha-devel] externalising translator and etc
In-Reply-To: <4CDAC42C.3090103@biblibre.com>
References: <4CDAC42C.3090103@biblibre.com>
Message-ID:

I like the idea of separating out the .po files, if for no other reason than that it expedites "grep -r". Reducing the overall repo footprint is also helpful. It doesn't matter much for a single instance, but it does add up when you have dozens of them.
Clay

On Wed, Nov 10, 2010 at 8:11 AM, LAURENT Henri-Damien <henridamien.laurent at biblibre.com> wrote:
> Hi
> I propose to get etc and translator as external git repositories.
> [...]
> It is just a proposition.
> --
> Henri-Damien LAURENT

From cnighswonger at foundations.edu Wed Nov 10 23:55:47 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Wed, 10 Nov 2010 17:55:47 -0500
Subject: [Koha-devel] externalising translator and etc
In-Reply-To:
References: <4CDAC42C.3090103@biblibre.com>
Message-ID:

On Wed, Nov 10, 2010 at 1:10 PM, Galen Charlton wrote:
> -1 for moving etc to a separate Git project. The default
> configuration files are an integral part of Koha. [...]

I'm with Galen here.

> 0 for moving the PO files to a separate Git project. The size of the
> repository doesn't really strike me as a big deal; the Git protocol is
> pretty efficient. [...]

If setting up the PO files as a separate project allows one PO repo to be "submoduled" into several Koha repos, then I can see how this might benefit a multi-Koha server. As it is, we don't implement any language other than English, so it really does not matter to me one way or the other.

Kind Regards,
Chris
From ian.walls at bywatersolutions.com Wed Nov 10 23:58:27 2010
From: ian.walls at bywatersolutions.com (Ian Walls)
Date: Wed, 10 Nov 2010 17:58:27 -0500
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To:
References:
Message-ID:

Everyone,

While there can be no guarantees as to whether a patch will be committed into the Koha codebase, I think in practice there are several requirements. This email is an attempt to identify a few of them, and hopefully start a discussion about whether they are truly requirements, and what others could possibly be added.

1. The patch must do what it claims to do, in all commonly-supported Koha environments
2. The patch must not break existing functionality
3. The patch must apply to the current HEAD of the master branch of the code
4. The patch must follow the Coding Style Guidelines
5. The patch must be MARC-flavour agnostic
6. The patch must contain appropriate copyright information
7. If a database update is required, the patch must handle the update both for new installs and upgrades
8. If a new feature is added, the patch must include appropriate Help documentation

What do people think of these requirements? Are they reasonable? Should there be more? I understand that there may not be any set of requirements that's completely sufficient, but if we can identify as many as possible, it would make developers' lives a bit easier, since we'd all have a better idea what is needed for our patches to be committable.

Cheers,

-Ian
--
Ian Walls
Lead Development Specialist
ByWater Solutions
Phone # (888) 900-8944
http://bywatersolutions.com
ian.walls at bywatersolutions.com
Twitter: @sekjal

From chris at bigballofwax.co.nz Thu Nov 11 00:35:10 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Thu, 11 Nov 2010 12:35:10 +1300
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To:
References:
Message-ID:

I have a few extra rules for 3.4 also

From here http://wiki.koha-community.org/wiki/Roadmap_to_3.4

All patches should have at least 1 signoff before the Release Manager looks at them (exceptions will be made for trivial patches). Preferably 2 signoffs, 1 from the QA manager and 1 from someone else, although 1 from the QA manager will suffice.

All patches should refer to a bug number

Chris

From cnighswonger at foundations.edu Thu Nov 11 03:52:24 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Wed, 10 Nov 2010 21:52:24 -0500
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To:
References:
Message-ID:

And another for 3.2.x:

All patches which will not apply cleanly to 3.2.x should be ported and submitted as separate patches marked for 3.2.x. (Hopefully the number of such patches will be few.)

Kind Regards,
Chris

On Wed, Nov 10, 2010 at 6:35 PM, Chris Cormack wrote:
> I have a few extra rules for 3.4 also
>
> From here http://wiki.koha-community.org/wiki/Roadmap_to_3.4 [...]
>
> 2010/11/11 Ian Walls :
> > Everyone,
> >
> > While there can be no guarantees as to whether a patch will be committed
> > into the Koha codebase, I think in practice there are several requirements.
> > [...]
From pianohacker at gmail.com Thu Nov 11 04:03:19 2010
From: pianohacker at gmail.com (Jesse)
Date: Wed, 10 Nov 2010 20:03:19 -0700
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To:
References:
Message-ID:

2010/11/10 Chris Cormack
> I have a few extra rules for 3.4 also
>
> From here http://wiki.koha-community.org/wiki/Roadmap_to_3.4
>
> All patches should have at least 1 signoff before the Release Manager
> looks at them (exceptions will be made for trivial patches).
> [...]
Question for Mr. 3.4 RM:

Is the procedure for dealing with DB revision numbers still the same? As far as I remember from the 3.2 development days, the procedure was to patch kohastructure.sql (or sysprefs.sql, or whatever), then add the update to the end of updatedatabase.pl with a generic version number, like 3.01.00.XXX. Patching kohastructure.pl was left to the RM when they applied the patch.

I had a crazy table on the wiki for a bit, but this seemed to work better.

Is that still the consensus?

--
Jesse Weaver

From chris at bigballofwax.co.nz Thu Nov 11 04:07:22 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Thu, 11 Nov 2010 16:07:22 +1300
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To:
References:
Message-ID:

> Question for Mr. 3.4 RM:

:)

> Is the procedure for dealing with DB revision numbers still the same? As far
> as I remember from the 3.2 development days, the procedure was to patch
> kohastructure.sql (or sysprefs.sql, or whatever), then add the update to the
> end of updatedatabase.pl with a generic version number, like 3.01.00.XXX.
> Patching kohastructure.pl was left to the RM when they applied the patch.

Patching kohaversion.pl you mean?

> I had a crazy table on the wiki for a bit, but this seemed to work better.
>
> Is that still the consensus?

Yup, that is the current practice. If we do implement DBIx::Class::Schema and DBIx::Class::Schema::Versioned, updatedatabase.pl and kohastructure.sql might both go away. But not yet.

Chris

From colin.campbell at ptfs-europe.com Thu Nov 11 10:11:13 2010
From: colin.campbell at ptfs-europe.com (Colin Campbell)
Date: Thu, 11 Nov 2010 09:11:13 +0000
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To:
References:
Message-ID: <4CDBB331.4070507@ptfs-europe.com>

It is worth stressing that there are things which will encourage quick adoption of a patch: good commit messages, and tests that prove it does what it says and doesn't add needlessly to the amount of entropy in the universe.

--
Colin Campbell
Chief Software Engineer,
PTFS Europe Limited
Content Management and Library Solutions
+44 (0) 208 366 1295 (phone) +44 (0) 7759 633626 (mobile)
colin.campbell at ptfs-europe.com
skype: colin_campbell2
http://www.ptfs-europe.com
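A sketch of what that looks like at the command line; the test file and bug number below are only examples:

    # run the tests that cover the area you touched
    prove t/Circulation.t
    # one topic per patch, bug number first in the subject
    git commit -a -m "Bug 5379: Report NOT NULL columns missing from patron import"
    # generate the patch(es) to send for signoff
    git format-patch origin/master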
From colin.campbell at ptfs-europe.com Thu Nov 11 10:30:50 2010
From: colin.campbell at ptfs-europe.com (Colin Campbell)
Date: Thu, 11 Nov 2010 09:30:50 +0000
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA98C01.8080709@biblibre.com>
References: <4CA98C01.8080709@biblibre.com>
Message-ID: <4CDBB7CA.5080404@ptfs-europe.com>

One thing I'd like to see come out of the discussion is some idea of what we want from a search interface. One of the problems of chasing shiny new solutions is that having chased after them you also import a lot of junk you then have to live with (retrospectively, we may have done that with zebra). I'm quite willing to believe that some of the perceived zebra problems are problems not with it per se but with how we handle it.

When developing software it is a good method to write a test first that fails, then write the functionality to pass that test. It means you have a clear target of what you are trying to achieve. We need to have a clear idea of what we want a search package to provide us and what constraints it must meet; otherwise we can't evaluate solutions (or even identify whether the failure to meet them lies in the external software or in areas we can code our way out of). Whatever we decide concerning zebra/Solr etc., I hope we come out with a clearer picture of what we need from them or from any alternative.

Colin

--
Colin Campbell
Chief Software Engineer,
PTFS Europe Limited
Content Management and Library Solutions
+44 (0) 208 366 1295 (phone) +44 (0) 7759 633626 (mobile)
colin.campbell at ptfs-europe.com
skype: colin_campbell2
http://www.ptfs-europe.com

From tajoli at cilea.it Thu Nov 11 11:30:53 2010
From: tajoli at cilea.it (Zeno Tajoli)
Date: Thu, 11 Nov 2010 11:30:53 +0100
Subject: [Koha-devel] about Unimarc and Analytical record implementation
Message-ID: <4CDBC5DD.4080406@cilea.it>

Hi to all,

about this request of Henri-Damien on koha-patches:

> Date: Wed, 10 Nov 2010 22:59:43 +0100
> From: LAURENT Henri-Damien
> Subject: Re: [Koha-patches] [PATCH] Analytical record: improved code
> to create analytical record from an item
> To: koha-patches at lists.koha-community.org
> Message-ID: <4CDB15CF.2080104 at biblibre.com>
[...]
> This is marc21 only.
> Could you be so kind as to at least use some variables ?
> Or create the variable and then populate it ?

I'm the person that will do those changes. I wrote to Savitra and Amit and said that I will make this type of change. So you will see them.

--
Zeno Tajoli
CILEA - Segrate (MI)
tajoliAT_SPAM_no_prendiATcilea.it
(Anti-spam masked address; replace the part between the ATs with @)

From gmcharlt at gmail.com Thu Nov 11 13:50:26 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Thu, 11 Nov 2010 07:50:26 -0500
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To: <4CDBB331.4070507@ptfs-europe.com>
References: <4CDBB331.4070507@ptfs-europe.com>
Message-ID:

Hi,

On Thu, Nov 11, 2010 at 4:11 AM, Colin Campbell wrote:
> It is worth stressing that there are things which will encourage quick
> adoption of a patch: good commit messages, and tests that prove it does
> what it says and doesn't add needlessly to the amount of entropy in the
> universe.
Here are other characteristics of a good patch:

[1] The patch covers a single topic. For example, a patch that fixes a circulation bug and adds a new cataloging feature is problematic; if the circ bugfix works but the new cataloging feature doesn't, the patch can't be accepted as is. Also, such a patch is more difficult to test; somebody who is an expert in Koha's circulation module may not be comfortable signing off on a cataloging enhancement.

[2] The patch is easy to read. In particular, please don't mix up major whitespace correction and functionality. If you want to clean up whitespace, please do so in a separate patch.

[3] The patch does not contain the detritus of your coding process. In other words, the patch shouldn't contain things like unguarded warns left over from your debugging or temporary files.

Regards,
Galen
--
Galen Charlton
gmcharlt at gmail.com

From mjr at software.coop Thu Nov 11 14:09:48 2010
From: mjr at software.coop (MJ Ray)
Date: Thu, 11 Nov 2010 13:09:48 +0000 (GMT)
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA98C01.8080709@biblibre.com>
Message-ID: <20101111130948.A5CB2F7316@nail.towers.org.uk>

LAURENT Henri-Damien wrote:
> involved in the community as previously. Paul promised some POCs, here
> is one available. [...]

Sorry for taking a while to look at this, but it raised so many questions in my mind when I first read it, and I've been a bit busy, so I thought I'd leave it a while and see if some were covered by others. Some were (thanks!) but many are left, so here we go:

What's a POC? Piece Of Code? (I assume it's not the C I'd usually mean in that abbreviation ;-) )

> zebra is fast and embeds a native z3950 server. But it has also some major
> drawbacks we have to cope with in our everyday life, making it quite
> difficult to maintain.
>
> 1. zebra config files are a nightmare.

I've librarians editing zebra config files. They've seen far worse from the awful library management systems of the past.

> You can't drive the
> configuration easily. Namely: you can't edit indexes via HTTP or
> configuration; all is in files hardcoded on disk.

We could fix this by providing an HTTP interface if anyone wanted. This isn't a problem unique to Zebra: some Koha configuration is only in files on disk. Being in a config file is not hardcoded! So, this is solvable if someone wanted it enough. Does anyone want me to take this enhancement forwards?

> [...] And people do not get a translation of the indexes since all
> the indexes are hardcoded in ccl.properties and we do not have a
> translation process so that ccl attributes could be translated into
> different languages.

This sounds like a problem in our translation process. Would the translation manager like to consider it, please?

> 2. no real-time indexing: the use of a crontab is poor: when you
> add an authority while creating a biblio, you have to wait some
> minutes to finish your biblio

This is being considered in bug 5165. It's a problem in how we are using Zebra, really.

> (might be solved since zebra has some way to
> index biblios via z3950 extended services, but it is hard and should be
> tested, and when the community first tested that, a performance problem
> was raised on indexing.)

Does someone have a link to this performance problem, please?

> 3. no way to access/process/delete data easily. If you have indexes
> in it or have some problems with your data, you have to reindex the
> whole stuff and indexing errors are quite difficult to detect.

I'm not entirely sure what is being wanted here. Indexing problems have been a bit nasty on many systems.

> 4. during the index process of a file, if you have a problem in your
> data, zebraidx just fails silently?
I'm not entirely sure what is being wanted here. Indexing problems have been a bit nasty on many systems. > 4. during the index process of a file, if you have a problem in your > data, zebraidx just fails silently... Example? > And this is NOT secure. What security data does zebra leak in this failure case? > And you have > no way to know WHICH biblio made the process crash. [...] It's quite possible, but Koha has made that mistake too. In one recent, less serious example, I found that Koha knew which biblio was at fault, but didn't bother to report the biblio details in the failure error message. However, if it's an actual crash, it can be difficult to generate an error from a crashing C process. Maybe you could dump core and examine it, but bisecting the input data like HDL did is probably quicker. > 5. facets are not working properly: they are built on the results displayed, > because there are problems with diacritics & facets that can't be solved > as of today. And no one can provide a solution (we spoke about that with > Indexdata and no clear solution was really provided). Does anyone have a link to that conversation, please? I'd like to know more about it before we hit it for real. > 6. zebra does not evolve anymore. There is no real community around > it; it's just open-source Indexdata software. We sent many questions > on-list and never got answers. We could pay for better support, but the > fee required is quite a deterrent and the benefit is still questionable. It's disappointing there's no community, but that happens sometimes. I guess we could try and make it part of our community, if it's important enough. It involves some different skills, but not completely inconsistent ones. What fee is being asked for what benefit? > 7. ICU & zebra are colleagues, not really friends: right truncation > not working, fuzzy search not working, and facets. Those are pretty specific claims, directly contradicting http://www.indexdata.com/zebra/doc/querymodel-rpn.html#querymodel-bib1-truncation http://www.indexdata.com/zebra/doc/querymodel-zebra.html#querymodel-zebra-attr-scan and so on. Anyone else like to comment on them? > 8. we use a deprecated way to define indexes for biblios (grs1), and > the tool developed by Indexdata to change to DOM has many flaws. We > could manage and make do with it. But is it worth the effort? I think so. > I think that everyone agrees that we have to refactor C4::Search. > Indeed, the query parser is not able to manage all the > configuration options independently. And the use of USMARC as the internal format for > biblios comes with a serious limitation of 99,999 bytes, which for big biblios with many > items is not enough. Where does that 99,999-byte limit come from? Could some methods from the analytic records RFC give us a route around it? > BibLibre has investigated a catalogue based on Solr. > A university in France contracted us for that development. > This university is in contact with the whole community here in France, and > Solr will certainly be adopted by libraries France-wide. [...] That's disappointing. While it's not a problem for universities, the big problem I see with Solr http://lucene.apache.org/solr/ is that it is Java, which poses many management challenges for smaller libraries and requires very different skills to current Koha deployments. It seems a bit like throwing the baby out with the bath water, at first glance. > Solr indexes data with HTTP. Why is this the top benefit? We're smart enough to write for whatever protocol, so I'm not sure I understand.
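For readers who haven't used it, "indexing data with HTTP" amounts to a plain POST against Solr's update handler, with no config files touched on disk. A minimal sketch follows; the URL and field names are illustrative assumptions, presuming a stock Solr with the standard XML update handler:

#!/usr/bin/perl
# Sketch: add one document to a Solr index over HTTP, then commit.
# The Solr URL and the field names are made up for illustration.
use strict;
use warnings;
use LWP::UserAgent;

my $ua  = LWP::UserAgent->new;
my $doc = <<'XML';
<add>
  <doc>
    <field name="id">bib_12345</field>
    <field name="title">An example title</field>
    <field name="author">Doe, Jane</field>
  </doc>
</add>
XML

# Send the document, then a commit, to the update handler.
for my $body ( $doc, '<commit/>' ) {
    my $res = $ua->post(
        'http://localhost:8983/solr/update',
        Content_Type => 'text/xml',
        Content      => $body,
    );
    die 'Solr update failed: ' . $res->status_line unless $res->is_success;
}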
> It can provide fuzzy search, search on synonyms, suggestions. > It can provide facet search, stemming. In theory, so can Zebra. > UTF-8 support is embedded. Hmmm, we'll see. (I thought Java preferred some other Unicode form.) > The community is really impressively reactive, numerous and efficient. > And the documentation is very good and exhaustive. Well, those are both good things. > a) Librarians can define their own indexes, and there is a plugin that > fetches data from rejected authorities and from authorised_values (that > could/should have been achieved with zebra, but only with major work on > XSLT). I've read that to be so configurable online, Solr must be allowed to create files in its own installation directory, which seems like a security problem (or at least, against Debian policy). Is that true? For example http://sourceforge.net/mailarchive/forum.php?thread_name=ABC31E122EEAC44897B337BDEA897736BE58B9663A%40VUEX2.vuad.villanova.edu&forum_name=vufind-general Can we predefine indexes? But could it be achieved with zebra? > b) C4/Search.pm's line count could be shrunk ten times. > You can test from the poc_solr branch on > git://git.biblibre.com/koha_biblibre.git > but you have to install Solr. Other questions: how does its performance compare? Are there drawbacks to adopting it that counterbalance the benefits of overcoming the above-stated problems with Zebra? Hope that helps move the discussion on, -- MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op. Past Koha Release Manager (2.0), LMS programmer, statistician, webmaster. In My Opinion Only: see http://mjr.towers.org.uk/email.html Available for hire for Koha work http://www.software.coop/products/koha From ian.walls at bywatersolutions.com Thu Nov 11 14:23:34 2010 From: ian.walls at bywatersolutions.com (Ian Walls) Date: Thu, 11 Nov 2010 08:23:34 -0500 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <20101111130948.A5CB2F7316@nail.towers.org.uk> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> Message-ID: > > I think that everyone agrees that we have to refactor C4::Search. > > Indeed, the query parser is not able to manage all the > > configuration options independently. And the use of USMARC as the internal format for > > biblios comes with a serious limitation of 99,999 bytes, which for big biblios with many > > items is not enough. > > Where does that 99,999-byte limit come from? Could some methods from > the analytic records RFC give us a route around it? > > The byte limit is a restriction of the ISO-2709 format. If we use MARCXML, we can avoid that, but as I understand it, Zebra currently just ingests binary MARC. Analytics support could help with this by moving item data off one biblio onto another, but there would be no guarantee of it working universally. Cheers, -Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmcharlt at gmail.com Thu Nov 11 15:10:49 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Thu, 11 Nov 2010 09:10:49 -0500 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> Message-ID: Hi, 2010/11/11 Ian Walls : > The byte limit is a restriction of the ISO-2709 format. If we use MARCXML, > we can avoid that, but as I understand it, Zebra currently just ingests > binary MARC. This is incorrect.
As configured by default for use with Koha, Zebra can ingest either ISO2709 blobs or MARCXML (e.g., by passing the -x option to rebuild_zebra.pl), so the ISO2709 record size limit is not a limitation of Zebra. Regards, Galen -- Galen Charlton gmcharlt at gmail.com From ian.walls at bywatersolutions.com Thu Nov 11 15:19:01 2010 From: ian.walls at bywatersolutions.com (Ian Walls) Date: Thu, 11 Nov 2010 09:19:01 -0500 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> Message-ID: Ah, right, I'd forgotten about that switch, as it doesn't work with authorities, and I can't do rebuild_zebra.pl -a -b -x -z in my crontab. So, the ISO2709 character limit is not actually an issue at all, then. -Ian On Thu, Nov 11, 2010 at 9:10 AM, Galen Charlton wrote: > Hi, > > 2010/11/11 Ian Walls : > > The byte limit is a restriction of the ISO-2709 format. If we use > MARCXML, > > we can avoid that, but as I understand it, Zebra currently just ingests > > binary MARC. > > This is incorrect. As configured by default for use with Koha, Zebra > can ingest either ISO2709 blobs or MARCXML (e.g., by passing the -x > option to rebuild_zebra.pl), so the ISO2709 record size limit is not a > limitation of Zebra. > > Regards, > > Galen > -- > Galen Charlton > gmcharlt at gmail.com > -- Ian Walls Lead Development Specialist ByWater Solutions Phone # (888) 900-8944 http://bywatersolutions.com ian.walls at bywatersolutions.com Twitter: @sekjal -------------- next part -------------- An HTML attachment was scrubbed... URL: From cnighswonger at foundations.edu Thu Nov 11 15:52:50 2010 From: cnighswonger at foundations.edu (Chris Nighswonger) Date: Thu, 11 Nov 2010 09:52:50 -0500 Subject: [Koha-devel] Tests failing due to "none-mandatory" modules not being installed Message-ID: I've noticed recently that several tests fail if one elects not to install the so-called "optional" dependencies of Koha. It seems to me that this is undesirable behavior, but then, I may be missing something. So.... is this desirable, or should tests of code requiring "optional" deps be conditioned to skip parts that would fail due to non-installation of these deps? Kind Regards, Chris From gmcharlt at gmail.com Thu Nov 11 21:16:34 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Thu, 11 Nov 2010 15:16:34 -0500 Subject: [Koha-devel] z3950NormalizeAuthor Message-ID: Hi, Question for the UNIMARC users among us - what is the purpose of this system preference? I can make out what it's doing in terms of copying over selected 7XX values to the UNIMARC author tag, but why is this necessary? Regards, Galen -- Galen Charlton gmcharlt at gmail.com From cnighswonger at foundations.edu Thu Nov 11 21:17:13 2010 From: cnighswonger at foundations.edu (Chris Nighswonger) Date: Thu, 11 Nov 2010 15:17:13 -0500 Subject: [Koha-devel] While we are cleaning house on RFCs and Enhancement Requests Message-ID: I note that there are some 620 open enhancement requests in Bugzilla. 
Here is the search: http://bugs.koha-community.org/bugzilla3/buglist.cgi?query_format=advanced&bug_severity=enhancement&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED It might be good to take some time and see if any of these belong to you and update the status of them, as well as consider adding the appropriate RFC to the wiki using the template available here: http://wiki.koha-community.org/wiki/Category:RFCs#RFC_Template Kind Regards, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From robin at catalyst.net.nz Thu Nov 11 22:58:48 2010 From: robin at catalyst.net.nz (Robin Sheat) Date: Fri, 12 Nov 2010 10:58:48 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <20101111130948.A5CB2F7316@nail.towers.org.uk> References: <20101111130948.A5CB2F7316@nail.towers.org.uk> Message-ID: <1289512728.11078.78.camel@zarathud> MJ Ray wrote on Thu 11-11-2010 at 13:09 [+0000]: > > 4. during the index process of a file, if you have a problem in your > > data, zebraidx just fails silently... > Example? When Zebra has a record that it chokes on, it will sometimes segfault. This is a problem in itself; a second part of that problem (and this is one we can fix) is that rebuild_zebra doesn't notice this. It should really start screaming about it. When zebra fails (segfaulting, or just not liking some data) it does things like refuse to process anything else from that point on, and it can be quite a time-consuming process to track down exactly what record it is that's causing the issue. > > And this is NOT secure. > What security data does zebra leak in this failure case? There's more than one definition of the word security. It's not secure in the same sense that a wheel on a car might not be secure. When it comes loose and goes flying, things don't work so well. (Although in the case of Zebra, you may not notice for a while.) -- Robin Sheat Catalyst IT Ltd. +64 4 803 2204 GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From cfouts at liblime.com Thu Nov 11 23:46:54 2010 From: cfouts at liblime.com (Clay Fouts) Date: Thu, 11 Nov 2010 14:46:54 -0800 Subject: [Koha-devel] XML processing Message-ID: Hi, all. Can anyone point me to documentation about why Expat is considered incompatible with Koha (and I'm assuming MARC::File::XML). I know there are dire warnings against such, but what are the test cases where it fails? Expat is considerably faster than the LibXML implementation, and it would be a shame if we were continuing to avoid using it without due cause. I've been unable to detect issues with it in my anecdotal test cases thus far. Thank you, Clay -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at bigballofwax.co.nz Fri Nov 12 00:03:43 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Fri, 12 Nov 2010 12:03:43 +1300 Subject: [Koha-devel] XML processing In-Reply-To: References: Message-ID: Here we go http://www.nntp.perl.org/group/perl.perl4lib/2006/05/msg2369.html You might even be able to find those files somewhere on your servers :-) Chris 2010/11/12 Clay Fouts : > Hi, all. > Can anyone point me to documentation about why Expat is considered > incompatible with Koha (and I'm assuming MARC::File::XML).
I know there are > dire warnings against such, but what are the test cases where it fails? > Expat is considerably faster than the LibXML implementation, and it would be > a shame if we were continuing to avoid using it without due cause. I've been > unable to detect issues with it in my anecdotal test cases thus far. > Thank you, > Clay > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > From cfouts at liblime.com Fri Nov 12 00:14:51 2010 From: cfouts at liblime.com (Clay Fouts) Date: Thu, 11 Nov 2010 15:14:51 -0800 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CA98C01.8080709@biblibre.com> References: <4CA98C01.8080709@biblibre.com> Message-ID: I am less than thrilled with many aspects of Zebra. It would suit my purposes better if it had more graceful failure modes, parallelization, simpler configuration, etc. However, I am extremely wary of any replacement which involves Java. For what it does, zebrasrv runs very lean in terms of RAM and CPU utilization. For a single install the extra overhead of using a Java-based search engine may not be too expensive to compensate for. With mass-hosting that is not going to be the case, though, as all that overhead quickly adds up, and that's the use case I'm most concerned with. A possible workaround for this problem would be if multiple Koha instances' records could be stored and queried discretely from within a single Solr process. Do you know how difficult that would be to implement? I am in favor of modularizing record search capabilities in any case. Clay On Mon, Oct 4, 2010 at 1:10 AM, LAURENT Henri-Damien < henridamien.laurent at biblibre.com> wrote: > Hi > As you already read in Paul's previous message about > "BibLibre strategy for 3.4 and next version", we are growing and want to be > involved in the community as before. Paul promised some POCs; here > is one available. We also worked on Plack and support. We created a base > of scripts to search for memory leaks. We'll demonstrate that later. > > > zebra is fast and embeds a native z3950 server. But it also has some major > drawbacks we have to cope with in our everyday life, making it quite > difficult to maintain. > > 1. zebra config files are a nightmare. You can't drive the > configuration file easily. Namely: you can't edit indexes via HTTP or > configuration; all is in files hardcoded on disk. => you can't list > indexes, you can't change indexes, you can't edit indexes, you can't say > I want this index at the OPAC and that one in the intranet. (It could be done by > scraping ccl.properties, and then record.abs and bib1.att.... but what a > HELL.) So you cannot easily customize the configuration to define the indexes you > want. And people do not get a translation of the indexes, since all > the indexes are hardcoded in ccl.properties and we do not have a > translation process so that CCL attributes could be translated into > different languages. > > 2. no real-time indexing: the use of a crontab is poor: when you > add an authority while creating a biblio, you have to wait some > minutes to finish your biblio (might be solved since zebra has some way to > index biblios via z3950 extended services, but it is hard and should be tested, > and when the community first tested that, a performance problem was > raised on indexing.)
> > 3. no way to access/process/delete data easily. If you have indexes > in it or have some problems with your data, you have to reindex > everything, and indexing errors are quite difficult to detect. > > 4. during the index process of a file, if you have a problem in your > data, zebraidx just fails silently... And this is NOT secure. And you have > no way to know WHICH biblio made the process crash. We had a LOT of > trouble with the Aix-Marseille universities, which have some > Arabic transliterated biblios that make zebra/ICU crash completely! We > had to use a recursive script to find the 14 biblios out of 730,000 that make > zebra crash (even though they are properly stored & displayed). > > 5. facets are not working properly: they are built on the results displayed, > because there are problems with diacritics & facets that can't be solved > as of today. And no one can provide a solution (we spoke about that with > Indexdata and no clear solution was really provided). > > 6. zebra does not evolve anymore. There is no real community around > it; it's just open-source Indexdata software. We sent many questions > on-list and never got answers. We could pay for better support, but the > fee required is quite a deterrent and the benefit is still questionable. > > 7. ICU & zebra are colleagues, not really friends: right truncation > not working, fuzzy search not working, and facets. > > 8. we use a deprecated way to define indexes for biblios (grs1), and > the tool developed by Indexdata to change to DOM has many flaws. We > could manage and make do with it. But is it worth the effort? > > I think that everyone agrees that we have to refactor C4::Search. > Indeed, the query parser is not able to manage all the > configuration options independently. And the use of USMARC as the internal format for > biblios comes with a serious limitation of 99,999 bytes, which for big biblios with many > items is not enough. > > BibLibre has investigated a catalogue based on Solr. > A university in France contracted us for that development. > This university is in contact with the whole community here in France, and > Solr will certainly be adopted by libraries France-wide. > We are planning to release the code on our git early spring next year > and rebase on whatever Koha version will be released at that time, 3.4 or > 3.6. > > > Why? > > Solr indexes data with HTTP. > It can provide fuzzy search, search on synonyms, suggestions. > It can provide facet search, stemming. > UTF-8 support is embedded. > The community is really impressively reactive, numerous and efficient. > And the documentation is very good and exhaustive. > > You can see the results on solr.biblibre.com and > catalogue.solr.biblibre.com > > http://catalogue.solr.biblibre.com/cgi-bin/koha/opac-search.pl?q=jean > http://solr.biblibre.com/cgi-bin/koha/admin/admin-home.pl > you can log in there with the demo/demo login/password > > http://solr.biblibre.com/cgi-bin/koha/solr/indexes.pl > is the page where people can manage their indexes and links. > > a) Librarians can define their own indexes, and there is a plugin that > fetches data from rejected authorities and from authorised_values (that > could/should have been achieved with zebra, but only with major work on > XSLT). > > b) C4/Search.pm's line count could be shrunk ten times. > You can test from the poc_solr branch on > git://git.biblibre.com/koha_biblibre.git > but you have to install Solr. > > Any feedback/idea welcome.
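Point 4 above does have one small, backend-neutral mitigation available today: whatever wrapper drives zebraidx can at least check how the child process exited instead of carrying on silently. A sketch follows; the invocation and paths are illustrative, not Koha's actual rebuild script:

#!/usr/bin/perl
# Sketch: run zebraidx and refuse to fail silently. If the child
# segfaults or exits non-zero, say so loudly instead of marking the
# batch as indexed. Config path and input directory are made up.
use strict;
use warnings;

my @cmd = ( 'zebraidx', '-c', '/etc/koha/zebradb/zebra-biblios.cfg',
            'update', '/tmp/biblios' );

system(@cmd);

if ( $? == -1 ) {
    die "zebraidx failed to execute: $!\n";
}
elsif ( $? & 127 ) {
    # The indexer died on a signal (SIGSEGV is 11).
    die sprintf "zebraidx died with signal %d -- records NOT indexed\n",
        $? & 127;
}
elsif ( $? >> 8 ) {
    die sprintf "zebraidx exited with status %d\n", $? >> 8;
}
print "zebraidx completed cleanly\n";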
> -- > Henri-Damien LAURENT > BibLibre > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From cfouts at liblime.com Fri Nov 12 01:05:09 2010 From: cfouts at liblime.com (Clay Fouts) Date: Thu, 11 Nov 2010 16:05:09 -0800 Subject: [Koha-devel] XML processing In-Reply-To: References: Message-ID: Here's a script that tests for Josh's described bug in Expat: http://treebeard.liblime.com/ctf/expat-marc-test.pl It passes my testing, but his description of the failure case is only vaguely described as "sometimes causes the entire record to be destroyed." Anyone come up with a failure case? Clay On Thu, Nov 11, 2010 at 3:03 PM, Chris Cormack wrote: > Here we go > > http://www.nntp.perl.org/group/perl.perl4lib/2006/05/msg2369.html > > You might even be able to find those files somewhere on your servers :-) > > Chris > > 2010/11/12 Clay Fouts : > > Hi, all. > > Can anyone point me to documentation about why Expat is considered > > incompatible with Koha (and I'm assuming MARC::File::XML). I know there > are > > dire warnings against such, but what are the test cases where it fails? > > Expat is considerably faster than the LibXML implementation, and it would > be > > a shame if we were continuing to avoid using it without due cause. I've > been > > unable to detect issues with it in my anecdotal test cases thus far. > > Thank you, > > Clay > > > > _______________________________________________ > > Koha-devel mailing list > > Koha-devel at lists.koha-community.org > > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > > website : http://www.koha-community.org/ > > git : http://git.koha-community.org/ > > bugs : http://bugs.koha-community.org/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From krichel at openlib.org Fri Nov 12 01:17:15 2010 From: krichel at openlib.org (Thomas Krichel) Date: Fri, 12 Nov 2010 01:17:15 +0100 Subject: [Koha-devel] XML processing In-Reply-To: References: Message-ID: <20101112001715.GA31246@openlib.org> Clay Fouts writes > Expat is considerably faster than the LibXML implementation, What is your source for this information? A quick Google search seems to suggest that Expat is a bit faster, but not much. Cheers, Thomas Krichel http://openlib.org/home/krichel http://authorclaim.org/profile/pkr1 skype: thomaskrichel From cfouts at liblime.com Fri Nov 12 01:29:43 2010 From: cfouts at liblime.com (Clay Fouts) Date: Thu, 11 Nov 2010 16:29:43 -0800 Subject: [Koha-devel] XML processing In-Reply-To: <20101112001715.GA31246@openlib.org> References: <20101112001715.GA31246@openlib.org> Message-ID: I am gleaning that from personal testing. That test script is very system-dependent, unfortunately, so not meaningfully distributed. But to clarify a bit, I tested with the ExpatXS module and using the M::R::new_from_xml() method. Creating the new object from an XML record with 1,000 embedded 952 tags took half the amount of time in ExpatXS as compared to LibXML::Parser. Clay On Thu, Nov 11, 2010 at 4:17 PM, Thomas Krichel wrote: > Clay Fouts writes > > > Expat is considerably faster than the LibXML implementation, > > What is your source for this information? A quick Google > search seems to suggest that Expat is a bit faster, but > not much. 
> > Cheers, > > Thomas Krichel http://openlib.org/home/krichel > http://authorclaim.org/profile/pkr1 > skype: thomaskrichel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frederic at tamil.fr Fri Nov 12 07:53:24 2010 From: frederic at tamil.fr (=?ISO-8859-1?Q?Fr=E9d=E9ric_Demians?=) Date: Fri, 12 Nov 2010 07:53:24 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> Message-ID: <4CDCE464.4030509@tamil.fr> > A possible workaround for this problem would be if multiple Koha > instances' records could be stored and queried discretely from within > a single Solr process. Do you know how difficult that would be to > implement? It's possible and easy to do with Solr since version 1.3, using a 'multicore' setup. On the other hand, if you have a farm of Koha instances using a Solr backend, you can scale up, if necessary, by deploying multiple Solr servers. So you could have, for example, 1,000 Koha instances running on 10 hosts and using one multicore Solr deployed on 3 servers. -- Frédéric From frederic at tamil.fr Fri Nov 12 07:55:30 2010 From: frederic at tamil.fr (=?ISO-8859-1?Q?Fr=E9d=E9ric_Demians?=) Date: Fri, 12 Nov 2010 07:55:30 +0100 Subject: [Koha-devel] XML processing In-Reply-To: References: Message-ID: <4CDCE4E2.3040107@tamil.fr> > Here we go > > http://www.nntp.perl.org/group/perl.perl4lib/2006/05/msg2369.html From this email, as I understand it, this seems to be the reason why Koha, in search results, deserializes MARC records from their ISO2709 representation rather than their MARCXML. If we were able to use MARCXML, the 99,999-byte limitation on MARC record size could be exceeded. And we would have one less reason to move to Solr--notwithstanding the other reasons to move. -- Frédéric From colin.campbell at ptfs-europe.com Fri Nov 12 10:59:56 2010 From: colin.campbell at ptfs-europe.com (Colin Campbell) Date: Fri, 12 Nov 2010 09:59:56 +0000 Subject: [Koha-devel] Perl resource Message-ID: <4CDD101C.8050900@ptfs-europe.com> This may be of interest to some: http://www.onyxneon.com/books/modern_perl/index.html This new book is available as a free PDF. Cheers Colin -- Colin Campbell Chief Software Engineer, PTFS Europe Limited Content Management and Library Solutions +44 (0) 208 366 1295 (phone) +44 (0) 7759 633626 (mobile) colin.campbell at ptfs-europe.com skype: colin_campbell2 http://www.ptfs-europe.com From cfouts at liblime.com Fri Nov 12 15:41:37 2010 From: cfouts at liblime.com (Clay Fouts) Date: Fri, 12 Nov 2010 06:41:37 -0800 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CDCE464.4030509@tamil.fr> References: <4CA98C01.8080709@biblibre.com> <4CDCE464.4030509@tamil.fr> Message-ID: That is very encouraging, then. Thank you for clarifying.
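To illustrate the multicore point: each Koha instance addresses its own named core by URL, so one tenant's records never mix with another's. In the sketch below, the host, core names, and field are made-up assumptions; it presumes Solr's standard select handler.

#!/usr/bin/perl
# Sketch: per-instance Solr cores on one shared server. Each query
# goes to a core-specific URL, keeping result sets discrete.
use strict;
use warnings;
use LWP::UserAgent;
use URI;

my %core_for = (
    library_a => 'http://solr.example.org:8983/solr/library_a',
    library_b => 'http://solr.example.org:8983/solr/library_b',
);

sub search_instance {
    my ( $instance, $query ) = @_;
    my $uri = URI->new( $core_for{$instance} . '/select' );
    $uri->query_form( q => $query, wt => 'json' );
    my $res = LWP::UserAgent->new->get($uri);
    die 'Solr query failed: ' . $res->status_line unless $res->is_success;
    return $res->decoded_content;    # JSON results for that core only
}

print search_instance( 'library_a', 'title:dickens' );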
Clay 2010/11/11 Frédéric Demians > > A possible workaround for this problem would be if multiple Koha > > instances' records could be stored and queried discretely from within > > a single Solr process. Do you know how difficult that would be to > > implement? > > It's possible and easy to do with Solr since version 1.3, using a > 'multicore' setup. On the other hand, if you have a farm of Koha > instances using a Solr backend, you can scale up, if necessary, by > deploying multiple Solr servers. So you could have, for example, 1,000 > Koha instances running on 10 hosts and using one multicore Solr deployed > on 3 servers. > -- > Frédéric > -------------- next part -------------- An HTML attachment was scrubbed... URL: From henridamien.laurent at gmail.com Fri Nov 12 15:43:06 2010 From: henridamien.laurent at gmail.com (Henri-Damien LAURENT) Date: Fri, 12 Nov 2010 15:43:06 +0100 Subject: [Koha-devel] RE : Re: XML processing Message-ID: Another reason is that deserializing from XML rather than ISO2709 is WAY slower and processor-intensive. It would not be a problem if every setup would CORRECTLY and SENSIBLY use XSLT. But since XSLT.pm is what it is, i.e. taking a MARC record, editing it, and transforming it to XML before processing the XSLT, this process would only be slower if we used XML. Moreover, the main reason why records are bigger than 99,999 bytes is items. It is proven that it would really be HEALTHY to remove them. So the problem would not exist any longer. My 2 cents. -- Henri-Damien Laurent On 12 Nov 2010, 7:55 AM, "Frédéric Demians" wrote: > Here we go > > http://www.nntp.perl.org/group/perl.perl4lib/2006/05/msg2369.html From cfouts at liblime.com Fri Nov 12 16:02:22 2010 From: cfouts at liblime.com (Clay Fouts) Date: Fri, 12 Nov 2010 07:02:22 -0800 Subject: [Koha-devel] RE : Re: XML processing In-Reply-To: References: Message-ID: Both LEK and PTFS Koha (>=1.2) apply this tactic of using MARCXML (with dynamic item data insertion) as the authoritative source to enable bibs with large numbers of items. The tradeoff is indeed that it is slower than parsing the binary MARC data, but there are optimizations available to compensate. The helpful part is that the extra processing time only becomes noticeable to the user when they come across the large records that Koha could not have previously handled because of the 99,999-byte constraint. Clay 2010/11/12 Henri-Damien LAURENT > Another reason is that deserializing from XML rather than ISO2709 is WAY > slower and processor-intensive. It would not be a problem if every setup would > CORRECTLY and SENSIBLY use XSLT. But since XSLT.pm is what it is, i.e. taking a > MARC record, editing it, and transforming it to XML before processing the XSLT, this process > would only be slower if we used XML. > Moreover, the main reason why records are bigger than 99,999 bytes is items. > It is proven that it would really be HEALTHY to remove them. So > the problem would not exist any longer. > My 2 cents. > -- > Henri-Damien Laurent > > On 12 Nov 2010, 7:55 AM, "Frédéric Demians" wrote: > > > > Here we go > > > http://www.nntp.perl.org/group/perl.perl4lib/2006/05/msg2369.html > From this email, as I understand it, this seems to be the reason why > Koha, in search results, deserializes MARC records from their ISO2709 > representation rather than their MARCXML. If we were able to use MARCXML, > the 99,999-byte limitation on MARC record size could be exceeded. And we would > have one less reason to move to Solr--notwithstanding the other reasons to > move.
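The ceiling under discussion comes from ISO 2709 itself: the record length lives in leader positions 0-4 as five ASCII digits, so 99,999 bytes is a hard maximum, and a bib carrying thousands of embedded item fields walks into it quickly. A sketch of the effect, using MARC::Record; the field and subfield choices are only illustrative of Koha-style embedded items:

#!/usr/bin/perl
# Sketch: show how embedded item fields inflate an ISO 2709 record
# toward the 5-digit record-length ceiling in the leader.
use strict;
use warnings;
use MARC::Record;
use MARC::Field;

my $record = MARC::Record->new();
$record->append_fields(
    MARC::Field->new( '245', '0', '0', a => 'A long-running serial' ) );

# Pile on item fields the way a big serial holding does.
for my $i ( 1 .. 2000 ) {
    $record->append_fields(
        MARC::Field->new( '952', ' ', ' ', o => "CALL $i", p => "BC$i" ) );
}

my $iso = $record->as_usmarc();    # ISO 2709 serialization
printf "serialized length: %d bytes\n", length $iso;
printf "leader says:       %s\n",       substr( $iso, 0, 5 );
# Past 99,999 bytes the leader's length field can no longer represent
# the record, which is the breakage the thread describes; MARCXML has
# no such field and carries any size.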
> -- > Frédéric > > _______________________________________________ Koha-devel mailing list > Koha-devel at lists.koha-comm... > > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lculber at mdah.state.ms.us Fri Nov 12 16:07:37 2010 From: lculber at mdah.state.ms.us (Linda Culberson) Date: Fri, 12 Nov 2010 09:07:37 -0600 Subject: [Koha-devel] Koha: XML processing In-Reply-To: References: Message-ID: <4CDD5839.80104@mdah.state.ms.us> I'd like to add my 2 cents to the reasons for removing items from the bibliographic record. We have many records with thousands of items. This is often the case with long-running serial publications in academic institutions. In the case of archival institutions like ours, we are talking about decades if not centuries of a record series. To split them into groups just because they don't fit within the 99,999-byte limit is doable, but not logical, in my own (limited and aging) mind. Thanks. -- Linda Culberson lculber at mdah.state.ms.us Archives and Records Services Division Ms. Dept. of Archives & History P. O. Box 571 Jackson, MS 39205-0571 Telephone: 601/576-6873 Facsimile: 601/576-6824 On 11/12/2010 8:43 AM, Henri-Damien LAURENT wrote: > > Another reason is that deserializing from XML rather than ISO2709 is WAY > slower and processor-intensive. It would not be a problem if every setup > would CORRECTLY and SENSIBLY use XSLT. But since XSLT.pm is what it is, > i.e. taking a MARC record, editing it, and transforming it to XML before processing > the XSLT, this process would only be slower if we used XML. > Moreover, the main reason why records are bigger than 99,999 bytes is > items. It is proven that it would really be HEALTHY to > remove them. So the problem would not exist any longer. > My 2 cents. > -- > Henri-Damien Laurent > >> On 12 Nov 2010, 7:55 AM, "Frédéric Demians" > > wrote: >> >> > Here we go > > >> http://www.nntp.perl.org/group/perl.perl4lib/2006/05/msg2369.html >> >From this email, as I understand it, this seems to be the >> reason why Koha, in search results, deserializes MARC records from their >> ISO2709 representation rather than their MARCXML. If we were able to >> use MARCXML, the 99,999-byte limitation on MARC record size could be >> exceeded. And we would have one less reason to move to >> Solr--notwithstanding the other reasons to move. >> -- >> Frédéric >> >> _______________________________________________ Koha-devel mailing >> list Koha-devel at lists.koha-comm... >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mac.xercode at gmail.com Fri Nov 12 18:06:30 2010 From: mac.xercode at gmail.com (=?ISO-8859-1?Q?Miguel_Angel_Calvo_L=E1zaro?=) Date: Fri, 12 Nov 2010 18:06:30 +0100 Subject: [Koha-devel] Time for reflexion Message-ID: We want to take this opportunity to congratulate BibLibre for initiating the search engine change, with which we identify and on which we would like to collaborate, given the important problem with facets. We believe that it is time to examine some aspects of the project, with no intention of offending any member of the community in doing so.
In Xercode, despite being a new company, some of our members have been working on the project since 2007, when we first came into contact with Perl. All of our present developers have been working on PHP and Java projects. Since then, we have been conscious that the structure of Koha is unlike any other major project we have worked on previously. In our opinion, it has become necessary to upgrade the technologies employed in the project, making Koha more competitive with similar software. We believe that it is urgent to use a framework like Dancer, to use Solr as an index engine (our main problem with clients), to increase performance, ... We believe, however, that this will not completely resolve our present situation, because the project structure is not sufficiently defined. A good example is the proposed search engine change, a current point of conflict. We have traditionally followed an Object Oriented Methodology (working with interfaces and abstract classes). By doing this we would achieve an abstraction of the search engine, and the result of calling a method, for example "engineSearch", would be the same regardless of whether Solr or Zebra is used, and the majority of problems that we are currently facing in the community (principally the integration of the diverse developments by each member) could be easily resolved. Applying this philosophy to .pl files would make them more concise and more easily understood. Drupal and OTRS (Perl) are good examples, composed of a kernel and specific modules. Considering all of this, this is a perfect moment for reflection, given that these points may affect the scalability and evolution of Koha. Regards, Miguel Angel Calvo Lázaro Dirección de Soluciones miguel.calvo at xercode.es Telf. 653 038 238 Rosalía de Castro, 53 4º C 15895 Milladoiro - Ames (A Coruña) Telf. 881 975 576 www.xercode.es info at xercode.es The information contained in this message and any attached documents is private and confidential and is intended solely for its addressee. If you are not the original addressee of this message, please delete it. Distribution or copying of this message is not authorized. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 5396 bytes Desc: not available URL: From marc.chantreux at biblibre.com Fri Nov 12 19:19:42 2010 From: marc.chantreux at biblibre.com (Marc Chantreux) Date: Fri, 12 Nov 2010 19:19:42 +0100 Subject: [Koha-devel] Time for reflexion In-Reply-To: References: Message-ID: <20101112181942.GC8115@auckland.lan> hello, On Fri, Nov 12, 2010 at 06:06:30PM +0100, Miguel Angel Calvo Lázaro wrote: > We believe that it is urgent to use a framework like Dancer, to use Solr as an index > engine (our main problem with clients), to increase performance, ... Dancer would really help to drop a lot of code from Koha. Plus I really would like to see Koha based on external libs.
Some of them are: http://search.cpan.org/~flora/MooseX-Declare-0.34/lib/MooseX/Declare.pm http://search.cpan.org/~frew/DBIx-Class-0.08124/lib/DBIx/Class.pm http://git.tamil.fr/?p=marc-moose;a=summary http://search.cpan.org/~sartak/Template-Declare-0.43/lib/Template/Declare.pm https://github.com/eiro/MARC-Template regards -- Marc Chantreux BibLibre, expert en logiciels libres pour l'info-doc http://biblibre.com From cfouts at liblime.com Fri Nov 12 19:59:41 2010 From: cfouts at liblime.com (Clay Fouts) Date: Fri, 12 Nov 2010 10:59:41 -0800 Subject: [Koha-devel] Time for reflexion In-Reply-To: References: Message-ID: Hello, Miguel. You state well the necessity of adopting these sorts of strategies to promote the long-term viability of Koha. Without architectural clarity, the ability to add and refine features is growing increasingly difficult without stepping on other people's work and introducing action-at-a-distance bugs. Working toward a solution to this dilemma is exactly the purpose of having a technical committee^H^H^H^H^H^H^H meetings to hash these issues out and try to develop a consensus on which tools and patterns would best suit Koha, then draw a road map describing incremental steps developers can take in order to get from point A to point B. I have a few high-level ideas toward this end: * start separating out the monolithic C4 modules into Model and Controller modules, cleaning up circular dependencies as needed. * move most of the contents of .pl files into the View modules. * switch to a more flexible template system, like TT. * split out C4::Context into "user" context for authorization, "schema" context for data sources, "environment" context for CGI vs. CLI vs. PSGI * centralize database access calls, either through an ORM or through a customized layer on top of DBI. I think the specific tools applied are less critical than the underlying principles, which any number of those tools could facilitate. Cheers, Clay 2010/11/12 Miguel Angel Calvo Lázaro > We want to take this opportunity to congratulate BibLibre for initiating > the search engine change, with which we identify and on which we would > like to collaborate, given the important problem with facets. > > We believe that it is time to examine some aspects of the project, with no > intention of offending any member of the community in doing so. > > In Xercode, despite being a new company, some of our members have been > working on the project since 2007, when we first came into contact with > Perl. All of our present developers have been working on PHP and Java > projects. Since then, we have been conscious that the structure of Koha is > unlike any other major project we have worked on previously. > > In our opinion, it has become necessary to upgrade the technologies employed in > the project, making Koha more competitive with similar software. > > We believe that it is urgent to use a framework like Dancer, to use Solr as an index > engine (our main problem with clients), to increase performance, ... We > believe, however, that this will not completely resolve our present > situation, because the project structure is not sufficiently > defined. > > A good example is the proposed search engine change, a current point of > conflict. We have traditionally followed an Object Oriented Methodology > (working with interfaces and abstract classes).
By doing this we would > achieve an abstraction of the search engine, and the result of calling a > method, for example "engineSearch", would be the same regardless of whether > Solr or Zebra is used, and the majority of problems that we are currently facing > in the community (principally the integration of the diverse developments by > each member) could be easily resolved. > > Applying this philosophy to .pl files would make them more concise and more > easily understood. Drupal and OTRS (Perl) are good examples, composed of a > kernel and specific modules. > > Considering all of this, this is a perfect moment for reflection, given that > these points may affect the scalability and evolution of Koha. > > Regards, > > > Miguel Angel Calvo Lázaro > > Dirección de Soluciones > miguel.calvo at xercode.es > Telf. 653 038 238 > > Rosalía de Castro, 53 4º C > 15895 Milladoiro - Ames (A Coruña) > Telf. 881 975 576 > www.xercode.es > info at xercode.es > > > The information contained in this message and any attached documents > is private and confidential and is intended solely for its > addressee. If you are not the original addressee of this > message, please delete it. Distribution or copying of this message is not > authorized. > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 5396 bytes Desc: not available URL: From nengard at gmail.com Sat Nov 13 14:23:24 2010 From: nengard at gmail.com (Nicole Engard) Date: Sat, 13 Nov 2010 07:23:24 -0600 Subject: [Koha-devel] November Newsletter In-Reply-To: References: Message-ID: Final call: if you wrote about KohaCon anywhere, send me the link (all languages welcome); otherwise it's gonna be the Nicole & Ian newsletter (with a post thrown in from Chris) :) Nicole On Fri, Oct 29, 2010 at 4:13 AM, Nicole Engard wrote: > As many of you saw I covered the conference in blog posts and Ian is > covering the hackfest in blog posts. Did anyone else blog about the > conference, or do you plan to? If so make sure you send me those links > so that I can put them into our conference edition of the newsletter. > > Also remember to put your pics online and tag them kohacon10 and put > your slides in the Slideshare event: > http://www.slideshare.net/event/kohacon10 > > All these links will be put into the newsletter in 15 days or so. > > Nicole > > On Thu, Oct 21, 2010 at 1:21 PM, Nicole Engard wrote: >> Hello all, >> >> I'm thinking for the next Koha Newsletter we'll do a conference >> sum-up. So between the start of the conference and the 12th of >> November please send me links to posts you may have written about >> conference sessions, links to pictures from KohaCon, and anything else >> conference related. >> >> Thanks >> Nicole C.
Engard >> > From ian.walls at bywatersolutions.com Sun Nov 14 01:05:17 2010 From: ian.walls at bywatersolutions.com (Ian Walls) Date: Sat, 13 Nov 2010 19:05:17 -0500 Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection In-Reply-To: References: <4CDBB331.4070507@ptfs-europe.com> Message-ID: I have compiled these guidelines into a wiki page: http://wiki.koha-community.org/wiki/Guidelines_for_Patch_Acceptance/Rejection. If others would like to weigh in, please feel free to update the page in the wiki. Cheers, -Ian On Thu, Nov 11, 2010 at 7:50 AM, Galen Charlton wrote: > Hi, > > On Thu, Nov 11, 2010 at 4:11 AM, Colin Campbell > wrote: > > It is worth stressing that there are things which will encourage quick > > adoption of a patch. > > Good commit messages, tests that prove it does what it says and doesn't > > add needlessly to the amount of entropy in the universe. > > Here are other characteristics of a good patch: > > [1] The patch covers a single topic. > > For example, a patch that fixes a circulation bug and adds a new > cataloging feature is problematic; if the circ bugfix works but the > new cataloging feature doesn't, the patch can't be accepted as is. > Also, such a patch is more difficult to test; somebody who is an > expert in Koha's circulation module may not be comfortable signing off > on a cataloging enhancement. > > [2] The patch is easy to read. > > In particular, please don't mix up major whitespace correction and > functionality. If you want to clean up whitespace, please do so in a > separate patch. > > [3] The patch does not contain the detritus of your coding process. > > In other words, the patch shouldn't contain things like unguarded > warns left over from your debugging, or temporary files. > > Regards, > > Galen > -- > Galen Charlton > gmcharlt at gmail.com > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > -- Ian Walls Lead Development Specialist ByWater Solutions Phone # (888) 900-8944 http://bywatersolutions.com ian.walls at bywatersolutions.com Twitter: @sekjal -------------- next part -------------- An HTML attachment was scrubbed... URL: From henridamien.laurent at biblibre.com Sun Nov 14 17:48:51 2010 From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 17:48:51 +0100 Subject: [Koha-devel] externalising translator and etc In-Reply-To: References: <4CDAC42C.3090103@biblibre.com> Message-ID: <4CE012F3.1050006@biblibre.com> On 10/11/2010 19:10, Galen Charlton wrote: > Hi, > > On Wed, Nov 10, 2010 at 11:11 AM, LAURENT Henri-Damien > wrote: >> Hi >> I propose to get etc and translator as external git repositories. >> Why? >> >> It would allow us to have PO files in their own repository and limit the >> weight of Koha git archives. >> For etc, it would allow us to change the installer in order to keep track >> of local changes for each person and therefore be able to synchronize >> with the community version properly. > > -1 for moving etc to a separate Git project. The default > configuration files are an integral part of Koha. For variant > configurations such as a Plack or nginx config, it would be clearer to > have relevant files in subdirectories or identified by file name > instead of putting them in branches.
That way, somebody installing > Koha for the first time would see (say) both an etc/koha-apache.conf > and an etc/koha-nginx.conf; i.e., all of the options would be directly > in front of them. Furthermore, often changes made to files in etc are > dependent on changes made to code or templates; separating etc and the > rest of Koha into submodules would make it harder to prepare patches > and manage topic branches. My idea is that tracking the addition of zebra indexes and all that stuff would be considerably eased if there were an initial etc repository that the installer clones into /home/koha/kohadev, then makes the updates on strings and commits. All the indexes needed to fit a customized library could be tracked. We create a git repository for all the installations, but when you create the repository from the processed files, it loses synchronization with all the common indexes which could be required for Koha when one adds some other minor feature or fixes a bug in the indexer. Another reason is that etc does not vary much. But when it does vary, users should be aware on upgrade that they might lose their custom indexes. I wanted to make the next upgrades smoother for libraries. I am with you when you propose to use different directories. This would lead to:

etc
|- koha-conf.xml or koha-conf.yml (i.e. ONLY Koha common configuration: database access and so on; no more zebra stuff in there)
|- authentication
|  |- LDAP
|  |- CAS
|- webserver
|  |- apache2
|  |- nginx
|- searchengine
   |- zebradb
   |- solr
   |- pazpar2

Are you with me? But then, when one chooses one type of webserver, one type of authentication, and one type of search engine, he would use only a few of all the installation files (which could become quite a forest). I wanted the structure for etc simpler, so that sysadmins would not be overwhelmed by the big picture. > 0 for moving the PO files to a separate Git project. The size of the > repository doesn't really strike me as a big deal; the Git protocol is > pretty efficient. That said, while I don't see a great deal of > benefit to splitting the translations off into a separate repository, > I don't see much harm either. Well, it is quite striking when you get 249 MB to download when doing a git clone. :) (mainly because any change you commit on PO files stores a new instance of the file). ADSL copes well with that... but there are still some places in the world which do not have access to high bandwidth. Regards. -- Henri-Damien LAURENT From henridamien.laurent at biblibre.com Sun Nov 14 18:25:49 2010 From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 18:25:49 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> Message-ID: <4CE01B9D.2090008@biblibre.com> On 11/11/2010 15:19, Ian Walls wrote: > Ah, right, I'd forgotten about that switch, as it doesn't work with > authorities, and I can't do rebuild_zebra.pl > -a -b -x -z in my crontab. > > So, the ISO2709 character limit is not actually an issue at all, then. > Well, actually, it is in Koha, since Koha gets ISO2709 from a zebra search. If Koha took MARCXML, it would be even slower to get search results with Search.pm as it stands now... and let me know if you know a librarian who would like that.
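The shape of that retrieval choice, in ZOOM terms: Koha asks Zebra for ISO2709 ('usmarc') records today, while asking for 'xml' would sidestep the length limit at a parsing cost. A sketch follows; the host, port, database name, and query are illustrative values, not Koha's actual configuration:

#!/usr/bin/perl
# Sketch: fetch the same hit from Zebra in two record syntaxes and
# compare the payload sizes. Connection details are made up.
use strict;
use warnings;
use ZOOM;

my $conn = ZOOM::Connection->new('localhost:9998/biblios');

for my $syntax (qw( usmarc xml )) {
    $conn->option( preferredRecordSyntax => $syntax );
    my $rs = $conn->search_pqf('@attr 1=4 dickens');
    next unless $rs->size;
    my $raw = $rs->record(0)->raw();
    printf "%-6s : %d bytes\n", $syntax, length $raw;
}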
-- Henri-Damien LAURENT From chris at bigballofwax.co.nz Sun Nov 14 18:32:04 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 06:32:04 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE01B9D.2090008@biblibre.com> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> Message-ID: On 15 November 2010 06:25, LAURENT Henri-Damien wrote: > On 11/11/2010 15:19, Ian Walls wrote: >> Ah, right, I'd forgotten about that switch, as it doesn't work with >> authorities, and I can't do rebuild_zebra.pl >> -a -b -x -z in my crontab. >> >> So, the ISO2709 character limit is not actually an issue at all, then. >> > Well, actually, it is in Koha, > since Koha gets ISO2709 from a zebra search. > If Koha took MARCXML, it would be even slower to get search > results with Search.pm as it stands now... and let me know if you > know a librarian who would like that. Yep, there is utterly no doubt C4::Search needs a rewrite. This was a goal of 3.4. I'm pretty sure I mentioned it in my proposal, and we had some volunteers to work on it. I still think it is necessary, but I do think it is better to do it in a way that allows for a structure like

C4/Search.pm
C4/Search/Nutch.pm
C4/Search/Zebra.pm
C4/Search/Solr.pm

Or using searchengine or something else to achieve the same. Chris From henridamien.laurent at gmail.com Sun Nov 14 18:42:43 2010 From: henridamien.laurent at gmail.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 18:42:43 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <1289512728.11078.78.camel@zarathud> References: <20101111130948.A5CB2F7316@nail.towers.org.uk> <1289512728.11078.78.camel@zarathud> Message-ID: <4CE01F93.3010909@gmail.com> On 11/11/2010 22:58, Robin Sheat wrote: > MJ Ray wrote on Thu 11-11-2010 at 13:09 [+0000]: >>> 4. during the index process of a file, if you have a problem in your >>> data, zebraidx just fails silently... >> Example? > > When Zebra has a record that it chokes on, it will sometimes segfault. > This is a problem in itself; a second part of that problem (and this is > one we can fix) is that rebuild_zebra doesn't notice this. It should > really start screaming about it. > > When zebra fails (segfaulting, or just not liking some data) it does > things like refuse to process anything else from that point on, and it > can be quite a time-consuming process to track down exactly what record > it is that's causing the issue. > >>> And this is NOT secure. >> What security data does zebra leak in this failure case? > > There's more than one definition of the word security. It's not secure > in the same sense that a wheel on a car might not be secure. When it > comes loose and goes flying, things don't work so well. (Although in the > case of Zebra, you may not notice for a while.) > Well, librarians consider the search engine to be more like the engine of a plane than the wheel of a car. But I hope that MJ got the idea. In fact, the only way to get some information from zebrasrv is to catch the warning "previous transaction doesn't reach commit" in the zebrasrv logs... And if you are indexing 10,000 records and you have one record which causes that error, your whole batch of records is not indexed for want of maybe only one record (maybe more, maybe less; who knows? zebra stops at the very first one, without telling you which it is and without indexing the part that worked).
So sometimes, in zebraqueue, things are marked as indexed... while they are not.... You may think it is ok for a good while and realize it is not. You are on a plane up to 10000 feet in the sky, you never know when your engine will blow out or stop. No warnings (from the crontab), no way to know that it is working. If it stops, hold on to your stick. -- Henri-Damien LAURENT From mjr at phonecoop.coop Sun Nov 14 19:18:05 2010 From: mjr at phonecoop.coop (MJ Ray) Date: Sun, 14 Nov 2010 18:18:05 +0000 (GMT) Subject: [Koha-devel] Bug 5401: WYSWYG for Koha News In-Reply-To: Message-ID: <20101114181805.9B88FF7474@nail.towers.org.uk> Galen Charlton wrote: > Since TinyMCE is used by only one page, if we decide to switch to > elRTE (or to something else entirely), it won't be a big deal. Has > anybody done a direct comparison? I've used TinyMCE and haven't used elRTE, but wouldn't this question get better answers if asked on koha-devel? Thanks -- MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op. Past Koha Release Manager (2.0), LMS programmer, statistician, webmaster. In My Opinion Only: see http://mjr.towers.org.uk/email.html Available for hire for Koha work http://www.software.coop/products/koha From gmcharlt at gmail.com Sun Nov 14 20:12:18 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Sun, 14 Nov 2010 14:12:18 -0500 Subject: [Koha-devel] [Koha-patches] Bug 5401: WYSWYG for Koha News In-Reply-To: <20101114181805.9B88FF7474@nail.towers.org.uk> References: <20101114181805.9B88FF7474@nail.towers.org.uk> Message-ID: Hi, On Sun, Nov 14, 2010 at 1:18 PM, MJ Ray wrote: > Galen Charlton wrote: >> Since TinyMCE is used by only one page, if we decide to switch to >> elRTE (or to something else entirely), it won't be a big deal. ?Has >> anybody done a direct comparison? > > I've used TinyMCE and haven't used elRTE, but wouldn't this question > get better answers if asked on koha-devel? As the question was stated, yes, although if it turned out that Koustubha was just unaware that Koha was already using TinyMCE, a replacement patch using it instead of elRTE may have sufficed. There is a long standing practice of discussion of individual patches on koha-commits, however, so while koha-devel is certainly a valid choice for discussing this issue, I do want to point out and remind people that some relevant discussion does take place on the koha-commits list. Regards, Galen -- Galen Charlton gmcharlt at gmail.com From chris at bigballofwax.co.nz Sun Nov 14 20:24:51 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 08:24:51 +1300 Subject: [Koha-devel] [Koha-patches] Bug 5401: WYSWYG for Koha News In-Reply-To: References: <20101114181805.9B88FF7474@nail.towers.org.uk> Message-ID: On 15 November 2010 08:12, Galen Charlton wrote: > Hi, > > On Sun, Nov 14, 2010 at 1:18 PM, MJ Ray wrote: >> Galen Charlton wrote: >>> Since TinyMCE is used by only one page, if we decide to switch to >>> elRTE (or to something else entirely), it won't be a big deal. ?Has >>> anybody done a direct comparison? >> >> I've used TinyMCE and haven't used elRTE, but wouldn't this question >> get better answers if asked on koha-devel? > > As the question was stated, yes, although if it turned out that > Koustubha was just unaware that Koha was already using TinyMCE, a > replacement patch using it instead of elRTE may have sufficed. 
> > There is a long standing practice of discussion of individual patches > on koha-commits, however, so while koha-devel is certainly a valid > choice for discussing this issue, I do want to point out and remind > people that some relevant discussion does take place on the > koha-commits list. > And on the koha-patches list too :) Chris From henridamien.laurent at biblibre.com Sun Nov 14 20:28:45 2010 From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 20:28:45 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> Message-ID: <4CE0386D.9070403@biblibre.com> Le 14/11/2010 18:32, Chris Cormack a ?crit : > On 15 November 2010 06:25, LAURENT Henri-Damien > wrote: >> Le 11/11/2010 15:19, Ian Walls a ?crit : >>> Ah, right, I'd forgotten about that switch, as it doesn't work with >>> authorities, and I can't do rebuild_zebra.pl >>> -a -b -x -z in my crontab. >>> >>> So, the ISO2709 character limit is not actually an issue at all, then. >>> >> Well, actually, it is in koha. >> Since Koha gets iso2709 from a zebra search. >> If it Koha would take marcxml, it would be even slower to get search >> results with the Search.pm as it stands now... And let me know if you >> know a librarian who would like that. > > Yep, there is utterly no doubt C4::Search needs a rewrite. This was a > goal of 3.4. I'm pretty sure I mentioned it in my proposal, and we had > some volunteers to work on it. Who ? What is their plan ? Where is the discussion about the rewrite ? Where is the code ? Is there a working group on that ? Is there any place for collaboration ? -- Henri-Damien LAURENT From gmcharlt at gmail.com Sun Nov 14 20:31:36 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Sun, 14 Nov 2010 14:31:36 -0500 Subject: [Koha-devel] [Koha-patches] Bug 5401: WYSWYG for Koha News In-Reply-To: References: <20101114181805.9B88FF7474@nail.towers.org.uk> Message-ID: Hi, On Sun, Nov 14, 2010 at 2:24 PM, Chris Cormack wrote: > And on the koha-patches list too :) Indeed. To understand my previous email correctly, please replace every use of "koha-commits" with "koha-patches". :) Regards, Galen -- Galen Charlton gmcharlt at gmail.com From chris at bigballofwax.co.nz Sun Nov 14 20:34:40 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 08:34:40 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE0386D.9070403@biblibre.com> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE0386D.9070403@biblibre.com> Message-ID: On 15 November 2010 08:28, LAURENT Henri-Damien wrote: > Le 14/11/2010 18:32, Chris Cormack a ?crit : >> On 15 November 2010 06:25, LAURENT Henri-Damien >> wrote: >>> Le 11/11/2010 15:19, Ian Walls a ?crit : >>>> Ah, right, I'd forgotten about that switch, as it doesn't work with >>>> authorities, and I can't do rebuild_zebra.pl >>>> -a -b -x -z in my crontab. >>>> >>>> So, the ISO2709 character limit is not actually an issue at all, then. >>>> >>> Well, actually, it is in koha. >>> Since Koha gets iso2709 from a zebra search. >>> If it Koha would take marcxml, it would be even slower to get search >>> results with the Search.pm as it stands now... And let me know if you >>> know a librarian who would like that. >> >> Yep, there is utterly no doubt C4::Search needs a rewrite. 
This was a >> goal of 3.4. I'm pretty sure I mentioned it in my proposal, and we had >> some volunteers to work on it. > Who ? > What is their plan ? > Where is the discussion about the rewrite ? > Where is the code ? > Is there a working group on that ? > Is there any place for collaboration ? Wow, why so hostile, I just pointed out that it was in my RM proposal, it's on the wiki. The volunteering was done on irc, if i remember rightly and as far as I know no one has written a plan up yet. I don't see the need to jump on me for saying it. However, I still maintain that without a proper search engine abstraction layer, I would consider any work on Search to be unfinished. Chris From henridamien.laurent at biblibre.com Sun Nov 14 20:41:13 2010 From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 20:41:13 +0100 Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection In-Reply-To: References: Message-ID: <4CE03B59.3050102@biblibre.com> Le 11/11/2010 04:07, Chris Cormack a ?crit : >> Question for Mr. 3.4 RM: >> > :) > >> Is the procedure for dealing with DB revision numbers still the same? As far >> as I remember from the 3.2 development days, the procedure was to patch >> kohastructure.sql (or sysprefs.sql, or whatever), then add the update to the >> end of updatedatabase.pl with a generic version number, like 3.01.00.XXX. >> Patching kohastructure.pl was left to the RM when they applied the patch. > > Patching kohaversion.pl you mean? >> >> I had a crazy table on the wiki for a bit, but this seemed to work better. >> >> That still the consensus? >> > Yup that is the current practice. > > If we do implement DBIx::Class::Schema and > DBIx::Class::Schema::Versioned, updatedatabase.pl and kohastructure.pl > might both go away. But not yet. Well, as far as DB structure is concerned, this is ok. But if we need some new systempreference or some new data in the database (for instance some change in the marc framework...) then Schema and its versioning system would not be enough. Any plans for that ? -- Henri-Damien LAURENT From chris at bigballofwax.co.nz Sun Nov 14 20:45:10 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 08:45:10 +1300 Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection In-Reply-To: <4CE03B59.3050102@biblibre.com> References: <4CE03B59.3050102@biblibre.com> Message-ID: On 15 November 2010 08:41, LAURENT Henri-Damien wrote: > Le 11/11/2010 04:07, Chris Cormack a ?crit : >>> Question for Mr. 3.4 RM: >>> >> :) >> >>> Is the procedure for dealing with DB revision numbers still the same? As far >>> as I remember from the 3.2 development days, the procedure was to patch >>> kohastructure.sql (or sysprefs.sql, or whatever), then add the update to the >>> end of updatedatabase.pl with a generic version number, like 3.01.00.XXX. >>> Patching kohastructure.pl was left to the RM when they applied the patch. >> >> Patching kohaversion.pl you mean? >>> >>> I had a crazy table on the wiki for a bit, but this seemed to work better. >>> >>> That still the consensus? >>> >> Yup that is the current practice. >> >> If we do implement DBIx::Class::Schema and >> DBIx::Class::Schema::Versioned, updatedatabase.pl and kohastructure.pl >> might both go away. But not yet. > Well, as far as DB structure is concerned, this is ok. > But if we need some new systempreference or some new data in the > database (for instance some change in the marc framework...) then Schema > and its versioning system would not be enough. 
Any plans for that ? Thats a good question, one I don't have a good answer for yet, but I do think your atomic updates work is certainly a step in the right direction. ANSI compliant sql inserts/updates for the win :) Chris From chris at bigballofwax.co.nz Sun Nov 14 20:48:02 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 08:48:02 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE0386D.9070403@biblibre.com> Message-ID: On 15 November 2010 08:34, Chris Cormack wrote: > On 15 November 2010 08:28, LAURENT Henri-Damien > wrote: >> Le 14/11/2010 18:32, Chris Cormack a ?crit : >>> On 15 November 2010 06:25, LAURENT Henri-Damien >>> wrote: >>>> Le 11/11/2010 15:19, Ian Walls a ?crit : >>>>> Ah, right, I'd forgotten about that switch, as it doesn't work with >>>>> authorities, and I can't do rebuild_zebra.pl >>>>> -a -b -x -z in my crontab. >>>>> >>>>> So, the ISO2709 character limit is not actually an issue at all, then. >>>>> >>>> Well, actually, it is in koha. >>>> Since Koha gets iso2709 from a zebra search. >>>> If it Koha would take marcxml, it would be even slower to get search >>>> results with the Search.pm as it stands now... And let me know if you >>>> know a librarian who would like that. >>> >>> Yep, there is utterly no doubt C4::Search needs a rewrite. This was a >>> goal of 3.4. I'm pretty sure I mentioned it in my proposal, and we had >>> some volunteers to work on it. >> Who ? >> What is their plan ? >> Where is the discussion about the rewrite ? >> Where is the code ? >> Is there a working group on that ? >> Is there any place for collaboration ? > > Wow, why so hostile, I just pointed out that it was in my RM proposal, > it's on the wiki. The volunteering was done on irc, if i remember > rightly and as far as I know no one has written a plan up yet. I don't > see the need to jump on me for saying it. > > However, I still maintain that without a proper search engine > abstraction layer, I would consider any work on Search to be > unfinished. Ahh I realise I am to blame for some of the confusion. I haven't stated clearly, I do think adding Solr would be beneficial, and I would be happy to accept patches that do that, as long as they do it in a way that provides a search engine abstraction layer. I'd hate to see us lock ourselves hard to another search engine and have to redo it all again when we find a better one. If we are working towards DB abstraction, lets do Search engine abstraction too. Chris From laurenthdl at alinto.com Sun Nov 14 21:06:48 2010 From: laurenthdl at alinto.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 21:06:48 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE0386D.9070403@biblibre.com> Message-ID: <4CE04158.5090106@alinto.com> Le 14/11/2010 20:34, Chris Cormack a ?crit : > On 15 November 2010 08:28, LAURENT Henri-Damien > wrote: >> Le 14/11/2010 18:32, Chris Cormack a ?crit : >>> On 15 November 2010 06:25, LAURENT Henri-Damien >>> wrote: >>>> Le 11/11/2010 15:19, Ian Walls a ?crit : >>>>> Ah, right, I'd forgotten about that switch, as it doesn't work with >>>>> authorities, and I can't do rebuild_zebra.pl >>>>> -a -b -x -z in my crontab. 
>>>>> >>>>> So, the ISO2709 character limit is not actually an issue at all, then. >>>>> >>>> Well, actually, it is in koha. >>>> Since Koha gets iso2709 from a zebra search. >>>> If it Koha would take marcxml, it would be even slower to get search >>>> results with the Search.pm as it stands now... And let me know if you >>>> know a librarian who would like that. >>> >>> Yep, there is utterly no doubt C4::Search needs a rewrite. This was a >>> goal of 3.4. I'm pretty sure I mentioned it in my proposal, and we had >>> some volunteers to work on it. >> Who ? >> What is their plan ? >> Where is the discussion about the rewrite ? >> Where is the code ? >> Is there a working group on that ? >> Is there any place for collaboration ? > > Wow, why so hostile, I just pointed out that it was in my RM proposal, > it's on the wiki. The volunteering was done on irc, if i remember > rightly and as far as I know no one has written a plan up yet. I don't > see the need to jump on me for saying it. > > However, I still maintain that without a proper search engine > abstraction layer, I would consider any work on Search to be > unfinished. It was simple questions, to get some information. It looked to me that you were saying that there was already a team working on C4::Search rewrite. I was not aware of that and saw no discussion on some ongoing work on C4::Search other than the ones on Solr, which we initiated. Friendly -- Henri-Damien LAURENT From frederic at tamil.fr Sun Nov 14 21:07:21 2010 From: frederic at tamil.fr (=?UTF-8?B?RnLDqWTDqXJpYyBEZW1pYW5z?=) Date: Sun, 14 Nov 2010 21:07:21 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE01B9D.2090008@biblibre.com> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> Message-ID: <4CE04179.2060605@tamil.fr> >> So, the ISO2709 character limit is not actually an issue at all, then. > Well, actually, it is in koha. > Since Koha gets iso2709 from a zebra search. > If it Koha would take marcxml, it would be even slower to get search > results with the Search.pm as it stands now... And let me know if you > know a librarian who would like that. MARCXML parsing is slow because MARC::File::XML uses a SAX parser to do the job and do some 'magic' encoding-decoding to-from MARC8--Galen could correct me if I'm wrong. But since records stored into Koha are cleanly UTF-8 encoded, are well formed XML and respect a minimalist schema, we could parse them much more quickly directly in Perl. I've done some experimentation. It works easily. This code could be ported in five minutes: http://tinyurl.com/3x3d6b9 -- Fr?d?ric -------------- next part -------------- An HTML attachment was scrubbed... URL: From henridamien.laurent at biblibre.com Sun Nov 14 21:33:22 2010 From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 21:33:22 +0100 Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection In-Reply-To: References: <4CE03B59.3050102@biblibre.com> Message-ID: <4CE04792.4020509@biblibre.com> Le 14/11/2010 20:45, Chris Cormack a ?crit : > On 15 November 2010 08:41, LAURENT Henri-Damien > wrote: >> Le 11/11/2010 04:07, Chris Cormack a ?crit : >>>> Question for Mr. 3.4 RM: >>>> >>> :) >>> >>>> Is the procedure for dealing with DB revision numbers still the same? 
As far >>>> as I remember from the 3.2 development days, the procedure was to patch >>>> kohastructure.sql (or sysprefs.sql, or whatever), then add the update to the >>>> end of updatedatabase.pl with a generic version number, like 3.01.00.XXX. >>>> Patching kohastructure.pl was left to the RM when they applied the patch. >>> >>> Patching kohaversion.pl you mean? >>>> >>>> I had a crazy table on the wiki for a bit, but this seemed to work better. >>>> >>>> That still the consensus? >>>> >>> Yup that is the current practice. >>> >>> If we do implement DBIx::Class::Schema and >>> DBIx::Class::Schema::Versioned, updatedatabase.pl and kohastructure.pl >>> might both go away. But not yet. >> Well, as far as DB structure is concerned, this is ok. >> But if we need some new systempreference or some new data in the >> database (for instance some change in the marc framework...) then Schema >> and its versioning system would not be enough. Any plans for that ? > > Thats a good question, one I don't have a good answer for yet, but I > do think your atomic updates work is certainly a step in the right > direction. Would need some concensus on the directory structure, which could help. Why not a directory structure db_updates |- acquisitions |- members/patrons/borrowers |- ...... One file for each update. And have a directory by release and symlinks to those db_updates. .... But there was a discussion on that... http://lists.koha.org/pipermail/koha-devel/2010-February/033588.html No concensus reached... No decisions, no collaboration planned. But that would be quite a change. for all the previous works... could be quite tedious. > ANSI compliant sql inserts/updates for the win :) -- Henri-Damien LAURENT From henridamien.laurent at biblibre.com Sun Nov 14 22:14:06 2010 From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 22:14:06 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> Message-ID: <4CE0511E.9080601@biblibre.com> Le 14/11/2010 18:32, Chris Cormack a ?crit : > On 15 November 2010 06:25, LAURENT Henri-Damien > wrote: >> Le 11/11/2010 15:19, Ian Walls a ?crit : >>> Ah, right, I'd forgotten about that switch, as it doesn't work with >>> authorities, and I can't do rebuild_zebra.pl >>> -a -b -x -z in my crontab. >>> >>> So, the ISO2709 character limit is not actually an issue at all, then. >>> >> Well, actually, it is in koha. >> Since Koha gets iso2709 from a zebra search. >> If it Koha would take marcxml, it would be even slower to get search >> results with the Search.pm as it stands now... And let me know if you >> know a librarian who would like that. > > Yep, there is utterly no doubt C4::Search needs a rewrite. This was a > goal of 3.4. I'm pretty sure I mentioned it in my proposal, and we had > some volunteers to work on it. > > I still think it is nessecary, but I do think it is better to do it in > a way that allows for a structure like > C4/Search.pm > C4/Search/Nutch.pm > C4/Search/Zebra.pm > C4/Search/Solr.pm > > Or using searchengine or something else to achieve the same. Well. Our code is based on Data::SearchEngine, All we would have to do is writing a wrapper for Zebra, Nutch Whatever. And try and use that in the C4::Search. We took the burden to refactor the C4::Search. We wanted to test and make that work with Solr. If someone is volunteering for a Data::SearchEngine::Zebra, feel free... 
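To make the invitation concrete, the skeleton might look something like this -- only a sketch, and the Data::SearchEngine attribute and method names should be double-checked against that module's documentation before anyone relies on them:

package Data::SearchEngine::Zebra;
use Moose;
use ZOOM;
use Data::SearchEngine::Item;
use Data::SearchEngine::Results;

with 'Data::SearchEngine';

# assumed default; a real module would read this from koha-conf
has server => (is => 'ro', isa => 'Str', default => 'localhost:9999/biblios');

sub search {
    my ($self, $query) = @_;

    # one connection per search: see the caveat below about what
    # happens to zebrasrv when connections are held open
    my $conn    = ZOOM::Connection->new($self->server);
    my $rs      = $conn->search_pqf($query->query);
    my $results = Data::SearchEngine::Results->new(query => $query);

    for my $i (0 .. $rs->size - 1) {
        # render() for readability; raw() would give the wire format
        $results->add(Data::SearchEngine::Item->new(
            id     => $i,
            values => { record => $rs->record($i)->render },
        ));
    }
    $conn->destroy;
    return $results;
}

1;

C4::Search could then pick Zebra, Solr or anything else behind the same search() call.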
We also just discovered something about zoom and zebra. It really has a problem when you are using persistent connexions to it. Be warned... Indeed, a) when doing more than 200 searches, searches are getting slower and slower. And memory footprint increases. b) there is no way to disconnect cleanly. One has to destroy the connection, (we patched C4::Context to ba able to destroy the connections) but destroy won't close cleanly the connection on the zebra server. and there will be a thread that will be left for each connection you open and close. -- Henri-Damien LAURENT From chris at bigballofwax.co.nz Sun Nov 14 22:22:48 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 10:22:48 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE0511E.9080601@biblibre.com> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE0511E.9080601@biblibre.com> Message-ID: On 15 November 2010 10:14, LAURENT Henri-Damien wrote: > Le 14/11/2010 18:32, Chris Cormack a ?crit : >> On 15 November 2010 06:25, LAURENT Henri-Damien >> wrote: >>> Le 11/11/2010 15:19, Ian Walls a ?crit : >>>> Ah, right, I'd forgotten about that switch, as it doesn't work with >>>> authorities, and I can't do rebuild_zebra.pl >>>> -a -b -x -z in my crontab. >>>> >>>> So, the ISO2709 character limit is not actually an issue at all, then. >>>> >>> Well, actually, it is in koha. >>> Since Koha gets iso2709 from a zebra search. >>> If it Koha would take marcxml, it would be even slower to get search >>> results with the Search.pm as it stands now... And let me know if you >>> know a librarian who would like that. >> >> Yep, there is utterly no doubt C4::Search needs a rewrite. This was a >> goal of 3.4. I'm pretty sure I mentioned it in my proposal, and we had >> some volunteers to work on it. >> >> I still think it is nessecary, but I do think it is better to do it in >> a way that allows for a structure like >> C4/Search.pm >> C4/Search/Nutch.pm >> C4/Search/Zebra.pm >> C4/Search/Solr.pm >> >> Or using searchengine or something else to achieve the same. > Well. > Our code is based on Data::SearchEngine, > All we would have to do is writing a wrapper for Zebra, Nutch Whatever. > And try and use that in the C4::Search. > We took the burden to refactor the C4::Search. > We wanted to test and make that work with Solr. > If someone is volunteering for a Data::SearchEngine::Zebra, feel free... > Yes, please do (someone). Because this will make it much much more likely the Solr work is accepted into master for 3.4 Chris From gmcharlt at gmail.com Sun Nov 14 22:28:57 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Sun, 14 Nov 2010 16:28:57 -0500 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE04179.2060605@tamil.fr> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> Message-ID: Hi, 2010/11/14 Fr?d?ric Demians : > MARCXML parsing is slow because MARC::File::XML uses a SAX parser to do the > job and do some 'magic' encoding-decoding to-from MARC8--Galen could correct > me if I'm wrong. Properly used (i.e., with BinaryEncoding => utf8 when parsing known UTF-8 MARCXML records), MARC::File::XML doesn't automatically transcode from MARC-8 to UTF-8, so that's a nonissue. 
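For the avoidance of doubt, "properly used" means something like the following self-contained snippet; the record itself is a made-up minimal one:

use strict;
use warnings;
use MARC::File::XML (BinaryEncoding => 'utf8');   # declare the encoding up front
use MARC::Record;

my $xml = <<'MARCXML';
<?xml version="1.0" encoding="UTF-8"?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam a2200000 a 4500</leader>
  <datafield tag="245" ind1="0" ind2="0">
    <subfield code="a">An example title</subfield>
  </datafield>
</record>
MARCXML

# 'UTF-8' here tells MARC::File::XML not to attempt a MARC-8 conversion
my $record = MARC::Record->new_from_xml($xml, 'UTF-8');
print $record->title, "\n";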
Admittedly, there are still some circumstances where MARC::File::XML does inappropriately try to inject a MARC-8 to UTF-8 conversion. Patches to improve MARC::File::XML are welcome. > But since records stored into Koha are cleanly UTF-8 > encoded, are well formed XML and respect a minimalist schema, That is the ideal. In practice, Koha currently does not enforce either of your two assumptions in that statement; patches to tighten that up would be a good idea. > we could parse > them much more quickly directly in Perl. I question whether a pure Perl implementation would be faster. XML::LibXML::SAX, XML::SAX::Expat, and XML::SAX::ExpatXS have the advantage that much of the parsing is handled by C code. > I've done some experimentation. It > works easily. This code could be ported in five minutes: > > http://tinyurl.com/3x3d6b9 Are you suggesting that we adopt yet another hand-crafted, pure Perl XML parser, one that does not support namespaces (a lot of MARCXML data in the wild does reference the marc namespace) and does not check for well-formed XML *and* adopt a new MARC module that appears to be all of a few days old and lacks test cases for use in Koha? What you propose is interesting, and I'm sure you'll pursue it, but it would need more time to bake. On a more general note, XML parsing is a (mostly) solved problem in Perl. I don't think the way forward is to interpose hand-crafted pure-Perl-parsing of the MARCXML. To suggest an alternative approach that I think would bear fruit and be less error prone, we can try other standard XML parsers such as XML::Twig. Regards, Galen -- Galen Charlton gmcharlt at gmail.com
From henridamien.laurent at biblibre.com Sun Nov 14 22:42:18 2010 From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien) Date: Sun, 14 Nov 2010 22:42:18 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> Message-ID: <4CE057BA.1050505@biblibre.com> Le 14/11/2010 22:28, Galen Charlton a écrit : > Hi, > > 2010/11/14 Frédéric Demians : >> MARCXML parsing is slow because MARC::File::XML uses a SAX parser to do the >> job and do some 'magic' encoding-decoding to-from MARC8--Galen could correct >> me if I'm wrong. > > Properly used (i.e., with BinaryEncoding => utf8 when parsing known > UTF-8 MARCXML records), MARC::File::XML doesn't automatically > transcode from MARC-8 to UTF-8, so that's a nonissue. Admittedly, > there are still some circumstances where MARC::File::XML does > inappropriately try to inject a MARC-8 to UTF-8 conversion. Patches > to improve MARC::File::XML are welcome. > >> But since records stored into Koha are cleanly UTF-8 >> encoded, are well formed XML and respect a minimalist schema, > > That is the ideal. In practice, Koha currently does not enforce > either of your two assumptions in that statement; patches to tighten > that up would be a good idea. Some work on it is pushed in the BibLibre-various branch: C4::Charset, C4::Biblio and C4::Search. We used that to get Korean correctly displayed... and searched. > >> we could parse >> them much more quickly directly in Perl. > > On a more general note, XML parsing is a (mostly) solved problem in > Perl. I don't think the way forward is to interpose hand-crafted > pure-Perl-parsing of the MARCXML.
To suggest an alternative approach > that I think would would bear fruit and be less error prone, we can > try other standard XML parsers such as XML::Twig. Having played with that module, it is quite neat and fast at parsing, was meant for parsing on the fly big XML documents. But again, we would have to improve C4::XSLT.pm... -- Henri-Damien LAURENT From chris at bigballofwax.co.nz Sun Nov 14 22:48:43 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 10:48:43 +1300 Subject: [Koha-devel] Call for testing .. Reports module changes Message-ID: Hi all There is a branch at git.koha-community.org, new/biblibre_reports. It has had some good initial testing, with a few issues found and fixed, (Thanks Galen). If people could check it out and do some more testing, and let me know what they find. I think it is very close to being merged to master, I would just like a little more testing by some others. You can see the branch in gitweb here http://git.koha-community.org/gitweb/?p=koha.git;a=shortlog;h=refs/heads/new/biblibre_reports And can report your findings here http://wiki.koha-community.org/wiki/BibLibre_patches_to_be_integrated_for_3.4#Reports Thanks in advance. Chris From gmcharlt at gmail.com Sun Nov 14 22:51:11 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Sun, 14 Nov 2010 16:51:11 -0500 Subject: [Koha-devel] externalising translator and etc In-Reply-To: <4CE012F3.1050006@biblibre.com> References: <4CDAC42C.3090103@biblibre.com> <4CE012F3.1050006@biblibre.com> Message-ID: Hi, On Sun, Nov 14, 2010 at 11:48 AM, LAURENT Henri-Damien wrote: > My idea for that is that tracking addition of zebra indexes and all that > stuff would be considerably eased if there was an initial etc repository > that installer clone into /home/koha/kohadev Since adding a Zebra index is sometimes paired with an update to template files, splitting the configuration into a separate repository would actually make it more difficult to track Zebra updates. It gets worse if you consider changes such as adding or modifying an authentication module, for which the default configuration changes are almost invariably associated with code changes. What you're proposing would make it more difficult to cleanly manage Koha development that touches the default configuration files; among other problems, Git's submodule support is such that it would require that users and developers would have to do more than a simple git pull or git fetch/rebase in order to ensure that they've fetched updates both to Koha and to its configuration files. > We create a git repository for all the installations, but when you > create the repository from the processed files, it looses > synchronization with all the common indexes which could be required for > Koha when one adds some other minor feature or fixes a bug in the indexer. Since it sounds like you're using dev-mode deployments for your customers, you (i.e., BibLibre, not necessarily you personally) could just as readily keep track of local customizations in the git clone for each installation you run, then do a make update_zebra_conf. I'm sure that there are a variety of ways to script that, and perhaps some patches to Makefile.PL would help you better than splitting out etc into a submodule. > An other reason is that etc doesnot vary much. But when it varies, when > you upgrade users should be aware that they might loose their custom > indexes. I wanted to make next upgrades smoother for libraries. 
My suggestion immediately above should help you, but I also want to point out that make upgrade *does* create backups of any files it touches, so local changes are preserved that way. I am not suggesting that there aren't things that could be done to make things easier for anybody who runs a lot of dev-mode Koha installations, but sticking etc off into a submodule is not the solution. > I am with you when you propose to use different directories. This would > bring to : > etc > ?|- koha-conf.xml or koha-conf.yml i.e. ONLY Koha common configuration > (database access and so on. No more zebra stuff in that. ) I would support that. It's mostly a historical accident that koha-conf.xml currently serves as both the main configuration file for Koha and the top-level config file for zebrasrv, but there's no reason why those two configuration functions couldn't be placed in separate files. > ?|- authentication > ?||- LDAP > ?||- CAS > ?|-webserver > ?||- apache2 > ?||- nginx > ?|- searchengine > ?||-zebradb > ?||-solr > ?||-pazpar2 I'm OK with splitting the configuration files. > But then, when one chooses one type of webserver, one type of > authentication, one type of searchengine, he would use only a few of all > the installation files (which could become quite a forest). > I wanted the structure for etc simpler so that sysadmins would not be > overwhelmed by big picture. But all of the options have to exist *somewhere*, and it would be simpler to manage the development of the various options if the configuration files and directories were all laid out directly in the Git repository, not relegated to topic branches. Furthermore, if all of the possible configuration files are available in a production installation, it would be easier for a sysadmin to (say) switch from Apache to nginx. In other words, I think it is better to organize the configuration files well (and document them!) than to effectively atomize the management of them during Koha development by having permanent topic branches for various configuration modes. >> 0 for moving the PO files to a separate Git project. ?The size of the >> repository doesn't really strike me as a big deal; the Git protocol is >> pretty efficient. ?That said, while I don't see a great deal of >> benefit to splitting the translations off into a separate repository, >> I don't see much harm either. As I said, I don't have an strong opinion either way -- that's what the 0 means -- but I do think there are a couple misconceptions to clear up: > Well it is quite striking when you get 249Mo to dl when doing a git > clone. :) Not sure where you're getting 249M from -- it's more like 151 MiB when I measured today. > (mainly because any change you commit on po files is storing a > new instance of this file) Actually, no, it doesn't. Git is better designed than that; generally what it would do if you commit a change to a PO file is store just the delta. git gc is run on the public repo every week. When you push or pull to a repository, Git transfers just compressed deltas. > ADSL is coping well with that... But there are still some places in the > world which donot have access to wide bandwidth. True. But a Git clone (of the public repo) is a once-and-done operation. Anybody installing Koha for production use could use the tarball or (even better) the Debian package. Particularly because of the Debian package, we're getting past the point where dev mode would be recommend for use by single-library production installations. I've been doing some measurements. 
A PO-only repository would be about 50M in size, and creating such a thing is the easy part. But if we move misc/translator/po to a separate repository, we would have to also remove that directory from the main repository in order to realize the repository size savings motivating your proposal - a 'git rm misc/translator/po' wouldn't reduce the size of the repo. My test run is not quite finished yet (it takes a long time for git-filter-branch to handle almost 13,000 commits), but even assuming that 50M could be pared from the main repository, actually doing that would come at a significant cost: every commit would be rewritten by the git-filter-branch operation. Rewriting history like that could mean that every single person who clones against the public repo could have to deal with forced branch updates, to say nothing of invalidating all of the release tags. That prospect doesn't hearten me. I'll report back once my test finishes. Regards, Galen -- Galen Charlton gmcharlt at gmail.com From frederic at tamil.fr Sun Nov 14 23:57:41 2010 From: frederic at tamil.fr (=?UTF-8?B?RnLDqWTDqXJpYyBEZW1pYW5z?=) Date: Sun, 14 Nov 2010 23:57:41 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> Message-ID: <4CE06965.8040307@tamil.fr> >> But since records stored into Koha are cleanly UTF-8 encoded, are >> well formed XML and respect a minimalist schema, > That is the ideal. In practice, Koha currently does not enforce either > of your two assumptions in that statement; patches to tighten that up > would be a good idea. I don't understand. Do you mean that biblioitems.marcxml field and its mirror in Zebra can contain something else than valid MARCXML? Invalid encoded characters shouldn't change anything whatever parser is used. I see bug #2916 on bugzilla. Is there something more? >> we could parse them much more quickly directly in Perl. > I question whether a pure Perl implementation would be faster. > LibXML::XML::SAX, XML::SAX::Expat, and XML::SAX::ExpatXS have the the > advantage that much of the parsing is handled by C code. My tests show that pure Perl parsing is twelve time as fast as with a SAX parser. A script test is here: http://tinyurl.com/23xaqkg > Are you suggesting that we adopt yet another hand-crafted, pure Perl > XML parser, one that does not support namespaces (a lot of MARCXML > data in the wild does reference the marc namespace) and does not check > for well formed XML *and* adopt a new MARC module that appears to be > all of a few days old and lacks test cases for use in Koha? I've neither said nor suggested that. The issue pointed by Henri-Damien is that in C4::Search we get MARC::Record from ISO2709 because using MARCXML to build the same MARC::Record is much slower. And so we're limited to 99,999 record size. I say that we could build a MARC::Record from the MARCXML returned by Zebra using a pure Perl parser. And so I pointed to some code explaining that it could be ported, ie adapted to generate a MARC::Record as usually used in Koha. Have I proposed to substitute a new (immature?) MARC module, for whatever motives? I don't think so. > On a more general note, XML parsing is a (mostly) solved problem in > Perl. I don't think the way forward is to interpose hand-crafted > pure-Perl-parsing of the MARCXML. 
To suggest an alternative approach > that I think would bear fruit and be less error prone, we can > try other standard XML parsers such as XML::Twig. A MARCXML document is very simple XML which doesn't need a full-fledged XML parser. I'm just saying that as soon as MARCXML records as stored in Koha are valid, if that isn't already the case, we can avoid a heavyweight parser which hurts performance and isn't required. We will of course need to continue to use a SAX parser for incoming records. -- Frédéric
From gmcharlt at gmail.com Mon Nov 15 00:40:41 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Sun, 14 Nov 2010 18:40:41 -0500 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE06965.8040307@tamil.fr> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> Message-ID: Hi, 2010/11/14 Frédéric Demians : > generate a MARC::Record as usually used in Koha. Have I proposed to > substitute a new (immature?) MARC module, for whatever motives? I don't > think so. Your example code is a Marc::Moose module, no? Regards, Galen -- Galen Charlton gmcharlt at gmail.com
From gmcharlt at gmail.com Mon Nov 15 05:46:15 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Sun, 14 Nov 2010 23:46:15 -0500 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE06965.8040307@tamil.fr> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> Message-ID: Hi, 2010/11/14 Frédéric Demians : > A MARCXML document is very simple XML which doesn't need a full-fledged > XML parser. I'm just saying that as soon as MARCXML records as > stored in Koha are valid, if that isn't already the case, we can avoid > a heavyweight parser which hurts performance and isn't > required. We will of course need to continue to use a SAX parser for incoming > records. I've measured, and your parser is, in fact pretty fast -- *if* you feed it only MARCXML that meets narrower constraints than are permitted by the MARC21slim schema. However, I see no good reason to limit Koha to that artificial restriction; having biblioitems.marcxml contain MARCXML that validates against the MARC21slim schema is sufficient. Two parsers doing similar operations is an invitation for subtle bugs. The pure Perl parser you propose currently doesn't handle namespace prefixes (which are allowed in MARC21slim records), wouldn't handle any situation where the attributes aren't in the order you expect them in (attribute order is not significant per the XML specification), and will blithely accept non-well-formed XML without complaining (this is *not* a good thing). It also doesn't recognize and correctly handle XML entities. Obviously you could address much of this in your code, but I suspect what you'll find is that you'll end up with an XML parser that is slower and still has more bugs than any of the standard parser modules. Fortunately, I've found an approach that is significantly faster than MARC::File::XML/SAX: dropping SAX from MARC::File::XML entirely and using XML::LibXML's DOM parser instead [1]. It is faster [2] than using XML::LibXML::SAX::Parser [3], XML::SAX::Expat [4], and even XML::SAX::ExpatXS [5].
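In outline, the DOM approach does the sort of thing sketched below -- a condensed illustration rather than the branch code itself (see [1] for the real thing), with error handling left out and assuming the records carry the MARC21slim namespace (records without it would need the marc: prefix dropped from the XPath expressions):

use strict;
use warnings;
use XML::LibXML;
use MARC::Record;
use MARC::Field;

sub marcxml_to_record {
    my ($xml) = @_;

    my $dom = XML::LibXML->load_xml(string => $xml);
    my $xpc = XML::LibXML::XPathContext->new($dom->documentElement);
    # register the namespace so records with a marc: prefix work too
    $xpc->registerNs(marc => 'http://www.loc.gov/MARC21/slim');

    my $record = MARC::Record->new;
    $record->leader($xpc->findvalue('//marc:leader'));

    for my $node ($xpc->findnodes('//marc:controlfield')) {
        $record->append_fields(
            MARC::Field->new($node->getAttribute('tag'), $node->textContent));
    }
    for my $node ($xpc->findnodes('//marc:datafield')) {
        my @subfields;
        for my $sf ($xpc->findnodes('marc:subfield', $node)) {
            push @subfields, $sf->getAttribute('code'), $sf->textContent;
        }
        $record->append_fields(
            MARC::Field->new($node->getAttribute('tag'),
                             $node->getAttribute('ind1'),
                             $node->getAttribute('ind2'),
                             @subfields));
    }
    return $record;
}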
A pure Perl approach based on your work [6] does win the race [7], but it also fails some of MARC::File::XML's test cases and I'm sure it would lose speed once extended to handle the full range of what constitutes a valid MARCXML document. But, one might ask, what about memory usage with a DOM parser? MARC::File::XML as used by Koha (and used in general) is geared towards parsing one record at a time; it doesn't currently have any provision for loading an entire file in memory. A DOM tree for a typical MARCXML record is not a big deal, and even a record having several thousand items wouldn't be any more unmanageable. (Of course, as we all know, one of the most significant gains to be had will arise from changing Koha so that it doesn't embed item data in bib MARC tags as a matter of course). In fact, we already have proof that we'd be no worse off as far as memory consumption is concerned -- XML::LibXML::SAX::Parser, as it happens, isn't a traditional SAX parser. What it does is load the XML document into a DOM tree, then walks the tree and fires off SAX events. In other words, we're *already* using a DOM parser. In any event, I would be grateful for people to test the DOM version of MARC::File::XML. It passes MARC::File::XML's test suite successfully, but more testing to verify that it won't break things would help a great deal. By the way, I did also try XML::Twig, but that didn't turn out to be faster than XML::LibXML::SAX::Parser, and in some cases was slower. [1] http://git.librarypolice.com/?p=marcpm.git;a=shortlog;h=refs/heads/use-dom-instead-of-sax [2] http://librarypolice.com/nytprof/run-libxml-dom-2/index.html [3] http://librarypolice.com/nytprof/run-sax-libxml-sax-parser/ [4] http://librarypolice.com/nytprof/run-sax-expat/index.html [5] http://librarypolice.com/nytprof/run-sax-expatxs/index.html [6] http://git.librarypolice.com/?p=marcpm.git;a=shortlog;h=refs/heads/pure-perl [7] http://librarypolice.com/nytprof/run-pp/ Regards, Galen -- Galen Charlton gmcharlt at gmail.com From gmcharlt at gmail.com Mon Nov 15 05:52:13 2010 From: gmcharlt at gmail.com (Galen Charlton) Date: Sun, 14 Nov 2010 23:52:13 -0500 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> Message-ID: Hi 2010/11/14 Galen Charlton : > [1] http://git.librarypolice.com/?p=marcpm.git;a=shortlog;h=refs/heads/use-dom-instead-of-sax > [2] http://librarypolice.com/nytprof/run-libxml-dom-2/index.html > [3] http://librarypolice.com/nytprof/run-sax-libxml-sax-parser/ > [4] http://librarypolice.com/nytprof/run-sax-expat/index.html > [5] http://librarypolice.com/nytprof/run-sax-expatxs/index.html > [6] http://git.librarypolice.com/?p=marcpm.git;a=shortlog;h=refs/heads/pure-perl > [7] http://librarypolice.com/nytprof/run-pp/ By the way, the profiles are coming from runs of the following script applied to a MARCXML file containing 5,000 records, with various combinations of SAX parsers and the DOM and PP code for MARC::File::XML. 
#!/usr/bin/perl
use strict;
use warnings;
use MARC::File::XML (BinaryEncoding => 'utf8');
use MARC::Record;
use MARC::Batch;

binmode STDOUT, ':utf8';

# read MARCXML records from the file named on the command line and
# write them back out as ISO2709, reporting progress every 1000 records
my $batch = MARC::Batch->new('XML', $ARGV[0]);
my $i = 0;
while (my $record = $batch->next) {
    $i++;
    print $record->as_usmarc();
    if ($i % 1000 == 0) {
        print STDERR "$i ", scalar(localtime), "\n";
    }
    last if $i == 5000;
}

Regards, Galen -- Galen Charlton gmcharlt at gmail.com
From frederic at tamil.fr Mon Nov 15 06:56:44 2010 From: frederic at tamil.fr (=?UTF-8?B?RnLDqWTDqXJpYyBEZW1pYW5z?=) Date: Mon, 15 Nov 2010 06:56:44 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> Message-ID: <4CE0CB9C.2070806@tamil.fr> Thanks a lot for those thorough tests. Your optimization of MARCXML record parsing looks fantastic. > I've measured, and your parser is, in fact pretty fast -- *if* you > feed it only MARCXML that meets narrower constraints than are > permitted by the MARC21slim schema. However, I see no good reason to > limit Koha to that artificial restriction; having biblioitems.marcxml > contain MARCXML that validates against the MARC21slim schema is sufficient. It's a design choice. MARCXML is the Koha internal serialization format for MARC records. There is no obligation to conform to the MARC21slim schema. We could even choose another serialization format, as has already been discussed. biblioitems.marcxml isn't open to the outside world. It is written by C4::ModBiblioMarc, which uses the MARC::Record::as_xml_record function to populate the marcxml DB field. So we already have an internal, restricted version of the MARC21slim schema. And we could benefit from it if pure Perl parsing is a real performance gain. That is the good reason. > Two parsers doing similar operations is an invitation for subtle bugs. > The pure Perl parser you propose currently doesn't handle namespace > prefixes (which are allowed in MARC21slim records), wouldn't handle > any situation where the attributes aren't in the order you expect them > in (attribute order is not significant per the XML specification), and > will blithely accept non-well-formed XML without complaining (this is > *not* a good thing). It also doesn't recognize and correctly handle > XML entities. Obviously you could address much of this in your code, > but I suspect what you'll find is that you'll end up with an XML > parser that is slower and still has more bugs than any of the standard > parser modules. See above. I don't see the need to handle every MARC21slim peculiarity for the limited needs of Koha's internal functions. Regards, -- Frédéric
From frederic at tamil.fr Mon Nov 15 07:10:21 2010 From: frederic at tamil.fr (=?UTF-8?B?RnLDqWTDqXJpYyBEZW1pYW5z?=) Date: Mon, 15 Nov 2010 07:10:21 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> Message-ID: <4CE0CECD.2050006@tamil.fr> >> generate a MARC::Record as usually used in Koha. Have I proposed to >> substitute a new (immature?) MARC module, for whatever motives? I >> don't think so. > Your example code is a Marc::Moose module, no? Yes, that's why I've said since my first email that it could be ported to be used with MARC::Record. And this is finally what you have done in your tests.
There was no call at all for changing the MARC module used in Koha. Misunderstandings may arise in an international communitarian project like Koha from the fact that all participants are not native English speakers, which is my case. Thanks. -- Fr?d?ric From chrisc at catalyst.net.nz Mon Nov 15 07:46:14 2010 From: chrisc at catalyst.net.nz (Chris Cormack) Date: Mon, 15 Nov 2010 19:46:14 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE0CB9C.2070806@tamil.fr> References: <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0CB9C.2070806@tamil.fr> Message-ID: <20101115064614.GT4325@rorohiko> * Fr?d?ric Demians (frederic at tamil.fr) wrote: > Thanks a lot for those thorough tests. Your optimization of MARCXML > records parsing looks fantastic. They sure do, I'll be testing out your git branch you have pushed for MARC/Perl http://marcpm.git.sourceforge.net/git/gitweb.cgi?p=marcpm/marcpm;a=shortlog;h=refs/heads/use-dom-instead-of-sax > > > I've measured, and your parser is, in fact pretty fast -- *if* you > > feed it only MARCXML that meets narrower constraints than are > > permitted by the MARC21slim schema. However, I see no good reason to > > limit Koha to that artificial restriction; having biblioitems.marcxml > > contain MARCXML that validates against the MARC21slim is sufficient. > > It's a design choice. MARCXML is the Koha internal serialization format > for MARC records. There is no obligation to conform to MARC21slim > schema. We even could choose another serialization format as it has > already been discussed. biblioitems.marcxml isn't open to the wide. It > is written by C4::ModBiblioMarc which uses MARC::Record::as_xml_record > function to populate marcxml DB field. So we already have an internal > restricted version of MARC21slim schema. And we could benefit of it if > pure Perl parsing is a real performance gain. That is for the good > reason. I think that getting speed and compliance to the standard is the best of both worlds. If we store standard compliant MARCXML then our export routine is trivially easy :) Not to mention the benefit of being able to say we store MARCXML compliant to the standard. Chris -- Chris Cormack Catalyst IT Ltd. +64 4 803 2238 PO Box 11-053, Manners St, Wellington 6142, New Zealand -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From frederic at tamil.fr Mon Nov 15 07:50:30 2010 From: frederic at tamil.fr (=?ISO-8859-1?Q?Fr=E9d=E9ric_Demians?=) Date: Mon, 15 Nov 2010 07:50:30 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <20101115064614.GT4325@rorohiko> References: <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0CB9C.2070806@tamil.fr> <20101115064614.GT4325@rorohiko> Message-ID: <4CE0D836.4010604@tamil.fr> > I think that getting speed and compliance to the standard is the best of > both worlds. If we store standard compliant MARCXML then our export > routine is trivially easy :) Not to mention the benefit of being able to > say we store MARCXML compliant to the standard. Koha MARCXML being a subset of MARC21slim is de facto compliant. 
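And that claim is cheap to check mechanically; a sketch, with the input file name made up and the schema fetched from the Library of Congress:

use strict;
use warnings;
use XML::LibXML;

my $schema = XML::LibXML::Schema->new(
    location => 'http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd');
my $doc = XML::LibXML->load_xml(location => 'record.xml');  # made-up name

# validate() dies with the reason when the document does not conform
eval { $schema->validate($doc) };
print $@ ? "not MARC21slim: $@" : "valid MARC21slim\n";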
From frederic at tamil.fr Mon Nov 15 07:54:33 2010 From: frederic at tamil.fr (=?UTF-8?B?RnLDqWTDqXJpYyBEZW1pYW5z?=) Date: Mon, 15 Nov 2010 07:54:33 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> Message-ID: <4CE0D929.7030702@tamil.fr> And for those who want to run test by their self, here attached is my tests comparing pure Perl parsing and various SAX parser (which need to be installed): XML::SAX::PurePerl XML::LibXML::SAX::Parser XML::SAX::Expat XML::SAX::ExpatXS SAX parsing is done directly, without using MARC::File::XML in order to have raw figures. Parsing in MARC::File::XML should slow down a little bit but I can't say of what magnitude. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: test-parsing-marcxml URL: From chrisc at catalyst.net.nz Mon Nov 15 08:01:36 2010 From: chrisc at catalyst.net.nz (Chris Cormack) Date: Mon, 15 Nov 2010 20:01:36 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE0D836.4010604@tamil.fr> References: <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0CB9C.2070806@tamil.fr> <20101115064614.GT4325@rorohiko> <4CE0D836.4010604@tamil.fr> Message-ID: <20101115070136.GU4325@rorohiko> * Fr?d?ric Demians (frederic at tamil.fr) wrote: > > >I think that getting speed and compliance to the standard is the best of > >both worlds. If we store standard compliant MARCXML then our export > >routine is trivially easy :) Not to mention the benefit of being able to > >say we store MARCXML compliant to the standard. > > Koha MARCXML being a subset of MARC21slim is de facto compliant. That's as may be, but using a MARC::Record that deals with fully MARC21slim compatible records in a fast manner is surely a good thing. Anyway lets not let it distract us from testing the DOM implementation which looks like a very promising development and a good path forward for MARCPM, hopefully lots of people take up the invitation to test. We should be able to use it to help us when we are ingesting records (either via Z3950 or bulkimport) to check for compliance, and in a fast manner :) Chris -- Chris Cormack Catalyst IT Ltd. +64 4 803 2238 PO Box 11-053, Manners St, Wellington 6142, New Zealand -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From chris at bigballofwax.co.nz Mon Nov 15 08:08:11 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 20:08:11 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE0D929.7030702@tamil.fr> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0D929.7030702@tamil.fr> Message-ID: 2010/11/15 Fr?d?ric Demians : > And for those who want to run test by their self, here attached is my > tests comparing pure Perl parsing and various SAX parser (which need to > be installed): > > ? ?XML::SAX::PurePerl > ? ?XML::LibXML::SAX::Parser > ? ?XML::SAX::Expat > ? ?XML::SAX::ExpatXS > > SAX parsing is done directly, without using MARC::File::XML in order to > have raw figures. 
Parsing in MARC::File::XML should slow down a little > bit but I can't say of what magnitude. > > You could add the test using DOM also IE just XML::LibXML (Without the sax) which we now know is a lot faster :) Galen's tests essentially do the same thing, except with passing it through MARC::File::XML Chris Chris From frederic at tamil.fr Mon Nov 15 08:38:32 2010 From: frederic at tamil.fr (=?ISO-8859-1?Q?Fr=E9d=E9ric_Demians?=) Date: Mon, 15 Nov 2010 08:38:32 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0D929.7030702@tamil.fr> Message-ID: <4CE0E378.4010903@tamil.fr> > IE just XML::LibXML (Without the sax) which we now know is a lot > faster :) Galen's tests essentially do the same thing, except with > passing it through MARC::File::XML Comparisons are odious. DOM uses an underlying SAX parser to load any XML document in memory. DOM is not as SAX parser as itself. Galen tests, as I understand them, show that current MARC::File::XML parser, which include a specif SAX event handler, is slower than loading directly a DOM document. It contradicts the theory. The explanation is, as stated by Galen, that Perl SAX parser implementation is not good... My tests use XML::Simple and so load the whole MARCXML document in memory before rendering it into a MARC::Record object. It gives a picture of the difference between parsing MARCXML in pure Perl vs using an external SAX parser. -- Fr?d?ric From chris at bigballofwax.co.nz Mon Nov 15 08:52:48 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 20:52:48 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE0E378.4010903@tamil.fr> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0D929.7030702@tamil.fr> <4CE0E378.4010903@tamil.fr> Message-ID: 2010/11/15 Fr?d?ric Demians : > >> IE just XML::LibXML (Without the sax) which we now know is a lot >> faster :) Galen's tests essentially do the same thing, except with >> passing it through MARC::File::XML > > Comparisons are odious. DOM uses an underlying SAX parser to load any > XML document in memory. DOM is not as SAX parser as itself. Galen tests, > as I understand them, show that current MARC::File::XML parser, which > include a specif SAX event handler, is slower than loading directly a > DOM document. It contradicts the theory. The explanation is, as stated > by Galen, that Perl SAX parser implementation is not good... My tests > use XML::Simple and so load the whole MARCXML document in memory before > rendering it into a MARC::Record object. It gives a picture of the > difference between parsing MARCXML in pure Perl vs using an external SAX > parser. I work with the author of XML::Simple .. and he would (and does) tell people not to use it for anything than parsing very simple XML structures. http://search.cpan.org/~grantm/XML-Simple-2.18/lib/XML/Simple.pm#WHERE_TO_FROM_HERE? So do I understand from what you are saying, that Galens work is not useful, and that a pureperl XML parser is the only way forward? I hope this is just another language based misunderstanding. Because I disagree totally if not. 
Chris From frederic at tamil.fr Mon Nov 15 09:07:07 2010 From: frederic at tamil.fr (=?ISO-8859-1?Q?Fr=E9d=E9ric_Demians?=) Date: Mon, 15 Nov 2010 09:07:07 +0100 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0D929.7030702@tamil.fr> <4CE0E378.4010903@tamil.fr> Message-ID: <4CE0EA2B.5050202@tamil.fr> > I work with the author of XML::Simple .. and he would (and does) tell > people not to use it for anything than parsing very simple XML > structures. Which is the case when benchmarking a simple MARCXML document parsing switching quickly from one underlying SAX parser to the other. > So do I understand from what you are saying, that Galens work is not > useful, and that a pureperl XML parser is the only way forward? No. I'm not saying Galen's work is not useful. > I hope this is just another language based misunderstanding. Because I > disagree totally if not. Being the English native speaker, you have to do the effort to not misunderstand as I have to do my best to be understood. Thanks. -- Fr?d?ric From chris at bigballofwax.co.nz Mon Nov 15 09:14:25 2010 From: chris at bigballofwax.co.nz (Chris Cormack) Date: Mon, 15 Nov 2010 21:14:25 +1300 Subject: [Koha-devel] Search Engine Changes : let's get some solr In-Reply-To: <4CE0EA2B.5050202@tamil.fr> References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0D929.7030702@tamil.fr> <4CE0E378.4010903@tamil.fr> <4CE0EA2B.5050202@tamil.fr> Message-ID: 2010/11/15 Fr?d?ric Demians : >> I work with the author of XML::Simple .. and he would (and does) tell >> people not to use it for anything than parsing very simple XML >> structures. > > Which is the case when benchmarking a simple MARCXML document parsing > switching quickly from one underlying SAX parser to the other. > >> So do I understand from what you are saying, that Galens work is not >> useful, and that a pureperl XML parser is the only way forward? > > No. I'm not saying Galen's work is not useful. > >> I hope this is just another language based misunderstanding. Because I >> disagree totally if not. > > Being the English native speaker, you have to do the effort to not > misunderstand as I have to do my best to be understood. > That is what I am attempting to do and for the record, I spoke Maori before I spoke English. Another misunderstanding :) Chris From fridolyn.somers at gmail.com Mon Nov 15 09:30:51 2010 From: fridolyn.somers at gmail.com (Fridolyn SOMERS) Date: Mon, 15 Nov 2010 09:30:51 +0100 Subject: [Koha-devel] Perl resource In-Reply-To: <4CDD101C.8050900@ptfs-europe.com> References: <4CDD101C.8050900@ptfs-europe.com> Message-ID: Thanks. On Fri, Nov 12, 2010 at 10:59 AM, Colin Campbell < colin.campbell at ptfs-europe.com> wrote: > This may be of interest to some: > http://www.onyxneon.com/books/modern_perl/index.html > > This new book is available as a free pdf. 
> Cheers
> Colin
>
> --
> Colin Campbell
> Chief Software Engineer,
> PTFS Europe Limited
> Content Management and Library Solutions
> +44 (0) 208 366 1295 (phone)
> +44 (0) 7759 633626 (mobile)
> colin.campbell at ptfs-europe.com
> skype: colin_campbell2
>
> http://www.ptfs-europe.com
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

--
Fridolyn SOMERS
ICT engineer
PROGILONE - Lyon - France
fridolyn.somers at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From frederic at tamil.fr  Mon Nov 15 09:31:20 2010
From: frederic at tamil.fr (=?ISO-8859-1?Q?Fr=E9d=E9ric_Demians?=)
Date: Mon, 15 Nov 2010 09:31:20 +0100
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To:
References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0D929.7030702@tamil.fr> <4CE0E378.4010903@tamil.fr> <4CE0EA2B.5050202@tamil.fr>
Message-ID: <4CE0EFD8.9090906@tamil.fr>

>> misunderstand, as I have to do my best to be understood.
> That is what I am attempting to do, and for the record, I spoke Maori
> before I spoke English. Another misunderstanding :)

So thanks for trying to understand. And be ye merciful, even as your
Maori (metaphoric) Father is merciful.

From fridolyn.somers at gmail.com  Mon Nov 15 10:05:44 2010
From: fridolyn.somers at gmail.com (Fridolyn SOMERS)
Date: Mon, 15 Nov 2010 10:05:44 +0100
Subject: [Koha-devel] Time for reflexion
In-Reply-To:
References:
Message-ID:

Hi,

I agree with the point that abstraction is necessary in modern
software. But can Perl achieve this goal? I'm used to Java with MVC,
patterns and object/database mapping.

We could imagine a step-by-step migration from Perl to Java, since a
Perl compiler is integrated into the JVM.

Regards,

2010/11/12 Clay Fouts

> Hello, Miguel.
>
> You state well the necessity of adopting these sorts of strategies to
> promote the long term viability of Koha. Without architectural
> clarity, it is growing increasingly difficult to add and refine
> features without stepping on other people's work and introducing
> action-at-a-distance bugs. Working toward a solution to this dilemma
> is exactly the purpose of having a technical
> committee^H^H^H^H^H^H^H meetings to hash these issues out and try to
> develop a consensus on which tools and patterns would best suit Koha,
> then draw a road map describing incremental steps developers can take
> in order to get from point A to point B.
>
> I have a few high-level ideas toward this end:
> * start separating out the monolithic C4 modules into Model and
>   Controller modules, cleaning up circular dependencies as needed.
> * move most of the contents of .pl files into the View modules.
> * switch to a more flexible template system, like TT.
> * split out C4::Context into "user" context for authorization,
>   "schema" context for data sources, "environment" context for CGI
>   vs. CLI vs. PSGI
> * centralize database access calls, either through an ORM or through
>   a customized layer on top of DBI.
>
> I think the specific tools applied are less critical than the
> underlying principles which any number of those tools could
> facilitate.
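To make that last quoted point concrete, here is a minimal sketch of a
"customized layer on top of DBI". Koha::DB is a hypothetical name used
for illustration, not existing Koha code, and the DSN and credentials
are placeholders rather than Koha's real configuration lookup.

package Koha::DB;

use strict;
use warnings;
use DBI;

my $dbh;    # one shared handle per process

# Lazy connection, so the handle is created on first use.
sub dbh {
    $dbh ||= DBI->connect(
        'dbi:mysql:database=koha', 'kohaadmin', 'secret',
        { RaiseError => 1, mysql_enable_utf8 => 1 },
    );
    return $dbh;
}

# Single entry point for "fetch rows" queries, so callers stop
# preparing statements ad hoc all over the codebase.
sub select_all {
    my ( $sql, @bind ) = @_;
    return dbh()->selectall_arrayref( $sql, { Slice => {} }, @bind );
}

1;

A caller would then write something like
Koha::DB::select_all('SELECT * FROM issues WHERE borrowernumber = ?', $bn)
and the connection, error-handling and encoding policy all live in one
place.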
> Cheers,
> Clay

--
Fridolyn SOMERS
ICT engineer
PROGILONE - Lyon - France
fridolyn.somers at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From paul.poulain at biblibre.com  Mon Nov 15 10:12:13 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Mon, 15 Nov 2010 10:12:13 +0100
Subject: [Koha-devel] Time for reflexion
In-Reply-To:
References:
Message-ID: <4CE0F96D.3080909@biblibre.com>

Le 15/11/2010 10:05, Fridolyn SOMERS a écrit :
> Hi,
>
> I agree with the point that abstraction is necessary in modern
> software. But can Perl achieve this goal? I'm used to Java with MVC,
> patterns and object/database mapping.
>
> We could imagine a step-by-step migration from Perl to Java, since a
> Perl compiler is integrated into the JVM.

Yikes ! I won't say no without reading more arguments, but you'll have
to be VERY persuasive about what we would win and how such a move could
be done ! (and yes, I think Perl can achieve the MVC goal)
--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From laurenthdl at alinto.com  Mon Nov 15 10:16:11 2010
From: laurenthdl at alinto.com (LAURENT Henri-Damien)
Date: Mon, 15 Nov 2010 10:16:11 +0100
Subject: [Koha-devel] externalising translator and etc
In-Reply-To:
References: <4CDAC42C.3090103@biblibre.com> <4CE012F3.1050006@biblibre.com>
Message-ID: <4CE0FA5B.2000001@alinto.com>

Le 14/11/2010 22:51, Galen Charlton a écrit :
> Hi,
>
> On Sun, Nov 14, 2010 at 11:48 AM, LAURENT Henri-Damien wrote:
>> My idea for that is that tracking the addition of Zebra indexes and
>> all that stuff would be considerably eased if there was an initial
>> etc repository that the installer clones into /home/koha/kohadev
>
> Since adding a Zebra index is sometimes paired with an update to
> template files, splitting the configuration into a separate repository
> would actually make it more difficult to track Zebra updates.

I consider this a design flaw; it could be overcome with work and
collaboration on a better abstraction for lists of indexes. (It is
already a work in progress for Solr. It could take quite a while to
implement with Zebra, but it is achievable: it would only be a matter
of parsing the ccl.properties file and proposing an interface for
better management.)

> It gets
> worse if you consider changes such as adding or modifying an
> authentication module, for which the default configuration changes are
> almost invariably associated with code changes.

Same for authentication. I consider this a design flaw. We should split
authentication and identification, and manage identification in a
modular way... so that administrators would just have to edit
configuration files in order to make the correct mappings, and not dive
into the code, change it and commit (if they know Git well enough...).

> What you're proposing
> would make it more difficult to cleanly manage Koha development that
> touches the default configuration files; among other problems, Git's
> submodule support is such that it would require that users and
> developers would have to do more than a simple git pull or git
> fetch/rebase in order to ensure that they've fetched updates both to
> Koha and to its configuration files.

Well, changing to Git already made things a little bit hard for non-Git
users. The change was good, and now everybody is using it (at least we
are, and we even propose it to people). Clear and documented
information would overcome this fact.
>> We create a git repository for all the installations, but when you
>> create the repository from the processed files, it loses
>> synchronization with all the common indexes which could be required
>> for Koha when one adds some other minor feature or fixes a bug in the
>> indexer.
>
> Since it sounds like you're using dev-mode deployments for your
> customers, you (i.e., BibLibre, not necessarily you personally) could
> just as readily keep track of local customizations in the git clone
> for each installation you run, then do a make update_zebra_conf. I'm
> sure that there are a variety of ways to script that, and perhaps some
> patches to Makefile.PL would help you better than splitting out etc
> into a submodule.

>> Another reason is that etc does not vary much. But when it varies,
>> users upgrading should be aware that they might lose their custom
>> indexes. I wanted to make the next upgrades smoother for libraries.
>
> My suggestion immediately above should help you, but I also want to
> point out that make upgrade *does* create backups of any files it
> touches, so local changes are preserved that way.

Nice.

> I am not suggesting that there aren't things that could be done to
> make things easier for anybody who runs a lot of dev-mode Koha
> installations, but sticking etc off into a submodule is not the
> solution.

Well, it was a proposal.

>> I am with you when you propose to use different directories. This
>> would lead to:
>> etc

That organisation was the big picture, what I would envision for the
long term. I did not mean that it would have to be done right away.

>> |- koha-conf.xml or koha-conf.yml i.e. ONLY Koha common configuration
>>    (database access and so on. No more zebra stuff in that.)
>
> I would support that. It's mostly a historical accident that
> koha-conf.xml currently serves as both the main configuration file for
> Koha and the top-level config file for zebrasrv, but there's no reason
> why those two configuration functions couldn't be placed in separate
> files.

>> |- authentication
>> ||- LDAP
>> ||- CAS
>> |- webserver
>> ||- apache2
>> ||- nginx
>> |- searchengine
>> ||- zebradb
>> ||- solr
>> ||- pazpar2
>
> I'm OK with splitting the configuration files.

>> But then, when one chooses one type of webserver, one type of
>> authentication, one type of searchengine, he would use only a few of
>> all the installation files (which could become quite a forest).
>> I wanted the structure for etc simpler so that sysadmins would not be
>> overwhelmed by the big picture.
>
> But all of the options have to exist *somewhere*, and it would be
> simpler to manage the development of the various options if the
> configuration files and directories were all laid out directly in the
> Git repository, not relegated to topic branches. Furthermore, if all
> of the possible configuration files are available in a production
> installation, it would be easier for a sysadmin to (say) switch from
> Apache to nginx.

Why not... But then, you would have to make dependencies quite
thorough... Or optional. debconf would surely be quite handy for that,
but it would need some clarification.

> In other words, I think it is better to organize the configuration
> files well (and document them!) than to effectively atomize the
> management of them during Koha development by having permanent topic
> branches for various configuration modes.

>>> 0 for moving the PO files to a separate Git project.
>>> The size of the repository doesn't really strike me as a big deal;
>>> the Git protocol is pretty efficient. That said, while I don't see a
>>> great deal of benefit to splitting the translations off into a
>>> separate repository, I don't see much harm either.
>
> As I said, I don't have a strong opinion either way -- that's what
> the 0 means -- but I do think there are a couple misconceptions to
> clear up:
>
>> Well it is quite striking when you get 249 MB to download when doing
>> a git clone. :)
>
> Not sure where you're getting 249M from -- it's more like 151 MiB when
> I measured today.
>
>> (mainly because any change you commit to a PO file stores a new
>> instance of that file)
>
> Actually, no, it doesn't. Git is better designed than that; generally
> what it would do if you commit a change to a PO file is store just the
> delta. git gc is run on the public repo every week. When you push or
> pull to a repository, Git transfers just compressed deltas.

From mjr at phonecoop.coop  Mon Nov 15 11:10:29 2010
From: mjr at phonecoop.coop (MJ Ray)
Date: Mon, 15 Nov 2010 10:10:29 +0000 (GMT)
Subject: [Koha-devel] Discussion of development topics, was: Bug 5401: WYSWYG for Koha News
In-Reply-To:
Message-ID: <20101115101029.EBBE4F7477@nail.towers.org.uk>

Galen Charlton wrote:
> There is a long standing practice of discussion of individual patches
> on koha-commits, however, so while koha-devel is certainly a valid
> choice for discussing this issue, I do want to point out and remind
> people that some relevant discussion does take place on the
> koha-commits list.

I don't like that practice because:-

1. the list description "Patches submitted to Koha" gives no indication
   it occurs and it's different from other projects' -commit mailing
   lists;

2. developers have to subscribe to receive all patches even if they're
   only interested in their fields of activity and the occasional
   discussions, contributing to information overload;

3. I think the patches are carried in only one of the web forum
   versions of our discussions
   http://koha-community.org/support/forums/

4. usually the patches relate to a bug and email discussion isn't
   automatically attached to the bug report - I have started work to
   address this, but it's still going to have some delay between the
   discussion and posting to the bug report, which may mean the
   discussion is irrelevant by the time it reaches the bug report;

5. most importantly, many topics are wider than a single patch, such as
   this one about what rich editor to adopt.

Can we overcome these problems, or state clearly that
wider-than-one-patch discussions should be on -devel?

Thanks,
--
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
Past Koha Release Manager (2.0), LMS programmer, statistician, webmaster.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for Koha work http://www.software.coop/products/koha

From M.de.Rooy at rijksmuseum.nl  Mon Nov 15 12:08:39 2010
From: M.de.Rooy at rijksmuseum.nl (Marcel de Rooy)
Date: Mon, 15 Nov 2010 11:08:39 +0000
Subject: [Koha-devel] Discussion of development topics, was: Bug 5401: WYSWYG for Koha News
In-Reply-To: <20101115101029.EBBE4F7477@nail.towers.org.uk>
References: <20101115101029.EBBE4F7477@nail.towers.org.uk>
Message-ID: <809BE39CD64BFD4EB9036172EBCCFA311A2195@S-MAIL-1B.rijksmuseum.intra>

From gmcharlt at gmail.com  Mon Nov 15 13:18:14 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Mon, 15 Nov 2010 07:18:14 -0500
Subject: [Koha-devel] Externalizing PO files (Was Re: externalising translator and etc)
Message-ID:

Hi,

[Splitting the thread, there are two distinct topics being discussed.]

On Mon, Nov 15, 2010 at 4:16 AM, LAURENT Henri-Damien wrote:
> Le 14/11/2010 22:51, Galen Charlton a écrit :

> From my understanding of Scott Chacon's (a Git developer) book and Git
> talk, Git stores the files in its database, since it is a DAG-oriented
> SCM. It may send the diffs, but it actually stores the files.

A cursory reading of a Git book reveals that a statement that Git
"actually stores the files" is simplistic, in particular as far as the
implications for storage are concerned. Please take a look at:

http://book.git-scm.com/7_how_git_stores_objects.html

Git's object storage takes the form of a compressed version of each
object (commit, file, tag, etc.) stored as files and (and this is the
important part) packfiles. Packfiles, roughly speaking, encode only the
changes to objects, so a significant amount of space is saved.

If one disabled automatic garbage collection in a git repo and never
ran git gc, you might end up with something close to storing the
(compressed) version of each changed file, but in practice that doesn't
happen.

> I don't think all the release tags would be broken.

As it turns out, they would. I have actually tried running a git
filter-branch to remove the PO directory. As I suspected, it does
rewrite all of the commits (thereby changing the commit IDs), and it
would be necessary to rewrite the tags. But doing so would invalidate
the release tags that have a GPG signature.

Rewriting history like this is a sure way to get months of complaints
and calls for help on koha-devel with questions like "why does git
rebase origin no longer work".

> And it would allow releasing localisation at a different pace... when
> there is a need.

There is nothing stopping us from releasing localizations on a
different schedule now. Packaging does not have any necessary
connection with how the Git repository is laid out.

I will grant that all other things being equal, it would have been a
valid choice to have the PO files live in a separate repository. In
fact, we perhaps can still do so. One thing that Chris Cormack
mentioned to me yesterday is that Pootle has a Git plugin which can
automatically commit changes to PO files to a repository. That would be
a bit dangerous to allow for the main Koha repository, but safe enough
for a separate translations repository.

Consequently, there is a possible compromise:

[1] We seed a new repository for PO files.
[2] We do a simple git rm misc/translator/po but *don't* rewrite
    history.

This won't reduce the size of the main repository, of course, but
achieves the separation. And, to be honest, it's mostly the possibility
of integration with Pootle that would make me interested in this.
However, since it is the translation manager who would be primarily
affected by the split on a day-to-day basis, I would be interested in
hearing from Frédéric as well as past TMs.

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From gmcharlt at gmail.com  Mon Nov 15 13:19:14 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Mon, 15 Nov 2010 07:19:14 -0500
Subject: [Koha-devel] Discussion of development topics, was: Bug 5401: WYSWYG for Koha News
In-Reply-To: <809BE39CD64BFD4EB9036172EBCCFA311A2195@S-MAIL-1B.rijksmuseum.intra>
References: <20101115101029.EBBE4F7477@nail.towers.org.uk> <809BE39CD64BFD4EB9036172EBCCFA311A2195@S-MAIL-1B.rijksmuseum.intra>
Message-ID:

Hi,

2010/11/15 Marcel de Rooy :
> This mailing list is an informational list to which commit
> descriptions for patches pushed to the public Koha Git repository
> (currently git://git.koha-community.org/koha.git) are posted.

To reiterate, I misspoke. I meant koha-patches, not koha-commits.

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From henridamien.laurent at gmail.com  Mon Nov 15 13:48:41 2010
From: henridamien.laurent at gmail.com (LAURENT Henri-Damien)
Date: Mon, 15 Nov 2010 13:48:41 +0100
Subject: [Koha-devel] Externalizing PO files (Was Re: externalising translator and etc)
In-Reply-To:
References:
Message-ID: <4CE12C29.3060806@gmail.com>

Le 15/11/2010 13:18, Galen Charlton a écrit :
> Hi,
>
> [Splitting the thread, there are two distinct topics being discussed.]
>
> On Mon, Nov 15, 2010 at 4:16 AM, LAURENT Henri-Damien wrote:
>> Le 14/11/2010 22:51, Galen Charlton a écrit :
>
>> From my understanding of Scott Chacon's (a Git developer) book and
>> Git talk, Git stores the files in its database, since it is a
>> DAG-oriented SCM. It may send the diffs, but it actually stores the
>> files.
>
> A cursory reading of a Git book reveals that a statement that Git
> "actually stores the files" is simplistic, in particular as far as the
> implications for storage are concerned. Please take a look at:
>
> http://book.git-scm.com/7_how_git_stores_objects.html
>
> Git's object storage takes the form of a compressed version of each
> object (commit, file, tag, etc.) stored as files and (and this is the
> important part) packfiles. Packfiles, roughly speaking, encode only
> the changes to objects, so a significant amount of space is saved.
>
> If one disabled automatic garbage collection in a git repo and never
> ran git gc, you might end up with something close to storing the
> (compressed) version of each changed file, but in practice that
> doesn't happen.
>
>> I don't think all the release tags would be broken.
>
> As it turns out, they would. I have actually tried running a git
> filter-branch to remove the PO directory. As I suspected, it does
> rewrite all of the commits (thereby changing the commit IDs), and it
> would be necessary to rewrite the tags. But doing so would invalidate
> the release tags that have a GPG signature.

OK. I knew it would rewrite history. But I thought it would be more
efficient and cleaner, though. That said, my proposal was only a
proposal.

> Rewriting history like this is a sure way to get months of complaints
> and calls for help on koha-devel with questions like "why does git
> rebase origin no longer work".

If you think so.

>> And it would allow releasing localisation at a different pace... when
>> there is a need.
>
> There is nothing stopping us from releasing localizations on a
> different schedule now. Packaging does not have any necessary
> connection with how the Git repository is laid out.
> I will grant that all other things being equal, it would have been a
> valid choice to have the PO files live in a separate repository. In
> fact, we perhaps can still do so. One thing that Chris Cormack
> mentioned to me yesterday is that Pootle has a Git plugin which can
> automatically commit changes to PO files to a repository. That would
> be a bit dangerous to allow for the main Koha repository, but safe
> enough for a separate translations repository.
>
> Consequently, there is a possible compromise:
>
> [1] We seed a new repository for PO files.
> [2] We do a simple git rm misc/translator/po but *don't* rewrite
>     history.

Nice for me. This would break the commit history of previous PO files.
But who cares?

> This won't reduce the size of the main repository, of course, but
> achieves the separation. And, to be honest, it's mostly the
> possibility of integration with Pootle that would make me interested
> in this.
>
> However, since it is the translation manager who would be primarily
> affected by the split on a day-to-day basis, I would be interested in
> hearing from Frédéric as well as past TMs.
>
> Regards,
>
> Galen

Thanks for your hints and thoughts.
Regards.
--
Henri-Damien LAURENT

From gmcharlt at gmail.com  Mon Nov 15 14:15:52 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Mon, 15 Nov 2010 08:15:52 -0500
Subject: [Koha-devel] Externalizing PO files (Was Re: externalising translator and etc)
In-Reply-To: <4CE12C29.3060806@gmail.com>
References: <4CE12C29.3060806@gmail.com>
Message-ID:

Hi,

On Mon, Nov 15, 2010 at 7:48 AM, LAURENT Henri-Damien wrote:
>> [1] We seed a new repository for PO files.
>> [2] We do a simple git rm misc/translator/po but *don't* rewrite
>>     history.
> Nice for me. This would break the commit history of previous PO files.
> But who cares?

That doesn't follow. The PO file repository could be seeded from the
main repository as you had originally proposed, keeping the history
intact in the new repository. And, of course, doing a git rm of that
directory in the main repository will still keep the history intact
there as well.

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From gmcharlt at gmail.com  Mon Nov 15 14:34:24 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Mon, 15 Nov 2010 08:34:24 -0500
Subject: [Koha-devel] Time for reflexion
In-Reply-To: <4CE0F96D.3080909@biblibre.com>
References: <4CE0F96D.3080909@biblibre.com>
Message-ID:

Hi,

On Mon, Nov 15, 2010 at 4:12 AM, Paul Poulain wrote:
> Le 15/11/2010 10:05, Fridolyn SOMERS a écrit :
>> I'm used to Java with MVC, patterns and object/database mapping.
>>
>> We could imagine a step-by-step migration from Perl to Java, since a
>> Perl compiler is integrated into the JVM.
> Yikes !

Indeed, yikes! :) Perl is capable of supporting MVC and ORMs. Rewriting
Koha in Java, no matter how carefully planned, would be a
time-consuming process, and the result would almost certainly be worse
than simply writing a new Java-based ILS from scratch.

That is not to say, however, that there couldn't be some interesting
experiments to try getting Koha and Java code to interoperate, and
if/when Koha adopts Perl 6, Parrot might provide a bridge as well.
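As a small sketch of what "supporting MVC and ORMs" can look like in
Perl, here is a minimal DBIx::Class example; the class names and the
reduced column list are illustrative only, not an existing Koha schema.

package Koha::Schema::Result::Borrower;
use strict;
use warnings;
use base 'DBIx::Class::Core';

__PACKAGE__->table('borrowers');
__PACKAGE__->add_columns(
    qw( borrowernumber surname firstname branchcode ));
__PACKAGE__->set_primary_key('borrowernumber');

package Koha::Schema;
use strict;
use warnings;
use base 'DBIx::Class::Schema';

__PACKAGE__->register_class(
    Borrower => 'Koha::Schema::Result::Borrower' );

# Elsewhere: no hand-written SQL at the call site.
package main;
my $schema = Koha::Schema->connect(
    'dbi:mysql:database=koha', 'kohaadmin', 'secret' );
my @patrons = $schema->resultset('Borrower')
                     ->search( { branchcode => 'MAIN' } );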
Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From nengard at gmail.com  Mon Nov 15 14:40:48 2010
From: nengard at gmail.com (Nicole Engard)
Date: Mon, 15 Nov 2010 08:40:48 -0500
Subject: [Koha-devel] Official Koha Newsletter: Volume 1, Issue 11: November 2010
Message-ID:

This issue has a lot of links and should be read online to see all of
the external content:
http://koha-community.org/koha-newsletter-volume-1issue-11-november-2010

Official Koha Newsletter (ISSN 2153-8328)
Volume 1, Issue 11: November 2010

Table of Contents

* KohaCon10 Summary
  o Summary of KohaCon10 in Blog Posts
  o KohaCon10 Presentations
  o KohaCon10 Videos
  o KohaCon10 Pictures

KohaCon10

Summary of KohaCon10 in Blog Posts

KohaCon10 took place in Wellington, New Zealand last month to honor 10
years of Koha development, use and community! It seems that your trusty
documentation manager was unofficially named the official blogger of
the conference, so excuse a newsletter full of links to posts written
by me (and Ian Walls, my colleague who covered the sessions I
couldn't).

* Conference Proper
  o Keynote
  o Why a Librarian Loves Koha
  o Koha and Kete
  o Koha Adoption in Nigeria
  o Koha Governance
  o What's Coming in Koha 3.4
  o Fun with Library Data
  o Identifying eBooks
  o A cooperative view
  o Koha in Schools
  o Sharing is Good
  o History of Koha
  o Koha in Prison
  o Participation is Key
  o Koha in Malaysia
  o LAMP to Koha
  o Promoting Free Software in Libraries
  o Open Library and Koha
  o Koha: 10 years in 10 minutes
* Hackfest
  o Intro to Git
  o Git and Koha
  o Template Toolkit
  o MySQL and Postgres Differences
  o Debian Packaging for Koha
  o Koha in the Cloud
  o An Argument for a project Coding Style
  o Ideas for Koha 4.0
  o Koha Performance
* Summaries
  o KohaCon10 is Finished
  o KohaCon10 Additional Posts

KohaCon10 Presentations

Some KohaCon10 presentations were posted on SlideShare, others were
posted on personal websites, and others plotted on a map. If I've
missed any, please feel free to add them in the comments below.

* KohaCon10 on SlideShare
* KohaCon10 Presentation Map
* Specific Presentations
  o Koha at ASHS by Mark Osborne
  o Fun with Library Data by David Friggens

KohaCon10 Videos

KohaCon10 videos are still making their way online; here are a few that
are up so far. You can search for the rest as they're added on blip.tv
by using the keyword "kohacon10".

* Keynote by Rosalie Blake
* Why Librarians Love Koha by Lee Phillips
  o Video of Lee's librarians
* Kete and Koha by Walter McGinnis
* Koha uptake in Nigeria by Olugbenga Adara
* Library Data for Fun & Profit by David Friggens
* Koha: 10 years in 10 minutes

KohaCon10 Pictures

Most KohaCon10 pictures were posted on Flickr and tagged "kohacon10".
My favorites were taken by Kristina; she knows how to take pictures!!
Thank you to all who took pictures and shared them so that those back
home could share in the festivities with us, and so that we can all
look back and remember our time together fondly. What we didn't get was
a group picture that I could post here, so you'll have to go browse
through the Flickr pictures and spot your friends and community members
that way.

Newsletter edited by Nicole C. Engard, Koha Documentation Manager.
Please send future story ideas to me.
From ian.walls at bywatersolutions.com  Mon Nov 15 14:58:57 2010
From: ian.walls at bywatersolutions.com (Ian Walls)
Date: Mon, 15 Nov 2010 08:58:57 -0500
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CE0EFD8.9090906@tamil.fr>
References: <4CA98C01.8080709@biblibre.com> <20101111130948.A5CB2F7316@nail.towers.org.uk> <4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr> <4CE06965.8040307@tamil.fr> <4CE0D929.7030702@tamil.fr> <4CE0E378.4010903@tamil.fr> <4CE0EA2B.5050202@tamil.fr> <4CE0EFD8.9090906@tamil.fr>
Message-ID:

Just to throw in on something I read earlier in this thread, I'd say
that as a general practice for Koha going forward, we should pick a
single XML parser that can handle arbitrary schemas, and use that. I
would very much like to make Koha not just MARC-agnostic, but metadata
schema agnostic, and coding ourselves into a corner now (even for a
noticeable performance boost) would make life difficult later. As I
think the rest of the thread attests, there are other ways to improve
our XML parsing.

If this had already been resolved earlier in the conversation, I
apologize for redundancy; I haven't had my morning coffee yet.

-Ian
--
Ian Walls
Lead Development Specialist
ByWater Solutions
Phone # (888) 900-8944
http://bywatersolutions.com
ian.walls at bywatersolutions.com
Twitter: @sekjal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tajoli at cilea.it  Mon Nov 15 16:51:22 2010
From: tajoli at cilea.it (Zeno Tajoli)
Date: Mon, 15 Nov 2010 16:51:22 +0100
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To: <4CE04792.4020509@biblibre.com>
References: <4CE03B59.3050102@biblibre.com> <4CE04792.4020509@biblibre.com>
Message-ID: <4CE156FA.2090709@cilea.it>

Hi,

I have a request about "Guidelines for Patch Acceptance/Rejection",
specifically about "How do we send the patches to
koha-patches at lists.koha-community.org?"

Now I work on Windows 7 with GitExtensions ver. 2.25. To send a mail
with a patch I need to use the 'format patch' command. The result is
here:
http://lists.koha-community.org/pipermail/koha-patches/2010-November/013015.html

In fact the patch is in the attached file:
http://lists.koha-community.org/pipermail/koha-patches/attachments/20101115/38d3ceff/attachment-0001.obj

Do you think this is a correct way to send a patch?

Bye
Zeno Tajoli
--
Zeno Tajoli
CILEA - Segrate (MI)
tajoliAT_SPAM_no_prendiATcilea.it
(Anti-spam masked address; replace what is between the ATs with @)

From gmcharlt at gmail.com  Mon Nov 15 16:55:13 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Mon, 15 Nov 2010 10:55:13 -0500
Subject: [Koha-devel] Guidelines for Patch Acceptance/Rejection
In-Reply-To: <4CE156FA.2090709@cilea.it>
References: <4CE03B59.3050102@biblibre.com> <4CE04792.4020509@biblibre.com> <4CE156FA.2090709@cilea.it>
Message-ID:

Hi,

On Mon, Nov 15, 2010 at 10:51 AM, Zeno Tajoli wrote:
> In fact the patch is in the attached file:
> http://lists.koha-community.org/pipermail/koha-patches/attachments/20101115/38d3ceff/attachment-0001.obj
>
> Do you think this is a correct way to send a patch?

I've found it slightly quicker to process patch emails when the patch
is in the body of the message (e.g., as generated by the command-line
git format-patch), so personally I prefer that where possible.
However, since there are cases where that won't work (e.g., because the
patch contains a line which is too long, which git send-email won't
accept, or because the contributor may have an MTA setup at work that
doesn't readily support mailing patches directly with git send-email),
submitting the patches as attachments is also acceptable. Pull requests
from a public Git repository are also fine.

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From krichel at openlib.org  Mon Nov 15 21:11:41 2010
From: krichel at openlib.org (Thomas Krichel)
Date: Mon, 15 Nov 2010 21:11:41 +0100
Subject: [Koha-devel] Time for reflexion
In-Reply-To:
References: <4CE0F96D.3080909@biblibre.com>
Message-ID: <20101115201141.GD30416@openlib.org>

Folks, since this says it's about "reflexion", let me add my voice
here. I have to confess I don't know much about the internals of Koha,
and I have not used Solr. I understand that Solr is a full-text
indexer.

I think that to use this best, it would be advisable to first create a
set of static web pages, one for each item in the catalog, and then use
Solr on this set. Libraries can in this way expose their catalog to the
web (and have all the visibility benefits from that) and use the pages
for a second search engine via Solr. Such an approach to Solr indexing
could be an optional add-on that would not conflict with the current
internals of Koha.

I would not wish to see Solr as the primary engine since Zebra does so
many things that are specific to the library world.

Cheers,

Thomas Krichel http://openlib.org/home/krichel
               http://authorclaim.org/profile/pkr1
               skype: thomaskrichel

From paul.poulain at biblibre.com  Mon Nov 15 21:15:33 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Mon, 15 Nov 2010 21:15:33 +0100
Subject: [Koha-devel] Time for reflexion
In-Reply-To: <20101115201141.GD30416@openlib.org>
References: <4CE0F96D.3080909@biblibre.com> <20101115201141.GD30416@openlib.org>
Message-ID: <4CE194E5.8080800@biblibre.com>

Le 15/11/2010 21:11, Thomas Krichel a écrit :
> I would not wish to see Solr as the primary engine since Zebra does so
> many things that are specific to the library world.
>
Could you tell us which specific things you're thinking of ? Because,
until now, except for the Z39.50 server, I don't see anything Solr
can't handle.

thanks for your inputs
--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From krichel at openlib.org  Mon Nov 15 21:59:35 2010
From: krichel at openlib.org (Thomas Krichel)
Date: Mon, 15 Nov 2010 21:59:35 +0100
Subject: [Koha-devel] Time for reflexion
In-Reply-To: <4CE194E5.8080800@biblibre.com>
References: <4CE0F96D.3080909@biblibre.com> <20101115201141.GD30416@openlib.org> <4CE194E5.8080800@biblibre.com>
Message-ID: <20101115205935.GA31598@openlib.org>

Paul Poulain writes

> Le 15/11/2010 21:11, Thomas Krichel a écrit :
> > I would not wish to see Solr as the primary engine since Zebra does
> > so many things that are specific to the library world.
>
> Could you tell us which specific things you're thinking of ? Because,
> until now, except for the Z39.50 server, I don't see anything Solr
> can't handle.

OK. I still think that Z39.50 is important enough, but I may be wrong
on that.

But more importantly, I think that Koha ILS systems will do better to
globally expose all contents to the web, and free-ride on Google et
alii for indexing, than to set up more search features internally.
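To make the idea concrete, here is a rough sketch of such an export. It
is illustrative only: it assumes it is run inside a Koha environment
where C4::Context and C4::Biblio::GetMarcBiblio are available, and it
writes deliberately bare HTML into a static/ directory.

#!/usr/bin/perl
use strict;
use warnings;
use C4::Context;
use C4::Biblio;

-d 'static' or mkdir 'static' or die "mkdir: $!";

my $dbh = C4::Context->dbh;
my $biblionumbers =
    $dbh->selectcol_arrayref('SELECT biblionumber FROM biblio');

for my $biblionumber (@$biblionumbers) {
    my $record = GetMarcBiblio($biblionumber) or next;
    my $title  = $record->title  || '';
    my $author = $record->author || '';
    for ( $title, $author ) {    # minimal HTML escaping
        s/&/&amp;/g;
        s/</&lt;/g;
        s/>/&gt;/g;
    }
    open my $fh, '>:encoding(UTF-8)', "static/bib$biblionumber.html"
        or die "open: $!";
    print {$fh} "<html><head><title>$title</title></head><body>\n",
        "<h1>$title</h1>\n<p>$author</p>\n",
        "<p><a href=\"/cgi-bin/koha/opac-detail.pl?biblionumber=",
        "$biblionumber\">Full record</a></p>\n</body></html>\n";
    close $fh;
}

A real implementation would use a proper template and regenerate pages
as records change; the point is only that the catalog becomes plain,
crawlable HTML.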
Cheers,

Thomas Krichel http://openlib.org/home/krichel
               http://authorclaim.org/profile/pkr1
               skype: thomaskrichel

From paul.poulain at biblibre.com  Mon Nov 15 22:38:47 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Mon, 15 Nov 2010 22:38:47 +0100
Subject: [Koha-devel] Time for reflexion
In-Reply-To: <20101115205935.GA31598@openlib.org>
References: <4CE0F96D.3080909@biblibre.com> <20101115201141.GD30416@openlib.org> <4CE194E5.8080800@biblibre.com> <20101115205935.GA31598@openlib.org>
Message-ID: <4CE1A867.8060004@biblibre.com>

Le 15/11/2010 21:59, Thomas Krichel a écrit :
>> Could you tell us which specific things you're thinking of ? Because,
>> until now, except for the Z39.50 server, I don't see anything Solr
>> can't handle.
>>
> OK. I still think that Z39.50 is important enough, but I may be wrong
> on that.
>
nope, you're right. And fortunately, it seems there is a solution.

> But more importantly, I think that Koha ILS systems will do better to
> globally expose all contents to the web, and free-ride on Google et
> alii for indexing, than to set up more search features internally.
>
I don't understand what you mean here. Do you mean having all biblios
indexed by Google and relying on Google for search? Sorry, but it's
unclear to me (maybe it's because it's almost 11PM here, and I have to
go to bed ;-) )
--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From krichel at openlib.org  Tue Nov 16 08:29:05 2010
From: krichel at openlib.org (Thomas Krichel)
Date: Tue, 16 Nov 2010 08:29:05 +0100
Subject: [Koha-devel] Time for reflexion
In-Reply-To: <4CE1A867.8060004@biblibre.com>
References: <4CE0F96D.3080909@biblibre.com> <20101115201141.GD30416@openlib.org> <4CE194E5.8080800@biblibre.com> <20101115205935.GA31598@openlib.org> <4CE1A867.8060004@biblibre.com>
Message-ID: <20101116072905.GD1645@openlib.org>

Paul Poulain writes

> I don't understand what you mean here. Do you mean having all biblios
> indexed by Google and relying on Google for search?

Not only Google. Building a module that would expose all biblios to the
public web as static pages should be a priority for Koha development.
Then there is a host of engines that can be applied to web pages. So I
can search "moby dick jackson heights" and see if the book is at my
local public library.

> Sorry, but it's unclear to me (maybe it's because it's almost 11PM
> here, and I have to go to bed ;-) )

It's 2:28 here, so you may excuse my very early morning ramblings...

Cheers,

Thomas Krichel http://openlib.org/home/krichel
               http://authorclaim.org/profile/pkr1
               skype: thomaskrichel

From kmkale at anantcorp.com  Mon Nov 15 04:56:02 2010
From: kmkale at anantcorp.com (Koustubha Kale)
Date: Mon, 15 Nov 2010 09:26:02 +0530
Subject: [Koha-devel] [Koha-patches] Bug 5401: WYSWYG for Koha News
In-Reply-To:
References: <20101114181805.9B88FF7474@nail.towers.org.uk>
Message-ID:

On Mon, Nov 15, 2010 at 12:42 AM, Galen Charlton wrote:
> Hi,
>
> On Sun, Nov 14, 2010 at 1:18 PM, MJ Ray wrote:
>> Galen Charlton wrote:
>>> Since TinyMCE is used by only one page, if we decide to switch to
>>> elRTE (or to something else entirely), it won't be a big deal. Has
>>> anybody done a direct comparison?
>>
>> I've used TinyMCE and haven't used elRTE, but wouldn't this question
>> get better answers if asked on koha-devel?
> As the question was stated, yes, although if it turned out that
> Koustubha was just unaware that Koha was already using TinyMCE, a
> replacement patch using it instead of elRTE may have sufficed.

As it happens, I am aware of TinyMCE being already in Koha. But try as
I might, I could not get it to play nice (read: at all) with the News
page. I tried the jQuery variant of TinyMCE too, and no go. Also, I
don't like the fact that the image manager etc. of TinyMCE are only for
paid customers. elRTE worked pretty well. I had to do some tweaks to
its CSS, etc.

I have cleaned up the patch and divided it into two separate patches:
one for the elRTE library and one for the Koha changes I did. Now both
apply squeaky clean. I got all features like image and media insert
working now. Attaching both patches here. Will also upload them to
Bugzilla.

Regards,
Koustubha Kale
Anant Corporation

Contact Details :
Address : 103, Armaan Residency, R. W Sawant Road, Nr. Golden Dyes
Naka, Thane (w), Maharashtra, India, Pin : 400601.
TeleFax : +91-22-21720108, +91-22-21720109
Mobile : +919820715876
Website : http://www.anantcorp.com
Blog : http://www.anantcorp.com/blog/?author=2
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-Bug-5401-WYSWYG-for-Koha-News-Patch-1-eltre-1.1-libr.patch
Type: application/octet-stream
Size: 874527 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0002-Bug-5401-WYSWYG-for-Koha-News-Patch-2-Koha-files.patch
Type: application/octet-stream
Size: 4999 bytes
Desc: not available
URL:

From robin at catalyst.net.nz  Tue Nov 16 12:47:32 2010
From: robin at catalyst.net.nz (Robin Sheat)
Date: Wed, 17 Nov 2010 00:47:32 +1300
Subject: [Koha-devel] Time for reflexion
In-Reply-To: <20101116072905.GD1645@openlib.org>
References: <4CE1A867.8060004@biblibre.com> <20101116072905.GD1645@openlib.org>
Message-ID: <201011170047.42749.robin@catalyst.net.nz>

On Tuesday 16 November 2010 20:29:05, Thomas Krichel wrote:
> Not only Google. Building a module that would expose all biblios to
> the public web as static pages should be a priority for Koha
> development. Then there is a host of engines that can be applied to
> web pages. So I can search "moby dick jackson heights" and see if the
> book is at my local public library.

You can do that anyway. Static pages aren't necessary for this.

--
Robin Sheat
Catalyst IT Ltd.
+64 4 803 2204
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part.
URL:

From lculber at mdah.state.ms.us  Tue Nov 16 15:27:26 2010
From: lculber at mdah.state.ms.us (Linda Culberson)
Date: Tue, 16 Nov 2010 08:27:26 -0600
Subject: [Koha-devel] Koha - city field in patrons
Message-ID: <4CE294CE.3020302@mdah.state.ms.us>

This is only a suggestion, and not a formal RFP. When showing Koha to
some of our staff, I was asked why the state (region/province/
provincial district in the case of some countries) isn't separated from
the "borrowers.city" field. (Sorry, not enough coffee yet this
morning.) In other words, why doesn't the state (region/province/
provincial district) have its own field? When asked, I didn't have an
answer. I will admit that one of my borrower_attribute_types.codes is
STATECODE, so the state information is actually duplicated in our
database, but I need it for statistical reasons.
I apologize if I'm posting this to the wrong list, but I plead caffeine
deprivation.

Thanks.
--
Linda Culberson
lculber at mdah.state.ms.us
Archives and Records Services Division
Ms. Dept. of Archives & History
P. O. Box 571
Jackson, MS 39205-0571
Telephone: 601/576-6873
Facsimile: 601/576-6824

From gmcharlt at gmail.com  Tue Nov 16 15:58:08 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Tue, 16 Nov 2010 09:58:08 -0500
Subject: [Koha-devel] Koha - city field in patrons
In-Reply-To: <4CE294CE.3020302@mdah.state.ms.us>
References: <4CE294CE.3020302@mdah.state.ms.us>
Message-ID:

Hi,

On Tue, Nov 16, 2010 at 9:27 AM, Linda Culberson wrote:
> This is only a suggestion, and not a formal RFP. When showing Koha to
> some of our staff, I was asked why the state (region/province/
> provincial district in the case of some countries) isn't separated
> from the "borrowers.city" field. (Sorry, not enough coffee yet this
> morning.) In other words, why doesn't the state (region/province/
> provincial district) have its own field? When asked, I didn't have an
> answer. I will admit that one of my borrower_attribute_types.codes is
> STATECODE, so the state information is actually duplicated in our
> database, but I need it for statistical reasons.

A separate state/province/etc. column for each address would indeed be
useful for the countries that have one.

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From cnighswonger at foundations.edu  Tue Nov 16 16:21:08 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Tue, 16 Nov 2010 10:21:08 -0500
Subject: [Koha-devel] Koha - city field in patrons
In-Reply-To:
References: <4CE294CE.3020302@mdah.state.ms.us>
Message-ID:

On Tue, Nov 16, 2010 at 9:58 AM, Galen Charlton wrote:
> Hi,
>
> On Tue, Nov 16, 2010 at 9:27 AM, Linda Culberson wrote:
>> This is only a suggestion, and not a formal RFP. When showing Koha to
>> some of our staff, I was asked why the state (region/province/
>> provincial district in the case of some countries) isn't separated
>> from the "borrowers.city" field. (Sorry, not enough coffee yet this
>> morning.) In other words, why doesn't the state (region/province/
>> provincial district) have its own field? When asked, I didn't have an
>> answer. I will admit that one of my borrower_attribute_types.codes is
>> STATECODE, so the state information is actually duplicated in our
>> database, but I need it for statistical reasons.
>
> A separate state/province/etc. column for each address would indeed be
> useful for the countries that have one.

I'll second that.

Kind Regards,
Chris

From chrisc at catalyst.net.nz  Tue Nov 16 23:55:55 2010
From: chrisc at catalyst.net.nz (Chris Cormack)
Date: Wed, 17 Nov 2010 11:55:55 +1300
Subject: [Koha-devel] Branches awaiting QA
Message-ID: <20101116225555.GX4325@rorohiko>

Hi All

I have pushed up 7 branches on git.koha-community.org

new/awaiting_qa/biblibre_memb_circ_upd
new/awaiting_qa/biblibre_admin
new/awaiting_qa/biblibre_various
new/awaiting_qa/biblibre_serials
new/awaiting_qa/biblibre_cataloguing
new/awaiting_qa/biblibre_opac
new/awaiting_qa/biblibre_acquisitions

They are all awaiting QA. Another branch, new/biblibre_reports, has
been through initial QA, and feedback has been sent to BibLibre; once
that has been dealt with, that branch will be merged. The other
branches do need QA done on them, though, before they can move on.

Chris
--
Chris Cormack
Catalyst IT Ltd.
+64 4 803 2238
PO Box 11-053, Manners St, Wellington 6142, New Zealand
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL:

From frederic at tamil.fr  Wed Nov 17 06:10:53 2010
From: frederic at tamil.fr (=?ISO-8859-1?Q?Fr=E9d=E9ric_Demians?=)
Date: Wed, 17 Nov 2010 06:10:53 +0100
Subject: [Koha-devel] Branches awaiting QA
In-Reply-To: <20101116225555.GX4325@rorohiko>
References: <20101116225555.GX4325@rorohiko>
Message-ID: <4CE363DD.1040307@tamil.fr>

> I have pushed up 7 branches on git.koha-community.org
>
> new/awaiting_qa/biblibre_memb_circ_upd
> new/awaiting_qa/biblibre_admin
> new/awaiting_qa/biblibre_various
> new/awaiting_qa/biblibre_serials
> new/awaiting_qa/biblibre_cataloguing
> new/awaiting_qa/biblibre_opac
> new/awaiting_qa/biblibre_acquisitions

I've just tested OpacHiddenItems. Here are my comments:

- Some documentation would be welcome for the documentation manager and
  for testers. Some valuable info is found in a text file,
  opac/OpacHiddenItems.txt.

- A function (enabled_opac_search_views) references 2 sysprefs
  (OPACviewMARC and OPACviewISBD) which are defined neither in a .pref
  file nor in the DB. Some doc is needed there also.

- Two new sysprefs are referenced in get_template_and_user but are not
  defined anywhere else: IntranetXSLTDetailsDisplay,
  IntranetXSLTResultsDisplay. Not OPAC related.

- The TemplateEncoding syspref is revived. Isn't it supposed to be
  dead?

- The new OpacHiddenItems syspref is defined in the DB
  (kohastructure.sql & updatedatabase.pl), but it isn't defined in
  opac.pref.

- I can't display anything on the result page. I have a result count
  and a page navigation bar, but no records themselves.

- If I access the opac-detail.pl page directly with a biblionumber, I
  get an error message when XSLT is enabled. Without XSLT it works. I
  can see a new functionality to search subjects, but the display is
  strange. I suppose some CSS is missing to pop up a search when
  clicking on subjects.

So, from a functional point of view, I can understand that this branch
adds:

- A new way to add/show the MARC and ISBD views
- A new way to hide items on criteria defined in a YAML-encoded syspref
- An enhanced subject search functionality on the OPAC detail page (not
  in XSLT mode).

From colin.campbell at ptfs-europe.com  Wed Nov 17 14:30:06 2010
From: colin.campbell at ptfs-europe.com (Colin Campbell)
Date: Wed, 17 Nov 2010 13:30:06 +0000
Subject: [Koha-devel] Perl resources
Message-ID: <4CE3D8DE.9070903@ptfs-europe.com>

Hi, details of the talks at this year's London Perl Workshop have been
posted:
http://conferences.yapceurope.org/lpw2010/talks

One thing that may be of interest to some here is a new book on Perl
written in French:
http://www.pearson.fr/livre/?GCOI=27440100979970

Cheers
Colin
--
Colin Campbell
Chief Software Engineer,
PTFS Europe Limited
Content Management and Library Solutions
+44 (0) 208 366 1295 (phone)
+44 (0) 7759 633626 (mobile)
colin.campbell at ptfs-europe.com
skype: colin_campbell2

http://www.ptfs-europe.com

From frederic at tamil.fr  Wed Nov 17 16:44:25 2010
From: frederic at tamil.fr (=?ISO-8859-1?Q?Fr=E9d=E9ric_Demians?=)
Date: Wed, 17 Nov 2010 16:44:25 +0100
Subject: [Koha-devel] Perl resources
In-Reply-To: <4CE3D8DE.9070903@ptfs-europe.com>
References: <4CE3D8DE.9070903@ptfs-europe.com>
Message-ID: <4CE3F859.4010400@tamil.fr>

> One thing that may be of interest to some here is a new book on Perl
> written in French
> http://www.pearson.fr/livre/?GCOI=27440100979970

Merci cher ami !
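Coming back to the OpacHiddenItems branch reviewed above, here is a
minimal sketch of how a YAML-encoded hiding rule of that general shape
might be parsed and applied. The field names and rule syntax below are
illustrative only; the authoritative syntax is whatever
opac/OpacHiddenItems.txt documents.

use strict;
use warnings;
use YAML::Syck qw( Load );

# Example preference value: item field => list of values to hide.
my $pref = <<'YAML';
itype: [ REF, MAPS ]
location: [ STAFF ]
YAML

my $rules = Load($pref);    # hashref of field name => arrayref

# $item is a hashref of item column => value.
sub item_is_hidden {
    my ($item) = @_;
    for my $field ( keys %$rules ) {
        my $value = $item->{$field};
        next unless defined $value;
        return 1 if grep { $_ eq $value } @{ $rules->{$field} };
    }
    return 0;
}

print item_is_hidden( { itype => 'REF', location => 'GEN' } )
    ? "hidden\n"
    : "shown\n";    # prints "hidden"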
From cnighswonger at foundations.edu  Thu Nov 18 05:06:23 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Wed, 17 Nov 2010 23:06:23 -0500
Subject: [Koha-devel] Preliminary Release Notes for 3.2.1
Message-ID:

Hi all,

Below are the preliminary release notes for the pending 3.2.1 release
on 22/11/10. Please review them and reply with any corrections,
amendments, additions, etc.

Kind Regards,
Chris Nighswonger
Koha 3.2.x Release Maintainer

------------------------------------------------------------
RELEASE NOTES FOR KOHA 3.2.1 - 22 November 2010
========================================================================

Koha is the first free and open source software library automation
package (ILS). Development is sponsored by libraries of varying types
and sizes, volunteers, and support companies from around the world.

The website for the Koha project is http://koha-community.org/

Koha 3.2.1 can be downloaded from:
http://download.koha-community.org/koha-3.02.01.tar.gz

Installation instructions can be found at:
http://wiki.koha-community.org/wiki/Installation_Documentation

Koha 3.2.1 is a bugfix/maintenance release.

Bugs fixed in 3.2.1
======================

2122 Grayed out Fields not always visible
2567 008 editor falsely populating Illustrations and Nature of Contents positions
3013 Value builder for 006 and 008 need choices for all format types
3211 cataloging value plugin forms should be valid XHTML
3271 Missing message when adding to a list with no items selected
3811 biblios record.abs indexes 008 language as numeric (marc21)
4074 The 'Subject(s)' link(s) are malformed resulting in no search results
4141 reconcile 3.0.x and HEAD database updates for 3.2.0
4254 adding new patron advanced notice doesn't save
4261 keyword mapping should show which framework the map is for
4305 Amazon book covers do not work with ISBN13
4359 noItemTypeImages not active in OPAC
4423 Staff Client XSLT is just a copy of the OPAC one
4472 img tags in xslt broken after automatic translation
4498 Acq always shows '1 suggestions waiting'
4515 Few very small errors in opac-search.pl
4520 facets "show more" doesn't work
4866 Optionally enable Change event for item plugins
4912 After editing private list, user should be redirect to private lists
4913 Budget planning pages should show currency name instead of symbol
4924 Public/Internal notes missing in staff normal view
4933 Link to subfield's edit tab from MARC subfield structure admin summary
4963 sys prefs need date hints and/or picker
4979 Acq: input fields for new record are too short
4980 Acq: pull down 'Restrict access to:' for funds not translatable
4986 move serials prefs from cataloging tab
4991 Overhaul Calendar interface
5003 Can not search for organisation by name
5008 "Remove" link missing when viewing Cart in expanded "More details" view
5019 funds link doesn't go to list
5037 If patron category is empty it shouldn't show
5050 Staff client's language preference should be grouped with I18N/L10N preferences
5056 Untranslatable strings in members.js
5059 Inconsistent use of ordering price and list price in vendor form
5066 Incorrect use of localtime function when calling _session_log
5075 Terms not highlighted w/ xslt on
5082 Not translatable name of default framework 'Default' in MARCdetail.tmpl
5110 NewItemsDefaultLocation should be under cataloging
5112 Organisation does not show links to professional patrons
5114 Can't edit basket in Internet Explorer
5117 Misspelled word: Orgnisztion
5118 Misspelled word: Currencey
5119 Misspelled word: correspounding
5121 Misspelled words: stripts biographyl Begininning
5122 Misspelled word: Transfered/transfered
5123 Misspelled words: Depdending Commited flutucations
5124 Duplicate and Misspelled words: periodicy outputing
5128 Define default 9xx fields for Unimarc setup in all languages
5130 Misspelled words: biblographic delimeter extention
5132 Misspelled words: Acquistion Succesfully professionnal
5133 Misspelled words: reservior notifiying deleete
5134 Misspelled words: exisiting anomolies genereated
5135 Authorized value input maxlength should match table column
5136 Replace embedded SQL query with call to GetAuthorisedValues
5137 Remove obsolete code for counting issues by item type in circulation
5142 Untranslatable strings in tag review template
5146 Patron Import Requires Header Row
5149 Link to noItemTypeImages pref on item types is wrong
5151 Saved Report Breadcrumb in bold
5152 confirm buttons different styles on lists
5162 patron attributes 'new' link should create blank value
5163 holds to pull is titled pending holds
5168 'add holdings' should read 'add/edit items'
5171 edit items should be edit item when under 1 item
5175 The opac XSLTDetails view field Publisher: doesn't provide a hyperlink as the non XSLT view does.
5177 Descending sort search result defined by syspref doesn't work
5190 MARC and UNIMARC bibliographic fields documentation accesible when cataloging
5204 Unimarc xslt default templates are in french
5210 Function of back button on batch modification result page unclear
5214 undefined itype images cause broken image icon in request.pl
5219 Perltidy code
5221 Preselect tab containing itemtype/authorized value image in use
5223 'related subjects' should read 'subjects'
5224 Replace Shopping Basket with basket
5235 checkout receipt should include patron name
5236 "hide my tags" link does nothing
5237 Special characters in patron barcodes cause renewals to fail.
5243 C4::Search::SimpleSearch call in AuthoritiesMarc.pm
5254 no need to scroll left to right on acq z search
5258 Order should read 'order line' in receive table
5301 In C4/XSLT.pm, itemcallnumber can contain special XML characters
5308 subscriptionroutinglist table is too lax
5309 Progress bar inaccurate
5311 Language dropdown in advanced search broken?
5315 Authority search Templates refer to unused variable
5318 rank weight error in ccl.properties
5322 pwgen not a dependency for the packages
5326 when ExtendedPatronAttributes off error links to old sys prefs
5327 Unit tests required for all C4 modules
5363 Remove dependency on Cache::Memcached::Fast
5368 Browse Shelf link appears even if there isn't an itemcallnumber
5370 Fix all the references to koha.org
5372 Editing a bilbio record, existing value can be replace by default value
5380 an end to copy-and-paste-itis
5381 Fines in notices prints always 0.00
5385 Correct POD errors as highlighted by podchecker
5389 Business::ISBN should be marked as required dependency
5392 reserve/renewscript logging lots of warnings
5393 test case for verifying XML/XSLT files are well-formed
5396 UseTablesortForCirc syspref to toggle table sorter on circ/circulation.pl
5400 add test case to detect merge conflict markers in the code
5412 Double quotes in publisher name, no date cause search results links to break

Commits in 3.2.1 without a bug report:

* Adding a simple test for Service.pm
* create unit test files
* Create Unit Test for ImportBatch
* unit test stub for Z3950.pm
* Test modules compile
* Updated links in Main Page Help
* remove extraneous semicolon
* History updates
* history updates - recent releases
* Adding 3.2 Release Maintainer to Release Team List
* Adding possibility to cleanup_database.pl to purge only older sessions
* Display available error information during bulkmarcimport
* add missing help file for merging records
* (MT 2985) simplify CanBookBeReserved
* fix use of outdated boilerplate
* remove unused template include
* Misspells: deleteing -> deleting
* Misspell: Quanity -> Quantity

System requirements
======================

Changes since 3.0:

* The minimum version of Perl required is now 5.8.8.
* There are a number of new Perl module dependencies. Run
  ./koha_perl_deps.pl -u -m to get a list of any new modules to install
  during upgrade.

Upgrades
======================

The structure of the acquisitions tables has changed significantly from
3.0.x. In particular, the budget hierarchy is quite different. During
an upgrade, a new database table is created called fundmapping that
contains a record of how budgets were mapped. It is strongly
recommended that users of Koha 3.0.x acquisitions carefully review the
results of the upgrade before resuming ordering in Koha 3.2.x.

Documentation
======================

As of Koha 3.2, the Koha manual is now maintained in DocBook.
The home page for Koha documentation is
http://koha-community.org/documentation/

As of the date of these release notes, several translations of the Koha
manual are available:

English: http://koha-community.org/documentation/3-2-manual/
Spanish: http://koha-community.org/documentation/3-2-manual-es/
French: http://koha-community.org/documentation/3-2-manual-fr/

The Git repository for the Koha manual can be found at
http://git.koha-community.org/gitweb/?p=kohadocs.git;a=summary

Translations
======================

Complete or near-complete translations of the OPAC and staff interface
are available in this release for the following languages:

* Chinese
* Danish
* English (New Zealand)
* English (USA)
* French (France)
* French (Canada)
* German
* Greek
* Hindi
* Italian
* Norwegian
* Portuguese
* Spanish
* Turkish

Translation related commits new to 3.2.1:

* Staff interface .po file updates
* Russian and Ukranian opac language updates
* Ukranian and Russian syspref language updates
* German and italian language updates

Partial translations are available for various other languages. The
Koha team welcomes additional translations; please see
http://www.kohadocs.org/usersguide/apb.html for information about
translating Koha, and join the koha-translate list to volunteer:
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-translate

The most up-to-date translations can be found at:
http://translate.koha.org/

Release Team
======================

The release team for Koha 3.2 is

Release Manager: Galen Charlton
Documentation Manager: Nicole Engard
Translation Manager: Chris Cormack
Release Maintainer (3.0.x): Henri-Damien Laurent
Release Maintainer (3.2.x): Chris Nighswonger

Credits
======================

We thank the following individuals who contributed patches to Koha
3.2.1.

Nahuel Angelinetti
Tomás Cohen Arazi
Jared Camins-Esakov
Colin Campbell
Galen Charlton
Chris Cormack
Nate Curulla
Frédéric Demians
Andrew Elwell
Brian Engard
Nicole Engard
Magnus Enger
Katrin Fischer
Daniel Grobani
Kyle M Hall
Srdjan Jankovic
Bernardo Gonzalez Kriegel
Henri-Damien Laurent
Owen Leonard
Chris Nighswonger
Paul Poulain
MJ Ray
Liz Rea
Marcel de Rooy
Robin Sheat
ByWater Solutions
Zeno Tajoli
Ian Walls

We regret any omissions. If a contributor has been inadvertently
missed, please send a patch against these release notes to
koha-patches at lists.koha-community.org.

Revision control notes
======================

The Koha project uses Git for version control. The current development
version of Koha can be retrieved by checking out the master branch of
git://git.koha-community.org/koha.git

The branch for Koha 3.2.x (i.e., this version of Koha and future bugfix
releases) is 3.2.x.

The next major feature release of Koha will be Koha 3.4.0.

Bugs and feature requests
======================

Bug reports and feature requests can be filed at the Koha bug tracker
at http://bugs.koha-community.org/

Naku te rourou, nau te rourou, ka ora ai te iwi.

From oleonard at myacpl.org  Thu Nov 18 21:59:42 2010
From: oleonard at myacpl.org (Owen Leonard)
Date: Thu, 18 Nov 2010 15:59:42 -0500
Subject: [Koha-devel] Branches awaiting QA
In-Reply-To: <4CE363DD.1040307@tamil.fr>
References: <20101116225555.GX4325@rorohiko> <4CE363DD.1040307@tamil.fr>
Message-ID:

> - Some documentation would be welcome for the documentation manager
>   and for testers. Some valuable info is found in a text file,
>   opac/OpacHiddenItems.txt.

I'm curious why this directory and file have been created?
Do we not have a standard method for providing help for individual system
preferences?

> - I can't display anything on the result page. I have a result count and
>   a pages navigation bar but without the records themselves.

I get the same result.

> - If I reference directly opac-detail.pl page with a biblionumber, I get
>   an error message when XSLT is enabled. Without XSLT it works.

I get the same error whether XSLT is on or off:

Global symbol "$subscriptionsnumber" requires explicit package name at
/home/oleonard/kohaclone/opac/opac-detail.pl line 700.
Global symbol "$subscriptionsnumber" requires explicit package name at
/home/oleonard/kohaclone/opac/opac-detail.pl line 706.

-- Owen

--
Web Developer
Athens County Public Libraries
http://www.myacpl.org

From kohadevel at agogme.com Thu Nov 18 22:08:42 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Thu, 18 Nov 2010 21:08:42 -0000 (UTC)
Subject: [Koha-devel] MARC record size limit
In-Reply-To:
References: <01c73f7770978873e50aaa6d2996374f.squirrel@wmail.agogme.com>
<4CB48AEA.3050901@biblibre.com>
Message-ID: <4df46ea807d838f912bf9ca5598d5548.squirrel@wmail.agogme.com>

Reply inline:

On Tue, October 26, 2010 22:01, Fouts, Clay wrote:

> I did some (very limited) testing on storing and retrieving MARC in YAML.
> The results were not encouraging. IIRC, I just did a direct conversion of
> the MARC::Record object into YAML and back. Perhaps there's a way to
> optimize the formatting that would improve performance, but my testing
> showed sometimes even worse performance than XML.

I had not suggested YAML as a prospective data format for simple real-time
conversion to and from MARC or MARCXML. The potential value which I see in
YAML is for storing data types appropriately for special purposes, where a
record would exist in whatever primary form in Koha and also exist in
completely transformed forms for indexing, display, record exchange, etc.
Storing record data normalised for a particular purpose as strings,
numeric values, ordered lists, arrays, etc. where appropriate, after
parsing it from the original MARC record strings, is very different from
merely storing MARC in a different record syntax.

YAML provides data typing which neither MARC nor XML does. MARC, whether
in MARC communications format (ISO 2709) or MARCXML, does not provide
sufficient normalisation for many purposes. In considering YAML, I do not
exclude the possibility that creating a special normalised XML record
format which encodes data type in attributes etc. may be more easily
supported than YAML, or better in some other way. However, XML can be
easily embedded in YAML.

> MARCXML is a performance killer at this point, but there's no other
> apparent way to handle large bib records. The parsing is the issue, not
> the data transfer load. Perhaps cached BSON-formatted MARC::Record
> objects are a way out of this.

[...]
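To make the data typing point concrete, a normalised indexing record could
be serialised as YAML roughly as in the following Perl sketch. The field
names are invented for illustration only; nothing here is an existing Koha
format.

use YAML qw(Dump);

# A hypothetical index-optimised record: typed values parsed out of the
# original MARC record, not raw MARC field strings.
my $index_record = {
    biblionumber    => 42,               # integer, not a padded string
    languages       => [ 'eng', 'fre' ], # ordered list of ISO 639-2 codes
    translated_from => 'ger',            # plain string
};

print Dump($index_record);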
Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY  10003
USA

http://www.agogme.com
+1 212-674-3783

From kohadevel at agogme.com Thu Nov 18 22:16:54 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Thu, 18 Nov 2010 21:16:54 -0000 (UTC)
Subject: [Koha-devel] Record parsers
In-Reply-To:
References: <4CA98C01.8080709@biblibre.com>
<20101111130948.A5CB2F7316@nail.towers.org.uk>
<4CE01B9D.2090008@biblibre.com> <4CE04179.2060605@tamil.fr>
<4CE06965.8040307@tamil.fr> <4CE0D929.7030702@tamil.fr>
<4CE0E378.4010903@tamil.fr> <4CE0EA2B.5050202@tamil.fr>
<4CE0EFD8.9090906@tamil.fr>
Message-ID: <0f78bed6e62c499287e114ca98c6866a.squirrel@wmail.agogme.com>

[Much of the discussion of record parsers has very little to do with the
subject of Solr/Lucene specifically, under which the preceding discussion
of record parsers appeared.]

Reply inline:

Previous Subject: Re: [Koha-devel] Search Engine Changes : let's get some solr

1. GENERAL PURPOSE XML PARSER.

On Mon, November 15, 2010 13:58, Ian Walls wrote:

> Just to throw in on something I read earlier in this thread, I'd say that
> for a general practice with Koha going forward, we should pick a single
> XML parser that can handle arbitrary schemas, and use that.

Having a general purpose XML parser would be very useful as one step
towards greater generalisation and abstraction in Koha. Picking a single
XML parser for all use cases, however, might be an optimisation mistake
which we would come to regret in future.

2. METADATA SCHEMA AGNOSTIC RECORDS.

> I would very much like to make Koha not just MARC-agnostic, but metadata
> schema agnostic, and coding ourselves into a corner now (even for a
> noticeable performance boost) would make life difficult later. As I
> think the rest of the thread attests, there are other ways to improve
> our XML parsing.
>
> If this had already been resolved earlier in the conversation, I
> apologize for redundancy; I haven't had my morning coffee yet.

The issue of a general purpose XML parser had been considered tangentially
but without the appropriate context of metadata schema agnostic records. I
think that considering record parsers which are not MARC or MARCXML
specific is important for long term development.

2.1. INTERNAL RECORD FORMATS.

For some future development, Koha should not be dependent upon a metadata
exchange record syntax for anything other than lossless data input and
data output. An internal record syntax should be optimised for particular
library management system functions. The general state of Koha may not be
ready for the work which would be required to ensure that changing the
base record format would be lossless. However, we should be enabling the
future possibility by implementing abstraction when opportunities arise.

Frédéric Demians recognises the distinction between internal record use
for Koha and external record use for interfacing with the world. Previous
discussion in the "MARC record size limit" thread had also considered
non-XML record syntaxes such as YAML.

On Mon, November 15, 2010 05:56, Frédéric Demians wrote:

[...]

> It's a design choice. MARCXML is the Koha internal serialization format
> for MARC records. There is no obligation to conform to MARC21slim
> schema. We even could choose another serialization format as it has
> already been discussed. biblioitems.marcxml isn't open to the wide.

[...]

> And we could benefit of it if pure Perl parsing is a real performance
> gain. That is for the good reason.
However, the prospect of using a Koha-specific record-syntax parser for
record creation or modification scares me. I would much prefer some lower
efficiency, with validity constraints coming from a Perl module widely
tested outside of Koha.

2.1.1. REASON FOR INTERNAL RECORD FORMATS.

An example is a record format optimised for indexing, which would store
information such as the language of the material in a clear, appropriate
place for indexing. Records optimised for indexing would be different from
the primary form of the record, optimised for editing, and from an
alternate form optimised for display.

MARC often uses one or more of several different places, with varying
forms of presentation, for the same information. Examples include the
language of material, which may be multiple and refer to the language from
which material was translated; the muddle of recording content type,
material type, carrier type and their various relationships; the muddle of
date forms and similar numeric and sequential designators; the muddle of
ordered classification and similar hierarchical designators; transcribed
and natural language record content with no controlled vocabulary; etc.
[In the interest of time, I omit providing detailed examples.]

Consider the case of language of material. Enhancing records to use fixed
fields or fixed subfields for better indexing is insufficient to record
the complexity of language use cases. XPath indexing of MARC records
cannot cope well enough with all the possibilities. The information can be
parsed out of MARC records reliably into a record specially optimised for
indexing. Storing the information in MARC in an easily indexable manner is
the problem.
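To illustrate what such parsing could look like, here is a rough sketch
using the MARC::Record module. The helper name is invented, and the sketch
only covers MARC21 008/35-37 and repeatable 041$a, not the full range of
cases described above.

use MARC::Record;

# Hypothetical helper: gather the language codes scattered across a
# MARC21 record into one normalised, de-duplicated list suitable for an
# index-optimised record.
sub extract_languages {
    my ($record) = @_;
    my @langs;

    # Fixed field: 008 positions 35-37.
    if ( my $f008 = $record->field('008') ) {
        my $code = substr( $f008->data, 35, 3 );
        push @langs, $code if $code =~ /^[a-z]{3}$/;
    }

    # Variable field: each 041 may carry several $a subfields.
    for my $f041 ( $record->field('041') ) {
        push @langs, map { $_->[1] } grep { $_->[0] eq 'a' } $f041->subfields;
    }

    my %seen;
    return grep { !$seen{$_}++ } @langs;
}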
3. ENABLING FUTURE DEVELOPMENT.

Generalising and abstracting record parsing would enable future
development, such as records normalised for a particular purpose without
being dependent upon MARC. Developments which enable future work do not
require a commitment to a particular development idea, but they loosen the
practical constraints on development by leaving less work for that future
development to do.

Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY  10003
USA

http://www.agogme.com
+1 212-674-3783

From kohadevel at agogme.com Fri Nov 19 01:21:41 2010
From: kohadevel at agogme.com (Thomas Dukleth)
Date: Fri, 19 Nov 2010 00:21:41 -0000 (UTC)
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <20101111130948.A5CB2F7316@nail.towers.org.uk>
References: <20101111130948.A5CB2F7316@nail.towers.org.uk>
Message-ID: <64f8349ca776545a937dad6a8e85cede.squirrel@wmail.agogme.com>

Reply inline:

On Thu, November 11, 2010 13:09, MJ Ray wrote:
> LAURENT Henri-Damien wrote:
>> involved in the community as previously. Paul promised some POCs, here
>> is one available.
[...]
> Sorry for taking a while to look at this, but it raised so many
> questions in my mind when I first read it and I've been a bit busy,
> so I thought I'd leave it a while and see if some were covered by
> others. Some were (thanks!) but many are left, so here we go:
> What's a POC? Piece Of Code? (I assume it's not the C I'd usually
> mean in that abbreviation ;-) )

I do not remember seeing this question answered. The abbreviation POC had
confused me when I first saw it. I think that the recent use of POC in our
context is for 'proof of concept'. To my knowledge, use of that
abbreviation is uncommon in English even in the context of software
development. However, my experience may be lacking.

I personally try to minimise my use of abbreviations where I think that
some expected readers may not know them. When I need an abbreviation to
avoid excessive repetition, I give it as a parenthetical after the full
term on first use. I understand that most people would not take the time
to be as careful as I try to be when writing.

Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY  10003
USA

http://www.agogme.com
+1 212-674-3783

From nengard at gmail.com Fri Nov 19 15:50:48 2010
From: nengard at gmail.com (Nicole Engard)
Date: Fri, 19 Nov 2010 09:50:48 -0500
Subject: [Koha-devel] [koha-commits] main Koha release repository branch,
master, updated. v3.02.00-rc-20-g45604b8
In-Reply-To:
References:
Message-ID:

I am catching up on commits to update the manual and have a question about
this one. The homeorholdingbranch system preference says in its
description that it's going to be removed soon ... so how is this a new
preference? Or should the description be updated?

Thanks
Nicole

On Thu, Oct 21, 2010 at 8:12 PM, Git repo owner wrote:
> This is an automated email from the git hooks/post-receive script. It was
> generated because a ref change was pushed to the repository containing
> the project "main Koha release repository".
>
> The branch, master has been updated
>        via  45604b8d17e05adb98a98a8a2184bd290f72997a (commit)
>       from  4a7bb77e4fcd7ad78ef7ba6f984ef69fccd8a163 (commit)
>
> Those revisions listed above that are new to this repository have
> not appeared on any other notification email; so we list those
> revisions in full, below.
>
> - Log -----------------------------------------------------------------
> commit 45604b8d17e05adb98a98a8a2184bd290f72997a
> Author: Henri-Damien LAURENT
> Date:   Thu Sep 9 14:54:12 2010 -0400
>
>     (bug 3536) fix homeorholdingbranch on return
>
>     this patch creates a new systempreference "homeorholdingbranch"-like
>     used only for returns.
>
>     Signed-off-by: Galen Charlton
>
>     An additional edit was made to circ/returns.pl by Ian Walls of
>     ByWater Solutions to force the dialog message for the return to
>     use the branch specified by the new HomeOrHoldingBranchReturn system
>     preference, rather than always Homebranch.
>     Signed-off-by: Ian Walls
>     Signed-off-by: Galen Charlton
>
> -----------------------------------------------------------------------
>
> Summary of changes:
>  C4/Circulation.pm                                  |    3 ++-
>  admin/systempreferences.pl                         |    1 +
>  circ/returns.pl                                    |    3 ++-
>  installer/data/mysql/en/mandatory/sysprefs.sql     |    1 +
>  .../1-Obligatoire/unimarc_standard_systemprefs.sql |    1 +
>  installer/data/mysql/updatedatabase.pl             |    8 ++++++++
>  kohaversion.pl                                     |    4 +---
>  7 files changed, 16 insertions(+), 5 deletions(-)
>
>
> hooks/post-receive
> --
> main Koha release repository
> _______________________________________________
> koha-commits mailing list
> koha-commits at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-commits
>

From nengard at gmail.com Fri Nov 19 20:39:05 2010
From: nengard at gmail.com (Nicole Engard)
Date: Fri, 19 Nov 2010 14:39:05 -0500
Subject: [Koha-devel] RE : Re: [koha-commits] main Koha release repository
branch, master, updated. v3.02.00-rc-20-g45604b8
In-Reply-To:
References:
Message-ID:

If so, then we should probably change the description that says it's going
to be removed.
Nicole

On Fri, Nov 19, 2010 at 1:49 PM, Henri-Damien LAURENT wrote:
> As far as I know it is still used.
>
> On 19 Nov 2010, 3:50 PM, "Nicole Engard" wrote:
>
> I am catching up on commits to update the manual and have a question
> about this one. The homeorholdingbranch system preference says in its
> description that it's going to be removed soon ... so how is this a new
> preference? Or should the description be updated?
>
> Thanks
> Nicole
>
> On Thu, Oct 21, 2010 at 8:12 PM, Git repo owner wrote:
>> This is an automated email from the git hooks/post-receive script. It was
>> generated because a ref change was pushed to the repository containing
>> the project "main Koha release repository".
>>
>> The branch, master has been updated
>>        via  45604b8d17e05adb98a98a8a2184bd290f72997a (commit)
>>       from  4a7bb77e4fcd7ad78ef7ba6f984ef69fccd8a163 (commit)
>>
>> Those revisions listed above that are new to this repository have
>> not appeared on any other notification email; so we list those
>> revisions in full, below.
>>
>> - Log -----------------------------------------------------------------
>> commit 45604b8d17e05adb98a98a8a2184bd290f72997a
>> Author: Henri-Damien LAURENT
>> Date:   Thu Sep 9 14:54:12 2010 -0400
>>
>>     (bug 3536) fix homeorholdingbranch on return
>>
>>     this patch creates a new systempreference "homeorholdingbranch"-like
>>     used only for returns.
>>
>>     Signed-off-by: Galen Charlton
>>
>>     An additional edit was made to circ/returns.pl by Ian Walls of
>>     ByWater Solutions to force the dialog message for the return to
>>     use the branch specified by the new HomeOrHoldingBranchReturn system
>>     preference, rather than always Homebranch.
>>     Signed-off-by: Ian Walls
>>     Signed-off-by: Galen Charlton
>>
>> -----------------------------------------------------------------------
>>
>> Summary of changes:
>>  C4/Circulation.pm                                  |    3 ++-
>>  admin/systempreferences.pl                         |    1 +
>>  circ/returns.pl                                    |    3 ++-
>>  installer/data/mysql/en/mandatory/sysprefs.sql     |    1 +
>>  .../1-Obligatoire/unimarc_standard_systemprefs.sql |    1 +
>>  installer/data/mysql/updatedatabase.pl             |    8 ++++++++
>>  kohaversion.pl                                     |    4 +---
>>  7 files changed, 16 insertions(+), 5 deletions(-)
>>
>>
>> hooks/post-receive
>> --
>> main Koha release repository
>> _______________________________________________
>> koha-commits mailing list
>> koha-commits at lists.koha-community.org
>> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-commits
>>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

From frederic at tamil.fr Sat Nov 20 08:25:43 2010
From: frederic at tamil.fr (Frederic Demians)
Date: Sat, 20 Nov 2010 08:25:43 +0100
Subject: [Koha-devel] Branches awaiting QA
In-Reply-To:
References: <20101116225555.GX4325@rorohiko> <4CE363DD.1040307@tamil.fr>
Message-ID: <4CE777F7.6050703@tamil.fr>

We're talking about this branch:

new/awaiting_qa/biblibre_opac

>> - If I reference directly opac-detail.pl page with a biblionumber, I get
>>   an error message when XSLT is enabled. Without XSLT it works.
>> I get the same error whether XSLT is on or off:
>>
>> Global symbol "$subscriptionsnumber" requires explicit package name at
>> /home/oleonard/kohaclone/opac/opac-detail.pl line 700.
>> Global symbol "$subscriptionsnumber" requires explicit package name at
>> /home/oleonard/kohaclone/opac/opac-detail.pl line 706.

Yes, you're correct. I remember now that I had to do a quick fix to make
it work. I will send a patch for that.

We need some feedback from this branch's submitters if we want to do
collective QA.
--
Frédéric

From altaf.mahmud at gmail.com Sat Nov 20 12:52:20 2010
From: altaf.mahmud at gmail.com (Altaf Mahmud)
Date: Sat, 20 Nov 2010 17:52:20 +0600
Subject: [Koha-devel] Borrowers' password encryption method in database
Message-ID:

Hi,

I want to know how Koha saves its borrowers' passwords in the database. Is
it a one-way conversion? For example, if a password is saved as
'4QrcOUm6Wau+VuBX8g+IPg', can I decode it back to its original text, which
was '123456'?

Thanks.

--
Altaf Mahmud
System Programmer
Ayesha Abed Library
BRAC University
Bangladesh.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cnighswonger at foundations.edu Sat Nov 20 13:01:11 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Sat, 20 Nov 2010 07:01:11 -0500
Subject: [Koha-devel] Borrowers' password encryption method in database
In-Reply-To:
References:
Message-ID:

2010/11/20 Altaf Mahmud

> Hi,
>
> I want to know how Koha saves its borrowers' passwords in the database.
> Is it a one-way conversion? For example, if a password is saved as
> '4QrcOUm6Wau+VuBX8g+IPg', can I decode it back to its original text,
> which was '123456'?

They are stored as MD5 hashes and you cannot "decode" them as such. IIRC,
what you must do is make an MD5 hash of the password and then compare the
two hashes. They should be the same.

Kind Regards,
Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From altaf.mahmud at gmail.com Sat Nov 20 13:13:17 2010
From: altaf.mahmud at gmail.com (Altaf Mahmud)
Date: Sat, 20 Nov 2010 18:13:17 +0600
Subject: [Koha-devel] Borrowers' password encryption method in database
In-Reply-To:
References:
Message-ID:

Thanks Chris. Is it exactly that MD5 function? My SQL version is 5.1.41;
SELECT MD5 ('123456') returns another string, not that one. Is there
anything else I have to do?

On Sat, Nov 20, 2010 at 6:01 PM, Chris Nighswonger
<cnighswonger at foundations.edu> wrote:

> 2010/11/20 Altaf Mahmud
>
>> Hi,
>>
>> I want to know how Koha saves its borrowers' passwords in the database.
>> Is it a one-way conversion? For example, if a password is saved as
>> '4QrcOUm6Wau+VuBX8g+IPg', can I decode it back to its original text,
>> which was '123456'?
>
> They are stored as MD5 hashes and you cannot "decode" them as such. IIRC,
> what you must do is make an MD5 hash of the password and then compare the
> two hashes. They should be the same.
>
> Kind Regards,
> Chris

--
Altaf Mahmud
System Programmer
Ayesha Abed Library
BRAC University
Bangladesh.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gmcharlt at gmail.com Sat Nov 20 16:21:04 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Sat, 20 Nov 2010 10:21:04 -0500
Subject: [Koha-devel] Borrowers' password encryption method in database
In-Reply-To:
References:
Message-ID:

Hi,

2010/11/20 Altaf Mahmud:
> Thanks Chris. Is it exactly that MD5 function? My SQL version is 5.1.41;
> SELECT MD5 ('123456') returns another string, not that one. Is there
> anything else I have to do?

It's not quite MD5; it's actually md5_base64 as implemented by the
Digest::MD5 Perl module.
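So verifying a password is just a matter of recomputing the hash and
comparing it with the stored value. A minimal sketch in Perl:

use Digest::MD5 qw(md5_base64);

my $entered = '123456';
my $stored  = '4QrcOUm6Wau+VuBX8g+IPg';  # hash as stored by Koha

# The stored hash cannot be reversed; recompute and compare instead.
print "password matches\n" if md5_base64($entered) eq $stored;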
Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From altaf.mahmud at gmail.com Sat Nov 20 19:01:32 2010
From: altaf.mahmud at gmail.com (Altaf Mahmud)
Date: Sun, 21 Nov 2010 00:01:32 +0600
Subject: [Koha-devel] [Koha] Borrowers' password encryption method in database
In-Reply-To: <5.2.1.1.2.20101120121250.01f91250@localhost>
References: <5.2.1.1.2.20101120121250.01f91250@localhost>
Message-ID:

I found a solution for PHP:

$str = rtrim (base64_encode (pack ('H*', md5 ('123456'))), '=');

This returns the desired string in 22 characters; rtrim is used to discard
the trailing '=' pad characters. Thanks for the help.

2010/11/20 Paul

> At 07:01 AM 11/20/2010 -0500, Chris Nighswonger wrote:
>
> 2010/11/20 Altaf Mahmud
> Hi,
> I want to know how Koha saves its borrowers' passwords in the database.
> Is it a one-way conversion? For example, if a password is saved as
> '4QrcOUm6Wau+VuBX8g+IPg', can I decode it back to its original text which
> was '123456'?
> They are stored as MD5 hashes and you cannot "decode" them as such. IIRC,
> what you must do is make an MD5 hash of the password and then compare the
> two hashes. They should be the same.
>
> The above is not a "pure" MD5 hash [32 character hexadecimal value]; for
> 123456, it would be
>
> e10adc3949ba59abbe56e057f20f883e
>
> However, the decrypt function at does return 123456
> for 4QrcOUm6Wau+VuBX8g+IPg
>
> Best - Paul
>
> _______________________________________________
> Koha mailing list http://koha-community.org
> Koha at lists.katipo.co.nz
> http://lists.katipo.co.nz/mailman/listinfo/koha

--
Altaf Mahmud
System Programmer
Ayesha Abed Library
BRAC University
Bangladesh.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cnighswonger at foundations.edu Sun Nov 21 00:49:10 2010
From: cnighswonger at foundations.edu (Chris Nighswonger)
Date: Sat, 20 Nov 2010 18:49:10 -0500
Subject: [Koha-devel] Koha 3.2.1 Released
Message-ID:

Greetings:

It is with pleasure that I announce the release of Koha 3.2.1 two days
ahead of the scheduled release date.

The package can be retrieved from:

http://download.koha-community.org/koha-3.02.01.tar.gz

You can use the following checksum and signature files to verify the
download:

http://download.koha-community.org/koha-3.02.01.tar.gz.MD5
http://download.koha-community.org/koha-3.02.01.tar.gz.MD5.asc
http://download.koha-community.org/koha-3.02.01.tar.gz.sig

Come and get it!

RELEASE NOTES FOR KOHA 3.2.1 - 20 November 2010
========================================================================

Koha is the first free and open source software library automation package
(ILS). Development is sponsored by libraries of varying types and sizes,
volunteers, and support companies from around the world.

The website for the Koha project is http://koha-community.org/

Koha 3.2.1 can be downloaded from:
http://download.koha-community.org/koha-3.02.01.tar.gz

Installation instructions can be found at:
http://wiki.koha-community.org/wiki/Installation_Documentation

Koha 3.2.1 is a bugfix/maintenance release.

Highlights of 3.2.1
======================

Some of the higher profile bugs addressed in this release are:

* Further fixes smoothing the upgrade process from 3.0.x to 3.2.x. (Bug 4141)
* Subject links in search results are now properly formed. (Bug 4074)
* Advanced notice preferences set during new patron creation are now
saved correctly. (Bug 4254)
* Img tags in XSLT templates are now handled correctly during automatic
translation. (Bug 4472)
* The "Show More" link listed with facets has been made to work. (Bug 4520)
* Amazon book covers will now work with ISBN13. (Bug 4305)
* The staff client XSLT templates are now unique. (Bug 4423)
* The Unimarc XSLT templates are now translated into English. (Bug 5204)

Other notable fixes in this release include:

* A number of spelling corrections
* Various fixes to formerly non-translatable phrases
* Some tweaks to fix display problems in Internet Explorer

This release also contains a variety of minor enhancements improving
Koha's interface.

Maintenance activities performed in this release include:

* A number of new and improved test cases
* Removal of unused dependencies

Bugs fixed in 3.2.1
======================

2122 Grayed out Fields not always visible
2567 008 editor falsely populating Illustrations and Nature of Contents positions
3013 Value builder for 006 and 008 need choices for all format types
3211 cataloging value plugin forms should be valid XHTML
3271 Missing message when adding to a list with no items selected
3811 biblios record.abs indexes 008 language as numeric (marc21)
4074 The 'Subject(s)' link(s) are malformed resulting in no search results
4141 reconcile 3.0.x and HEAD database updates for 3.2.0
4254 adding new patron advanced notice doesn't save
4261 keyword mapping should show which framework the map is for
4305 Amazon book covers do not work with ISBN13
4359 noItemTypeImages not active in OPAC
4423 Staff Client XSLT is just a copy of the OPAC one
4472 img tags in xslt broken after automatic translation
4498 Acq always shows '1 suggestions waiting'
4515 Few very small errors in opac-search.pl
4520 facets "show more" doesn't work
4866 Optionally enable Change event for item plugins
4912 After editing private list, user should be redirected to private lists
4913 Budget planning pages should show currency name instead of symbol
4924 Public/Internal notes missing in staff normal view
4933 Link to subfield's edit tab from MARC subfield structure admin summary
4963 sys prefs need date hints and/or picker
4979 Acq: input fields for new record are too short
4980 Acq: pull down 'Restrict access to:' for funds not translatable
4986 move serials prefs from cataloging tab
4991 Overhaul Calendar interface
5003 Can not search for organisation by name
5004 Do not block deletion of cities when instances exist in borrowers table
5008 "Remove" link missing when viewing Cart in expanded "More details" view
5019 funds link doesn't go to list
5037 If patron category is empty it shouldn't show
5050 Staff client's language preference should be grouped with I18N/L10N preferences
5056 Untranslatable strings in members.js
5059 Inconsistent use of ordering price and list price in vendor form
5066 Incorrect use of localtime function when calling _session_log
5075 Terms not highlighted w/ xslt on
5082 Not translatable name of default framework 'Default' in MARCdetail.tmpl
5110 NewItemsDefaultLocation should be under cataloging
5112 Organisation does not show links to professional patrons
5114 Can't edit basket in Internet Explorer
5117 Misspelled word: Orgnisztion
5118 Misspelled word: Currencey
5119 Misspelled word: correspounding
5121 Misspelled words: stripts biographyl Begininning
5122 Misspelled word: Transfered/transfered
5123 Misspelled words: Depdending Commited flutucations
5124 Duplicate and Misspelled words: periodicy outputing
5128 Define default 9xx fields for Unimarc setup in all languages
5130 Misspelled words: biblographic delimeter extention
5132 Misspelled words: Acquistion Succesfully professionnal
5133 Misspelled words: reservior notifiying deleete
5134 Misspelled words: exisiting anomolies genereated
5135 Authorized value input maxlength should match table column
5136 Replace embedded SQL query with call to GetAuthorisedValues
5137 Remove obsolete code for counting issues by item type in circulation
5142 Untranslatable strings in tag review template
5146 Patron Import Requires Header Row
5149 Link to noItemTypeImages pref on item types is wrong
5151 Saved Report Breadcrumb in bold
5152 confirm buttons different styles on lists
5162 patron attributes 'new' link should create blank value
5163 holds to pull is titled pending holds
5168 'add holdings' should read 'add/edit items'
5171 'edit items' should be 'edit item' when under 1 item
5175 The opac XSLTDetails view field Publisher: doesn't provide a hyperlink as the non XSLT view does.
5177 Descending sort search result defined by syspref doesn't work
5190 MARC and UNIMARC bibliographic fields documentation accessible when cataloging
5204 Unimarc xslt default templates are in French
5210 Function of back button on batch modification result page unclear
5214 undefined itype images cause broken image icon in request.pl
5219 Perltidy code
5221 Preselect tab containing itemtype/authorized value image in use
5223 'related subjects' should read 'subjects'
5224 Replace Shopping Basket with basket
5235 checkout receipt should include patron name
5236 "hide my tags" link does nothing
5237 Special characters in patron barcodes cause renewals to fail.
5243 C4::Search::SimpleSearch call in AuthoritiesMarc.pm
5254 no need to scroll left to right on acq z search
5258 Order should read 'order line' in receive table
5301 In C4/XSLT.pm, itemcallnumber can contain special XML characters
5308 subscriptionroutinglist table is too lax
5309 Progress bar inaccurate
5311 Language dropdown in advanced search broken?
5315 Authority search Templates refer to unused variable
5318 rank weight error in ccl.properties
5322 pwgen not a dependency for the packages
5326 when ExtendedPatronAttributes off error links to old sys prefs
5327 Unit tests required for all C4 modules
5363 Remove dependency on Cache::Memcached::Fast
5368 Browse Shelf link appears even if there isn't an itemcallnumber
5370 Fix all the references to koha.org
5372 Editing a biblio record, existing value can be replaced by default value
5380 an end to copy-and-paste-itis
5381 Fines in notices always print 0.00
5385 Correct POD errors as highlighted by podchecker
5389 Business::ISBN should be marked as required dependency
5392 reserve/renewscript logging lots of warnings
5393 test case for verifying XML/XSLT files are well-formed
5396 UseTablesortForCirc syspref to toggle table sorter on circ/circulation.pl
5400 add test case to detect merge conflict markers in the code
5412 Double quotes in publisher name, no date cause search results links to break

Commits in 3.2.1 without a referenced bug report:

* Adding a simple test for Service.pm
* create unit test files
* Create Unit Test for ImportBatch
* unit test stub for Z3950.pm
* Test modules compile
* Updated links in Main Page Help
* remove extraneous semicolon
* History updates
* history updates - recent releases
* Adding 3.2 Release Maintainer to Release Team List
* Adding possibility to cleanup_database.pl to purge only older sessions
* Display available error information during bulkmarcimport
* add missing help file for merging records
* (MT 2985) simplify CanBookBeReserved
* fix use of outdated boilerplate
* remove unused template include
* Misspells: deleteing -> deleting
* Misspell: Quanity -> Quantity

System requirements
======================

Changes since 3.0:

* The minimum version of Perl required is now 5.8.8.
* There are a number of new Perl module dependencies. Run
  ./koha_perl_deps.pl -u -m to get a list of any new modules to install
  during upgrade.

Upgrades
======================

The structure of the acquisitions tables has changed significantly from
3.0.x. In particular, the budget hierarchy is quite different. During an
upgrade, a new database table called fundmapping is created that contains
a record of how budgets were mapped. It is strongly recommended that users
of Koha 3.0.x acquisitions carefully review the results of the upgrade
before resuming ordering in Koha 3.2.x.

Documentation
======================

As of Koha 3.2, the Koha manual is now maintained in DocBook.
The home page for Koha documentation is
http://koha-community.org/documentation/

As of the date of these release notes, several translations of the Koha
manual are available:

English: http://koha-community.org/documentation/3-2-manual/
Spanish: http://koha-community.org/documentation/3-2-manual-es/
French: http://koha-community.org/documentation/3-2-manual-fr/

The Git repository for the Koha manual can be found at
http://git.koha-community.org/gitweb/?p=kohadocs.git;a=summary

Translations
======================

Complete or near-complete translations of the OPAC and staff interface
are available in this release for the following languages:

* Chinese
* Danish
* English (New Zealand)
* English (USA)
* French (France)
* French (Canada)
* German
* Greek
* Hindi
* Italian
* Norwegian
* Portuguese
* Spanish
* Turkish

Translation related commits new to 3.2.1:

* Staff interface .po file updates
* Russian and Ukrainian OPAC language updates
* Ukrainian and Russian syspref language updates
* German and Italian language updates

Partial translations are available for various other languages. The Koha
team welcomes additional translations; please see
http://www.kohadocs.org/usersguide/apb.html for information about
translating Koha, and join the koha-translate list to volunteer:
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-translate

The most up-to-date translations can be found at:
http://translate.koha.org/

Release Team
======================

The release team for Koha 3.2 is

Release Manager: Galen Charlton
Documentation Manager: Nicole Engard
Translation Manager: Chris Cormack
Release Maintainer (3.0.x): Henri-Damien Laurent
<henridamien.laurent at biblibre.com>
Release Maintainer (3.2.x): Chris Nighswonger

Credits
======================

We thank the following individuals who contributed patches to Koha 3.2.1.

Nahuel Angelinetti
Tomás Cohen Arazi
Jared Camins-Esakov
Colin Campbell
Galen Charlton
Chris Cormack
Nate Curulla
Frédéric Demians
Andrew Elwell
Brian Engard
Nicole Engard
Magnus Enger
Katrin Fischer
Daniel Grobani
Kyle M Hall
Srdjan Jankovic
Bernardo Gonzalez Kriegel
Henri-Damien Laurent
Owen Leonard
Chris Nighswonger
Paul Poulain
MJ Ray
Liz Rea
Marcel de Rooy
Robin Sheat
ByWater Solutions
Zeno Tajoli
Ian Walls

We regret any omissions. If a contributor has been inadvertently missed,
please send a patch against these release notes to
koha-patches at lists.koha-community.org.

The 3.2.x Release Maintainer especially thanks the 3.4 Release Team and
all who participated in QA for their diligent labors in testing and
pushing well over 100 patches since the 3.2.0 release. Their work very
much makes possible the release of 3.2.1 on its announced schedule.

Revision control notes
======================

The Koha project uses Git for version control. The current development
version of Koha can be retrieved by checking out the master branch of
git://git.koha-community.org/koha.git

The branch for Koha 3.2.x (i.e., this version of Koha and future bugfix
releases) is 3.2.x.

The next major feature release of Koha will be Koha 3.4.0.

Bugs and feature requests
======================

Bug reports and feature requests can be filed at the Koha bug tracker at
http://bugs.koha-community.org/

Naku te rourou, nau te rourou, ka ora ai te iwi.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From henridamien.laurent at biblibre.com Mon Nov 22 11:02:23 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Mon, 22 Nov 2010 11:02:23 +0100
Subject: [Koha-devel] Branches awaiting QA
In-Reply-To: <4CE777F7.6050703@tamil.fr>
References: <20101116225555.GX4325@rorohiko> <4CE363DD.1040307@tamil.fr>
<4CE777F7.6050703@tamil.fr>
Message-ID: <4CEA3FAF.9030202@biblibre.com>

On 20/11/2010 08:25, Frederic Demians wrote:
> We're talking about this branch:
>
> new/awaiting_qa/biblibre_opac
>
>>> - If I reference directly opac-detail.pl page with a biblionumber, I get
>>>   an error message when XSLT is enabled. Without XSLT it works.
>>
>> I get the same error whether XSLT is on or off:
>>
>> Global symbol "$subscriptionsnumber" requires explicit package name at
>> /home/oleonard/kohaclone/opac/opac-detail.pl line 700.
>> Global symbol "$subscriptionsnumber" requires explicit package name at
>> /home/oleonard/kohaclone/opac/opac-detail.pl line 706.
>
> Yes, you're correct. I remember now that I had to do a quick fix to make
> it work. I will send a patch for that.
>
> We need some feedback from this branch's submitters if we want to do
> collective QA.
> --
> Frédéric

Well, patches welcome: either send them to the list or directly to me. I
wanted to work on that but had no time. So if you want to send me patches
or suggestions, please do.
--
Henri-Damien LAURENT

From henridamien.laurent at biblibre.com Mon Nov 22 11:19:05 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Mon, 22 Nov 2010 11:19:05 +0100
Subject: [Koha-devel] RE : Re: [koha-commits] main Koha release repository
branch, master, updated. v3.02.00-rc-20-g45604b8
In-Reply-To:
References:
Message-ID: <4CEA4399.6070505@biblibre.com>

On 19/11/2010 20:39, Nicole Engard wrote:
> If so, then we should probably change the description that says it's
> going to be removed.
>
> Nicole

Yes, we should.
--
Henri-Damien LAURENT

From paul.poulain at biblibre.com Wed Nov 24 17:47:08 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Wed, 24 Nov 2010 17:47:08 +0100
Subject: [Koha-devel] BibLibre RFC's on Wiki Needing Updating
In-Reply-To:
References:
Message-ID: <4CED418C.9080401@biblibre.com>

On 11/11/2010 20:44, Chris Nighswonger wrote:
> Hi Paul,

Hi Chris,

> Here is a list of RFC's I noted on the wiki which need to be updated
> using the new RFC template found here:
>
> http://wiki.koha-community.org/wiki/Category:RFCs#RFC_Template
>
> Each of these is also lacking enhancement requests in bugzilla (or at
> least they fail to mention them).
Fixed, except for http://wiki.koha-community.org/wiki/Money_Calculations,
which is not a BibLibre RFC (I just added a comment).

>
> Kind Regards,
> Chris
>
> http://wiki.koha-community.org/wiki/Duplicate_card_button
> http://wiki.koha-community.org/wiki/AllowOnShelfHolds
> http://wiki.koha-community.org/wiki/Batch_Modification_Biblio_Record_Level
> http://wiki.koha-community.org/wiki/Enhancements_to_the_display_of_extended_patron_attributes_in_circ
> http://wiki.koha-community.org/wiki/Ergonomics_of_smart-rules_and_circulation_parameters
> http://wiki.koha-community.org/wiki/Holds_managed_in_smart-rules
> http://wiki.koha-community.org/wiki/Hook-up_with_3M_sensitization_solution
> http://wiki.koha-community.org/wiki/Improvements_to_Authority_Searching
> http://wiki.koha-community.org/wiki/Money_Calculations
> http://wiki.koha-community.org/wiki/SSO_CAS_improvements
> http://wiki.koha-community.org/wiki/Transfers_-_enhancements

--
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From fridolyn.somers at gmail.com Wed Nov 24 18:13:54 2010
From: fridolyn.somers at gmail.com (Fridolyn SOMERS)
Date: Wed, 24 Nov 2010 18:13:54 +0100
Subject: [Koha-devel] Facets performance
In-Reply-To:
References:
Message-ID:

Just a little bump.
Any feedback is welcome.

On Fri, Oct 22, 2010 at 12:47 AM, Chris Cormack wrote:

> 2010/10/21 Fridolyn SOMERS:
> > Hi,
>
> Hi Fridolyn
>
> > I have posted a proposed patch for Bug 3154:
> > http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3154
> >
> > It's about the fact that facets computation is limited to the records
> > in the search results page.
> > I think I've found a good way to improve the facets extraction
> > performance.
> >
> > Any comment or modification is welcome.
>
> This looks really promising, I probably won't get a chance to try it
> out until after Kohacon.
> But I'm looking forward to giving it some testing
>
> Thank you
>
> Chris

--
Fridolyn SOMERS
ICT engineer
PROGILONE - Lyon - France
fridolyn.somers at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chrisc at catalyst.net.nz Wed Nov 24 20:32:02 2010
From: chrisc at catalyst.net.nz (Chris Cormack)
Date: Thu, 25 Nov 2010 08:32:02 +1300
Subject: [Koha-devel] Facets performance
In-Reply-To:
References:
Message-ID: <20101124193201.GI4325@rorohiko>

* Fridolyn SOMERS (fridolyn.somers at gmail.com) wrote:
> Just a little bump.
> Any feedback is welcome.

Hi Fridolyn

Can you please send your patch to the
koha-patches at lists.koha-community.org mailing list? Then my scripts will
pick it up and I can apply it easily :)

Chris

>
> On Fri, Oct 22, 2010 at 12:47 AM, Chris Cormack wrote:
>
>   2010/10/21 Fridolyn SOMERS:
>   > Hi,
>
>   Hi Fridolyn
>
>   > I have posted a proposed patch for Bug 3154:
>   > http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3154
>   >
>   > It's about the fact that facets computation is limited to the records
>   > in the search results page.
>   > I think I've found a good way to improve the facets extraction
>   > performance.
>   >
>   > Any comment or modification is welcome.
>
>   This looks really promising, I probably won't get a chance to try it
>   out until after Kohacon.
>   But I'm looking forward to giving it some testing
>
>   Thank you
>   Chris
>
> --
> Fridolyn SOMERS
> ICT engineer
> PROGILONE - Lyon - France
> fridolyn.somers at gmail.com

> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

--
Chris Cormack
Catalyst IT Ltd.
+64 4 803 2238
PO Box 11-053, Manners St, Wellington 6142, New Zealand
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL:

From Peter.Mwangi at negst.edu Thu Nov 25 10:09:31 2010
From: Peter.Mwangi at negst.edu (Peter Mwangi)
Date: Thu, 25 Nov 2010 12:09:31 +0300
Subject: [Koha-devel] How do I generate more than 10,000 Koha report
records using Excel?
In-Reply-To: <20101116225555.GX4325@rorohiko>
References: <20101116225555.GX4325@rorohiko>
Message-ID: <46B19C089065BD40A97CE67B957C8F7AE1EA69@adam.negst.net>

Dear all,

We have been trying to generate reports from Koha. However, after building
our report, we download it in Excel and only get 10,000 records (rows). We
expect to get at least 40,000 records. Where could we be going wrong, or
rather, what are we doing or not doing? Is this a Koha- or Excel-related
challenge?
_______________________________
Peter Mwangi Gichiri
Systems Librarian, NEGST/ Africa International University
P.O.Box, 24686 - 00502 Karen, Nairobi
Tel: 254 (02) 882104/5; Cell: 0721 - 621692
Email: peter.mwangi at negst.edu
Website: www.negst.edu

From ian.bays at ptfs-europe.com Thu Nov 25 10:33:30 2010
From: ian.bays at ptfs-europe.com (Ian Bays)
Date: Thu, 25 Nov 2010 09:33:30 +0000
Subject: [Koha-devel] How do I generate more than 10,000 Koha report
records using Excel?
In-Reply-To: <46B19C089065BD40A97CE67B957C8F7AE1EA69@adam.negst.net>
References: <20101116225555.GX4325@rorohiko>
<46B19C089065BD40A97CE67B957C8F7AE1EA69@adam.negst.net>
Message-ID: <4CEE2D6A.5000509@ptfs-europe.com>

Hi Peter,

I believe there is a built-in default limit of 10000 for SQL reports, "to
protect users from downloading too much or filling the server with an
error"... I also understand that if you add a LIMIT statement to the SQL,
then that takes precedence. So, for example, you could add " LIMIT 99999"
to your SQL and see if that works.

It might be nice to have something either on-screen or in help about this.

Good luck.

Ian

PS: if you are subscribed to the normal Koha list, that might be a better
list for this question rather than development...

On 25/11/2010 09:09, Peter Mwangi wrote:
>
> Dear all,
> We have been trying to generate reports from Koha. However, after
> building our report, we download it in Excel and only get 10,000 records
> (rows). We expect to get at least 40,000 records. Where could we be
> going wrong, or rather, what are we doing or not doing? Is this a Koha-
> or Excel-related challenge?
>
> _______________________________
> Peter Mwangi Gichiri
> Systems Librarian, NEGST/ Africa International University P.O.Box, 24686
> - 00502 Karen, Nairobi
> Tel: 254 (02) 882104/5; Cell: 0721 - 621692
> Email: peter.mwangi at negst.edu
> Website: www.negst.edu
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

--
Ian Bays
Director of Projects
PTFS Europe.com
mobile: +44 (0) 7774995297
phone: +44 (0) 800 756 6803
skype: ian.bays
email: ian.bays at ptfs-europe.com

From M.de.Rooy at rijksmuseum.nl Thu Nov 25 13:51:50 2010
From: M.de.Rooy at rijksmuseum.nl (Marcel de Rooy)
Date: Thu, 25 Nov 2010 12:51:50 +0000
Subject: [Koha-devel] FW: [Koha-patches] [PATCH] Bug 4959 (Language
inconsistencies on basket groups; skip confirmation when closing basket.)
Message-ID: <809BE39CD64BFD4EB9036172EBCCFA311A49B2@S-MAIL-1B.rijksmuseum.intra>

Hi,

I would welcome any input from you on the following proposed patch.

Regards,
Marcel

-----Original Message-----
From: koha-patches-bounces at lists.koha-community.org
[mailto:koha-patches-bounces at lists.koha-community.org] On behalf of
Marcel de Rooy
Sent: Thursday, 25 November 2010 13:50
To: koha-patches at lists.koha-community.org
Subject: [Koha-patches] [PATCH] Bug 4959 (Language inconsistencies on
basket groups; skip confirmation when closing basket.)

This patch adds a new pref, SkipBasketConfirmations. It adds the option to
skip confirmations on closing and reopening a basket. If you skip the
confirmation, you do not create a new basket group. The confusing line
"Create a purchase order" (when closing a basket) is replaced by "Attach
basket to a new basket group with the same name". A warning for a null
value on basketgroupid is fixed.
---
 acqui/basket.pl                                    |    7 +++++--
 .../prog/en/modules/acqui/basket.tmpl              |    9 ++++-----
 .../en/modules/admin/preferences/acquisitions.pref |    6 ++++++
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/acqui/basket.pl b/acqui/basket.pl
index 005fcb9..57308b1 100755
--- a/acqui/basket.pl
+++ b/acqui/basket.pl
@@ -90,6 +90,9 @@ if (!defined $op) {
     $op = q{};
 }

+my $skip_pref= C4::Context->preference("SkipBasketConfirmations") || 0;
+$template->param( skip_confirm_reopen => 1) if $skip_pref;
+
 if ( $op eq 'delete_confirm' ) {
     my $basketno = $query->param('basketno');
     DelBasket($basketno);
@@ -144,7 +147,7 @@ if ( $op eq 'delete_confirm' ) {
     print GetBasketAsCSV($query->param('basketno'));
     exit;
 } elsif ($op eq 'close') {
-    my $confirm = $query->param('confirm');
+    my $confirm = $query->param('confirm') || $skip_pref;
     if ($confirm) {
         my $basketno = $query->param('basketno');
         my $booksellerid = $query->param('booksellerid');
@@ -197,7 +200,7 @@ if ( $op eq 'delete_confirm' ) {
     if ($basket->{closedate} && haspermission({ flagsrequired => { acquisition => 'group_manage'} })) {
         $basketgroups = GetBasketgroups($basket->{booksellerid});
         for my $bg ( @{$basketgroups} ) {
-            if ($basket->{basketgroupid} == $bg->{id}){
+            if ($basket->{basketgroupid} && $basket->{basketgroupid} == $bg->{id}){
                 $bg->{default} = 1;
             }
         }
diff --git a/koha-tmpl/intranet-tmpl/prog/en/modules/acqui/basket.tmpl b/koha-tmpl/intranet-tmpl/prog/en/modules/acqui/basket.tmpl
index 9079b6d..3084b48 100644
--- a/koha-tmpl/intranet-tmpl/prog/en/modules/acqui/basket.tmpl
+++ b/koha-tmpl/intranet-tmpl/prog/en/modules/acqui/basket.tmpl
@@ -39,8 +39,8 @@
@@ -313,7 +312,7

Are you sure you want to close basket ?

- +

" name="basketno" /> diff --git a/koha-tmpl/intranet-tmpl/prog/en/modules/admin/preferences/acquisitions.pref b/koha-tmpl/intranet-tmpl/prog/en/modules/admin/preferences/acquisitions.pref index 2a900f0..41ebe78 100644 --- a/koha-tmpl/intranet-tmpl/prog/en/modules/admin/preferences/acquisitions.pref +++ b/koha-tmpl/intranet-tmpl/prog/en/modules/admin/preferences/acquisitions.pref @@ -18,6 +18,12 @@ Acquisitions: - The default tax rate is - pref: gist - (enter in numeric form, 0.12 for 12%) + - + - When closing or reopening a basket, + - pref: SkipBasketConfirmations + choices: + 0: always ask for confirmation. + 1: do not ask for confirmation. Printing: - -- 1.6.0.6 _______________________________________________ Koha-patches mailing list Koha-patches at lists.koha-community.org http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-patches website : http://www.koha-community.org/ git : http://git.koha-community.org/ bugs : http://bugs.koha-community.org/ From colin.campbell at ptfs-europe.com Thu Nov 25 19:03:58 2010 From: colin.campbell at ptfs-europe.com (Colin Campbell) Date: Thu, 25 Nov 2010 18:03:58 +0000 Subject: [Koha-devel] FW: [Koha-patches] [PATCH] Bug 4959 (Language inconsistencies on basket groups; skip confirmation when closing basket.) In-Reply-To: <809BE39CD64BFD4EB9036172EBCCFA311A49B2@S-MAIL-1B.rijksmuseum.intra> References: <809BE39CD64BFD4EB9036172EBCCFA311A49B2@S-MAIL-1B.rijksmuseum.intra> Message-ID: <4CEEA50E.2030301@ptfs-europe.com> On 25/11/10 12:51, Marcel de Rooy wrote: > Hi, > I would welcome any input from you on the following proposed patch. I wonder if rather than having the option as SkipBasketConfirmation it should be BasketConfirmation so that it could be 'Say yes to do confirmations' Rather than say 'Say yes to not do confirmations' (skipping impling not doing something) Cheers Colin -- Colin Campbell Chief Software Engineer, PTFS Europe Limited Content Management and Library Solutions +44 (0) 208 366 1295 (phone) +44 (0) 7759 633626 (mobile) colin.campbell at ptfs-europe.com skype: colin_campbell2 http://www.ptfs-europe.com From colin.campbell at ptfs-europe.com Fri Nov 26 12:06:30 2010 From: colin.campbell at ptfs-europe.com (Colin Campbell) Date: Fri, 26 Nov 2010 11:06:30 +0000 Subject: [Koha-devel] Deleting Orders Message-ID: <4CEF94B6.4040201@ptfs-europe.com> Hi, I'm just looking at the branch of acquisitions fixes from Biblibre one fix in there is an addition to DelOrder to delete any items associated with the order on its deletion. However it goes further and deletes the associated bib record if it has no items. thus: > diff --git a/C4/Acquisition.pm b/C4/Acquisition.pm > index c1b837b..54b01fa 100644 > --- a/C4/Acquisition.pm > +++ b/C4/Acquisition.pm > @@ -28,6 +28,7 @@ use C4::Suggestions; > use C4::Biblio; > use C4::Debug; > use C4::SQLHelper qw(InsertInTable); > +use C4::Items; > > use Time::localtime; > use HTML::Entities; > @@ -1219,6 +1220,10 @@ sub DelOrder { > my $sth = $dbh->prepare($query); > $sth->execute( $bibnum, $ordernumber ); > $sth->finish; > + > + my @itemnumbers = GetItemnumbersFromOrder( $ordernumber ); > + C4::Items::DelItem( $dbh, $bibnum, $_ ) for @itemnumbers; > + DelBiblio( $dbh, $bibnum ) if C4::Items::GetItemsCount( $bibnum ) == 0; > } My gut feeling is that deleting the biblio automatically at this point is not a good idea but I thought I'd throw it open for comments. 
Cheers
Colin

--
Colin Campbell
Chief Software Engineer,
PTFS Europe Limited
Content Management and Library Solutions
+44 (0) 208 366 1295 (phone)
+44 (0) 7759 633626 (mobile)
colin.campbell at ptfs-europe.com
skype: colin_campbell2

http://www.ptfs-europe.com

From laurenthdl at alinto.com Fri Nov 26 12:20:57 2010
From: laurenthdl at alinto.com (LAURENT Henri-Damien)
Date: Fri, 26 Nov 2010 12:20:57 +0100
Subject: [Koha-devel] Deleting Orders
In-Reply-To: <4CEF94B6.4040201@ptfs-europe.com>
References: <4CEF94B6.4040201@ptfs-europe.com>
Message-ID: <4CEF9819.5000806@alinto.com>

On 26/11/2010 12:06, Colin Campbell wrote:
> Hi,
> I'm just looking at the branch of acquisitions fixes from BibLibre. One
> fix in there is an addition to DelOrder to delete any items associated
> with the order on its deletion. However, it goes further and deletes the
> associated bib record if it has no items, thus:

Hi Colin.

Thanks for your feedback. This is a discussion we had at the time we did
that. We had two options: either

- Delete only the order and leave it to the cataloger to delete those
items. But he would have NO information on WHICH items to delete, unless
there were a flag on the item....

OR

- Delete the items in order to remove them from the catalogue, knowing
that items linked to orders are items which were created by the order.

We took this action because it looked safer and involved only the
acquisitions person. The one who created the items should be the one
responsible for them until they are received and out of his hands.

Comments welcome.
--
Henri-Damien LAURENT
BibLibre

>> diff --git a/C4/Acquisition.pm b/C4/Acquisition.pm
>> index c1b837b..54b01fa 100644
>> --- a/C4/Acquisition.pm
>> +++ b/C4/Acquisition.pm
>> @@ -28,6 +28,7 @@ use C4::Suggestions;
>>  use C4::Biblio;
>>  use C4::Debug;
>>  use C4::SQLHelper qw(InsertInTable);
>> +use C4::Items;
>>
>>  use Time::localtime;
>>  use HTML::Entities;
>> @@ -1219,6 +1220,10 @@ sub DelOrder {
>>      my $sth = $dbh->prepare($query);
>>      $sth->execute( $bibnum, $ordernumber );
>>      $sth->finish;
>> +
>> +    my @itemnumbers = GetItemnumbersFromOrder( $ordernumber );
>> +    C4::Items::DelItem( $dbh, $bibnum, $_ ) for @itemnumbers;
>> +    DelBiblio( $dbh, $bibnum ) if C4::Items::GetItemsCount( $bibnum ) == 0;
>> }
> My gut feeling is that deleting the biblio automatically at this point
> is not a good idea, but I thought I'd throw it open for comments.

From Katrin.Fischer at bsz-bw.de Fri Nov 26 12:39:33 2010
From: Katrin.Fischer at bsz-bw.de (Fischer, Katrin)
Date: Fri, 26 Nov 2010 12:39:33 +0100
Subject: [Koha-devel] Deleting Orders
In-Reply-To: <4CEF9819.5000806@alinto.com>
References: <4CEF94B6.4040201@ptfs-europe.com> <4CEF9819.5000806@alinto.com>
Message-ID: <028B1A54D03E7B4482CDCA4EC8F06BFDD7648B@Bodensee.bsz-bw.de>

Perhaps this behaviour should be an option.
- when I cancel an order (from receive screen after I closed the basket) - when I delete single titles from an order (leave open or reopen basket and delete single lines) I am not sure the items were always deleted and I had no time to test the acq branch yet :( > -----Original Message----- > From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel- > bounces at lists.koha-community.org] On Behalf Of LAURENT Henri-Damien > Sent: Friday, November 26, 2010 12:21 PM > To: koha-devel at lists.koha-community.org > Subject: Re: [Koha-devel] Deleting Orders > > Le 26/11/2010 12:06, Colin Campbell a ?crit : > > Hi, > > I'm just looking at the branch of acquisitions fixes from Biblibre > one > > fix in there is an addition to DelOrder to delete any items > associated > > with the order on its deletion. However it goes further and deletes > the > > associated bib record if it has no items. thus: > Hi Collin. > Thanks for your feedback. > This is a discussion we had at the moment we did that. > We had two options : > either > - Delete only the order and leave it to the cataloger to delete those > items. But he would have NO information on WHICH items to delete, > unless > there would be a flag on the item.... > > OR > - Delete the items in order to remove them from catalogue, knowing that > having items linked in orders are items which are created by the order. > So this action was taken. Because looked safer and involved only the > acquisition person. The one that has created the items should be the > one > responsible for them until they are received and quite get out of his > hands. > > Comments welcome. > -- > Henri-Damien LAURENT > BibLibre > > >> diff --git a/C4/Acquisition.pm b/C4/Acquisition.pm > >> index c1b837b..54b01fa 100644 > >> --- a/C4/Acquisition.pm > >> +++ b/C4/Acquisition.pm > >> @@ -28,6 +28,7 @@ use C4::Suggestions; > >> use C4::Biblio; > >> use C4::Debug; > >> use C4::SQLHelper qw(InsertInTable); > >> +use C4::Items; > >> > >> use Time::localtime; > >> use HTML::Entities; > >> @@ -1219,6 +1220,10 @@ sub DelOrder { > >> my $sth = $dbh->prepare($query); > >> $sth->execute( $bibnum, $ordernumber ); > >> $sth->finish; > >> + > >> + my @itemnumbers = GetItemnumbersFromOrder( $ordernumber ); > >> + C4::Items::DelItem( $dbh, $bibnum, $_ ) for @itemnumbers; > >> + DelBiblio( $dbh, $bibnum ) if C4::Items::GetItemsCount( $bibnum > ) == 0; > >> } > > My gut feeling is that deleting the biblio automatically at this > point > > is not a good idea but I thought I'd throw it open for comments. > > _______________________________________________ > Koha-devel mailing list > Koha-devel at lists.koha-community.org > http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel > website : http://www.koha-community.org/ > git : http://git.koha-community.org/ > bugs : http://bugs.koha-community.org/ > From colin.campbell at ptfs-europe.com Fri Nov 26 13:24:14 2010 From: colin.campbell at ptfs-europe.com (Colin Campbell) Date: Fri, 26 Nov 2010 12:24:14 +0000 Subject: [Koha-devel] Deleting Orders In-Reply-To: <028B1A54D03E7B4482CDCA4EC8F06BFDD7648B@Bodensee.bsz-bw.de> References: <4CEF94B6.4040201@ptfs-europe.com> <4CEF9819.5000806@alinto.com> <028B1A54D03E7B4482CDCA4EC8F06BFDD7648B@Bodensee.bsz-bw.de> Message-ID: <4CEFA6EE.2080501@ptfs-europe.com> On 26/11/10 11:39, Fischer, Katrin wrote: > Perhaps this behaviour should be an option. 
>
> - delete items automatically when cancelling an order
> - do not delete items when cancelling an order

Deletion of the biblio as well strikes me as counterproductive: very
often the site may want the biblio even though there are no items
attached, e.g. with orders for serial subscriptions, certain types of
electronic material, or even just a change in sourcing.

Cheers
Colin

-- 
Colin Campbell
Chief Software Engineer,
PTFS Europe Limited
Content Management and Library Solutions
+44 (0) 208 366 1295 (phone) +44 (0) 7759 633626 (mobile)
colin.campbell at ptfs-europe.com
skype: colin_campbell2
http://www.ptfs-europe.com

From Katrin.Fischer at bsz-bw.de Fri Nov 26 13:53:53 2010
From: Katrin.Fischer at bsz-bw.de (Fischer, Katrin)
Date: Fri, 26 Nov 2010 13:53:53 +0100
Subject: [Koha-devel] Deleting Orders
In-Reply-To: <4CEFA6EE.2080501@ptfs-europe.com>
References: <4CEF94B6.4040201@ptfs-europe.com> <4CEF9819.5000806@alinto.com>
	<028B1A54D03E7B4482CDCA4EC8F06BFDD7648B@Bodensee.bsz-bw.de>
	<4CEFA6EE.2080501@ptfs-europe.com>
Message-ID: <028B1A54D03E7B4482CDCA4EC8F06BFDD764B8@Bodensee.bsz-bw.de>

I think we need another option or a different way of ordering for
serials/electronic journals/databases etc. Perhaps a checkbox "don't
create item(s) for this order" would be a possibility, together with
being able to edit the quantity in that case. So you could order
something without being forced to create items.

> -----Original Message-----
> From: koha-devel-bounces at lists.koha-community.org [mailto:koha-devel-
> bounces at lists.koha-community.org] On Behalf Of Colin Campbell
> Sent: Friday, November 26, 2010 1:24 PM
> To: koha-devel at lists.koha-community.org
> Subject: Re: [Koha-devel] Deleting Orders
>
> On 26/11/10 11:39, Fischer, Katrin wrote:
> > Perhaps this behaviour should be an option:
> >
> > - delete items automatically when cancelling an order
> > - do not delete items when cancelling an order
>
> Deletion of the biblio as well strikes me as counterproductive: very
> often the site may want the biblio even though there are no items
> attached, e.g. with orders for serial subscriptions, certain types of
> electronic material, or even just a change in sourcing.
>
> Cheers
> Colin
>
> --
> Colin Campbell
> Chief Software Engineer,
> PTFS Europe Limited
> Content Management and Library Solutions
> +44 (0) 208 366 1295 (phone) +44 (0) 7759 633626 (mobile)
> colin.campbell at ptfs-europe.com
> skype: colin_campbell2
>
> http://www.ptfs-europe.com
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

From henridamien.laurent at biblibre.com Fri Nov 26 15:43:55 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Fri, 26 Nov 2010 15:43:55 +0100
Subject: [Koha-devel] Circular references
Message-ID: <4CEFC7AB.8080303@biblibre.com>

Hi,
During KohaCon, Plack and data persistence were discussed and some work
was done on that. Ian posted a Perl script; Marc did another script,
which works quite fast. I want to publicise this in order to contribute
code, so that eventually all the circular references can be solved.
Find enclosed both scripts: CIRCDeps.pl from Marc and dep_check.pl.
We will try to keep you informed of what we achieve with that.
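For readers who cannot open the attachments, the module-level idea is
roughly the following. This is a sketch written for illustration, not
the attached CIRCDeps.pl or dep_check.pl; the paths and output format
are assumptions:

    use strict;
    use warnings;
    use File::Find;

    # Build a C4::* dependency graph from "use C4::..." lines, then do
    # a depth-first walk looking for cycles. Run from a Koha checkout.
    my %deps;
    find(
        sub {
            return unless /\.pm$/;
            ( my $mod = $File::Find::name ) =~ s{^.*?(C4/.*)\.pm$}{$1};
            $mod =~ s{/}{::}g;
            open my $fh, '<', $_ or return;
            while ( my $line = <$fh> ) {
                $deps{$mod}{$1} = 1 if $line =~ /^\s*use\s+(C4::[\w:]+)/;
            }
        },
        'C4'
    );

    sub walk {
        my ( $node, $seen, $path ) = @_;
        if ( $seen->{$node}++ ) {
            print join( ' -> ', @$path, $node ), "\n";    # report the cycle
            return;
        }
        walk( $_, {%$seen}, [ @$path, $node ] )
            for sort keys %{ $deps{$node} || {} };
    }
    walk( $_, {}, [] ) for sort keys %deps;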
-- 
Henri-Damien LAURENT
-------------- next part --------------
A non-text attachment was scrubbed...
Name: CIRCDeps.pl
Type: application/x-perl
Size: 1214 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dep_check.pl
Type: application/x-perl
Size: 3311 bytes
Desc: not available
URL: 

From henridamien.laurent at biblibre.com Fri Nov 26 15:48:58 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Fri, 26 Nov 2010 15:48:58 +0100
Subject: [Koha-devel] Facets performance
In-Reply-To: 
References: 
Message-ID: <4CEFC8DA.80300@biblibre.com>

On 24/11/2010 18:13, Fridolyn SOMERS wrote:
> Little up.
> Any feedback is welcome.

In my opinion, since you ask, it may really slow down the process of
displaying results, and it still would not be true facets, since it just
takes more records: if more than 500 records are returned, it would
truncate the results. Nevertheless, there is something interesting here:
we could also benefit from more accurate counts of the biblios displayed
when people are using HideLostItems. That said, if it is coupled with
some kind of caching of the result pages, maybe it is worth the
overhead. My 2 cents.
-- 
Henri-Damien LAURENT
BibLibre

From henridamien.laurent at biblibre.com Fri Nov 26 15:49:59 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Fri, 26 Nov 2010 15:49:59 +0100
Subject: [Koha-devel] use cases for authentication [was externalising translator and etc]
Message-ID: <4CEFC917.1040301@biblibre.com>

Hi,
As I noted in a previous email, there are (in my opinion) some design
flaws in authentication:

>> It gets
>> > worse if you consider changes such as adding or modifying an
>> > authentication module, for which the default configuration changes are
>> > almost invariably associated with code changes.
> Same for authentication. I consider this a design flaw.
> We should split authentication and identification, and manage
> identification in a modular way, so that administrators would just
> have to edit configuration files in order to make correct mappings,
> and not dive into the code, change it and commit (if they know git
> well enough...)

I would like to gather some use cases of the hardcoded customisations
people had to implement in Auth_with_ldap to make it work for them. We
have some: most consist of date calculations for users, hash-value
mappings for categories or other data, or encoding. We also implemented
searching against multiple branches (patch to be sent soon; it still
needs some sanitising). But I would be quite interested in more use
cases, so that we could build a set of test cases to implement in order
to generalise the thing properly.
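As one illustration of what a configuration-driven approach could look
like; this is a sketch only, and the mapping table, field names and
transform hooks are hypothetical, not the current Auth_with_ldap
interface:

    use POSIX ();
    use Net::LDAP;    # entries below are Net::LDAP::Entry objects

    # Declarative attribute map with optional transform callbacks, so a
    # site can express date calculations and category mappings in
    # configuration instead of patching code. All names illustrative.
    my %ldap_map = (
        firstname    => { attr => 'givenname' },
        surname      => { attr => 'sn' },
        categorycode => {
            attr      => 'employeetype',
            transform => sub { { staff => 'S', student => 'PT' }->{ lc $_[0] } || 'PT' },
        },
        dateexpiry   => {
            attr      => 'shadowexpire',    # days since the epoch
            transform => sub { POSIX::strftime( '%Y-%m-%d', localtime( $_[0] * 86400 ) ) },
        },
    );

    sub map_ldap_entry {
        my ($entry) = @_;
        my %borrower;
        for my $koha_field ( keys %ldap_map ) {
            my $rule  = $ldap_map{$koha_field};
            my $value = $entry->get_value( $rule->{attr} );
            next unless defined $value;
            $value = $rule->{transform}->($value) if $rule->{transform};
            $borrower{$koha_field} = $value;
        }
        return \%borrower;
    }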
-- 
Henri-Damien LAURENT

From nengard at gmail.com Fri Nov 26 16:23:19 2010
From: nengard at gmail.com (Nicole Engard)
Date: Fri, 26 Nov 2010 10:23:19 -0500
Subject: [Koha-devel] Fwd: [koha-commits] main Koha release repository branch, master, updated. v3.02.00-188-g4b69538
In-Reply-To: 
References: 
Message-ID: 

I think this was brought up before, but how do I know what has changed
in these files? Commit emails that tell me bug numbers and a short
title/summary are much more helpful than this. I use these emails to
determine what to update in the manual, and I can't keep track if I
have to go hunt down these commit numbers. Is there another way to
format these types of messages to look like the ones that include
details about each patch?

Nicole

---------- Forwarded message ----------
From: Git repo owner 
Date: Sun, Nov 14, 2010 at 9:22 PM
Subject: [koha-commits] main Koha release repository branch, master, updated. v3.02.00-188-g4b69538
To: koha-commits at lists.koha-community.org

This is an automated email from the git hooks/post-receive script. It
was generated because a ref change was pushed to the repository
containing the project "main Koha release repository".

The branch, master has been updated
      via  4b69538f9bf6809126cc404a7bc09b02c3830328 (commit)
      via  38360b766c32ab89c6e5ee87d2459692b47f2b29 (commit)
     from  c0272a6b66e053a86010c7eb8fce1bf080bf428b (commit)

Those revisions listed above that are new to this repository have not
appeared on any other notification email; so we list those revisions in
full, below.

- Log -----------------------------------------------------------------
-----------------------------------------------------------------------

Summary of changes:
 admin/aqbudgetperiods.pl                          |    4 +++-
 admin/aqbudgets.pl                                |    4 +++-
 admin/aqplan.pl                                   |    5 +++--
 .../prog/en/includes/budgets-active-currency.inc  |    1 +
 .../prog/en/modules/admin/aqbudgetperiods.tmpl    |    4 +++-
 .../prog/en/modules/admin/aqbudgets.tmpl          |    6 +++---
 .../prog/en/modules/admin/aqplan.tmpl             |    2 +-
 .../prog/en/modules/virtualshelves/shelves.tmpl   |    6 +++++-
 koha-tmpl/opac-tmpl/prog/en/css/opac.css          |    5 ++++-
 .../opac-tmpl/prog/en/modules/opac-shelves.tmpl   |    3 ++-
 10 files changed, 28 insertions(+), 12 deletions(-)
 create mode 100644 koha-tmpl/intranet-tmpl/prog/en/includes/budgets-active-currency.inc

hooks/post-receive
--
main Koha release repository
_______________________________________________
koha-commits mailing list
koha-commits at lists.koha-community.org
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-commits

From gmcharlt at gmail.com Fri Nov 26 16:34:27 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Fri, 26 Nov 2010 10:34:27 -0500
Subject: [Koha-devel] Fwd: [koha-commits] main Koha release repository branch, master, updated. v3.02.00-188-g4b69538
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Fri, Nov 26, 2010 at 10:23 AM, Nicole Engard wrote:
> I think this was brought up before, but how do I know what has changed
> in these files? Commit emails that tell me bug numbers and a short
> title/summary are much more helpful than this. I use these emails to
> determine what to update in the manual, and I can't keep track if I
> have to go hunt down these commit numbers. Is there another way to
> format these types of messages to look like the ones that include
> details about each patch?

I suggest using an RSS feed:

http://git.koha-community.org/gitweb/?p=koha.git;a=rss [master branch]

http://git.koha-community.org/gitweb/?p=koha.git;a=rss;h=refs/heads/3.2.x
[3.2.x]

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From nengard at gmail.com Fri Nov 26 16:51:34 2010
From: nengard at gmail.com (Nicole Engard)
Date: Fri, 26 Nov 2010 10:51:34 -0500
Subject: [Koha-devel] Fwd: [koha-commits] main Koha release repository branch, master, updated. v3.02.00-188-g4b69538
In-Reply-To: 
References: 
Message-ID: 

Is that your way of saying it's not possible to get this in the email?
Nicole

On Fri, Nov 26, 2010 at 10:34 AM, Galen Charlton wrote:
> Hi,
>
> On Fri, Nov 26, 2010 at 10:23 AM, Nicole Engard wrote:
>> I think this was brought up before, but how do I know what has changed
>> in these files? Commit emails that tell me bug numbers and a short
>> title/summary are much more helpful than this. I use these emails to
>> determine what to update in the manual, and I can't keep track if I
>> have to go hunt down these commit numbers. Is there another way to
>> format these types of messages to look like the ones that include
>> details about each patch?
>
> I suggest using an RSS feed:
>
> http://git.koha-community.org/gitweb/?p=koha.git;a=rss [master branch]
>
> http://git.koha-community.org/gitweb/?p=koha.git;a=rss;h=refs/heads/3.2.x
> [3.2.x]
>
> Regards,
>
> Galen
> --
> Galen Charlton
> gmcharlt at gmail.com

From nengard at gmail.com Fri Nov 26 16:55:37 2010
From: nengard at gmail.com (Nicole Engard)
Date: Fri, 26 Nov 2010 10:55:37 -0500
Subject: [Koha-devel] Fwd: [koha-commits] main Koha release repository branch, master, updated. v3.02.00-188-g4b69538
In-Reply-To: 
References: 
Message-ID: 

PS. I used to use the RSS feed, and the emails were much easier to go
through than the feed, because everything was in one email instead of my
having to click through 50 commit messages. There was another reason I
didn't like the RSS feed, but I don't remember what it was.

On Fri, Nov 26, 2010 at 10:51 AM, Nicole Engard wrote:
> Is that your way of saying it's not possible to get this in the email?
>
> Nicole
>
> On Fri, Nov 26, 2010 at 10:34 AM, Galen Charlton wrote:
>> Hi,
>>
>> On Fri, Nov 26, 2010 at 10:23 AM, Nicole Engard wrote:
>>> I think this was brought up before, but how do I know what has changed
>>> in these files? Commit emails that tell me bug numbers and a short
>>> title/summary are much more helpful than this. I use these emails to
>>> determine what to update in the manual, and I can't keep track if I
>>> have to go hunt down these commit numbers. Is there another way to
>>> format these types of messages to look like the ones that include
>>> details about each patch?
>>
>> I suggest using an RSS feed:
>>
>> http://git.koha-community.org/gitweb/?p=koha.git;a=rss [master branch]
>>
>> http://git.koha-community.org/gitweb/?p=koha.git;a=rss;h=refs/heads/3.2.x
>> [3.2.x]
>>
>> Regards,
>>
>> Galen
>> --
>> Galen Charlton
>> gmcharlt at gmail.com

From paul.poulain at biblibre.com Fri Nov 26 17:02:55 2010
From: paul.poulain at biblibre.com (Paul Poulain)
Date: Fri, 26 Nov 2010 17:02:55 +0100
Subject: [Koha-devel] Fwd: [koha-commits] main Koha release repository branch, master, updated. v3.02.00-188-g4b69538
In-Reply-To: 
References: 
Message-ID: <4CEFDA2F.4040503@biblibre.com>

On 26/11/2010 16:55, Nicole Engard wrote:
> PS. I used to use the RSS feed, and the emails were much easier to go
> through than the feed, because everything was in one email instead of
> my having to click through 50 commit messages. There was another
> reason I didn't like the RSS feed, but I don't remember what it was.

I fully agree with Nicole: it's very handy to have everything in the
mail! (Nicole, it was me who asked and got the RSS answer.)

cheers
-- 
Paul POULAIN
http://www.biblibre.com
Expert en Logiciels Libres pour l'info-doc
Tel : (33) 4 91 81 35 08

From gmcharlt at gmail.com Fri Nov 26 17:31:18 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Fri, 26 Nov 2010 11:31:18 -0500
Subject: [Koha-devel] Fwd: [koha-commits] main Koha release repository branch, master, updated. v3.02.00-188-g4b69538
In-Reply-To: <4CEFDA2F.4040503@biblibre.com>
References: <4CEFDA2F.4040503@biblibre.com>
Message-ID: 

Hi,

On Fri, Nov 26, 2010 at 11:02 AM, Paul Poulain wrote:
> I fully agree with Nicole: it's very handy to have everything in the
> mail! (Nicole, it was me who asked and got the RSS answer.)

There are quite a few services and software packages that can convert
RSS feeds to email. Jes' saying. :)

That said, if somebody wants to go to the trouble of finding or writing
a git post-receive hook to send emails that are more to your liking, go
for it. I'll install it if it seems reasonable. Just keep in mind that
it will need to notify on branch creation, not just send email for
patches that get pushed to master, 3.2.x, and 3.0.x.
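As a starting point, a post-receive hook reads "old new ref" lines on
stdin and can mail one subject line per commit. A rough sketch; the list
address, sendmail path and message format are assumptions, and a real
hook would need more careful handling of branch creation and deletion:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $to = 'koha-commits@lists.koha-community.org';    # assumed address
    while ( my $update = <STDIN> ) {
        chomp $update;
        my ( $old, $new, $ref ) = split / /, $update;
        next if $new =~ /^0+$/;                            # ref was deleted
        my $range = $old =~ /^0+$/ ? $new : "$old..$new";  # new branch: whole history
        # One "short-hash subject" line per commit, oldest first.
        my @subjects = qx(git log --reverse --pretty=format:'%h %s' $range);
        open my $mail, '|-', '/usr/sbin/sendmail', '-t' or die $!;
        print {$mail} "To: $to\n",
            "Subject: [koha-commits] $ref updated ("
            . scalar(@subjects) . " commits)\n\n",
            @subjects, "\n";
        close $mail;
    }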
Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From gmcharlt at gmail.com Fri Nov 26 17:36:30 2010
From: gmcharlt at gmail.com (Galen Charlton)
Date: Fri, 26 Nov 2010 11:36:30 -0500
Subject: [Koha-devel] FW: [Koha-patches] [PATCH] Bug 4959 (Language inconsistencies on basket groups; skip confirmation when closing basket.)
In-Reply-To: <4CEEA50E.2030301@ptfs-europe.com>
References: <809BE39CD64BFD4EB9036172EBCCFA311A49B2@S-MAIL-1B.rijksmuseum.intra>
	<4CEEA50E.2030301@ptfs-europe.com>
Message-ID: 

Hi,

On Thu, Nov 25, 2010 at 1:03 PM, Colin Campbell wrote:
> On 25/11/10 12:51, Marcel de Rooy wrote:
>> Hi,
>> I would welcome any input from you on the following proposed patch.
> I wonder if, rather than having the option as SkipBasketConfirmation,
> it should be BasketConfirmation, so that it would be "say yes to do
> confirmations" rather than "say yes to not do confirmations" (skipping
> implying not doing something).

Yes -- system preference names that are positive assertions rather than
negative ones are better; it avoids using a double negative to enable
something.

Regards,

Galen
--
Galen Charlton
gmcharlt at gmail.com

From reedwade at gmail.com Sun Nov 28 01:53:23 2010
From: reedwade at gmail.com (Reed Wade)
Date: Sun, 28 Nov 2010 13:53:23 +1300
Subject: [Koha-devel] flaw in C4/Biblio.pm when controlnumber not set
Message-ID: 

Hi,

I was playing around with strange records (one with no setting for
control number) and managed to tickle this--

703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1266) sub GetMarcControlnumber {
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1267)     my ( $record, $marcflavour ) = @_;
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1268)     my $controlnumber = "";
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1269)     # Control number or Record identifier are the same field in MARC21 and UNIMARC
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1270)     # Keep $marcflavour for possible later use
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1271)     if ($marcflavour eq "MARC21" || $marcflavour eq "UNIMARC") {
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1272)         $controlnumber = $record->field('001')->data();
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1273)     }
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1274) }

which causes--

Can't call method "data" on an undefined value at
/home/reedpetone/koha/dev/koha/C4/Biblio.pm line 1272.

when I do this--

http://koha/cgi-bin/koha/opac-detail.pl?biblionumber=2

-------

Since it's a small and recent item, advice on IRC was to just send this
to the list.
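Guarding the field lookup before calling data() avoids the crash. A
minimal sketch of such a guard (Katrin's actual follow-up patch,
referenced in the next message, is the authoritative fix):

    sub GetMarcControlnumber {
        my ( $record, $marcflavour ) = @_;
        my $controlnumber = "";
        # Field 001 holds the control number / record identifier in both
        # MARC21 and UNIMARC, but a record may have no 001 at all.
        if ( $marcflavour eq "MARC21" || $marcflavour eq "UNIMARC" ) {
            if ( my $field = $record->field('001') ) {
                $controlnumber = $field->data();
            }
        }
        return $controlnumber;
    }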
-reed

From Katrin.Fischer at bsz-bw.de Sun Nov 28 13:45:25 2010
From: Katrin.Fischer at bsz-bw.de (Fischer, Katrin)
Date: Sun, 28 Nov 2010 13:45:25 +0100
Subject: [Koha-devel] flaw in C4/Biblio.pm when controlnumber not set
References: 
Message-ID: <028B1A54D03E7B4482CDCA4EC8F06BFD98ED30@Bodensee.bsz-bw.de>

Hi Reed,

I am sorry my patch caused this problem. I have worked on the problem
and sent a follow-up patch:

http://lists.koha-community.org/pipermail/koha-patches/2010-November/013113.html

Hope that fixes the problem.

Katrin

-----Original Message-----
From: koha-devel-bounces at lists.koha-community.org on behalf of Reed Wade
Sent: Sun 28.11.2010 01:53
To: Koha Devel
Subject: [Koha-devel] flaw in C4/Biblio.pm when controlnumber not set

Hi,

I was playing around with strange records (one with no setting for
control number) and managed to tickle this--

703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1266) sub GetMarcControlnumber {
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1267)     my ( $record, $marcflavour ) = @_;
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1268)     my $controlnumber = "";
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1269)     # Control number or Record identifier are the same field in MARC21 and UNIMARC
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1270)     # Keep $marcflavour for possible later use
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1271)     if ($marcflavour eq "MARC21" || $marcflavour eq "UNIMARC") {
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1272)         $controlnumber = $record->field('001')->data();
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1273)     }
703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1274) }

which causes--

Can't call method "data" on an undefined value at
/home/reedpetone/koha/dev/koha/C4/Biblio.pm line 1272.

when I do this--

http://koha/cgi-bin/koha/opac-detail.pl?biblionumber=2

-------

Since it's a small and recent item, advice on IRC was to just send this
to the list.

-reed
_______________________________________________
Koha-devel mailing list
Koha-devel at lists.koha-community.org
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nengard at gmail.com Sun Nov 28 14:22:48 2010
From: nengard at gmail.com (Nicole Engard)
Date: Sun, 28 Nov 2010 08:22:48 -0500
Subject: [Koha-devel] December Newsletter Call for Articles
Message-ID: 

It's that time again! Please submit your short newsletter articles to me
by the 13th of December. This will be the last "monthly" issue of the
Koha newsletter; starting in 2011 the newsletter will be published every
other month. So if you have an announcement you want out before
February, please send it to me for this newsletter.

I encourage those with long articles to post them on the web and send me
a summary with a link to the full article.

Thanks
Nicole

From chris at bigballofwax.co.nz Mon Nov 29 01:58:29 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Mon, 29 Nov 2010 13:58:29 +1300
Subject: [Koha-devel] Circular references
In-Reply-To: <4CEFC7AB.8080303@biblibre.com>
References: <4CEFC7AB.8080303@biblibre.com>
Message-ID: 

2010/11/27 LAURENT Henri-Damien:
> Hi
> During KohaCon, Plack and data persistence were discussed and some
> work was done on that. Ian posted a Perl script; Marc did another
> script, which works quite fast.
> I want to publicise this in order to contribute code, so that
> eventually all the circular references can be solved.
> Find enclosed both scripts: CIRCDeps.pl from Marc and dep_check.pl.
> We will try to keep you informed of what we achieve with that.

Unfortunately (if I'm reading correctly) this will only find modules
that have circular dependencies on other modules.

What causes memory leaks is circular references in the code, where an
object refers to an object that refers to the initial object (or to
another object that refers to it).

http://stackoverflow.com/questions/2223721/common-perl-memory-reference-leak-patterns

These are much harder to find, but there is a module that can help us:
http://search.cpan.org/dist/Devel-Cycle/lib/Devel/Cycle.pm

I'd love to see some work done on this, because memory leaks will be a
blocker to implementing persistent code.

Chris

From henridamien.laurent at biblibre.com Mon Nov 29 09:08:23 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Mon, 29 Nov 2010 09:08:23 +0100
Subject: [Koha-devel] Circular references
In-Reply-To: 
References: <4CEFC7AB.8080303@biblibre.com>
Message-ID: <4CF35F77.6060507@biblibre.com>

On 29/11/2010 01:58, Chris Cormack wrote:
> 2010/11/27 LAURENT Henri-Damien:
>> Hi
>> During KohaCon, Plack and data persistence were discussed and some
>> work was done on that. Ian posted a Perl script; Marc did another
>> script, which works quite fast. I want to publicise this in order to
>> contribute code, so that eventually all the circular references can
>> be solved.
>> Find enclosed both scripts: CIRCDeps.pl from Marc and dep_check.pl.
>> We will try to keep you informed of what we achieve with that.
>
> Unfortunately (if I'm reading correctly) this will only find modules
> that have circular dependencies on other modules.

Yes, both scripts (Ian's and Marc's) investigate circular module
references.

> What causes memory leaks is circular references in the code, where an
> object refers to an object that refers to the initial object (or to
> another object that refers to it).
>
> http://stackoverflow.com/questions/2223721/common-perl-memory-reference-leak-patterns
>
> These are much harder to find, but there is a module that can help us:
> http://search.cpan.org/dist/Devel-Cycle/lib/Devel/Cycle.pm

I am aware of Devel::Cycle, but I doubt we have circular object
references other than modules, since Koha has mostly been procedural
and not object oriented.

> I'd love to see some work done on this, because memory leaks will be a
> blocker to implementing persistent code.

Yes, let's do it. Let's try to plan meetings on that to share
experiences and achievements. We could add a discussion of the schedule
to our 8th December meeting.
-- 
Henri-Damien LAURENT

From chris at bigballofwax.co.nz Mon Nov 29 09:12:09 2010
From: chris at bigballofwax.co.nz (Chris Cormack)
Date: Mon, 29 Nov 2010 21:12:09 +1300
Subject: [Koha-devel] Circular references
In-Reply-To: <4CF35F77.6060507@biblibre.com>
References: <4CEFC7AB.8080303@biblibre.com> <4CF35F77.6060507@biblibre.com>
Message-ID: 

On 29 November 2010 21:08, LAURENT Henri-Damien wrote:
> On 29/11/2010 01:58, Chris Cormack wrote:
>> 2010/11/27 LAURENT Henri-Damien:
>>> Hi
>>> During KohaCon, Plack and data persistence were discussed and some
>>> work was done on that. Ian posted a Perl script; Marc did another
>>> script, which works quite fast. I want to publicise this in order to
>>> contribute code, so that eventually all the circular references can
>>> be solved.
>>> Find enclosed both scripts: CIRCDeps.pl from Marc and dep_check.pl.
>>> We will try to keep you informed of what we achieve with that.
>>
>> Unfortunately (if I'm reading correctly) this will only find modules
>> that have circular dependencies on other modules.
> Yes, both scripts (Ian's and Marc's) investigate circular module
> references.
>
>> What causes memory leaks is circular references in the code, where an
>> object refers to an object that refers to the initial object (or to
>> another object that refers to it).
>>
>> http://stackoverflow.com/questions/2223721/common-perl-memory-reference-leak-patterns
>>
>> These are much harder to find, but there is a module that can help us:
>> http://search.cpan.org/dist/Devel-Cycle/lib/Devel/Cycle.pm
> I am aware of Devel::Cycle, but I doubt we have circular object
> references other than modules, since Koha has mostly been procedural
> and not object oriented.

When I say object, I don't mean an object in the object-oriented sense,
but a thing: it could be a hashref that references a hash that contains
a reference to the first hash. I'd find it highly surprising if we
didn't have at least a few of these.
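To make that concrete, here is a tiny example of such a non-OO cycle and
how Devel::Cycle can report it (the data structure is made up for
illustration):

    use strict;
    use warnings;
    use Devel::Cycle;              # the CPAN module mentioned above
    use Scalar::Util qw(weaken);

    # A plain hashref cycle, no OO involved: the order holds its items,
    # and each item points back at its order.
    my $order = { ordernumber => 42, items => [] };
    my $item  = { itemnumber  => 7,  order => $order };
    push @{ $order->{items} }, $item;

    # find_cycle() walks the structure and prints any reference loops
    # it finds, e.g. $order->{items}[0]{order} leading back to $order.
    find_cycle($order);

    # Weakening one link breaks the loop, so reference counting can
    # free both structures when they go out of scope.
    weaken( $item->{order} );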
Chris

From reedwade at gmail.com Mon Nov 29 10:03:41 2010
From: reedwade at gmail.com (Reed Wade)
Date: Mon, 29 Nov 2010 22:03:41 +1300
Subject: [Koha-devel] flaw in C4/Biblio.pm when controlnumber not set
In-Reply-To: <028B1A54D03E7B4482CDCA4EC8F06BFD98ED30@Bodensee.bsz-bw.de>
References: <028B1A54D03E7B4482CDCA4EC8F06BFD98ED30@Bodensee.bsz-bw.de>
Message-ID: 

works for me

ta,
-reed

On Mon, Nov 29, 2010 at 1:45 AM, Fischer, Katrin wrote:
> Hi Reed,
>
> I am sorry my patch caused this problem. I have worked on the problem
> and sent a follow-up patch:
>
> http://lists.koha-community.org/pipermail/koha-patches/2010-November/013113.html
>
> Hope that fixes the problem.
>
> Katrin
>
> -----Original Message-----
> From: koha-devel-bounces at lists.koha-community.org on behalf of Reed Wade
> Sent: Sun 28.11.2010 01:53
> To: Koha Devel
> Subject: [Koha-devel] flaw in C4/Biblio.pm when controlnumber not set
>
> Hi,
>
> I was playing around with strange records (one with no setting for
> control number) and managed to tickle this--
>
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1266) sub GetMarcControlnumber {
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1267)     my ( $record, $marcflavour ) = @_;
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1268)     my $controlnumber = "";
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1269)     # Control number or Record identifier are the same field in MARC21 and UNIMARC
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1270)     # Keep $marcflavour for possible later use
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1271)     if ($marcflavour eq "MARC21" || $marcflavour eq "UNIMARC") {
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1272)         $controlnumber = $record->field('001')->data();
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1273)     }
> 703156da (Katrin Fischer 2010-11-24 11:35:43 -0500 1274) }
>
> which causes--
>
> Can't call method "data" on an undefined value at
> /home/reedpetone/koha/dev/koha/C4/Biblio.pm line 1272.
>
> when I do this--
>
> http://koha/cgi-bin/koha/opac-detail.pl?biblionumber=2
>
> -------
>
> Since it's a small and recent item, advice on IRC was to just send
> this to the list.
>
> -reed
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

From nengard at gmail.com Mon Nov 29 15:16:57 2010
From: nengard at gmail.com (Nicole Engard)
Date: Mon, 29 Nov 2010 09:16:57 -0500
Subject: [Koha-devel] Fwd: [koha-commits] main Koha release repository branch, master, updated. v3.02.00-188-g4b69538
In-Reply-To: 
References: <4CEFDA2F.4040503@biblibre.com>
Message-ID: 

Galen,

It had to do with what was published to the feed versus what was
published in the emails: they weren't the same, so getting the feed
items as emails isn't helpful. I'm off to subscribe anyway to see if
things are different. That said, it doesn't help me with all the
commits I'm behind on. Any tips on using this message to find out what
exactly was pushed? I'm 100 commits behind when it comes to the manual.

Nicole

On Fri, Nov 26, 2010 at 11:31 AM, Galen Charlton wrote:
> Hi,
>
> On Fri, Nov 26, 2010 at 11:02 AM, Paul Poulain wrote:
>> I fully agree with Nicole: it's very handy to have everything in the
>> mail! (Nicole, it was me who asked and got the RSS answer.)
>
> There are quite a few services and software packages that can convert
> RSS feeds to email. Jes' saying. :)
>
> That said, if somebody wants to go to the trouble of finding or
> writing a git post-receive hook to send emails that are more to your
> liking, go for it. I'll install it if it seems reasonable. Just keep
> in mind that it will need to notify on branch creation, not just send
> email for patches that get pushed to master, 3.2.x, and 3.0.x.
>
> Regards,
>
> Galen
> --
> Galen Charlton
> gmcharlt at gmail.com
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : http://www.koha-community.org/
> git : http://git.koha-community.org/
> bugs : http://bugs.koha-community.org/

From henridamien.laurent at biblibre.com Mon Nov 29 23:46:43 2010
From: henridamien.laurent at biblibre.com (LAURENT Henri-Damien)
Date: Mon, 29 Nov 2010 23:46:43 +0100
Subject: [Koha-devel] meeting solr
Message-ID: <4CF42D53.6020700@biblibre.com>

Two weeks ago, I said that we would organise a meeting around Solr and
our developments in progress. We think we can now plan that: we could do
it next week or the week after, in order to tell the community where we
stand, what we have achieved, what we are missing (the first and most
time-consuming task would be to gather some biblios and some use cases
to test, so that we do not introduce regressions), what we plan to work
on, and what the community could help us do.

Our plan is to share our work on a regular basis, so that you can all
see the direction we are taking and the reasons why we implemented
things that way.

I would like to invite you to the Doodle poll "meeting solr". I do not
want to interfere with the regular meeting; it is more a kind of "come
and play". We will have an up-and-running interface, and everybody
interested in, or even skeptical about, this development may come and
form their own opinion.

Please follow the link in order to participate in the poll:
http://doodle.com/2eh7spytgmydduca

We will hold at least two meetings, at two different times of day, so
that a wider audience can benefit and ask questions, and we will try to
address the questions that you may have. We will therefore take the top
two options: one in the morning and one in the evening.

I hope you will join us.
-- 
Henri-Damien LAURENT
BibLibre

From ian.walls at bywatersolutions.com Tue Nov 30 18:09:40 2010
From: ian.walls at bywatersolutions.com (Ian Walls)
Date: Tue, 30 Nov 2010 12:09:40 -0500
Subject: [Koha-devel] ICU chains
Message-ID: 

Fellow Kohackers,

There has been some talk before of using ICU chains with Zebra in order
to resolve issues with diacritic search. I've looked at the patch from
BibLibre
(http://git.biblibre.com/?p=koha;a=commit;h=c2465153cccb8965b0008186cdb0a94a57317849),
and it seems pretty straightforward, but it's nearly a year old, and I'm
wondering if any work has been done since then. Does this commit resolve
the issue (at least for French diacritics)?

There has also been some recent posting on handling both Roman and
Devanagari script
(http://old.nabble.com/Re:-display-search-result-problem-p28106148.html).
It references the BibLibre commit.

If this methodology is working, what should we do to get it committed to
Koha? I'd imagine that we'd need to generalize it to work with all
languages, either by making a master set of rules or by having different
XML files that can be called depending on your language choice... ideas?
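For concreteness, a Zebra ICU chain definition is a small XML file
roughly along these lines. This is a sketch of the general shape only,
not the BibLibre configuration; the exact transforms a site needs (which
marks to strip, how to tokenize, which locale) would differ per
language:

    <icu_chain locale="fr">
      <!-- decompose, strip diacritic marks, recompose, then lowercase -->
      <transform rule="NFD"/>
      <transform rule="[:Nonspacing Mark:] Remove"/>
      <transform rule="NFC"/>
      <casemap rule="l"/>
    </icu_chain>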
Cheers,

-Ian

-- 
Ian Walls
Lead Development Specialist
ByWater Solutions
Phone # (888) 900-8944
http://bywatersolutions.com
ian.walls at bywatersolutions.com
Twitter: @sekjal
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robin at catalyst.net.nz Tue Nov 30 22:56:50 2010
From: robin at catalyst.net.nz (Robin Sheat)
Date: Wed, 01 Dec 2010 10:56:50 +1300
Subject: [Koha-devel] Search Engine Changes : let's get some solr
In-Reply-To: <4CA98C01.8080709@biblibre.com>
References: <4CA98C01.8080709@biblibre.com>
Message-ID: <1291154210.2428.37.camel@zarathud>

LAURENT Henri-Damien wrote on Mon, 04-10-2010 at 10:10 [+0200]:
> BibLibre investigated a catalogue based on Solr.

Not sure if this is known, but I just saw it:
http://www.indexdata.com/blog/2010/09/solr-support-zoom-pazpar2-and-masterkey

"...we have just completed a project to add support for SOLR targets in
the ZOOM API implementation in the YAZ library. So YAZ now supports
Z39.50, SRU/SRW 1.x and the SOLR API."

-- 
Robin Sheat
Catalyst IT Ltd.
+64 4 803 2204
GPG: 5957 6D23 8B16 EFAB FEF8 7175 14D3 6485 A99C EB6D
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part
URL: 