[Koha-devel] The many failings of background_jobs_worker.pl

David Schmidt mail at davidschmidt.at
Wed Dec 21 09:48:10 CET 2022


Yup, the JSON bug hit us too.

We also encountered some problems with RabbitMQ and the workers.

Debugging wasn't straightforward because I had a hard time finding meaningful debug output.

Luckily the folks on IRC were very helpful, as usual.

The latest bug (not sure if it is a bug as of yet) was that the indexing job for manually created biblios wasn't triggered unless Zebra was running (we use Elasticsearch).

Overall my confidence in the implementation quality of the RabbitMQ/worker feature isn't overwhelming. That's not a complaint and I don't mean to step on anyone's toes; I know that it's OSS and thus up to me to put in the work if I'm unhappy.

And, as always, there is the possibility that I did something wrong and the software is perfectly fine. :)

cheers
david

On Tue, 20 Dec 2022, at 8:13 PM, Philippe Blouin wrote:
> Howdy!
> 
> Since moving a lot of our users to 22.05.06, we've installed the worker everywhere.  But the number of issues encountered is staggering.
> 
> The first one was 
> 
> Can't call method "process" on an undefined value
> 
> where the id received from MQ was not found in the DB, and the process goes straight to process_job and fails.  Absolutely no idea how that occurs; it seems completely counterintuitive (the ID comes from the DB after all), but here it is.  Hacked the code to add a "sleep 1" to fix most of that one.
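> 
> Roughly the shape of that workaround, as a sketch only (the helper name and
> retry count are invented here; the actual hack was just a plain "sleep 1"
> before the lookup):
> 
>     use Koha::BackgroundJobs;
> 
>     # Sketch: retry the DB lookup a few times instead of sleeping blindly,
>     # in case the STOMP message arrives before the enqueuing transaction
>     # has committed and the job row is visible to the worker.
>     sub find_job_with_retry {
>         my ($job_id) = @_;
>         for ( 1 .. 5 ) {    # retry count is arbitrary
>             my $job = Koha::BackgroundJobs->find($job_id);
>             return $job if $job;
>             sleep 1;
>         }
>         return;             # caller can then skip the frame instead of dying
>     }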
> 
> Then came the fact that jobs already stored in the DB were not checked at startup when the connection to MQ was successful.  Bug 30654 refers to it.  Hacked a little "$init" in there to clear that up at startup.
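> 
> A minimal sketch of that startup pass (the 'new' status value and the call
> into the worker's process_job are assumptions; the actual $init patch on
> bug 30654 may look different):
> 
>     use Koha::BackgroundJobs;
> 
>     # Sketch: once at startup, drain jobs still marked 'new' in the DB, so
>     # anything enqueued while the worker or RabbitMQ was down is not left
>     # stranded waiting for a STOMP frame that will never arrive.
>     my $pending = Koha::BackgroundJobs->search( { status => 'new' } );
>     while ( my $job = $pending->next ) {
>         process_job( $job, { job_id => $job->id } );
>     }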
> 
> Then came the 
> 
> malformed UTF-8 character in JSON string, at character offset 296 (before "\x{e9}serv\x{e9} au ...")
> 
> in decode_json, which crashes the whole process.  And for some reason it never gets over it: it hits the same problem at every restart, as if the event is never "eaten" from the queue.  Hacked an eval, then a try-catch, over it...
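> 
> Something along these lines, inside the frame-receiving loop (a sketch; it
> assumes Net::Stomp frames with client acknowledgement, and the important
> part is acking even on failure):
> 
>     use JSON qw( decode_json );
>     use Try::Tiny;
> 
>     my $args = try {
>         decode_json( $frame->body );
>     } catch {
>         warn "Failed to decode frame body: $_";
>         undef;
>     };
>     # Ack even when decoding fails, so the poison message is consumed from
>     # the queue instead of crashing the worker again after every restart.
>     $conn->ack( { frame => $frame } );
>     next unless $args;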
> 
> After coding a monitor to alert when a background_jobs row has been "new" for over 5 minutes in the DB, I was inundated with messages.  There's always one elasticsearch_update that escapes among the flurry, and they slowly add up.
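> 
> For what it's worth, the check behind that monitor is tiny; something like
> this (a sketch; column names are taken from the background_jobs schema as I
> remember it):
> 
>     use C4::Context;
> 
>     # Sketch: count jobs still 'new' five minutes after being enqueued and
>     # alert if there are any.
>     my $dbh = C4::Context->dbh;
>     my ($stuck) = $dbh->selectrow_array(q{
>         SELECT COUNT(*) FROM background_jobs
>         WHERE status = 'new'
>           AND enqueued_on < NOW() - INTERVAL 5 MINUTE
>     });
>     warn "$stuck background job(s) stuck in 'new'\n" if $stuck;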
> 
> At this point, the only viable solution is to run the workers but disable RabbitMQ everywhere.  Are we really the only ones experiencing that?
> 
> Regards,
> 
> PS Our servers are well-above-average Debian 11 machines with lots of firepower (RAM, CPU, I/O...).
> 
> -- 
> 
> Philippe Blouin,
> Director of Technology
> 
> 
> Tel.: (833) 465-4276, ext. 230
> philippe.blouin at inLibro.com
> 
> inLibro | pour esprit libre | www.inLibro.com <http://www.inlibro.com/>
> _______________________________________________
> Koha-devel mailing list
> Koha-devel at lists.koha-community.org
> https://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-devel
> website : https://www.koha-community.org/
> git : https://git.koha-community.org/
> bugs : https://bugs.koha-community.org/
> 

