
It makes unexpected pauses #350

Open
dima4p opened this issue Mar 11, 2022 · 8 comments

dima4p commented Mar 11, 2022

There are active runners and active jobs, but for a while none of them is doing any work.
Here is the log, where the pause is clearly visible:

I, [2022-03-11T18:08:24.752521 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [a4aa01d6-2b9c-45bb-9ed1-af88282bdebf] Line@60::create_for 126802 500000: 52032401 true =› 1.370552 ms/line
I, [2022-03-11T18:08:24.974651 #1037338]  INFO -- : [ActiveJob] [CreateLinesJob] [4672efd8-2903-4c10-9944-422e52766752] Line@60::create_for 126801 500000: 52107353 true =› 1.391209 ms/line
I, [2022-03-11T18:08:27.969507 #1037282]  INFO -- : [ActiveJob] [CreateLinesJob] [7f34caf0-b783-4ecb-b866-5a5946429bfe] Line@109::create_for 126798 503078: 52433727 true =» 1.389405 ms/line
I, [2022-03-11T18:08:27.969952 #1037282]  INFO -- : [ActiveJob] [CreateLinesJob] [7f34caf0-b783-4ecb-b866-5a5946429bfe] Line@125::create_for 126798 503078: 52433727 true =» 1.389406 ms/line
I, [2022-03-11T18:08:28.408359 #1037338]  INFO -- : [ActiveJob] [CreateLinesJob] [4672efd8-2903-4c10-9944-422e52766752] Line@109::create_for 126801 503155: 52435943 true =» 1.38931 ms/line
I, [2022-03-11T18:08:28.408492 #1037338]  INFO -- : [ActiveJob] [CreateLinesJob] [4672efd8-2903-4c10-9944-422e52766752] Line@125::create_for 126801 503155: 52435943 true =» 1.38931 ms/line
I, [2022-03-11T18:08:28.877149 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [a4aa01d6-2b9c-45bb-9ed1-af88282bdebf] Line@109::create_for 126802 503811: 52428988 true =» 1.368372 ms/line
I, [2022-03-11T18:08:28.877469 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [a4aa01d6-2b9c-45bb-9ed1-af88282bdebf] Line@125::create_for 126802 503811: 52428988 true =» 1.368373 ms/line
I, [2022-03-11T18:08:34.117963 #1037300]  INFO -- : [ActiveJob] [CreateLinesJob] [772d5492-6d35-46fa-8422-d8ebaf918fba] Line@60::create_for 126800 500000: 51989939 true =› 1.409762 ms/line
I, [2022-03-11T18:08:37.241900 #1037300]  INFO -- : [ActiveJob] [CreateLinesJob] [772d5492-6d35-46fa-8422-d8ebaf918fba] Line@109::create_for 126800 504295: 52430173 true =» 1.40395 ms/line
I, [2022-03-11T18:08:37.242030 #1037300]  INFO -- : [ActiveJob] [CreateLinesJob] [772d5492-6d35-46fa-8422-d8ebaf918fba] Line@125::create_for 126800 504295: 52430173 true =» 1.403951 ms/line
I, [2022-03-11T18:09:38.869953 #1037311]  INFO -- : [ActiveJob] [CreateLinesJob] [0db8212c-f59e-4d44-96ee-a43ed66ce5c2] Line@60::create_for 126803 200000: 20798501 true =› 1.043489 ms/line
I, [2022-03-11T18:10:56.310255 #1037311]  INFO -- : [ActiveJob] [CreateLinesJob] [0db8212c-f59e-4d44-96ee-a43ed66ce5c2] Line@60::create_for 126803 300000: 31197990 true =› 0.953793 ms/line
I, [2022-03-11T18:12:13.069513 #1037311]  INFO -- : [ActiveJob] [CreateLinesJob] [0db8212c-f59e-4d44-96ee-a43ed66ce5c2] Line@60::create_for 126803 400000: 41595143 true =› 0.907243 ms/line
I, [2022-03-11T18:13:30.436332 #1037311]  INFO -- : [ActiveJob] [CreateLinesJob] [0db8212c-f59e-4d44-96ee-a43ed66ce5c2] Line@60::create_for 126803 500000: 51996682 true =› 0.880528 ms/line
I, [2022-03-11T18:13:33.834666 #1037311]  INFO -- : [ActiveJob] [CreateLinesJob] [0db8212c-f59e-4d44-96ee-a43ed66ce5c2] Line@109::create_for 126803 504256: 52437765 true =» 0.879836 ms/line
I, [2022-03-11T18:13:33.834845 #1037311]  INFO -- : [ActiveJob] [CreateLinesJob] [0db8212c-f59e-4d44-96ee-a43ed66ce5c2] Line@125::create_for 126803 504256: 52437765 true =» 0.879836 ms/line
I, [2022-03-11T18:30:48.442714 #1037300]  INFO -- : [ActiveJob] [CreateLinesJob] [e52f2408-f360-42d8-8ed2-3e3dd52785b1] Line@58::create_for 126804 0: 0 true
I, [2022-03-11T18:30:48.847424 #1037311]  INFO -- : [ActiveJob] [CreateLinesJob] [c6cf09ab-0ec5-4fe1-ae22-38af538dc51e] Line@58::create_for 126805 0: 0 true
I, [2022-03-11T18:30:49.667057 #1037282]  INFO -- : [ActiveJob] [CreateLinesJob] [3b13b3e8-8125-477e-bc45-30fb12983973] Line@58::create_for 126806 0: 0 true
I, [2022-03-11T18:30:49.701869 #1037338]  INFO -- : [ActiveJob] [CreateLinesJob] [3507d210-e863-4463-be12-a4580bf9bd00] Line@58::create_for 126807 0: 0 true
I, [2022-03-11T18:30:50.196048 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [bc3eab6a-e630-4033-b8d9-0b6a08c281a3] Line@58::create_for 126808 0: 0 true
I, [2022-03-11T18:30:51.323464 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [bc3eab6a-e630-4033-b8d9-0b6a08c281a3] Line@109::create_for 126808 796: 67156 true =» 1.416933 ms/line
I, [2022-03-11T18:30:51.323605 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [bc3eab6a-e630-4033-b8d9-0b6a08c281a3] Line@125::create_for 126808 796: 67156 true =» 1.417167 ms/line
I, [2022-03-11T18:30:51.722405 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [3cf3e483-f150-498f-a49d-9f9c1ee29432] Line@58::create_for 126809 0: 0 true
I, [2022-03-11T18:30:51.987531 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [3cf3e483-f150-498f-a49d-9f9c1ee29432] Line@109::create_for 126809 180: 15192 true =» 1.478197 ms/line
I, [2022-03-11T18:30:51.987653 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [3cf3e483-f150-498f-a49d-9f9c1ee29432] Line@125::create_for 126809 180: 15192 true =» 1.47904 ms/line
I, [2022-03-11T18:30:52.293484 #1037298]  INFO -- : [ActiveJob] [CreateLinesJob] [d735ccc8-0824-460c-a86e-273634eabd90] Line@58::create_for 126810 0: 0 true
I, [2022-03-11T18:30:52.342014 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [0b61c35d-6910-4dbd-a50f-83d6b727d19c] Line@58::create_for 126811 0: 0 true
I, [2022-03-11T18:30:52.949455 #1037298]  INFO -- : [ActiveJob] [CreateLinesJob] [d735ccc8-0824-460c-a86e-273634eabd90] Line@109::create_for 126810 420: 35448 true =» 1.562788 ms/line
I, [2022-03-11T18:30:52.949836 #1037298]  INFO -- : [ActiveJob] [CreateLinesJob] [d735ccc8-0824-460c-a86e-273634eabd90] Line@125::create_for 126810 420: 35448 true =» 1.563751 ms/line
I, [2022-03-11T18:30:53.233778 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [0b61c35d-6910-4dbd-a50f-83d6b727d19c] Line@109::create_for 126811 546: 46089 true =» 1.634055 ms/line
I, [2022-03-11T18:30:53.233911 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [0b61c35d-6910-4dbd-a50f-83d6b727d19c] Line@125::create_for 126811 546: 46089 true =» 1.634363 ms/line
I, [2022-03-11T18:30:53.426203 #1037298]  INFO -- : [ActiveJob] [CreateLinesJob] [3b977984-bc7e-4e32-91a0-09e16e93245f] Line@58::create_for 126812 0: 0 true
I, [2022-03-11T18:30:53.751565 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [a7565333-50d9-475e-a8da-c69081132f23] Line@58::create_for 126813 0: 0 true
I, [2022-03-11T18:30:53.828830 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [a7565333-50d9-475e-a8da-c69081132f23] Line@109::create_for 126813 42: 8713 true =» 1.851179 ms/line
I, [2022-03-11T18:30:53.829311 #1037325]  INFO -- : [ActiveJob] [CreateLinesJob] [a7565333-50d9-475e-a8da-c69081132f23] Line@125::create_for 126813 42: 8713 true =» 1.862842 ms/line
I, [2022-03-11T18:30:54.444082 #1037298]  INFO -- : [ActiveJob] [CreateLinesJob] [3b977984-bc7e-4e32-91a0-09e16e93245f] Line@109::create_for 126812 630: 53139 true =» 1.616296 ms/line
I, [2022-03-11T18:30:54.444419 #1037298]  INFO -- : [ActiveJob] [CreateLinesJob] [3b977984-bc7e-4e32-91a0-09e16e93245f] Line@125::create_for 126812 630: 53139 true =» 1.61687 ms/line
I, [2022-03-11T18:32:29.520813 #1037300]  INFO -- : [ActiveJob] [CreateLinesJob] [e52f2408-f360-42d8-8ed2-3e3dd52785b1] Line@60::create_for 126804 100000: 10420347 true =› 1.010785 ms/line
I, [2022-03-11T18:32:29.675982 #1037311]  INFO -- : [ActiveJob] [CreateLinesJob] [c6cf09ab-0ec5-4fe1-ae22-38af538dc51e] Line@60::create_for 126805 100000: 10407147 true =› 1.00829 ms/line

What could be the reason for this delay?

ZimbiX (Member) commented Mar 14, 2022

@dima4p Please provide more context. What do you mean by 'runner'? What does a log message mean? What is your worker configuration? What does the worker/database CPU usage look like? What kind of work is the job doing? Are there particular job options in use?

dima4p (Author) commented Mar 14, 2022

@dima4p Please provide more context.
What do you mean by 'runner'?

An instance of que. In my case:

que -p 10,20,30,50,50 --maximum-buffer-size=0 --minimum-buffer-size=0
que -p any --maximum-buffer-size=0 --minimum-buffer-size=0
que -p any --maximum-buffer-size=0 --minimum-buffer-size=0
que -p any --maximum-buffer-size=0 --minimum-buffer-size=0
que -p any --maximum-buffer-size=0 --minimum-buffer-size=0
que -p any --maximum-buffer-size=0 --minimum-buffer-size=0
que -p any --maximum-buffer-size=0 --minimum-buffer-size=0

What does a log message mean?

In the first pair of square brackets there are the time and the PID. You can see that during the execution of a task there is a message roughly every 4 seconds, and at the end of every job execution there is a message containing the string "Line@125".
You can see that after 18:13:33 there was a pause of more than 15 minutes, until 18:30:48. Then the processes carried on without any intervention.

What is your worker configuration?

I think the answer is above.

What does the worker/database CPU usage look like?

Until 18:13:33 the load was high, about 80% on every core, but between 18:13:33 and 18:30:48 it was very low.

What kind of work is the job doing?

The job reads a file, parses each line to recognise a time, and creates a record in the DB for each line with the parsed time.

Are there particular job options in use?

Only the id of the record pointing to the file to read.
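For readers following along, the per-line work described above can be sketched roughly like this. This is a minimal, hypothetical sketch: `parse_line` and the line format are assumptions for illustration, not the project's actual code.

```ruby
require "time"

# Hypothetical sketch of the job's per-line work: extract a timestamp from
# each line of a file and build the attributes for a DB record.
def parse_line(line)
  # Assume each line starts with an ISO 8601 timestamp, then the payload.
  timestamp, payload = line.split(" ", 2)
  { time: Time.parse(timestamp), payload: payload&.strip }
end

lines = [
  "2022-03-11T18:08:24 first event",
  "2022-03-11T18:08:28 second event",
]
records = lines.map { |l| parse_line(l) }
# In the real job, each attributes hash would become a row,
# e.g. Line.create!(record) inside the job's transaction.
```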

dima4p (Author) commented Mar 14, 2022

I forgot to mention that during the pause the queue of jobs was long.
And second, almost the whole job runs in one transaction.
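For illustration only, the two transaction shapes at play can be sketched like this; `fake_transaction` is a stand-in for `ActiveRecord::Base.transaction`, and the batch size is arbitrary. Committing in batches keeps any row locks shorter, which is one thing to try if a lock inside the long transaction is blocking other workers:

```ruby
# Stand-in for ActiveRecord::Base.transaction: just yields and counts commits.
$commits = 0
def fake_transaction
  yield
  $commits += 1
end

lines = (1..10).to_a

# Shape described above: one transaction around (almost) the whole job.
fake_transaction do
  lines.each { |l| l } # create one record per line here
end

# Alternative shape: commit in batches so any locks are held more briefly.
lines.each_slice(4) do |batch|
  fake_transaction do
    batch.each { |l| l }
  end
end
```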

ZimbiX (Member) commented Mar 15, 2022

I'm afraid I can't think of a reason for this. Perhaps --log-internals could help you debug? Or logging when each job starts and ends? Are you able to find a way to consistently reproduce the issue? A shareable minimal reproduction would be helpful.

oeoeaio (Contributor) commented Mar 15, 2022

@dima4p how large are the files you are reading? Is it possible there is a db lock somewhere within your transaction that blocks other workers?

I can’t help but notice that the first log message after the gap for every single worker process is:
Line@58::create_for 126810 0: 0 true
What does this mean? Could it be related? It feels like whatever causes that message, rather than the Line@125 message, could be a clue.

milgner commented May 6, 2022

I'm currently looking at a similar issue. There's one job class queued multiple times (with different arguments) as multiple jobs. Although there are 6 worker threads, que only processes one of these jobs. I can see that the Locker has acquired advisory locks on both of these jobs, so it is planning to execute them. Unfortunately these are long-running jobs that are supposed to run concurrently in the background.
It's not clear why que would hold off on executing the second job.

Unfortunately there isn't any log output either. Even though the full command line says que -q default -i 1 --log-internals -l debug, the only output is

May 06 11:23:38 [3728676]:   Worker threads: 6 (priorities: 10, 30, 50, any, any, any)
May 06 11:23:38 [3728676]:   Buffer size: 8
May 06 11:23:38 [3728676]:   Queues:
May 06 11:23:38 [3728676]:     - default (poll interval: 1.0s)
May 06 11:23:38 [3728676]: Que waiting for jobs...
May 06 11:23:38 [3728676]: Starting discovery job for source 4

(I added the last one as a puts statement to the start of the job so I'd know whether it's starting at all)

ZimbiX (Member) commented May 7, 2022

@milgner If you've set Que.logger, perhaps your logger is not outputting anything, e.g. due to its own log level config? How consistently does this occur for you? Which version of Que are you using? Do you see different behaviour between 1.3.1 and 1.4.0?

I still can't work out how to reproduce this. Again, a minimal Que app demonstrating this behaviour would be very helpful.
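The logger-level possibility is easy to check in isolation: a `Logger` whose own level is INFO silently drops DEBUG messages, which would hide `--log-internals` output if `Que.logger` is configured that way. A stdlib-only sketch:

```ruby
require "logger"
require "stringio"

out = StringIO.new
logger = Logger.new(out)
logger.level = Logger::INFO # the logger's own level filters out DEBUG

logger.debug("internal event") # dropped: below the configured level
logger.info("job started")     # emitted

# out.string now contains the INFO line but not the DEBUG line.
```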

dima4p (Author) commented May 17, 2022

I have hit the same problem in another project.

In the console I see there is one job ready to be executed:

2.6.3 :001 > Que::ActiveRecord::Model.all
 => #<ActiveRecord::Relation [#<Que::ActiveRecord::Model priority: 90, run_at: "2022-05-17 12:56:13", id: 139, job_class: "ActiveJob::QueueAdapters::QueAdapter::JobWrapper", error_count: 0, last_error_message: nil, queue: "default", last_error_backtrace: nil, finished_at: nil, expired_at: nil, args: [{"job_id"=>"1f09e471-6899-41e7-bbf4-b2fa5738f4c3", "locale"=>"ru", "priority"=>90, "arguments"=>[], "job_class"=>"GetPodcastsJob", "executions"=>0, "queue_name"=>"default", "provider_job_id"=>nil}], data: {}, job_schema_version: 1>]> 
2.6.3 :002 > Time.current
 => Tue, 17 May 2022 15:19:31 UTC +00:00 

A bit earlier I had restarted the que runner twice:

$ RAILS_ENV=production bundle exec que -p any,any,any --log-internals -l debug &
[1] 23810
17:12:46 May 17 …                        

In the log there is:

I, [2022-05-17T17:12:47.084667 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":3709340,"internal_event":"locker_instantiate","object_id":44081060,"t":"2022-05-17T15:12:47.084603Z","queues":["default"],"listen":true,"poll":true,"poll_interval":5,"wait_period":50,"maximum_buffer_size":8,"minimum_buffer_size":2,"worker_priorities":[null,null,null]}
I, [2022-05-17T17:12:47.084752 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":3709340,"internal_event":"worker_instantiate","object_id":44078660,"t":"2022-05-17T15:12:47.084733Z","priority":null,"job_buffer":44080480,"result_queue":44079920}
I, [2022-05-17T17:12:47.084831 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":3709340,"internal_event":"worker_instantiate","object_id":44077720,"t":"2022-05-17T15:12:47.084799Z","priority":null,"job_buffer":44080480,"result_queue":44079920}
I, [2022-05-17T17:12:47.084959 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":3709340,"internal_event":"worker_instantiate","object_id":44076720,"t":"2022-05-17T15:12:47.084941Z","priority":null,"job_buffer":44080480,"result_queue":44079920}
I, [2022-05-17T17:12:47.092078 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"connection_execute","object_id":45705920,"t":"2022-05-17T15:12:47.092045Z","backend_pid":23821,"command":"SELECT set_config('application_name', $1, false)","params":["Que Locker: 23821"],"ntuples":1}
I, [2022-05-17T17:12:47.212878 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"connection_execute","object_id":45705920,"t":"2022-05-17T15:12:47.212844Z","backend_pid":23821,"command":"          -- Temporary composite type we need for our queries to work.\n          CREATE TYPE pg_temp.que_query_result AS (\n            locked boolean,\n            remaining_priorities jsonb\n          );\n\n          CREATE FUNCTION pg_temp.lock_and_update_priorities(priorities jsonb, job que_jobs)\n          RETURNS pg_temp.que_query_result\n          AS $$\n            WITH\n              -- Take the lock in a CTE because we want to use the result\n              -- multiple times while only taking the lock once.\n              lock_taken AS (\n                SELECT pg_try_advisory_lock((job).id) AS taken\n              ),\n              relevant AS (\n                SELECT priority, count\n                FROM (\n                  SELECT\n                    key::smallint AS priority,\n                    value::text::integer AS count\n                  FROM jsonb_each(priorities)\n                  ) t1\n                WHERE priority >= (job).priority\n                ORDER BY priority ASC\n                LIMIT 1\n              )\n            SELECT\n              (SELECT taken FROM lock_taken), -- R\n              CASE (SELECT taken FROM lock_taken)\n              WHEN false THEN\n                -- Simple case - we couldn't lock the job, so don't update the\n                -- priorities hash.\n                priorities\n              WHEN true THEN\n                CASE count\n                WHEN 1 THEN\n                  -- Remove the priority from the JSONB doc entirely, rather\n                  -- than leaving a zero entry in it.\n                  priorities - priority::text\n                ELSE\n                  -- Decrement the value in the JSONB doc.\n                  jsonb_set(\n                    
priorities,\n                    ARRAY[priority::text],\n                    to_jsonb(count - 1)\n                  )\n                END\n              END\n            FROM relevant\n          $$\n          STABLE\n          LANGUAGE SQL;\n\n          CREATE FUNCTION pg_temp.que_highest_remaining_priority(priorities jsonb) RETURNS smallint AS $$\n            SELECT max(key::smallint) FROM jsonb_each(priorities)\n          $$\n          STABLE\n          LANGUAGE SQL;\n","params":[],"ntuples":0}
I, [2022-05-17T17:12:47.212949 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"listener_instantiate","object_id":45703400,"t":"2022-05-17T15:12:47.212925Z","backend_pid":23821}
I, [2022-05-17T17:12:47.212990 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"poller_instantiate","object_id":70011456617900,"t":"2022-05-17T15:12:47.212971Z","backend_pid":23821,"queue":"default","poll_interval":5}
D, [2022-05-17T17:12:47.213023 #23810] DEBUG -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"event":"locker_start","queues":["default"]}
I, [2022-05-17T17:12:47.213055 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"locker_start","object_id":44081060,"t":"2022-05-17T15:12:47.213037Z","backend_pid":23821,"worker_priorities":[null,null,null],"pollers":[["default",5]]}
I, [2022-05-17T17:12:47.213131 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"connection_execute","object_id":45705920,"t":"2022-05-17T15:12:47.213111Z","backend_pid":23821,"command":"LISTEN que_listener_23821","params":[],"ntuples":0}
I, [2022-05-17T17:12:47.213856 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"connection_execute","object_id":45705920,"t":"2022-05-17T15:12:47.213830Z","backend_pid":23821,"command":"clean_lockers","params":[],"ntuples":0}
I, [2022-05-17T17:12:47.214068 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"connection_execute","object_id":45705920,"t":"2022-05-17T15:12:47.214043Z","backend_pid":23821,"command":"register_locker","params":[3,"{NULL,NULL,NULL}",23810,"hz00",true,"{\"default\"}",1],"ntuples":0}
I, [2022-05-17T17:12:47.214118 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"locker_polling","object_id":44081060,"t":"2022-05-17T15:12:47.214100Z","priorities":{"32767":11},"held_locks":[],"queue":"default"}
I, [2022-05-17T17:12:47.216120 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"poller_polled","object_id":70011456617900,"t":"2022-05-17T15:12:47.216093Z","queue":"default","locked":0,"priorities":{"32767":11},"held_locks":[],"newly_locked":[]}
I, [2022-05-17T17:12:47.216173 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"listener_waiting","object_id":45703400,"t":"2022-05-17T15:12:47.216149Z","backend_pid":23821,"channel":"que_listener_23821","timeout":0.05}
I, [2022-05-17T17:12:47.266326 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"locker_polling","object_id":44081060,"t":"2022-05-17T15:12:47.266300Z","priorities":{"32767":11},"held_locks":[],"queue":"default"}
I, [2022-05-17T17:12:47.266383 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"listener_waiting","object_id":45703400,"t":"2022-05-17T15:12:47.266363Z","backend_pid":23821,"channel":"que_listener_23821","timeout":0.05}
I, [2022-05-17T17:12:47.316520 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"locker_polling","object_id":44081060,"t":"2022-05-17T15:12:47.316495Z","priorities":{"32767":11},"held_locks":[],"queue":"default"}
I, [2022-05-17T17:12:47.316565 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"listener_waiting","object_id":45703400,"t":"2022-05-17T15:12:47.316545Z","backend_pid":23821,"channel":"que_listener_23821","timeout":0.05}
I, [2022-05-17T17:12:47.366701 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"locker_polling","object_id":44081060,"t":"2022-05-17T15:12:47.366676Z","priorities":{"32767":11},"held_locks":[],"queue":"default"}
I, [2022-05-17T17:12:47.366752 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"listener_waiting","object_id":45703400,"t":"2022-05-17T15:12:47.366733Z","backend_pid":23821,"channel":"que_listener_23821","timeout":0.05}
I, [2022-05-17T17:12:47.416890 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"locker_polling","object_id":44081060,"t":"2022-05-17T15:12:47.416864Z","priorities":{"32767":11},"held_locks":[],"queue":"default"}
I, [2022-05-17T17:12:47.416935 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"listener_waiting","object_id":45703400,"t":"2022-05-17T15:12:47.416917Z","backend_pid":23821,"channel":"que_listener_23821","timeout":0.05}
I, [2022-05-17T17:12:47.467070 #23810]  INFO -- : {"lib":"que","hostname":"hz00","pid":23810,"thread":45705860,"internal_event":"locker_polling","object_id":44081060,"t":"2022-05-17T15:12:47.467044Z","priorities":{"32767":11},"held_locks":[],"queue":"default"}

and then the messages are repeated.

Since nothing happened, I restarted as usual:

$ RAILS_ENV=production bundle exec que -p any,any,any &
[1] 24154
17:15:25 May 17 …

The job remains waiting…

The only thing that helps is deleting the job and creating a new one.

The version of que is based on commit 3b7fccc.
