skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
17/10/2010 15:17:56 GPUGRID Message from server: Project has no jobs available
17/10/2010 15:18:31 Project communication failed: attempting access to reference site
17/10/2010 15:18:31 GPUGRID Temporarily failed upload of g158r2-TONI_KKi4-5-200-RND4901_1_4: HTTP error
17/10/2010 15:18:31 GPUGRID Backing off 1 min 0 sec on upload of g158r2-TONI_KKi4-5-200-RND4901_1_4
17/10/2010 15:18:32 Internet access OK - project servers may be temporarily down.
17/10/2010 15:18:32 GPUGRID Temporarily failed upload of 26-KASHIF_HIVPR_n1_bound_cl_ba2-47-100-RND8463_1_4: HTTP error
17/10/2010 15:18:32 GPUGRID Backing off 1 min 0 sec on upload of 26-KASHIF_HIVPR_n1_bound_cl_ba2-47-100-RND8463_1_4
17/10/2010 15:18:32 GPUGRID Started upload of g158r2-TONI_KKi4-5-200-RND4901_1_1
17/10/2010 15:18:32 GPUGRID Started upload of g158r2-TONI_KKi4-5-200-RND4901_1_2
17/10/2010 15:18:39 GPUGRID Finished upload of g158r2-TONI_KKi4-5-200-RND4901_1_1
17/10/2010 15:18:46 GPUGRID Finished upload of g158r2-TONI_KKi4-5-200-RND4901_1_2
17/10/2010 15:19:32 GPUGRID Started upload of g158r2-TONI_KKi4-5-200-RND4901_1_4
17/10/2010 15:19:33 GPUGRID [error] Error reported by file upload server: [g158r2-TONI_KKi4-5-200-RND4901_1_4] locked by file_upload_handler PID=6102
17/10/2010 15:19:33 GPUGRID Started upload of 26-KASHIF_HIVPR_n1_bound_cl_ba2-47-100-RND8463_1_4
17/10/2010 15:19:33 GPUGRID Temporarily failed upload of g158r2-TONI_KKi4-5-200-RND4901_1_4: transient upload error
17/10/2010 15:19:33 GPUGRID Backing off 1 min 0 sec on upload of g158r2-TONI_KKi4-5-200-RND4901_1_4
17/10/2010 15:19:34 GPUGRID [error] Error reported by file upload server: [26-KASHIF_HIVPR_n1_bound_cl_ba2-47-100-RND8463_1_4] locked by file_upload_handler PID=6122
17/10/2010 15:19:34 GPUGRID Temporarily failed upload of 26-KASHIF_HIVPR_n1_bound_cl_ba2-47-100-RND8463_1_4: transient upload error
17/10/2010 15:19:34 GPUGRID Backing off 1 min 10 sec on upload of 26-KASHIF_HIVPR_n1_bound_cl_ba2-47-100-RND8463_1_4
Any server issues?
These are normally transient errors, but I have been getting quite a few of them.
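For context, the increasing delays in the log ("Backing off 1 min 0 sec", then "1 min 10 sec", "3 min 44 sec", "12 min 44 sec") are the client spacing out retries of transient upload errors. A rough sketch of that kind of randomized exponential backoff (illustrative only, not BOINC's actual code; the names and constants are assumptions):

```python
import random

def backoff_delay(retry, base=60.0, cap=3600.0):
    # Randomized exponential backoff: double the delay on each
    # failed attempt, capped, with jitter so many clients don't
    # all retry in lockstep.  Illustrative sketch, not BOINC code.
    delay = min(cap, base * (2 ** retry))
    return delay * random.uniform(0.5, 1.0)
```

Because of the jitter, the first retry lands somewhere between 30 s and 60 s, which matches the roughly-one-minute back-offs in the log above.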
kts
Joined: 4 Nov 10 | Posts: 21 | Credit: 25,973,574 | RAC: 0
Same issue, on my first work unit (10 hrs 33 min on a GTX 460):
11/6/2010 1:05:32 PM GPUGRID [error] Error reported by file upload server: [p29-IBUCH_2_opt1_pYEEI_101027-7-20-RND2807_0_1] locked by file_upload_handler PID=19639
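The "locked by file_upload_handler PID=..." message means a second upload attempt found the partial file still locked by another handler process. A minimal sketch of that pattern, using an exclusive non-blocking file lock (illustrative only; the real file_upload_handler has its own locking code, and `try_lock` is a hypothetical name):

```python
import errno
import fcntl

def try_lock(path):
    # Try to take an exclusive, non-blocking lock on the upload
    # file.  If another process (the PID in the error message)
    # still holds the lock, the attempt fails and the client
    # treats it as a transient error and backs off.
    # Illustrative sketch, not the real handler.
    f = open(path, "ab")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f        # lock held; caller must keep f open
    except OSError as e:
        f.close()
        if e.errno in (errno.EAGAIN, errno.EACCES):
            return None  # lock held elsewhere -> "transient upload error"
        raise
```

This is also why the error usually clears on its own: once the first handler finishes (or its stale lock is cleaned up), the retry succeeds.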
skgiven
Seems to have been reported now:
3239846 2044372 4 Nov 2010 16:51:15 UTC 6 Nov 2010 10:44:36 UTC Completed and validated 38,015.73 8,034.30 7,954.42 9,943.03 ACEMD2: GPU molecular dynamics v6.11 (cuda31)
kts
Thanks... another small issue:
11/12/2010 7:14:44 AM GPUGRID [error] Error reported by file upload server: [input_r165s1-TONI_MSM5-3-4-RND5451_1_3] locked by file_upload_handler PID=17673
Yet the server status page reports GROSSO as all green, with only 2 results to send.
Thanks from Japan
My BOINC client uploads the result files, and then I receive an error message:
GPUGRID [error] Error reported by file upload server: can't open file
The result files are sitting in the BOINC client's data transfer queue, and my GPUs have run out of work.
I can see on the server status page that the gpugrid_file_deleter's status is "Not Running". Are you aware of this situation?
skgiven
Server status: gpugrid_file_deleter grosso Not Running
When I upload tasks it reaches 100% but does not finish and backs off.
I don't like this bit:
31/01/2011 17:55:44 GPUGRID Message from server: (reached limit of 2 GPU tasks in progress)
"Don't like this bit,
31/01/2011 17:55:44 GPUGRID Message from server: (reached limit of 2 GPU tasks in progress)"
Me too. My machines can't upload, and the project backs off. The server thinks those 2 WUs are still active and consequently refuses to download more, so at the moment there is no more crunching. I spend less money on energy as my boards go to sleep. I think we should be able to keep a few more WUs in cache.
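The effect described above follows directly from the per-host quota the scheduler message names: results that are finished but stuck in the upload queue still count as "in progress", so no new work is sent until they clear. A sketch of that logic (illustrative only, not the actual GPUGRID scheduler code; `tasks_to_send` is a hypothetical name):

```python
def tasks_to_send(in_progress, limit=2):
    # Per-host GPU task quota as described by the scheduler
    # message "(reached limit of 2 GPU tasks in progress)".
    # Finished-but-not-yet-uploaded results still count toward
    # in_progress, so a stuck upload blocks all new downloads.
    return max(0, limit - in_progress)
```

With the limit at 2 and both results stuck uploading, the host is offered zero new tasks, which is exactly the situation the posts describe.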
skgiven
It's up and running normally again; if you have no tasks, do a manual update.
I'm receiving this:
2011.07.04. 11:30:26 GPUGRID [error] Error reported by file upload server: [A182-TONI_AGGsoup1-18-100-RND3708_1_4] locked by file_upload_handler PID=6511
2011.07.04. 11:30:26 GPUGRID Temporarily failed upload of A182-TONI_AGGsoup1-18-100-RND3708_1_4: transient upload error
2011.07.04. 11:30:26 GPUGRID Backing off 3 min 44 sec on upload of A182-TONI_AGGsoup1-18-100-RND3708_1_4
2011.07.04. 11:34:27 GPUGRID Started upload of A182-TONI_AGGsoup1-18-100-RND3708_1_4
2011.07.04. 11:48:29 Project communication failed: attempting access to reference site
2011.07.04. 11:48:29 GPUGRID Temporarily failed upload of A182-TONI_AGGsoup1-18-100-RND3708_1_4: HTTP error
2011.07.04. 11:48:29 GPUGRID Backing off 12 min 44 sec on upload of A182-TONI_AGGsoup1-18-100-RND3708_1_4
2011.07.04. 11:48:30 Internet access OK - project servers may be temporarily down.
The server status page is all green, while the project's web pages are loading very slowly.
skgiven
Yeah, I am seeing repeated back-offs when trying to upload completed work, but I did download new tasks around that time (within the last few hours).
I see there are new Betas in the queue, so perhaps they are/were just loading those onto the system at the time. GDF did say he would be working on the priority issues, so perhaps that is behind this.