Message boards : GPU Users Group message board : Whatever
Target in sight....
ID: 54077
Go get Bob.
ID: 54101
Am I correct that this project runs with no cache at all? Even after increasing the resource share to 100 because SETI@Home is gone, I get:

Wed 01 Apr 2020 08:35:55 AM EDT | GPUGRID | [sched_op] NVIDIA GPU work request: 340500.27 seconds; 0.00 devices
Wed 01 Apr 2020 08:35:56 AM EDT | GPUGRID | Scheduler request completed: got 0 new tasks
Wed 01 Apr 2020 08:35:56 AM EDT | GPUGRID | [sched_op] Server version 613
Wed 01 Apr 2020 08:35:56 AM EDT | GPUGRID | No tasks sent
Wed 01 Apr 2020 08:35:56 AM EDT | GPUGRID | This computer has reached a limit on tasks in progress

This is with only two active tasks, one queued and one uploading, and a two-day cache setting. Another issue that is going to become prevalent with the influx of new high-powered hosts is the size of the uploaded result files (3 MB for one of them) choking the upload server.
ID: 54190
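For anyone wanting to see the same [sched_op] lines in their own event log: they come from BOINC's scheduler-operations debug flag, which is off by default. Below is a minimal sketch of enabling it from a Linux shell; the data-directory path is the Debian/Ubuntu default and the boinccmd call may need to be run from that directory, so treat both as assumptions to adjust for your install.

```bash
# Enable BOINC's scheduler-operations debug log (this overwrites any existing cc_config.xml).
# /var/lib/boinc-client is the Debian/Ubuntu default data directory -- adjust for your setup.
sudo tee /var/lib/boinc-client/cc_config.xml >/dev/null <<'EOF'
<cc_config>
  <log_flags>
    <sched_op_debug>1</sched_op_debug>
  </log_flags>
</cc_config>
EOF
boinccmd --read_cc_config   # or restart the client / use "Read config files" in the Manager
```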
This project has a limit of 2 WUs per GPU, and a max of 16 total in progress.
ID: 54191
Perfect... thanks, Ian.
ID: 54192
It does work. When I first attached to the project I was still running the old spoofed client with 64 GPUs, and it still only gave me the max of 16.
ID: 54193
Not used to seeing any others here, lol. Well, I guess it's something to get used to. Just read the entire thread. Welcome, everyone...
ID: 54197
I just set the Pandora config to 2X the number of GPUs in the host, same as the project default.
ID: 54213
Except it stopped working overnight; my cache fell and wasn't being replenished because the client never asked for work.
ID: 54228
Hi Guys,
ID: 54558
You've only submitted a handful of tasks, and the tasks being distributed now can be a bit variable in runtime and credit received. I would give it more time and check the averages after both cards have submitted a couple hundred tasks. What motherboard are you running in that system? Which slots are the cards in?
ID: 54560
Thanks for the points, Ian. I am seeing that each new task has a different run time on the same card. The 2070 Super was finishing much faster, but the first cases on each were 4.6 pts/sec for the 2070 and 8.0 pts/sec for the 1070 Ti. I'll keep watching to get a better signal-to-noise ratio. :)
ID: 54567
I'm dabbling a bit with some further power reductions and efficiency boosts.
ID: 54588
I'm still waiting on another flowmeter I ordered from China over a month ago. Other than China Post saying it is in the system, there's been no further progress.
ID: 54594
Testing +125 core right now with the same 150 W PL, and tomorrow I'll try to squeeze +600 mem on top of that to try to claw back that 2%, if I can.

+125 core/+400 mem got me back that 2%, so now it's performing the same at 150 W as it did at 165 W (x7), with cooler temps, and it cuts about 100 W off the system power draw. Win-win if it can stay stable. It's run for 2 days now at 150 W, so at least the +100/+400 and +125/+400 settings seem stable. I run fan speeds static at 75% for all cards; temps range from about 50 C on the coolest card to 60 C on the hottest.

Trying +125 core/+600 mem now to see if it speeds up or not. Memory speeds aren't really throttled by the power limit, but the extra power drawn by the mem OC might cause the core clocks to drop and cost some performance. I'll evaluate the results tomorrow.
ID: 54602
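For readers who want to try a setup like the one described above, here is a sketch of how a power limit, clock offsets, and a static fan speed are commonly applied on a Linux host with nvidia-smi and nvidia-settings. The GPU/fan indices and the performance-level index [3] are assumptions that vary by card and driver; nvidia-settings needs a running X session with Coolbits enabled, and its memory value is a transfer-rate offset, which is not directly comparable to the "memory clock" offset some other tools report.

```bash
# Enable persistence mode and cap GPU 0 at a 150 W power limit (root required)
sudo nvidia-smi -i 0 -pm 1
sudo nvidia-smi -i 0 -pl 150

# Apply a +125 core clock offset, a +400 memory transfer-rate offset,
# and a static 75% fan speed on GPU 0 (indices are examples only)
nvidia-settings \
  -a "[gpu:0]/GPUGraphicsClockOffset[3]=125" \
  -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=400" \
  -a "[gpu:0]/GPUFanControlState=1" \
  -a "[fan:0]/GPUTargetFanSpeed=75"
```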
+125/+600 showed a slight decrease in production (very slight), probably due to the power situation I mentioned in my previous post. I did see a very slight bump in average core clock speeds (visually) when I reduced the mem OC from 600 back to 400. It doesn't seem that GPUGRID benefits much from a memory OC.
ID: 54610
Low credits yesterday?
ID: 55091
Don't know. I have all computers down until I find a new job. Hope all is well. TTYL
ID: 55100
So I restarted a computer back here, only a 2-GPU machine. It tried to run Python and failed miserably, so now it's running ACEMD. Temps on the top GPU are 52 C; will need to keep an eye on that. I have Einstein set as backup. It's putting out a small amount of heat; hope it will help move the cold air out of the main room. We'll see.
ID: 56191
I would avoid the experimental Python tasks for now.
ID: 56192
FYI, you can go beyond 16, barring any other issues like the absurdly long run times preventing you from downloading more.
ID: 56193
I would avoid the experimental Python tasks for now.

Thanks Ian and Keith; for now I'm leaving it as is. I swapped out the intake fan in the back for a be quiet 3 140mm and ordered a be quiet 120mm for the front intake. Hopefully that will be enough to move some air and keep the top GPU's temps down. The bottom GPU is only at 38 C.
ID: 56194
Yes, my slowest host has a turnaround time of 0.43 days.
ID: 56197
The Python tasks are reaching insane levels of credit reward.
ID: 56198
I sure hope they can properly debug this Python app. It would be nice to have an alternative application and task source other than acemd3.
ID: 56200
They’ve already made an improvement to the app, at least in getting the efficiency back up. With the last round of Python tasks, it was similar to the Einstein GW app, where the overall speed (GPU utilization) seemed to depend on the CPU speed, and it used a lot more GPU memory and very little GPU PCIe bandwidth. However, with this latest round of Python tasks, they are back to the same basic setup as the MDAD tasks: low GPU memory use, good 95+% GPU utilization even on slow CPUs, and PCIe use back up to the same level as MDAD. So at least that’s better.
ID: 56203
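As a side note, the GPU utilization and PCIe traffic differences described above can be spot-checked on a Linux host with nvidia-smi's device-monitoring mode; the exact columns shown depend on the driver version.

```bash
# Per-second samples of power/temperature (p), SM and memory utilization (u),
# and PCIe Rx/Tx throughput (t) for every GPU in the host
nvidia-smi dmon -s put
```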