nateSend message
Joined: 6 Jun 11 Posts: 124 Credit: 2,928,865 RAC: 0 Level
Scientific publications
|
Hi all,
I have submitted some new work units that will replace some I submitted earlier in the week. The names will be "NATHAN_FAX3". These tasks are in the true spirit of the long queue, and will take about 12+ hours on the fastest cards. Some have already been returned and indeed have been around 13 hours. This is markedly longer than what you have expected traditionally, but we really want the long queue to be for critical tasks, computationally intensive tasks, and the like. I suggest you all take note of how these tasks run on your computers and be mindful of temps and errors as you start to receive them.
I have noticed some crunchers expressing concern/dismay that perhaps they will not be able to get the 24h bonus with such long tasks. We are mindful of that concern, and will keep an eye on this group as an experiment. If we think it is too unfair to people with fast but not the fastest cards, we'll be sure to correct that in future groups. But the less send/receive we have to do, the better. We are also mindful of the fact that longer tasks might be more susceptible to errors/crashes, and we want to see how this goes. I'll be looking out for the severe error percentage over the next few days for any problems.
Also, a note about tasks beginning with NATHAN_FA... These tasks are unique in that they are quite large simulations, compared to many others we have done in the past which are smaller (bigger biomolecules mean bigger simulations). They not only take longer per step, but require more memory. Cards with lower memory (below 1GB) may suffer additional performance loss. There is nothing we can do about this, unfortunately.
Happy crunching.
Nate |
|
|
|
Thanks for the info.
This is markedly longer than what you have expected traditionally, but we really want the long queue to be for critical tasks, computationally intensive tasks, and the like.
I am expecting to run them in approximately 32 hours. In cases like this, where a task takes more than 24 hours, I want to know whether you prefer that I process them anyway, or that I switch off the "long runs" option.
____________
HOW TO - Full installation Ubuntu 11.10 |
|
|
wiyosayaSend message
Joined: 22 Nov 09 Posts: 114 Credit: 589,114,683 RAC: 0 Level
Scientific publications
|
Regarding the 24 hour limit, I also am concerned about this because I have been experiencing a problem with the connection on upload which is apparently unrelated to either my system or the servers for GPUGRID.
When uploading a completed work unit, the upload invariably fails on one file. If there are successive failures, BOINC will retry, and BOINC adds time to the retry delay each time so that in the worst case, BOINC might not retry for up to 8 hours or more. There is, apparently, nothing that I or GPUGRID can do about this, and if the worst should happen, then it appears that my machine has not returned the work unit within the 24 hour limit - when it has, in fact, completed the work unit and made best attempts to return it.
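The retry behavior described here, with BOINC lengthening the delay after each failed upload until it can reach 8 hours or more, is a classic exponential backoff. A minimal sketch of that idea (the base delay, cap, and jitter below are illustrative assumptions, not BOINC's actual constants):

```python
import random

def retry_delay_seconds(n_failures, base=60.0, cap=8 * 3600.0):
    """Exponential backoff: the delay doubles with each consecutive
    failure, capped here at 8 hours (the worst case described above).
    A little random jitter spreads retries out across many clients."""
    delay = min(base * (2.0 ** n_failures), cap)
    return delay * random.uniform(0.9, 1.0)

# After 10 straight failures the uncapped delay would be 60s * 2^10
# (about 17 hours), so the 8-hour cap takes over.
print(retry_delay_seconds(10) <= 8 * 3600.0)
```

This is why a single flaky upload can push a finished result past the bonus deadline: the gaps between attempts grow geometrically.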
Personally, I think the 24 hour limit is completely unfair, especially in cases like this where my machine and, indeed, I, have done everything humanly possible to return the work unit.
I understand what you are trying to accomplish by the 24 hour limit, however, I do not agree with the implementation.
____________
|
|
|
|
Man, I thought I was going crazy when I saw this task running for over 15 hours, before I read this post. I thought my card had downclocked or something, but windows wasn't reporting anything unusual. I just let it run and it finished fine, in about 20 hours on my overclocked GTX570.
I did receive consistent credit with other "NATHAN_FAX" tasks -- the whopper was 114,000 in credit.
If I had a lesser card, the 24 hour bonus would have been in jeopardy. I would prefer the tasks to run in the 8-12 hour window, but an occasional one like this doesn't matter (as long as it doesn't fail after 95%). |
|
|
|
They not only take longer per step, but require more memory. Cards with lower memory (below 1GB) may suffer additional performance loss. There is nothing we can do about this, unfortunately.
I have 3034MB (3GB) on each of my two GTX 580 cards (most of which usually appears unused for GPUGrid tasks).
Is it possible for GPUGrid to allocate the tasks with high memory requirements to such computers?
If not, why not?
It is a high-capacity resource (6GB) going to waste, as far as I can determine. |
|
|
skgivenVolunteer moderator Volunteer tester
Send message
Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 Level
Scientific publications
|
If you look at task properties in BOINC, you will only see how much system memory is being used. You would need a tool such as GPU-Z to see the amount of GPU memory being used on your card.
How much is being used? 1243MB of my GTX470's 1280MB is in use, which explains the higher temps; ~450MB is more common for GPUGrid tasks. I expect W7 may be further restricted, as the operating system grabs some GDDR for itself.
Typically the amount of memory used is directly related to the number of shaders (CUDA cores). So if 1243MB is used on a GTX470 (448 shaders), a GTX580 should scale to (512/448) × 1243MB ≈ 1420MB. It normally scales down quite well too, so lesser GPUs (with fewer shaders) don't suffer from having less GDDR (a GTX460 with 512MB is normally on par with the 1GB version).
The amount of GPU memory used is also constrained by the experiment's molecule count. However, if there were a need to use more memory, to run experiments with a larger molecule count, GPUGrid could run one task across more than one GPU (for those with multi-GPU setups), and/or utilize the extra memory on cards such as the 3GB GTX580.
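The shader-ratio rule of thumb above boils down to a one-line calculation. A hypothetical sketch (the helper name is mine; the linear-in-shader-count assumption comes from this post, not from any GPUGrid tool):

```python
def estimate_gddr_mb(measured_mb, measured_shaders, target_shaders):
    """Estimate GDDR usage on a target card by scaling a value measured
    on another card by the ratio of their shader (CUDA core) counts."""
    return measured_mb * target_shaders / measured_shaders

# 1243MB observed on a GTX470 (448 shaders), estimated for a GTX580 (512):
print(int(estimate_gddr_mb(1243, 448, 512)))  # ~1420
```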
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help |
|
|
BikermattSend message
Joined: 8 Apr 10 Posts: 37 Credit: 3,933,855,352 RAC: 6,312,361 Level
Scientific publications
|
Man, I thought I was going crazy when I saw this task running for over 15 hours, before I read this post. I thought my card had downclocked or something, but windows wasn't reporting anything unusual. I just let it run and it finished fine, in about 20 hours on my overclocked GTX570.
I thought about aborting one myself. Luckily when I saw one it only had an hour left, so I let it finish. These are running at around 20 hours on a stock-clocked GTX470 in Linux. Unfortunately, with a 115 MB upload I cannot get them turned in fast enough.
I have my cache set to 0.01 but they still download a few hours before they start so I run out of time. |
|
|
|
I got one of them finished too. I was only wondering why it gives only 30,000 credits for ~35,000 secs (FAX) instead of nearly 35,000 credits for ~25,000 secs (CB1) ^^
http://www.gpugrid.net/workunit.php?wuid=3247302
Now I've got a second one; I will see how this one goes ^^
I think on a 560 Ti it would take ~12.5 hours; the 285 took a little under 12 hours.
____________
DSKAG Austria Research Team: http://www.research.dskag.at
|
|
|
|
Man, I thought I was going crazy when I saw this task running for over 15 hours, before I read this post. I thought my card had downclocked or something, but windows wasn't reporting anything unusual. I just let it run and it finished fine, in about 20 hours on my overclocked GTX570.
I thought about aborting one myself. Luckily when I saw one it only had an hour left, so I let it finish. These are running at around 20 hours on a stock-clocked GTX470 in Linux. Unfortunately, with a 115 MB upload I cannot get them turned in fast enough.
I have my cache set to 0.01 but they still download a few hours before they start so I run out of time.
Set your cache in BOINC to 0 and "connect every" to 0, and then a task will complete before another one downloads.
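For reference, those zero-cache settings correspond to BOINC's work-buffer preferences, which can also be set in a global_prefs_override.xml. This is a sketch using BOINC's standard preference names; check your client version's documentation for the exact file and fields:

```xml
<global_preferences>
  <!-- "Store at least X days of work" -->
  <work_buf_min_days>0.0</work_buf_min_days>
  <!-- "Store up to an additional X days of work" -->
  <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
```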
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline |
|
|
|
How much is being used? 1243MB of my GTX470's 1280MB is being used - which explains the higher temps. ~450MB is more common for GPUGrid tasks. I expect W7 may be further restricted by this, as the operating system grabs some GDDR for itself.
My current memory usage (running one NATHAN_FAX3 task and one NATHAN_CB1 task) is ~700MB = 23% @ 71°C and
~610MB = 20% @ 65°C.
Again, this seems like a curious waste of RAM. |
|
|
skgivenVolunteer moderator Volunteer tester
Send message
Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 Level
Scientific publications
|
That's around half what I expected (1420MB). So why is a GTX580 using half the RAM it should be? GDDR usage doesn't tend to change too much. So is it the operating system, driver, or app?
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help |
|
|
mikeySend message
Joined: 2 Jan 09 Posts: 298 Credit: 6,653,775,787 RAC: 14,838,342 Level
Scientific publications
|
How much is being used? 1243MB of my GTX470's 1280MB is being used - which explains the higher temps. ~450MB is more common for GPUGrid tasks. I expect W7 may be further restricted by this, as the operating system grabs some GDDR for itself.
My current memory usage (running one NATHAN_FAX3 task and one NATHAN_CB1 task) is ~700MB = 23% @ 71°C and
~610MB = 20% @ 65°C.
Again, this seems like a curious waste of RAM.
Do you know if an app_info file will work here? If so you may be able to set it up to run two units at once on each card. That however WILL stress the limits of your cards AND crank up the heat output of your cards too! This CAN significantly shorten the life of your cards. |
|
|
|
I got one of them finished too. I was only wondering why it gives only 30,000 credits for ~35,000 secs (FAX) instead of nearly 35,000 credits for ~25,000 secs (CB1) ^^
http://www.gpugrid.net/workunit.php?wuid=3247302
Now I've got a second one; I will see how this one goes ^^
I think on a 560 Ti it would take ~12.5 hours; the 285 took a little under 12 hours.
Well, that is significantly better than what my 560 Ti is doing.
Currently running http://www.gpugrid.net/result.php?resultid=5098225
Overclocked to 950/1900/2007 @ 1.025V, and it's showing 13:21:xx time used with 10:40:xx to go. So that is going to be just over 24 hrs, NOT including upload time. How big are the upload files on these anyway?
Additional information for those interested: it's a 2GB model currently using 1240MB VRAM and 263MB system RAM.
4.5GHz i7-2600k
|
|
|
|
I got one of them finished too. I was only wondering why it gives only 30,000 credits for ~35,000 secs (FAX) instead of nearly 35,000 credits for ~25,000 secs (CB1) ^^
http://www.gpugrid.net/workunit.php?wuid=3247302
Now I've got a second one; I will see how this one goes ^^
I think on a 560 Ti it would take ~12.5 hours; the 285 took a little under 12 hours.
Well, that is significantly better than what my 560 Ti is doing.
Currently running http://www.gpugrid.net/result.php?resultid=5098225
Overclocked to 950/1900/2007 @ 1.025V, and it's showing 13:21:xx time used with 10:40:xx to go. So that is going to be just over 24 hrs, NOT including upload time. How big are the upload files on these anyway?
Additional information for those interested: it's a 2GB model currently using 1240MB VRAM and 263MB system RAM.
4.5GHz i7-2600k
Just as an addition to this, I am now showing that it will take approx. 25 hours to complete this task. Apart from missing the bonus points, 25 hours really is too long for a work unit, even a big one. I know this is only a 560 Ti, but a 2GB model running at 950 should be able to do one quicker.
I'll try running one more after this one, just to confirm the timeframe. If it really is around 25 hours, well, I guess I won't be doing them anymore :(
|
|
|
|
Simba: I finally ran a real FAX3 unit now: 30-31 hours runtime (on the 285), with an additional 12h of upload time ^^ So you're not alone with the 560 Ti ^^ (25 hours is not bad for that card!) I will not run any FAX3 WUs on my 560 Ti because they compute too long. I often run the gaming PC overnight for 2 or 3 WUs after gaming, but now I won't continue that, because I could get a FAX3 while I'm asleep and want to turn the PC off in the morning, not wait an additional 20 hours or so ^^
We still get a bonus for reporting under 48 hours, but not as much as under 24 hours.
____________
DSKAG Austria Research Team: http://www.research.dskag.at
|
|
|
nateSend message
Joined: 6 Jun 11 Posts: 124 Credit: 2,928,865 RAC: 0 Level
Scientific publications
|
Sorry for the delayed response; I have been at a conference for the past two days. Considering the responses and the stats we have gotten, it definitely seems that these are a little too long. There are a lot of people who complete the tasks on time but then have trouble uploading them in time for the bonuses at both the 24 and 48 hr limits (116MB is a large upload size). More importantly, there also seem to be more severe errors than are typical, though it's not clear why that is right now.
Therefore, I am going to modify and rename these tasks. They will be renamed to NATHAN_FAX4 (surprise!), and they will run for ~66% of the time of the FAX3 jobs. My checks here indicate that they will run for 8.3 hours on a GTX 580, so remember that they are still not like short tasks. People who are not using GTX 500 or 400 series cards should consider crunching only on the short queue if they are concerned about runtime, bonuses, failures, overheating, etc. Ultimately, the choice is yours. There is still a 25% bonus for finishing before 48 hours.
Hopefully this hits the sweet spot we and you guys have been looking for. This is a learning moment for us and it has been good to get some data on the behavior of such long tasks.
Nate |
|
|
nateSend message
Joined: 6 Jun 11 Posts: 124 Credit: 2,928,865 RAC: 0 Level
Scientific publications
|
Wiyosaya: Definitely an unfortunate situation. I'm not sure what we can do on our end, since we can't change how BOINC works. I don't think we can do anything, but will ask.
Michael Kingsford Gray: We can't choose which computers get which jobs. The reason is that the BOINC system is not set up to allow that, and we can't modify it to do that. We can make different queues, as we already have three (ACEMD2 aka short, ACEMDlong, and ACEMDbeta), but making too many queues makes it confusing for crunchers and also increasingly complicated for us. The only people that could fix it to a better system are the people who maintain the BOINC software.
dskagcommunity et al.: I should clear up confusion about the different "FAX" named tasks. The first round of tasks, titled "NATHAN_FAX-" (with no number after), were supposed to be long like the FAX3 tasks. However, there was a problem with our test software and it underestimated runtimes (but correctly calculated credit). Those have been replaced by the FAX3 tasks. There were "FAX2" tasks, but they were stopped before anyone received them because the credit was incorrect by a huge factor (a human mistake; I failed to edit a line in the submission to the server). As you know, the "FAX3" tasks are the very long tasks, which have proven to be a bit too long for everyone's taste, so I will modify them to run for shorter times as I explained above. I will rename them to "FAX4" in order to make the distinction clear. Sorry for all the different task names, but I need to be sure I can keep track of the simulations, and what data comes from where.
|
|
|
|
Personally, I think 66% is a really fair alternative :)
____________
DSKAG Austria Research Team: http://www.research.dskag.at
|
|
|
ritterm Send message
Joined: 31 Jul 09 Posts: 88 Credit: 244,413,897 RAC: 0 Level
Scientific publications
|
Hopefully this hits the sweet spot we and you guys have been looking for. This is a learning moment for us and it has been good to get some data on the behavior of such long tasks.
Thanks for this response from you and the project. :-) I just got one of the FAX4's and should be crunching it on my 570 in about 90 minutes.
____________
|
|
|
|
I'm just starting on a FAX4, and while my GPU idled for a few seconds I saw in GPU-Z that the memory used was 150MB for Win7.
Now with the card loaded with FAX4 it's at 1232MB, so that's 1082MB used by FAX4. Memory controller load is at 28-29% with an 800MHz memory clock (GPU-Z reading).
One thing I do NOT like is the GPU load. With other long runs and other projects the GPU load is anywhere between 94-99%. Now it's at 86-88% ?!
CPU is a 2500K @ 4500MHz.
I have swan_sync=0 and only three CPU tasks running along with GPUGrid.
Do I really have to put my CPU at higher clocks, or what?
This is the task: http://www.gpugrid.net/workunit.php?wuid=3264669
E: Well, I bumped the clocks to 4800MHz and it really didn't affect the GPU load %; still around 87-88% |
|
|
nateSend message
Joined: 6 Jun 11 Posts: 124 Credit: 2,928,865 RAC: 0 Level
Scientific publications
|
I'm just starting on a FAX4, and while my GPU idled for a few seconds I saw in GPU-Z that the memory used was 150MB for Win7.
Now with the card loaded with FAX4 it's at 1232MB, so that's 1082MB used by FAX4. Memory controller load is at 28-29% with an 800MHz memory clock (GPU-Z reading).
One thing I do NOT like is the GPU load. With other long runs and other projects the GPU load is anywhere between 94-99%. Now it's at 86-88% ?!
CPU is a 2500K @ 4500MHz.
I have swan_sync=0 and only three CPU tasks running along with GPUGrid.
Do I really have to put my CPU at higher clocks, or what?
This is the task: http://www.gpugrid.net/workunit.php?wuid=3264669
That's a pretty solid overclock already, so I wouldn't push it unless you really want to. Or if you are using LN2 to cool your cores ;).
These tasks seem to utilize less percent of the GPU than others, and I'm not sure why that is at the moment. I'm not sure if it's an issue with dividing the task between the CPU and GPU or something else (memory management, maybe?). I'll try to get back to you. There may not be anything we or you can do. We'll see. |
|
|
|
You seem to have found the happy medium (sweet spot), Nate.
I am crunching a FAX4 WU on Win7 with an i7 990X @ 3876MHz and a GTX570: similar GPU load to Lagittaja at 86-88%, memory controller load quite a bit less @ 16%, and only 690MB of memory in use. I have swan_sync=0 and only two CPU tasks running along with GPUGrid.
It is looking like approx. 13hrs 40mins to complete.
Do these usages seem right to you, or is there a way to improve on this? |
|
|
|
I'm just starting on a FAX4, and while my GPU idled for a few seconds I saw in GPU-Z that the memory used was 150MB for Win7.
Now with the card loaded with FAX4 it's at 1232MB, so that's 1082MB used by FAX4. Memory controller load is at 28-29% with an 800MHz memory clock (GPU-Z reading).
One thing I do NOT like is the GPU load. With other long runs and other projects the GPU load is anywhere between 94-99%. Now it's at 86-88% ?!
CPU is a 2500K @ 4500MHz.
I have swan_sync=0 and only three CPU tasks running along with GPUGrid.
Do I really have to put my CPU at higher clocks, or what?
This is the task: http://www.gpugrid.net/workunit.php?wuid=3264669
That's a pretty solid overclock already, so I wouldn't push it unless you really want to. Or if you are using LN2 to cool your cores ;).
These tasks seem to utilize less percent of the GPU than others, and I'm not sure why that is at the moment. I'm not sure if it's an issue with dividing the task between the CPU and GPU or something else (memory management, maybe?). I'll try to get back to you. There may not be anything we or you can do. We'll see.
Temps aren't an issue with only three CPU tasks running :)
At 4500MHz I can go with 1.272V, and at 4800MHz with 1.376V.
But getting to my point: well, I did raise the 2500K to 4800MHz and the GPU load wasn't affected, still in the 86-88% ballpark, so it is not a CPU bottleneck for sure, even though the GPU process in Task Manager shows 25% usage, i.e. it is completely hogging the free core I gave it with swan_sync.
The funny thing is that the GPU load seems to vary a little. First I see it jumping around between 86-88%, then suddenly it's holding steady at 90-91%, and after a while it goes back to jumping between 86-88%.
Otherwise, great WUs :thumb: |
|
|
|
Thanks Nate :)
Those changes should make things a little easier and more consistent.
As for the GPU usage, anything above 90% is good IMHO; 95% would be excellent.
I have no idea about the coding used, but it appears to adapt to the workunit on the fly. As in, GPU usage can change throughout the run. I too have noticed that usage can change from 80-88% during a run.
I've also noticed that the time taken to complete a % can change throughout the run.
My last unit took significantly more time to process the last 25% of the unit than it did the other 75% (when observed in 25% blocks).
I think 12 hours or so is an optimum run-time. That lets us run machines overnight and complete a task so we can turn them off before we go to work. They are also not so long that it is too annoying if a unit fails close to the end. You would have heard my scream from Australia had that unit of mine crashed at 22+ hours of runtime :/ |
|
|
candidoSend message
Joined: 12 Jun 11 Posts: 12 Credit: 150,069,999 RAC: 0 Level
Scientific publications
|
Just lost one after 10 hours running.
aaarrrrrggggghhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh!
Well, no harm done,
and it was my fault (messing with overclocking while knowing nothing about it; I'm just going to return the card to default settings and leave it like that).
____________
|
|
|
|
That's around half what I expected (1420MB).
...So is it the operating system, driver, or app?
Problem solved!
I was relying on the windows side-bar widgets (one for each card) which reports half of the memory usage that GPU-Z does.
The widgets are broken.
Damn, I feel like a newbie now! |
|
|
nateSend message
Joined: 6 Jun 11 Posts: 124 Credit: 2,928,865 RAC: 0 Level
Scientific publications
|
Spatzthecat: That does seem on the slow side for a 570. For these tasks 10-11 hours seems about average (for that card). Windows has slightly lower performance than Linux, so that might explain part of it. Further, it looks like you are running on your display card, which will also take away some performance if you are using the system to do other stuff. I'm not sure about other factors.
Lagittaja: Thanks for the info. Still looking into it, but it seems that a script added to the simulation may be the cause. Some simulations don't have them, but this one required it. We try to optimize the scripts as much as possible but they are interpreted at runtime and cause slowdowns. Many functions, but not all, are implemented in C/C++ to help out with speed.
Simba123:
I think 12 hours or so is an optimum run-time. That lets us run machines overnight and complete a task so we can turn them off before we go to work. Also they are not long enough that it is too annoying if a unit fails close to the end.
Thanks for the comment. This was one of the additional reasons for making them shorter. I noticed a few people saying something along those lines. Some people only crunch part-time, and we don't want to lose their contribution simply because they can't run steady 24/7. We may still be a little on the long side in that respect, so we'll consider that for the next big batch that goes to the long queue.
Michael Kingsford Gray: Glad to hear you got it figured out. |
|
|
ritterm Send message
Joined: 31 Jul 09 Posts: 88 Credit: 244,413,897 RAC: 0 Level
Scientific publications
|
My stock C2Q/GTX570 finished this FAX4 in about 14.5 hours with SWAN_SYNC=0 and 1 dedicated CPU. Almost 3 times as long to finish as a typical NATHAN_CB1 task for only twice the credit? :-( [ ;-)]
____________
|
|
|
MikkieSend message
Joined: 19 Apr 11 Posts: 4 Credit: 3,779,371 RAC: 0 Level
Scientific publications
|
I'm leaving this project. My card could handle those CB1 ones; they took about 12-13 hrs (my limit of computer use) on my 460SE card.
But these FAX things are too long for me, and because of the time limitation I also missed out on any bonus points with these WUs. I am not interested in your short stuff.
For the fast cards and 24/7 people, 'good luck' with these FAX batch(es). It's time to change to another project. Bye. |
|
|
|
I'm leaving this project. My card could handle those CB1 ones; they took about 12-13 hrs (my limit of computer use) on my 460SE card.
But these FAX things are too long for me, and because of the time limitation I also missed out on any bonus points with these WUs. I am not interested in your short stuff.
For the fast cards and 24/7 people, 'good luck' with these FAX batch(es). It's time to change to another project. Bye.
Wow. I guess this project is done for then. Time to shut down the servers, Mikkie is leaving. |
|
|
nonameSend message
Joined: 26 Aug 08 Posts: 4 Credit: 14,438,740 RAC: 0 Level
Scientific publications
|
I'm leaving this project. My card could handle those CB1 ones; they took about 12-13 hrs (my limit of computer use) on my 460SE card.
But these FAX things are too long for me, and because of the time limitation I also missed out on any bonus points with these WUs. I am not interested in your short stuff.
For the fast cards and 24/7 people, 'good luck' with these FAX batch(es). It's time to change to another project. Bye.
I have a GTX295 and a GTX265; the old NATHAN tasks were perfect for these cards (runtime under 9h), but the new FAX3 runs for 56 hours. Sorry guys, too long. So I switched back to normal tasks, and guess what? An A739-TONI... task is @ 73% after 12h 30min, with approximately 4h 30min left to finish. I want to crunch this project, and I don't think I have slow cards, but is it that much of a problem to make smaller tasks? The 8h NATHAN tasks were ideal; even the old FAX tasks, with a 12h runtime on my hardware, were acceptable. Normal tasks for 18 hours???
|
|
|
skgivenVolunteer moderator Volunteer tester
Send message
Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 Level
Scientific publications
|
FAX3 has been replaced with FAX4, which is ~2/3rds as long.
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help |
|
|
|
It appears that both the FAX3 and FAX4 tasks have been exhausted, potentially rendering the nagging gripes moot. |
|
|
|
Still getting FAX4's & the occasional FAX3 here. |
|
|
skgivenVolunteer moderator Volunteer tester
Send message
Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 Level
Scientific publications
|
There are a couple of non-FAX tasks in the queue, NATHAN_CB1 and IBUCH_xxxTRYP, which could have gone your way. Nate's 'Long' CB1 tasks are the sort that run faster than Kashif's 'Normal length' HIV tasks, so we are not quite gripe-free:
I4R7-NATHAN_CB1_1-88-125-RND9588_0 3227378 91249 4 Mar 2012 | 9:08:19 UTC 4 Mar 2012 | 17:18:52 UTC Completed and validated 16,961.14 16,768.88 35,811.00 Long runs (8-12 hours on fastest card) v6.15 (cuda31)
277-KASHIF_HIVPR_cl_ba1-28-100-RND4840_0 3226987 115641 4 Mar 2012 | 7:43:11 UTC 4 Mar 2012 | 14:36:23 UTC Completed and validated 18,222.69 18,194.16 10,552.50 ACEMD2: GPU molecular dynamics v6.15 (cuda31)
Fortunately Ignasi's TRYP tasks are bringing balance to the GeForce,
358-IBUCH_metTRYP1-0-2-RND2279_2 3270731 91249 15 Mar 2012 | 23:51:50 UTC 16 Mar 2012 | 8:25:54 UTC Completed and validated 25,956.25 8,342.53 35,400.00 Long runs (8-12 hours on fastest card) v6.16 (cuda31)
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help |
|
|
Bruce GSend message
Joined: 16 Mar 12 Posts: 2 Credit: 52,236,725 RAC: 0 Level
Scientific publications
|
I'm new, and I've got 53 hours to do. It's also not crunching, but saying the computer is in use. I have an i5 processor and a CUDA GeForce 410 mobile with the latest drivers; any suggestions?
Bruce |
|
|
Bruce GSend message
Joined: 16 Mar 12 Posts: 2 Credit: 52,236,725 RAC: 0 Level
Scientific publications
|
I've gone on to join World Community grid. |
|
|
|
I'm new, and I've got 53 hours to do. It's also not crunching, but saying the computer is in use. I have an i5 processor and a CUDA GeForce 410 mobile with the latest drivers; any suggestions?
Your GeForce 410 mobile is too slow for this project.
I've gone on to join World Community grid.
So you figured it out by yourself. At WCG your GPU will be useful. |
|
|
skgivenVolunteer moderator Volunteer tester
Send message
Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 Level
Scientific publications
|
it's also not crunching but saying the computer is in use
You would have needed to select 'Use GPU while computer is in use' in BOINC Manager to run GPU tasks while using the system, but as Zoltan said, your GPU is not up to running GPUGrid tasks. While WCG has just started some GPU beta testing, your GeForce 410M would not be up to crunching there either; WCG does, however, have plenty of good CPU projects. GPUGrid does not run CPU tasks. Perhaps your GPU is of use at Einstein, though I think they are winding up a run.
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help |
|
|
|
ouch.
just picked up
http://www.gpugrid.net/result.php?resultid=5123672
and it's showing a time to complete of over 29 hours. :(
I'll give it an hour to see if that time comes down to below 24 hours.
If it doesn't, I'll abort it.
|
|
|
wiyosayaSend message
Joined: 22 Nov 09 Posts: 114 Credit: 589,114,683 RAC: 0 Level
Scientific publications
|
Just finished a FAX4 on a GTX 460 1GB. It took 28 hours, plus about 45 minutes of pressing the retry button to get the WU to upload completely.
http://www.gpugrid.net/workunit.php?wuid=3278935
____________
|
|
|
|
Just finished a FAX4 on a GTX 460 1GB. It took 28 hours, plus about 45 minutes of pressing the retry button to get the WU to upload completely.
http://www.gpugrid.net/workunit.php?wuid=3278935
Well, 41,331.61 seconds on a GTX480, driven by a QX9650 CPU running @ 3.5GHz,
seems OK then. The card has 15 SMs (480 CUDA cores) and 1532(?) MB of DRAM.
Work unit 3323833.
____________
Knight Who Says Ni N! |
|
|
mikeySend message
Joined: 2 Jan 09 Posts: 298 Credit: 6,653,775,787 RAC: 14,838,342 Level
Scientific publications
|
I'm new, and I've got 53 hours to do. It's also not crunching, but saying the computer is in use. I have an i5 processor and a CUDA GeForce 410 mobile with the latest drivers; any suggestions?
Your GeForce 410 mobile is too slow for this project.
I've gone on to join World Community grid.
So you figured it out by yourself. At WCG your GPU will be useful.
WCG does NOT have any GPU units except those in Beta testing, which are VERY infrequent!! |
|
|
|
Just completed two 'Paola' tasks, which I have not seen before.
They were biggies!
http://www.gpugrid.net/result.php?resultid=5415785
http://www.gpugrid.net/result.php?resultid=5410160
They ran consistently at 90% GPU load; I didn't check memory usage on them though.
This is on a 560 Ti 2GB @ 925.
|
|
|
|
So, the current task
http://www.gpugrid.net/result.php?resultid=5418278
is currently showing a completion time of 19:04:00.
A bit longer than I would like, but it may come down in a few hours.
|
|
|
|
Hi all,
I have submitted some new work units that will replace some I submitted earlier in the week. The names will be "NATHAN_FAX3". These tasks are in the true spirit of the long queue, and will take about 12+ hours on the fastest cards. Some have already been returned and indeed have been around 13 hours. This is markedly longer than what you have expected traditionally, but we really want the long queue to be for critical tasks, computationally intensive tasks, and the like. I suggest you all take note of how these tasks run on your computers and be mindful of temps and errors as you start to receive them.
I have noticed some crunchers expressing concern/dismay that perhaps they will not be able to get the 24h bonus with such long tasks. We are mindful of that concern, and will keep an eye on this group as an experiment. If we think it is too unfair to people with fast but not the fastest cards, we'll be sure to correct that in future groups. But the less send/receive we have to do, the better. We are also mindful of the fact that longer tasks might be more susceptible to errors/crashes, and we want to see how this goes. I'll be looking out for the severe error percentage over the next few days for any problems.
Also, a note about tasks beginning with NATHAN_FA... These tasks are unusual in that they are quite large simulations compared to many of the smaller ones we have done in the past (bigger biomolecules mean bigger simulations). They not only take longer per step but also require more memory. Cards with less memory (below 1GB) may suffer an additional performance loss. There is nothing we can do about this, unfortunately.
Happy crunching.
Nate
I really wish you wouldn't suck the last drop of blood out of my cards, they actually do have another purpose!!!
Not everyone can have dedicated machines or the latest and greatest cards for your benefit.
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline |
|
|
|
I really wish you wouldn't suck the last drop of blood out of my cards, they actually do have another purpose!!!
Not everyone can have dedicated machines or the latest and greatest cards for your benefit.
Please read a little further into this thread to see where Nate describes how he hears us and has reconfigured the tasks so they are now 66% of the project's optimum size, to better suit us.
By far, the GPUGrid team is much more in touch with us crunchers, and has shown a more honest desire to collaborate, than the vast majority of BOINC projects (go team GPUGrid!!!)
____________
Thanks - Steve |
|
|
|
My problem with these units isn't how long they take but the amount of resources they use while running.
Or do you mean reduce resource demands to 66%?
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline |
|
|
skgivenVolunteer moderator Volunteer tester
Send message
Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 Level
Scientific publications
|
This area has been a bone of contention for years. It's about choice for the cruncher vs project management and overall project performance for the scientists. GPUGrid is an ever changing project. New faces, new research, new apps, new cards..., new problems and re-emerging old ones.
Firstly, it's fantastic that Nate has managed to utilize larger memory resources; most CC2.0 cards have >1GB GDDR5 memory, just waiting there to be used. In doing so Nate has expanded GPUGrid's research boundaries. This is a very important achievement in itself.
Perhaps in this case some sort of opt out is worth considering?
Crunchers could choose to opt out of a project if it requires more resources (GPU memory) than they have, or if not having enough memory actually slows performance significantly on such cards (512MB). The alternatives are crunchers choosing which tasks to crunch (a lot of work for the team) or aborting such tasks (bad for research). Of course this impacts the recognition system too; if you don't crunch for a project you don't get recognition (a badge and links to that research), hence the opt-out rather than opt-in suggestion.
It's also worth noting that the GTX 460 (and similar CC2.1 cards) are not high-end cards, and while they can complete long tasks in a reasonable length of time (<2 days), they are perhaps more suited to the normal-length tasks.
It might also be worth waiting and seeing how the CUDA4.0 or CUDA4.2 apps perform (if there will be a 15% performance boost for Nate's tasks, and if this will allow them to finish significantly faster, or not).
Thanks for your opinions on such matters,
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help |
|
|
|
This area has been a bone of contention for years. It's about choice for the cruncher vs project management and overall project performance for the scientists. GPUGrid is an ever changing project. New faces, new research, new apps, new cards..., new problems and re-emerging old ones.
Firstly, it's fantastic that Nate has managed to utilize larger memory resources; most CC2.0 cards have >1GB GDDR5 memory, just waiting there to be used. In doing so Nate has expanded GPUGrid's research boundaries. This is a very important achievement in itself.
Perhaps in this case some sort of opt out is worth considering?
Crunchers could choose to opt out of a project if it requires more resources (GPU memory) than they have, or if not having enough memory actually slows performance significantly on such cards (512MB). The alternatives are crunchers choosing which tasks to crunch (a lot of work for the team) or aborting such tasks (bad for research). Of course this impacts the recognition system too; if you don't crunch for a project you don't get recognition (a badge and links to that research), hence the opt-out rather than opt-in suggestion.
It's also worth noting that the GTX 460 (and similar CC2.1 cards) are not high-end cards, and while they can complete long tasks in a reasonable length of time (<2 days), they are perhaps more suited to the normal-length tasks.
It might also be worth waiting and seeing how the CUDA4.0 or CUDA4.2 apps perform (if there will be a 15% performance boost for Nate's tasks, and if this will allow them to finish significantly faster, or not).
Thanks for your opinions on such matters,
Sorry SK, but that looks like a copy-and-paste reply. As you can see, my GTX 460s can do and return long tasks on time, and they have 1GB of memory. They may not be "high end", but they often return results correctly when "high end" cards fail. The fact is, in some cases Nathan's units, and only Nathan's, cause apps to run badly. Nothing against Nate, but maybe his work should be "opt out".
EDIT TO ADD
It's worth bearing in mind that my 2 460s and I rank about 74th on this project. If you exclude people like me, how many "high end", "dedicated" crunchers are you going to be left with? Oh, and please don't tell me to run shorties.
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline |
|
|
TheFiendSend message
Joined: 26 Aug 11 Posts: 100 Credit: 2,569,652,477 RAC: 2,368,022 Level
Scientific publications
|
I'm another that runs "not high end" cards, a GTX 460 and a GTX 550 Ti, and I much prefer running long tasks. I can live with the fact that my 550 misses out on the 24hr bonus most of the time, and on only a couple of occasions have I missed out on the full bonus on the 460... once by a matter of seconds because of the upload speed :-(
I am quite proud of the fact that my low-cost setup has me sitting in the top 150, RAC-wise. |
|
|
|
This area has been a bone of contention for years. It's about choice for the cruncher vs project management and overall project performance for the scientists. GPUGrid is an ever changing project. New faces, new research, new apps, new cards..., new problems and re-emerging old ones.
Firstly, it's fantastic that Nate has managed to utilize larger memory resources; most CC2.0 cards have >1GB GDDR5 memory, just waiting there to be used. In doing so Nate has expanded GPUGrid's research boundaries. This is a very important achievement in itself.
Perhaps in this case some sort of opt out is worth considering?
Crunchers could choose to opt out of a project if it requires more resources (GPU memory) than they have, or if not having enough memory actually slows performance significantly on such cards (512MB). The alternatives are crunchers choosing which tasks to crunch (a lot of work for the team) or aborting such tasks (bad for research). Of course this impacts the recognition system too; if you don't crunch for a project you don't get recognition (a badge and links to that research), hence the opt-out rather than opt-in suggestion.
It's also worth noting that the GTX 460 (and similar CC2.1 cards) are not high-end cards, and while they can complete long tasks in a reasonable length of time (<2 days), they are perhaps more suited to the normal-length tasks.
It might also be worth waiting and seeing how the CUDA4.0 or CUDA4.2 apps perform (if there will be a 15% performance boost for Nate's tasks, and if this will allow them to finish significantly faster, or not).
Thanks for your opinions on such matters,
It's much more complicated than that... Although I only started here recently, I have done GPU contributions in many other places.
That being said, I started here with a GTX 460 (768MB) that never came close to 24 hrs on any long-run task. If any cards like that are missing the bonus, they are simply not running 24/7. We should not reassess long-run tasks for the sake of those not willing to run 24/7; they should be relegated to choosing regular tasks. Maybe only allow the long tasks based on the percentage of hours a machine ID has had the project available?
I have been in positions over the years where my ability to contribute varied. I know the feeling. We just need to keep in mind that the science NEEDS to trump our wants in projects. There are many options under BOINC. GPUGrid, at whatever level I can manage within my financial means, will always be my choice now. I'd never wish to slow it down just because I can't keep up.
Open dialogue by the project will avert the debacle that FAH is going through currently. I was there, and left, all because of the way FAH handled their decisions. |
|
|
skgivenVolunteer moderator Volunteer tester
Send message
Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 Level
Scientific publications
|
I missed the point on this one!
By 'amount of resources' I thought there was an issue with the amount of memory required to run some of Nate's tasks (~624MB), highlighted earlier.
However, for Fermis, that could only be an issue on some versions of the GTS 450 and GT 440. These cards can have 512MB, 1GB or 2GB. The GTX 460 has 768MB or 1GB, as pointed out, and the rest have 1GB or more. GTX 460s have enough for a ~620MB task, and so far I have never seen a task use >700MB at GPUGrid.
So I think the resource issue is the old lag one: GPUGrid tasks using the GPU in such a way as to prevent normal use of the system. Typical observations include typing and scrolling lag.
So the options are: aborting these tasks, not crunching with the GPU while using the system, running shorter tasks, or crunching for other projects.
Other possibilities might include altering the priorities, freeing up a CPU core (but probably not), using the motherboard's built-in GPU for the display (which sort of defeats the purpose of having a discrete GPU, but is doable on some i7 systems), reducing the requirements of the task, or having a tool that throttles the app.
I don't see anyone jumping at the idea of an option to use only SM count minus 1 on the GPU, but it's maybe worth considering.
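As a thought experiment, the "tool that throttles the app" idea could be sketched as a crude duty-cycle throttle. This is purely hypothetical, not anything GPUGrid or BOINC ships; it is POSIX-only, and repeatedly stopping a GPU app mid-run may well upset the driver, so treat it as an illustration of the concept rather than a recommendation:

```python
import os
import signal
import time

def duty_cycle(period_s, target_load):
    """Split one throttle period into (run, pause) durations for a target load."""
    run = period_s * target_load
    return run, period_s - run

def throttle(pid, period_s=1.0, target_load=0.6):
    """Crude duty-cycle throttle (POSIX only): repeatedly stop and continue
    the given process so it runs roughly target_load of the time."""
    run, pause = duty_cycle(period_s, target_load)
    while True:
        os.kill(pid, signal.SIGCONT)  # let the app run...
        time.sleep(run)
        os.kill(pid, signal.SIGSTOP)  # ...then pause it for the remainder
        time.sleep(pause)
```

In practice, freeing the display for the user is really what matters, so anything like this would want to run only while the user is active.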
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help |
|
|
|
I don't know if that was meant for my previous post or not... but that's a good explanation of things I was not aware of :)
Still, long tasks, at GPUGrid or elsewhere, have always been (in my experience) more resource intensive. That being said, after upgrading my GPUGrid system so that both GPUs are 570s, as opposed to a 460 and a 570, it is now possible to play World of Warcraft while crunching long GPUGrid WUs on both cards plus 12 cores of WCG without issue. Sure, it slows the projects down a little, but there is no constantly noticeable lag while gaming on my 990X, and no WU failures since. This is the reason I doubt the "lag" in computer use for simple surfing and such.
Just my experience. Oh, and by the way, I saw the same with the 570 and the 460 (768), apart from the 570's task-startup fails (not the 460's)... gaming or not. |
|
|