
Message boards : Number crunching : GPU Overclocking - any benefit for GPUGRID?

Ben
Joined: 28 Dec 14
Posts: 9
Credit: 149,574,556
RAC: 0
Message 52113 - Posted: 19 Jun 2019 | 22:52:42 UTC

As the title says, is there much benefit to overclocking my card (GTX 1070 Ti)? I know that in gaming you're only really looking at a 10-20% performance boost at most. Does that carry over to this kind of processing?

Keith Myers
Joined: 13 Dec 17
Posts: 1284
Credit: 4,917,931,959
RAC: 6,194,586
Message 52115 - Posted: 20 Jun 2019 | 0:27:16 UTC - in response to Message 52113.

Yes, actually there is, at least for Nvidia cards. The Nvidia drivers penalize memory clocks when they detect a compute load running on a consumer card, dropping the card into a lower power state.

At a minimum, you should overclock the memory clock in that penalized power state back up to the default memory clock the card runs at in the P0 power state under a video load.
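For anyone on Linux with the proprietary driver and X11, here is a rough sketch of how to do this with nvidia-settings. This is only an illustration: the GPU index, performance-level index ([3]), and the offset value (800) are examples, not recommendations, so check your own card's P0 vs. compute-penalized clocks first and work up in steps.

```shell
# Inspect current clocks and power state while a compute task is running,
# so you can see how far below the P0 memory clock the card is sitting:
nvidia-smi -q -d CLOCK,PERFORMANCE

# Enable clock-offset editing (Coolbits bit 8), then restart X:
sudo nvidia-xconfig --cool-bits=8

# Raise the memory transfer rate offset for the highest performance level
# (index [3] on many Pascal cards; value is an example only):
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=800"
```

The offset applies on top of the penalized clock, so the goal is an offset that brings the effective memory clock back near the card's stock P0 value, not beyond it.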

kksplace
Joined: 4 Mar 18
Posts: 53
Credit: 1,397,776,749
RAC: 3,594,289
Message 52116 - Posted: 20 Jun 2019 | 2:25:00 UTC - in response to Message 52113.

Overclocking on GPUGrid can speed things up. However, you might find that you can't overclock as far as you can with the same card in gaming. One pixel off or an occasional artifact is acceptable in a visual image, but a single bad calculation early in a series of interdependent calculations can have a devastating effect. I would recommend increasing your clock speeds in small increments, and being very patient about judging the results, since the work units are long. One of my two hosts (with a 1080, AIO cooled) seems to be very sensitive to increasing memory clock speeds very far at all before the work units actually slow down.

A couple of other ideas for increasing GPU compute speeds:
- enable SWAN_SYNC. (I can't see your computers, so I don't know whether you run Windows or Linux, and can't tell you exactly how.) This keeps a CPU core always ready for the GPUGrid calculations needed alongside the GPU, speeding up the hand-offs between the GPU and CPU. Keep in mind this also keeps that core from other calculations while a GPUGrid work unit is active, so plan accordingly.
- switching to Linux (if you're on Windows) really speeds up the WUs as well. One of my hosts dual-boots Linux Mint and Windows 10. WUs that take 6.4 hours on the Windows side finish in 5.7 hours on the Linux side (same overclock on both). I know it's a big move, but just throwing ideas out there.

As the Performance page says, overclock gently.
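For the SWAN_SYNC idea above, one way to set it on Linux is to inject the variable into the BOINC client's environment. This sketch assumes a systemd-managed boinc-client service; the unit name and paths may differ on your distro, and on Windows you would instead set a system environment variable named SWAN_SYNC and restart BOINC.

```shell
# Add SWAN_SYNC=1 to the boinc-client service environment via a drop-in file
# (assumes a systemd-based distro with the client packaged as boinc-client):
sudo mkdir -p /etc/systemd/system/boinc-client.service.d
printf '[Service]\nEnvironment=SWAN_SYNC=1\n' | \
    sudo tee /etc/systemd/system/boinc-client.service.d/swan_sync.conf

# Reload systemd and restart the client so the variable takes effect:
sudo systemctl daemon-reload
sudo systemctl restart boinc-client

# Verify the running client actually sees it:
sudo cat /proc/$(pidof boinc)/environ | tr '\0' '\n' | grep SWAN_SYNC
```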

Erich56
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Message 52117 - Posted: 20 Jun 2019 | 4:57:11 UTC - in response to Message 52116.

- switching to Linux (if on Windows) really speeds up the WUs as well. One of my hosts is a dual boot with Linux Mint and Windows 10. WUs that take 6.4 hours on the Windows side will calculate in 5.7 hours on the Linux side (same overclock on both.) I know it's a big move, but just throwing ideas out there.

Another way to speed things up - if you want to stick with Windows rather than switch to Linux - would be to use Windows XP. Due to the lack of WDDM (present in every Windows OS after XP, where it acts as a kind of brake on graphics processing), GPUGRID tasks run about 15% faster.
However, I am aware that running XP is generally no longer recommended, so you would do it at your own security risk.
Further, we don't know how long the GPUGRID software will continue to support XP.

Ben
Joined: 28 Dec 14
Posts: 9
Credit: 149,574,556
RAC: 0
Message 52119 - Posted: 20 Jun 2019 | 7:39:51 UTC
Last modified: 20 Jun 2019 | 7:40:13 UTC

I usually run Linux, but there aren't any WUs for it currently, so Windows it is. I've pushed my card up 10% on both core and memory clocks. I'll run that for a while and see how things go. It's still quite cool at ~67°C on the factory-fitted cooler.

I'll give SWAN_SYNC a go. I found a post on here about it. https://www.gpugrid.net/forum_thread.php?id=4589#47419

I can't find the option to unhide my machine. Any clues?

rod4x4
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 52120 - Posted: 20 Jun 2019 | 8:16:11 UTC - in response to Message 52119.

I can't find the option to unhide my machine.

In your account, under the "Preferences" heading, look for "Preference for this project" and select the "GPUgrid Preferences" link.
There you will find the option "Should GPUGRID show your computers on its web site?" Set it to YES.

mmonnin
Joined: 2 Jul 16
Posts: 332
Credit: 3,772,896,065
RAC: 4,765,302
Message 52121 - Posted: 20 Jun 2019 | 10:14:20 UTC

Many projects scale better than games because there is less I/O bottlenecking the GPU, so a 10% clock-speed increase translates to closer to a 10% performance gain.

kksplace
Joined: 4 Mar 18
Posts: 53
Credit: 1,397,776,749
RAC: 3,594,289
Message 52122 - Posted: 20 Jun 2019 | 11:31:51 UTC - in response to Message 52116.

One of my two hosts (with a 1080, AIO cooled) seems to be very sensitive on increasing memory clock speeds very far at all before actually slowing down the work units.


Well, I must correct myself on this one. After seeing Keith Myers' comment:

You should at minimum overclock the penalized power state memory clock to get back to default memory clocks for P0 power state when running a video load.


...and a similar discussion he posted over at Einstein@Home, I took a big swing at my memory clock speed last night, to good effect. It's a Linux-only host, so I could only check E@H work units. My times went from 10:06 to 9:52. I had been scared of trying something big because, at least on my 1080, very slight increases hadn't had much effect and had actually decreased performance in some cases. Oh well, live and learn. Thank you, Keith Myers, for teaching me something!
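For a sense of scale, the quoted times work out to roughly a 2.3% improvement, which a quick calculation confirms:

```shell
# Convert the quoted mm:ss work-unit times to seconds and compute the
# percentage improvement from the memory overclock:
awk 'BEGIN {
    before = 10*60 + 6;   # 10:06 before the overclock
    after  =  9*60 + 52;  #  9:52 after the overclock
    printf "%.1f%% faster\n", (before - after) / before * 100
}'
# prints: 2.3% faster
```

Small per-WU gains like this compound over thousands of work units, which is why even modest memory-clock recovery is worth testing.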

Keith Myers
Joined: 13 Dec 17
Posts: 1284
Credit: 4,917,931,959
RAC: 6,194,586
Message 52123 - Posted: 20 Jun 2019 | 18:06:06 UTC - in response to Message 52122.

Whether speeding up the penalized memory clock on Nvidia cards helps or achieves nothing depends on the project's tasks, silicon-lottery quality, task loading, GDDR RAM type, GPU generation, and so on. The only way to know is to experiment.

mmonnin
Joined: 2 Jul 16
Posts: 332
Credit: 3,772,896,065
RAC: 4,765,302
Message 52128 - Posted: 21 Jun 2019 | 0:41:33 UTC

E@H is special. AMD cards outperform NV cards there per dollar and per watt, and memory speed matters more than core clock; it's more like mining than other BOINC projects. It also requires almost 1 GB of VRAM, more than many other projects. A tiny, recursive computation app won't be as memory-intensive and will benefit less than a task with a larger VRAM load.

Ben
Joined: 28 Dec 14
Posts: 9
Credit: 149,574,556
RAC: 0
Message 52138 - Posted: 24 Jun 2019 | 14:32:08 UTC

So I overclocked both the core and memory frequencies by 10% each last week. The result: a 1°C rise in operating temperature and a reduction in GPUGrid run time of ~50 minutes!

I've just ordered an Arctic GPU cooler; I'm about to fit it and see how far I can push this...

:)
