The “Kepler” Rides Out.

admin | March 24th, 2012 - 1:41 am


So, what rich new delights are we treated to this time around?

Multi-screen action for single-card owners.

One area in which Nvidia’s single-GPU cards have been lagging behind ATI (AMD) for a while is their ability to drive multiple monitors.  Even a two-and-a-half-year-old HD 5870 gives the user out-of-the-box support for up to three individual screens (provided at least one is connected via DisplayPort), as well as multi-screen gaming action via ATI’s Eyefinity system.

A GTX 580, by contrast, despite emerging some months later, is limited to just two screens per card, affording the user no opportunity to exploit Nvidia’s rival multi-screen implementation, “3D Surround”, until a second card is added.

Those who have remained loyal to AMD for this reason can at long last reconsider their allegiances, since a single GTX 680 is able to drive a total of four monitors and fully supports “3D Surround”.  Moreover, users fortunate enough to own four screens can designate three as their gaming real estate, while the fourth simultaneously displays desktop applications of any kind, including web pages, documents and photos, HD movies, online TV and entertainment services, and social media meanderings!

Though it might be difficult to argue that AMD doesn’t continue to set the standard, given that their 7970 can accommodate a horde of six displays, two MST hubs are needed to achieve this; without them, the limit is likewise four.

In addition, as with the 7970, Nvidia users no longer require their monitors to share identical resolutions, refresh rates or physical connectors, unless, that is, they are looking to game in 3D.  It is a thoughtful and welcome addition, and one likely to tempt a clutch of floating voters who might otherwise have opted for the red team, at least once they have also seen the performance figures!

TXAA – Faster…Yet Still No Jaggies!

Sticklers for eye candy will be intrigued by two more of the 680’s features.  The first is a new incarnation of “temporal” anti-aliasing called “TXAA”.  It is said to combine the speed of FXAA (fast approximate anti-aliasing) with the visual quality of multisample anti-aliasing (MSAA) by employing, so says Nvidia, techniques similar to those used in CG films.

There are two implementations of the algorithm, TXAA1 and TXAA2.  TXAA1 is claimed to match the quality of 8x MSAA while delivering it at the speed of 2x MSAA, whereas TXAA2 offers visuals superior to 8x MSAA yet processes as fast as 4x MSAA.
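
Nvidia hasn’t detailed TXAA’s internals, but the film-style technique it alludes to is temporal accumulation: rather than taking ever more samples within a single frame, each frame’s result is blended with a running history of previous frames.  The sketch below is a generic illustration of that idea only; the function name and blend weight are our own, not Nvidia’s:

    import numpy as np

    def temporal_resolve(current, history, alpha=0.1):
        """Blend the freshly rendered frame into a running history.

        current : HxWx3 float array, this frame's (hardware AA-resolved) colours
        history : HxWx3 float array, the accumulated result of prior frames
        alpha   : weight given to the new frame; lower = smoother edges
        """
        # Exponential moving average: jagged edges are averaged away
        # across frames instead of by extra per-frame samples.
        return alpha * current + (1.0 - alpha) * history

    # Each frame, the resolved output becomes the next frame's history:
    # history = temporal_resolve(current, history)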

Adaptive V-Sync – Smooth as Silk…With No Tears!

The second feature is a novel form of V-Sync known as Adaptive V-Sync.  This dynamically engages v-sync as soon as an application’s frame rate exceeds the monitor’s refresh rate, whilst at all other times the function remains disabled.

A game running at an average of around 60FPS on a monitor with a 60Hz refresh rate will no longer be instantly locked to 30FPS (the next available sync step down, half the refresh rate) as soon as it dips below that average.  Moreover, the familiar shearing and tearing of images that would, in this scenario, be vividly evident as soon as the frame rate rose above 60FPS is now nullified by v-sync’s intervention at precisely that point.  In short: no stuttering (formerly a v-sync related issue) and no image tearing (formerly a symptom of its absence).
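
The driver’s actual logic is internal to Nvidia, but the per-frame decision described above can be sketched as follows (names and structure are purely illustrative):

    REFRESH_HZ = 60.0

    def vsync_should_engage(last_frame_time_s):
        """Decide, per frame, whether to hold the buffer flip for vblank."""
        fps = 1.0 / last_frame_time_s
        # Above the refresh rate: sync to vblank so frames cannot tear.
        # Below it: present immediately, avoiding the hard 60 -> 30 FPS
        # step-down that fixed v-sync would otherwise impose.
        return fps >= REFRESH_HZ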

Power and Control Over Power Itself!

As with AMD’s “PowerTune” feature, first introduced on their 6990, the 680’s GPU frequency is, to an extent, dictated by its TDP of 195W.  If this limit is approached, the clock speed is reduced accordingly, ensuring the card remains within its TDP at all times.  Ambitious tweakers will therefore be delighted by the facility to manually adjust this limit over a 62% range (32% above the default and 30% below it), effectively raising the card’s power ceiling to just over 250W!
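
As a rough model of the arithmetic involved (the step size and polling scheme below are our own assumptions; only the 195W figure and the -30%/+32% range come from above):

    TDP_W = 195.0

    def power_target(offset_pct):
        """Adjustable limit: offset_pct may range from -30 to +32."""
        return TDP_W * (1.0 + offset_pct / 100.0)

    def next_clock(clock_mhz, measured_power_w, offset_pct, step_mhz=13.0):
        # Approaching the target, the clock is shaved down until the
        # card sits back inside its limit.
        if measured_power_w > power_target(offset_pct):
            return clock_mhz - step_mhz
        return clock_mhz

    # At the full +32% offset the ceiling becomes
    # 195W * 1.32 = 257.4W -- the "just over 250W" quoted above.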

Boost Clocks – Overclocked from the Very Beginning.

Finally, we have Nvidia’s interpretation of dynamic overclocking, “Boost Clocks”.  Closely resembling Intel’s “Turbo Boost” CPU feature, heavy demand placed upon the GPU automatically triggers an increase in clock speed and, significantly, voltage.  These increases are not fixed values but progressive calculations based upon the card’s temperature and power consumption at any given moment.

By default, the GPU’s clock speed when running in 3D mode begins at 1006MHz; this is the “base clock”, or minimum guaranteed frequency for the vast majority of 3D applications.  From here, depending on conditions, the speed is raised to an average “boost clock” of 1058MHz, although the further and longer the card remains below its TDP, and the GPU below its thermal protection limit of 98°C, the more regularly and notably this speed will be exceeded.  Meanwhile, the core voltage typically operates over a range of 1.075V to 1.175V and, as one might expect, is entirely dependent on the GPU’s speed.

Advanced overclockers need not panic, since both the base and boost clock frequencies can be manually increased.  They will, however, need to bear in mind that no matter how high these values are set, there is an unavoidable incremental speed penalty of around 40MHz applied as the GPU’s temperature rises towards its maximum.
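
Pulling the published figures together, here is a toy model of how such a boost calculation might behave.  Only the 1006MHz base, 1058MHz boost, 98°C limit and roughly 40MHz penalty come from above; the thresholds, scaling and function itself are purely illustrative:

    BASE_MHZ = 1006.0     # minimum guaranteed 3D clock
    BOOST_MHZ = 1058.0    # typical average boost clock
    TEMP_LIMIT_C = 98.0   # thermal protection limit

    def estimated_clock(power_headroom_pct, temp_c, max_extra_mhz=52.0):
        """Estimate the operating clock from current headroom.

        power_headroom_pct: how far below the power target the card is
        running, as a percentage (0 = at the limit).
        """
        if temp_c >= TEMP_LIMIT_C or power_headroom_pct <= 0.0:
            return BASE_MHZ
        # More power headroom pushes the clock past the average boost.
        clock = BOOST_MHZ + max_extra_mhz * min(power_headroom_pct / 20.0, 1.0)
        # The incremental penalty of up to ~40MHz as the GPU heats
        # towards its maximum, applied regardless of manual offsets.
        penalty = 40.0 * max(0.0, (temp_c - 70.0) / (TEMP_LIMIT_C - 70.0))
        return max(BASE_MHZ, clock - penalty)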
