X Marks the Titan.

admin | April 1st, 2015 - 4:59 am


“X-treme”, “x-plosive”, “x-traordinary”. What was it about the letter X that attracted designers of anything targeted at the technologically inspired and improbably rich? It might be hard to believe, but there are commercially viable words beginning with every letter of the alphabet that could yet secure status as equally influential marketing tools and scorch similarly sizeable holes in the pockets of spellbound consumers.

Artisan, Babylon, Caprice, Deluxe. Fair enough, “extreme” begins with an E, but the X is the reason for its ubiquity; remove it to leave “etreme” and the expression would be relegated to linguistic purgatory, alongside other shamefully underused descriptors such as mercurial, taciturn and Atari.

My favourite buzzword is “virtuoso”, and hurrying to topic, to the date of this mostly non-fictitious piece, no video card that I can recall has included this electrifying noun in its moniker. When it comes to Nvidia, the terms “ultra” and “extreme” are never far away, while X on its own has enjoyed greater exposure than a tax-dodging MP. Thus did the greedy green giant’s Titan “X” preclude a paradise of grammatical equality.

Despite its radical sleek black attire, the card was a dimensional clone of all its single-GPU predecessors, back to and including the Titan family’s founding father. Billed to the public as a gaming Goliath, the GM200 chip it fostered was claimed by its creator to represent Maxwell at its mightiest unless, his blood-curdling voice had proclaimed, “I find a way of magically attaining higher clocks”. An extremely significant statement in light of the original Titan’s statistical and chronological relationship with its leaner pretenders, the GTX 780 and 780 Ti.

In contrast to the above, Nvidia did not elect to launch its premium Maxwell product line via its principal member. The GTX 980, sensationally efficient and stunningly fast though it may have been, was little other than shrewd sandbagging; a competent but contained warm-up to the main event. We all knew that an elite GPU propelled by such a frugal TDP, surrounded by two-thirds the RAM, harnessing over 800 fewer CUDA cores and occupying 50% less real estate than its predecessor, had plenty yet to yield.

Consequently, the CV of its transcendent descendant made for mouth-watering reading, though its chassis’s physical characteristics were nigh-on identical. Same fan, same snowflake-styled ventilation, one dual-link DVI port, three DisplayPorts and a lone HDMI socket. Discrepancies amounted to the absence of a backplate and the addition of an 8-pin PCI-E connector to cater for the card’s elevated power requirements.

At first I presumed this was Damien Hirst’s abacus. Then I looked closer and thought it must be the cross-section of a politician’s brain. Wrong again. I squinted, stared, leaned forward until my nose nudged the screen and…bingo, an aerial view of just over three thousand Lego Oompa Loompas. Close, but apparently not. In fact, this was yet another inspired work of “Cudism”, clinically conveying the Maxwell’s astounding anatomy and causing our fatigued corneas to ache. For all those who’d rather not count every square, or who’d sooner avoid the imminent literary conveyance of each statistic, there’s a table in a paragraph or two.

Here we go. Comparing Maxwell light to Maxwell rich blend revealed no variation in basic composition. The clusters and SMMs each encompassed were structurally identical and shared the same component ratios. All differences concerned dimensions and quantities. Die size had expanded from the GM204’s already roomy 398mm² to a record-breaking 601mm², just 60 shy of Intel’s Haswell-E’s, and with almost 3 billion more transistors. Clusters had increased from four to six, with each incorporating the same number of ROPs, texture units and CUDA cores, thereby adding a healthy 50% to all three totals.
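For readers who prefer arithmetic to abstraction, that cluster scaling can be sketched in a few lines of Python. The per-cluster figures are my own assumption, derived from the GM204’s published layout of four clusters, each carrying 512 CUDA cores, 32 texture units and 16 ROPs:

```python
# Sketch: derive GM200 totals by scaling the GM204's identical
# per-cluster resources from four clusters to six (assumed figures).
PER_CLUSTER = {"cuda_cores": 512, "texture_units": 32, "rops": 16}

def chip_totals(clusters: int) -> dict:
    """Total resources for a chip built from identical clusters."""
    return {name: clusters * count for name, count in PER_CLUSTER.items()}

gm204 = chip_totals(4)   # GTX 980's chip
gm200 = chip_totals(6)   # Titan X's chip

for name in PER_CLUSTER:
    growth = gm200[name] / gm204[name] - 1
    print(f"{name}: {gm204[name]} -> {gm200[name]} (+{growth:.0%})")
```

Run it and every resource grows by the same 50%: 2048 to 3072 CUDA cores, 128 to 192 texture units and 64 to 96 ROPs.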

The memory bus had reverted to 384 bits, still surpassed by the Hawaii’s 512 but managing data for a gargantuan 12-gigabyte frame buffer, treble that of its revolutionary forerunner. The only notable exclusion relative to Ye Olde Titan was the double-precision units, which it was said had been removed to allow the GM200 to be better optimized for activities that depend exclusively on single-precision agility, such as games and a rapidly growing number of CUDA-engineered applications. This led some to question the card’s price tag of $999, no lower than its next of kin but for reduced versatility.
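A quick back-of-envelope check on what that 384-bit bus buys, assuming the Titan X’s 7 Gbps effective GDDR5 data rate per pin (a figure the text above doesn’t state):

```python
# Sketch: peak memory bandwidth from bus width and effective data rate.
bus_width_bits = 384      # Titan X / GM200 memory bus
data_rate_gbps = 7.0      # effective GDDR5 rate per pin (assumed)

# bits per second across the whole bus, divided by 8 to get bytes
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # Peak bandwidth: 336 GB/s
```

The same formula with Hawaii’s 512-bit bus and its slower 5 Gbps memory yields 320 GB/s, which is why the narrower bus was no great sacrifice.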

Nvidia was swift to silence these voices of discontent by cannily focussing on the Titan X’s electrifying speed. Its bold declaration was that users could anticipate a twofold rise in performance relative to the GK110 at no additional expense in energy, and moreover, that the improvement extended to all forms of game-specific eye-candy. In other words, a Maxwell-fuelled Titan scything through Skyrim at 1440p with Nvidia’s proprietary form of anti-aliasing (MFAA) would generate double the frame quotient of a Kepler-based Titan running the game at the same resolution with an equivalent level of conventional anti-aliasing.

Just as notable was Nvidia’s claim that the Titan X would be the first solo solution capable of delivering smooth and responsive gaming at Ultra HD with the richest retinal confectioneries, but no necessity for SLI. Speeds of between 35 and 70 fps, especially when combined with the miracles of G-Sync, were what the green team considered sufficient to assuage any sane enthusiast’s demands. With no Titan X to exploit myself, I had to establish the truth from the usual gamut of impartial sources. Then, suddenly, I was compelled to communicate some of my findings in the style of a jaded economics correspondent.

