


“X-treme”, “X-plosive”, “X-traordinary”. What was it about the letter X that attracted designers of anything targeted at the technologically inspired and improbably rich? It might be hard to believe, but there are commercially viable words beginning with every letter of the alphabet that could yet secure status as equally influential marketing tools and scorch similarly sizeable holes in the pockets of spellbound consumers.
Artisan, Babylon, Caprice, Deluxe. Fair enough, “extreme” begins with an E, but the X is the reason for its ubiquity; remove it to leave “etreme” and the expression would be relegated to a linguistic purgatory, alongside other shamefully underused descriptors such as mercurial, taciturn and Atari.
My favourite buzzword is “virtuoso” and, hurrying to the topic, to the date of this mostly non-fictitious piece, no video card that I can recall has included this electrifying noun in its moniker. When it comes to Nvidia, the terms “ultra” and “extreme” are never far away, while X on its own has enjoyed greater exposure than a tax-dodging MP. Thus did the greedy green giant’s Titan “X” preclude a paradise of grammatical equality.
Despite its radical sleek black attire, the card was a dimensional clone of all its monaural predecessors, back to and including the Titan family’s founding father. Billed to the public as a gaming Goliath, the GM200 chip it fostered was claimed by its creator to represent the Maxwell at its mightiest unless, as his blood-curdling voice had proclaimed, “I find a way of magically attaining higher clocks”. An extremely significant statement in light of the original Titan’s statistical and chronological relationship with its leaner pretenders, the GTX 780 and 780 Ti.
In contrast to the above, Nvidia did not elect to launch its premium Maxwell product line via its principal member. The GTX 980, sensationally efficient and stunningly fast though it may have been, was little other than shrewd sandbagging. A competent, but contained warm-up to the main event. We all knew that an elite GPU propelled by such a frugal TDP, surrounded by two thirds the RAM, harnessing over 800 fewer CUDA cores and occupying 50% less real estate than its predecessor, had plenty yet to yield.
Consequently, the CV of its transcendent descendant made for mouth-watering reading, though its chassis’s physical characteristics were nigh-on identical. Same fan, same snowflake-styled ventilation, one dual-link DVI port, three DisplayPorts and a lone HDMI socket. Discrepancies amounted to the absence of a backplate and the addition of an 8-pin PCI-E connector to cater for the card’s elevated power requirements.
At first I presumed this was Damien Hirst’s abacus. Then I looked closer and thought it must be the cross-section of a politician’s brain. Wrong again. I squinted, stared, leaned forward until my nose nudged the screen and…bingo, an aerial view of just over three thousand Lego Oompa Loompas. Close, but apparently not. In fact, this was yet another inspired work of “Cudism”, clinically conveying the Maxwell’s astounding anatomy and causing our fatigued corneas to ache. For all those who’d rather not count every square, or who’d prefer to avoid the imminent literary conveyance of each statistic, there’s a table in a paragraph or two.
Here we go. Comparing Maxwell Light to Maxwell rich blend revealed no variation in basic composition. The clusters and SMMs each encompassed were structurally identical and shared the same component ratios. All differences concerned dimensions and quantities. Die size had expanded from the GM204’s already roomy 398mm² to a record-breaking 601mm², just 60 shy of Intel’s Haswell-E’s and with 2.8 billion more transistors. Clusters had increased from four to six, with each incorporating the same number of ROPs, texture units and CUDA cores, thereby adding a healthy 50% to all three totals.
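Incidentally, that 50% figure falls straight out of the per-cluster arithmetic. Here’s a back-of-the-envelope sketch in Python, assuming the publicly listed Maxwell layout of four SMMs per cluster, 128 CUDA cores and 8 texture units per SMM, with 64 and 96 ROPs for the GM204 and GM200 respectively:

```python
# Rough Maxwell totals from public specs (my assumptions, not Nvidia's disclosures).
SMMS_PER_CLUSTER = 4
CORES_PER_SMM = 128
TEXTURE_UNITS_PER_SMM = 8

def totals(clusters, rops):
    smms = clusters * SMMS_PER_CLUSTER
    return {"SMMs": smms,
            "CUDA cores": smms * CORES_PER_SMM,
            "texture units": smms * TEXTURE_UNITS_PER_SMM,
            "ROPs": rops}

gm204 = totals(clusters=4, rops=64)   # GTX 980
gm200 = totals(clusters=6, rops=96)   # Titan X

for part in gm204:
    gain = gm200[part] / gm204[part] - 1
    print(f"{part}: {gm204[part]} -> {gm200[part]} (+{gain:.0%})")
# 16 -> 24 SMMs, 2048 -> 3072 cores, 128 -> 192 texture units, 64 -> 96 ROPs: +50% apiece.
```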
The memory bus had reverted to 384 bits, still surpassed by the Hawaii’s 512 but managing data for a gargantuan 12 gigabyte frame buffer, treble that of its revolutionary forerunner. The only notable exclusion in relation to Ye Olde Titan was that of the double precision units, which it was said had been removed to allow the GM200 to be better optimized for activities that depended exclusively on single precision agility, such as games and a rapidly growing number of CUDA-engineered applications. This led some to question the card’s price tag of $999, no lower than that of its next of kin despite the reduced versatility.
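That 384-bit figure, incidentally, is where the 336GB/s of bandwidth quoted later in this piece comes from. A quick sum, assuming the Titan X’s advertised 7.0GT/s effective GDDR5 rate:

```python
# Peak memory bandwidth = (bus width in bytes per transfer) x (effective transfer rate).
bus_width_bits = 384
effective_rate_gt_per_s = 7.0   # advertised GDDR5 data rate for the Titan X

bandwidth_gb_per_s = (bus_width_bits / 8) * effective_rate_gt_per_s
print(f"{bandwidth_gb_per_s:.0f} GB/s")   # 336 GB/s
```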
Nvidia was swift to silence these voices of discontent by cannily focussing on the Titan X’s electrifying speed. Its bold declaration was that users could anticipate a twofold rise in performance relative to the GK110 at no additional expense in energy and, moreover, that the improvement extended to all forms of game-specific eye-candy. In other words, a Maxwell-fuelled Titan scything through Skyrim at 1440p with its proprietary form of anti-aliasing (MFAA) would generate double the frame quotient of a Kepler-based Titan running the game at the same resolution, and with an equivalent level of conventional anti-aliasing.
Just as notable was Nvidia’s claim that the Titan X would be the first solo solution capable of delivering smooth and responsive gaming at Ultra HD with the richest retinal confectioneries, but no necessity for SLI. Speeds of between 35 and 70 fps, especially when combined with the miracles of G-Sync, were what the green team considered sufficient to assuage any sane enthusiast’s demands. With no Titan X to exploit myself, I had to establish the truth from the usual gamut of impartial sources. Then, suddenly, I was compelled to communicate some of my findings in the style of a jaded economics correspondent.
Synthetic tests demonstrated performance boosts of roughly 30-35% across the board relative to the GTX 980 and 70% when compared to a standard Titan.
In the 3DMark 11 performance benchmark, the card racked up a graphics score of 22850, besting its slighter variant by 35% and extracting a commanding 65% margin over its non-X-rated senior. The Extreme benchmark yielded 7457 marks, which converted to slightly smaller but still notable gains of 26 and 61% for Big Daddy Maxwell.
Figures were even more jaw-dropping for the monumentally merciless Firestrike, with the Titan X scoring 17550 in the standard benchmark, dominating the Titan “Vanilla” by an astonishing 76% and adding 36% to the 980’s already commendable total. In the Extreme test, the GM200 was once again imperious, calmly carving out leads of 38 and 70% over the 980 and Titan respectively, owing to an astonishing score of 7867.
The R9 290X was 60% slower on average, but its doubled-up derivative, the R9 295X2, convincingly turned the tables in all four tests and maintained a mean advantage of 18% over Nvidia’s latest and greatest.
In the gaming stakes, a high-intensity, optically candified interval of Crysis 3 saw the Titan X trail the 295X by a shade over 40% at 1440p (2560×1440) and 30% at 2160p (3840×2160). The 290X and 980 were tied, a further 35% behind on average across both resolutions, while the plain old Titan proved 50% slower than its spiritual successor. For each iteration, 4X MSAA and “Very High” settings were selected in the game’s menus.
The Titan X’s frame rates were less than stellar, hovering around 40fps at 1440p and a decidedly sluggish 20fps when moving up to 4K, not by any rational standards conducive to a pleasurable rampage. Dropping to the “High Quality” profile boosted speeds to a touch over 40fps, matching those at “Very High” for 1440p, a substantial increase, but still some distance off the elite contingent’s magic target of 60.
The infamous Shadow of Mordor, a supercharged RPG reputed to devour more memory than a masonic initiation ceremony, forced the Titan X to settle for silver for a third time, 20% behind the 295X at 2160p and 10% at 1440p. The 980 and 290X were left 35% further afield in both high and low runs, while the honourable lord Titan failed to get within 60% of its Maxwell-derived descendant. The “Ultra” quality option was applied for each loop of the game’s built-in benchmark and the Titan X was able to maintain a far more acceptable 50fps at 2160p and a bullish 85fps when descending to 1440p.
Minimum frame rates told a different story, and one which vividly illustrated the growing importance of what many seasoned Steamers had dismissed as an excessively hyped commodity: VRAM. Thanks to its colossal reserves, the Titan X topped the charts at 2160p, over 60% ahead of its closest rivals, the R9 290X and Titan, a commendable showing from the ancient one.
The 980 slipped to third place, some 70% adrift, while the 295X, doubtless owing to Crossfire complications, was relegated to the rear and over 100% down on the head of the pack. At 1440p, the Titan X reasserted its dominance, but by a reduced margin of 20% relative to the 980. Grand master Titan took third, 37% in arrears, the 290X fell to fourth, 60% behind, and the 295X, again a victim of its own duality, was over 300% off the pace. It’s highly probable that this anomaly was precipitated by micro-stutter, the common and confounding curse upon any configuration comprising more than one GPU.
Though statistically troubling, these pronounced “troughs” may well not have tarnished perceived fluidity to the degree they would have done prior to AMD’s diligent and profitable driver optimisations.
Finally we come to Battlefield 4, the monstrous military primary-perspective blast feast, notorious for its sick desire to wring the coils of robust graphics subsystems until they whine in woeful agony. First across the line at 2160p was the 295X, 19% ahead of the Titan X, while dropping to 1440p saw this lead sustained. The 980 placed third in both tests, 35% slower than its broader brother, while not a solitary pixel separated our noble Titan of the old world from the 290X, each eclipsed by 65% of Big Maxwell’s mighty shadow. With the Ultra quality profile activated at both resolutions, the Titan X’s raw pace was its most impressive yet, delivering averages of 60fps at 4K and 80fps at 1440p, enough to appease all but those insistent on the silkiest action and negligible latency.
And that, my comrades, was largely the order of the day. Monsieur Vesuvius, still champion overall, and by between 25 and 30%. The Titan X, an emphatic singles winner, 30-35% better off than a 980 and 40% more productive than an R9 290X. Ice cool under pressure, quiet as a soft summer breeze, and by all accounts, more tunable than a dragster on propane. Core speed boosts ranging from 300-400MHz were not uncommon, while memory frequently granted a bonus of 1000MHz.
Yet, once that redoubtable radial fan had ceased its hushed aria, and 8 billion transistors had entered a state of redemptive tranquillity, had Nvidia made good on its pledge of a solo-powered card that could undergo the rendering rigours of Ultra HD and emerge with a trace of dignity? Was Old Green Eyes’ prophecy approximately correct? At the risk of welding myself to a silicon fence, I must respectfully decline to answer since, ultimately, the concept of what constitutes adequate playability will always be governed by the wealth of individual experiences and opinions shared outside the walls of a commercial exhibition, and a good thing too.
Well, that was fun, wasn’t it? I had thought to present this entire article in a tapestry format, and that the wonders of Photoshop CS4 might permit me to scale new heights of creativity. It didn’t; it just made me angry by continually crashing whenever I tried to save an image greater than 20000 pixels deep. OK, probably too deep, but it had to be that deep in order to incorporate the text, which I kept having to shrink because of my inability to summarise.
All of which caused me to download a trial of Photoshop CS6, and waste precious life by making a backup of my entire hard drive before installing it, on the off chance it might irretrievably hose my OS with scary corporate spyware that would be harder to remove than s**t from your shoe. It didn’t, but that scarcely made things better, because the interface was a drab dark grey, making it much harder to see the toolbars and windows, and worst of all, the bloody thing still CRASHED!!!
Many cynical, but discerning, commentators on highbrow graphical culture insisted that the excessive pricing policy embraced by its two most ferocious forces, especially Nvidia, stemmed from the well-heeled enthusiast’s progressive penchant for harvesting virtual currency, and the attempts of both companies to stimulate and satisfy it. With Bitcoin mining being an AMD-dominated affair, it is difficult to substantiate such an assertion. Nvidia loyalists may have been second only to Apple’s in their patience for extortion, but if Bitcoining had truly been on their brains, where would have been the sense in the green team risking defections by out-pricing its deadliest rival so decisively, yet failing to approach, let alone surpass, the powers of that rival’s equivalent solutions in this particular regard?
The statement would have been more easily justified if the interests of these affluent clients had extended to numerically-tortuous tasks of a critical and scientific nature, inclined to exploit the double precision cores present on the previous Titans. However, aside from Folding@home, applications wholly dependent on floating point fortitude were as niche as… no, can’t think of anything. Moreover, the first “in house” consumer solution ever to boast the subsequently common $1000 bail-bond was the GTX 690, whose number-crunching credibility was distinctly mediocre.
The Titan X posed a similar paradox. In eliminating double precision as a feature, Nvidia had also compromised the chances of its formidable creation being a practical purchase for compute-minded gamers, for whom such a product might even have represented a bargain when judged against established “professional” solutions such as those from its own “Tesla” range and AMD’s rival “FirePro” brand, all of which commanded three times the cash. Without those delectable DPUs, the Titan X was predisposed to compete core for core against opposition whose dollar-to-pixel ratios represented far greater value. The old cry of “to procure the best, you pay the most” didn’t hold up, because the “best” in real terms was not “Big Maxwell”, but “Vast Vesuvius”.
In 2014, the R9 295X2 had romped resoundingly into the picture with its bounty set at a lofty $1499, considered by many on the day to be prohibitive until the Titan-Z resoundingly revealed the wonders of true exorbitance. By the time the Titan X emerged, AMD had slashed its demand to approximately $699, while the Titan-Z remained over twice as dear and five times as scarce. This placed Nvidia apologists in an unenviable position. How to justify the acquisition of a $999 video card that, on the basis of net pixel profits, provided a conservative average some 20% less than that of a 30% cheaper alternative? If pricing had reflected performance, the 295X should have retailed at $1200 but in fact could be obtained for close to half as much. How could even the greenest disciples avert their consciences from such a powerful and reasonable second choice, or prevent their eyes from weeping rose-tinted tears? They certainly tried.
The Rt. Hon Voice of Slight Logic vs. An Nvidia Apologist.
Nvidia Apologist: Please understand. The Titan X is targeted at the most demanding and broad-minded of extremists. Gamers who lust after omnipotence in an Ultra HD utopia and covet computation beyond the realm of deep thought.
Voice of Slight Logic: A crafty double allusion there, Deep Thought as in “the ultimate question”, and “deep learning” as in breeding AI, correct? Very clever. But can the Titan X really talk all four legs off an Arcturan Mega-donkey?
Nvidia Apologist: It can…and persuade it to race around the rings of Saturn afterwards.
Voice of Slight Logic: Very impressive. But what of the 295X and its compute capacity? Its half-brother does more than trade teraflops with a Titan Black, 5.6 to 5.1, so what could 11.2 do to a Titan X?
Nvidia Apologist: Disposable statistics, my friend. A GPU’s CV can be a riveting read, we try to spice ours up too, but in the real world, experience is what counts, and counts, and counts, and counts.
Voice of Slight Logic: Indeed, but this doesn’t only concern specifications, look at the evidence, this troubling set of results for instance. Luxmark 2, a gamut of Sandra benchies, not to mention the Basemark CL suite. A total of 14 tests and the Titan Black soundly thrashed in 12. By a margin of 100% or more in ten, and 200% or more in six. I’d call that pretty decisive. Even your imperious Titan-Z gets mashed in 11 of the 14 cases, and for twice as much money.
Nvidia Apologist: I note those are all single precision tests.
Voice of Slight Logic: Go ahead and glance at the next page, the figures are equally grim. The Titan-Z loses in 4 out of 6 and the Titan Black is hammered 5-1. Besides, we needn’t worry about double precision, the Titan X sports no such talent. You suppressed it, remember?
Nvidia Apologist: Correct, to further improve single precision for CUDA apps and games, and trust me, we have. Furthermore, consider over 6 teraflops of mathematical mastery, combined with 12 gigs of RAM, as much as the Titan-Z, but dedicated to one chip. And that review is ancient. Find me some benchmarks that actually pit the Titan X against your bloated beloved and I’ll be surprised if those margins haven’t altered, dramatically.
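A brief aside from your narrator: the teraflops being lobbed about by both sides follow from one standard formula, single precision peak = shader cores × 2 operations per clock × clock speed. A rough sketch using published core counts and approximate reference clocks; the dialogue’s 11.2 presumably assumes a slightly different clock:

```python
# Single precision peak: cores x 2 fused multiply-adds per clock x clock speed.
def sp_teraflops(cores, clock_mhz):
    return cores * 2 * clock_mhz * 1e6 / 1e12

print(sp_teraflops(3072, 1000))   # Titan X at base clock: ~6.1 TFLOPS ("over 6")
print(sp_teraflops(2880, 889))    # Titan Black: ~5.1 TFLOPS
print(sp_teraflops(2816, 1000))   # R9 290X: ~5.6 TFLOPS
print(sp_teraflops(5632, 1000))   # R9 295X2, both GPUs: ~11.3 TFLOPS
```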
Voice of Slight Logic: You seem unusually keen to promote memory in light of recent inaccuracies. Only last month you were claiming that a minuscule proportion of games would draw on the last half gig of what was purportedly a 4 gig card.
Nvidia Apologist: Let’s not take cheap shots.
Voice of Slight Logic: Why, I prefer them to your expensive ones.
Nvidia Apologist: 4 gigs is plenty for now, 6 is a luxury, but who can predict next year’s demands?
Voice of Slight Logic: You designed this product to make 4K feasible for a single GPU, right?
Nvidia Apologist: We did, and it has.
Voice of Slight Logic: Barely.
Nvidia Apologist: Squarely.
Voice of Slight Logic: I’ll not bicker, but you know the transient nature of technology. Next year?! By then your target demographic will be gaming at 8K. Big Maxwell himself will be the bottleneck, not his RAM, and you’ll passionately be preaching the virtues of his successor’s cores and ROPs. Baby, Mummy and Daddy Pascal.
Nvidia Apologist: I’m not in a position to speculate, but there’s no law that obliges you to passively embrace our latest architecture. Titan X adopters could easily circumnavigate that bottleneck by investing in a second card. A practical and risk-free proposition, and enough to carry any sentient Steamer safely through the next resolution revolution.
Voice of Slight Logic: And what will you have lowered its price to? 50%, I hope.
Nvidia Apologist: I shan’t hypothesize.
Voice of Slight Logic: Well, that would be reasonable for what by then will be middle-tier technology.
Nvidia Apologist: My pipelines are sealed.
Voice of Slight Logic: Come on. Will you lower it at all?
Nvidia Apologist: No comment.
Voice of Slight Logic: Forgive my suspicions, but I get the feeling you might furtively wind down production as soon as the Pascal arrives, in whatever form that may be.
Nvidia Apologist: Why would you think that?
Voice of Slight Logic: Because that’s what happened with the original Titan when the Maxwell made its début. Plus, I don’t remember its price budging more than a farthing unless you happened to stumble on a clearance or were willing to go second hand.
Voice of Slight Logic: One last query on memory before it slips my own.
Nvidia Apologist: All ears.
Voice of Slight Logic: Care to explain why it states on your site that this thing needs 24 gigs of RAM to run properly?
Nvidia Apologist: That’s not a requirement.
Voice of Slight Logic: It says recommended.
Nvidia Apologist: It does. But 4 gigs is the minimum; rest assured the card will run with that, and with all cores firing…figuratively. 24 streaming multis, 96 ROPs, 3 megs of L2 cache and 12 gigs of VRAM, un-segmented, un-partitioned, with every memory channel chucking data stripes to the crossbar at 336GB/s. No trimming for the sake of yields, no fused-off cache, no absent render units or subjugated controllers. All components fully accessible and maximum bandwidth available the entire time. Is that clear enough?
Voice of Slight Logic: Who are you trying to placate? People who didn’t buy a 970 but were privy to the RAM gate scandal, or betrayed supporters who spent $400 and might still have a cool grand set aside for charity?
Nvidia Apologist: Either suits us. Shall I continue or are we to resort to barbed bitchiness?
Voice of Slight Logic: Carry on.
Nvidia Apologist: Much obliged. Four gigabytes is essential, though any enthusiast frequenting this market would probably have more. However, in order to ensure all 12 gigs of the Titan’s frame buffer can provide optimal functionality, we recommend 24 gigs of system RAM.
Voice of Slight Logic: I can’t believe what I’m hearing. A desperate attempt to pour water on RAM gate, and you’re now telling us we can’t wholly utilize the resources of a card three times more expensive unless we shell out a ton more on memory?
Nvidia Apologist: I guess that depends on how much you already have… don’t punch me, that was a joke.
Voice of Slight Logic: Where the hell is this recommendation on the box?
Nvidia Apologist: It’s on our site and in the manual.
Voice of Slight Logic: But not on the box, why?
Nvidia Apologist: Because it’s not compulsory.
Voice of Slight Logic: If it were on the packaging, don’t you think your customers might postpone that decisive click? Those are the specs that e-tailers list.
Nvidia Apologist: Unlikely; all the reviews I’ve read were based on 16 gig rigs and divulged no complications whatsoever.
Voice of Slight Logic: Maybe none of their benchmarks exploited all the Titan’s memory.
Nvidia Apologist: Precisely! And barely any applications do, though for those isolated exceptions, fear not: your system won’t crash, your card won’t expire, everything won’t grind to a halt, but speed might be compromised to an extent, depending on the rest of your hardware.
Voice of Slight Logic: This is depressing, really depressing. Will you ever be utterly truthful about your products?
Nvidia Apologist: It’s not our fault. You can blame Gates and his cronies for this one. It relates to arcane coding dating all the way back to Windows 3.1.
Voice of Slight Logic: Ah! Back to finger pointing?
Nvidia Apologist: Let me explain. Recall, if you will, when brands such as Orchid, Trident, Tseng and 3DFX commanded respect in the graphics industry.
Voice of Slight Logic: You’re making me cry.
Nvidia Apologist: Happy Voodoo memories? Well, in those good old days video RAM was an extravagance. There were many breeds, DRAM, VRAM, EDO RAM and others, though one quality they all shared was comparative scarcity next to conventional system RAM, which in a typical configuration was four times more abundant. Image data’s primary destination before being delivered to the computer’s display is obviously the video card’s memory. Frames are created, stored in the buffer, then flipped to the screen.
But if the video RAM is wholly occupied, new frames cannot be composed until a proportion of those pre-rendered has been forwarded to the VDU. Hence, to keep the rendering process as seamless as possible, and with video RAM being such an extreme luxury, Microsoft decided to implement a procedure that automatically generated a swap, or paging, file in the system memory, the next fastest medium, to employ as and when the video RAM was fully populated.
The swap file mirrored the amount of video RAM and, to this day, it remains active and proportionally fixed. It was highly uncommon for video cards to incorporate quotas of memory that even approached those of their host system, hence there was no danger of the latter being filled to capacity, even when the swap file was being utilized. But consider the situation now. A 12 gig card equates to a 12 gig swap file, which in a system with 16 gigs of RAM would leave 4 for everything else once the video card’s frame buffer began to overflow. A much finer margin, bearing in mind OS overhead and a plethora of background tasks, hence our recommendation.
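Your narrator interjects: the Apologist’s margins are simple subtraction. A toy sketch, assuming the paging reservation really does mirror VRAM one-to-one as he claims:

```python
# If the paging reservation mirrors the card's VRAM, how much system RAM is left
# for the OS and background tasks once the frame buffer overflows?
def headroom_gb(system_ram_gb, vram_gb):
    return system_ram_gb - vram_gb

print(headroom_gb(16, 12))   # 4 GB  -- a fine margin indeed
print(headroom_gb(24, 12))   # 12 GB -- hence the 24 gig recommendation
```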
Voice of Slight Logic: Hang on a second. What if your system memory doesn’t double your VRAM or the page file is expended? Do we get lock-ups and BSODs?
Nvidia Apologist: Very good questions. In the first instance, Microsoft’s process allocates further space on your hard disk to compensate, ensuring the proportions of the page file reflect those of your video memory. And irrespective of whether your RAM doubles your VRAM, once the former is full, Windows will begin to assign pre-rendered images to the hard drive as an absolute last resort, in order to ensure the GPU can continue to generate new frames.
Voice of Slight Logic: Which could cause instabilities due to hard drives being the slowest of all three storage options?
Nvidia Apologist: Exactly. But I assure you it’s nothing to worry about. The vast majority of software barely requires 4 gigs. The most advanced games in Ultra HD and surround, with eyefuls of sugary decadence, might stretch to 7. In any case, there are plenty of things you can do to prepare for a frame-flood. Close down any TSR garbage. Grab an SSD and make sure your page file is allocated to that instead of a mechanical drive. If and when you upgrade your RAM, buy the highest frequency modules you can. These measures will help minimize any performance decline precipitated by the slower forms of storage.
Voice of Slight Logic: But I have friends who’ve been using R9s, 980s and even Mark 1 Titans in rigs with only 8 gigs installed; that would make them vulnerable to this problem, yet their frame rates don’t suddenly dive off a cliff.
Nvidia Apologist: At last, you’re beginning to understand. What does that tell you? This process has been an intrinsic element of Windows for decades and its undesired ramifications are a threat to everyone, yet your friends have never been troubled by them. What about you? Have you noticed any ominous symptoms?
Voice of Slight Logic: I get the odd stutter.
Nvidia Apologist: Who doesn’t? I’ll bet none of you has exceeded your VRAM allowance, and if you did, 8 gigs of RAM for a 4 or even a 6 gig frame buffer would be more than enough in reserve to avoid the hard drive being called into play and potentially inducing lag. We actually recommended 12 gigs for the original Titan, but honestly, these references are only intended to represent our formal acknowledgement of the issue, just in case people are planning to harness every last byte.
Voice of Slight Logic: You know, this is hauntingly reminiscent of RAM gate.
Nvidia Apologist: It couldn’t be more different. That inhibition was by design; this one is by default and in no way relates to us. Every video card, past, present and future, was, is and will be subject to the same limitation, that is, unless Microsoft provides a fix. We build GPUs, not operating systems. As I said, you really needn’t fret.
Games won’t devour that much VRAM for years, by which time you’ll probably have upgraded your system RAM for a multitude of other reasons. Let me put this another way. Why does anyone replace their video card? In layman’s terms, to increase speed and smoothness at higher resolutions. What helps us achieve these things? A faster GPU and a bigger frame buffer. When we’re considering any upgrade, what is our state of mind? Are we thinking, “I’ll get by with the absolute minimum”? Unlikely. Especially if we’re a high-end user. Some decent headroom is essential to guarantee a consistent quality of service, correct?
Voice of Slight Logic: I wonder where we’re going here…
Nvidia Apologist: For instance, if you buy 16 gigs of RAM, you’ll likely be using 12 to 14 of them 99 percent of the time. Nonetheless, the knowledge that you have that little extra in reserve for those exceptionally rare occasions is comforting, even if they never come to pass.
Voice of Slight Logic: Think I’m about to find out.
Nvidia Apologist: Likewise, if you purchase a new graphics card, you’re going to want to ensure it has enough VRAM to accommodate every game or app you’re intending to run, because if it doesn’t, regardless of the quantity and bandwidth of your system RAM, you will suffer a hit as soon as paging is initiated. The fastest modules you can buy still operate at half the frequency of top-tier video RAM.
Voice of Slight Logic: Here it comes.
Nvidia Apologist: So when the Titan X’s 12 gigs are no longer sufficient to serve without assistance, you enthusiasts will have upgraded to a card with considerably more; hence, the chances of this issue ever affecting you are as remote as the Orion Nebula.
Voice of Slight Logic: Which is what you’re banking on right? Nothing limits damage better than time, especially in this business.
Nvidia Apologist: It’s simple. If you don’t like our products, you needn’t buy them, but I’m afraid my master knows you better than you do yourself. If you’re troubled by that, feel free to vent your frustrations before you make that inevitable donation.
Voice of Slight Logic: This is a very dangerous game you’re playing.
Nvidia Apologist: Wrong. You play the games, we provide the means.
Voice of Slight Logic: Which brings us gracefully onto my next question.
Nvidia Apologist: I can barely contain my excitement.
Voice of Slight Logic: On the topic of games, how do you account for Big Maxwell’s not-so-small deficit?
Nvidia Apologist: In relation to your hellacious Hawaiis?
Voice of Slight Logic: The very same.
Nvidia Apologist: Look, higher frame rates are worthless if they come at the expense of fluidity. The litany of latency issues that plagues multi-chip cards is a law unto itself; even we have struggled to suppress it, why do you think G-Sync was invented? But those fiery devils made stuttering into an art. It’s simple: if you crave chunks of choppiness, fluffed frames, rudimentary responsiveness and anything else likely to incur your wrath at pivotal points in death matches of every age and team wars of all varieties, feel free to submit to a scarlet chaos.
Voice of Slight Logic: Oh please, play the world’s smallest violin. Two years ago that sizzling sermon might have been valid. Now it’s ill-founded propaganda. The riddle of the runt frame was deciphered long before the Hawaii was a pink tinge in the clouds, by none other than your fine friend Ryan Shrout at PC Per…using your very own software. Don’t you remember his investigation?
Nvidia Apologist: As if it were yesterday.
Voice of Slight Logic: The techniques he used to determine whether or not measured frame rates coincided with a player’s perceptions?
Nvidia Apologist: Of course, we were as enthusiastic as him.
Voice of Slight Logic: Well? Now we have fancy coloured curves that divulge the fruits of every graphics card’s labours. Fast, slow and average frame rates, purged or incomplete frames, frame variance by percentage and even individual frame intervals. They’ve become the industry standard. So, I ask you, did they expose any gross deviations between the recorded and the observed? No, the traces were perfect clones. Is the 295X really a dire dirge of stutter and strife when compared to the Titan X?
Nvidia Apologist: No, but its frame variance is significantly greater.
Voice of Slight Logic: By how much?
Nvidia Apologist: On average, between 1 and 2 milliseconds.
Voice of Slight Logic: Oh, horror, I am humbled and humiliated, my entire argument has spontaneously collapsed. What’s that in frame pacing terms? A week? A century? An ice age? Enough time for a ZX81 to render the Milky Way?
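For readers keeping score at home, the sarcasm is easy to quantify. A quick sum, assuming a 60fps target:

```python
# How large is a 1-2 ms frame time variance relative to one whole frame at 60 fps?
frame_time_ms = 1000 / 60   # ~16.7 ms per frame
for variance_ms in (1, 2):
    print(f"{variance_ms} ms = {variance_ms / frame_time_ms:.0%} of a frame interval")
# Roughly 6% and 12% of a frame: small, though not quite nothing.
```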
Nvidia Apologist: You’d be surprised what the human eye can perceive.
Voice of Slight Logic: Especially if it’s green…
Nvidia Apologist: No need for flippancy. Why don’t you ask me about profiles? We haven’t discussed those yet.
Voice of Slight Logic: The ones you keep on all your customers?
Nvidia Apologist: No. Crossfire profiles, the ones imperative to extract the benefits of your chosen one. Notice how tardy that ruddy rabble is when it comes to keeping them in trim? Over four months and no stable release. That means every game released between then and now will likely be deprived of multi-GPU support. You’ve cited one review, allow me to direct your attention to another. Note the results for 4K. Nine games tested and four with broken implementations, with no guarantee of any timely fixes. Does $699 seem like such a bargain now?
Voice of Slight Logic: Hmm. Well. Congratulations, you’ve stopped me in my tracks. Nine benchmarks, eighteen iterations, eight with the 295X hobbling along at half pace and yet the Titan X still slower by 15% at 1440p and 2% at 4K. Was this supposed to be evidence of Big Maxwell’s worth? Inferior performance despite a crippled opposition? Look, even when overclocked in Middle Earth it came up short, and that was with the 295X at stock.
How or why those Canucks concluded they’d rather fritter a grand for the sake of four games than pocket $400 and play the remaining 10721 Steam titles 20% faster is a greater mystery than the Mary Celeste. They appear to be ignoring their own results, perhaps in a desperate ploy to sleep with their review sample.
Nvidia Apologist: An outrageous statement. What’s to suggest there wouldn’t be other games besides those four? One also pays for our service, my friend. Superior engineering and quality control at least, I would argue. Frequent and potent drivers. SLI support for key titles in the week of release. You do realise there are some whose rendering needs revolve around a single piece of software.
Voice of Slight Logic: And thank heavens for them, for if there weren’t, your Titans would be fit for landfill. You’re playing to the paranoid, the kinds of people who update their drivers every hour to fix everything that isn’t broken. The tortured pix-elites who waste so much time stressing over benchmark kudos, gawking at pre-rendered foliage, using their 3DMark rank as a foundation for personal morale and evoking elation from dancing digits, wavy lines and coloured bars, that they leave not a second to relish the joys of one solitary jaunt through Skyrim.
Meanwhile, back in the Halls of the Damned…
Giant Greeneyes: Bah. I only included double precision to lend credence to my claims that consumers were receiving a CUDA coup for their coinage. They coughed up a cool grand for the Titan, they willingly parted with a monkey for my less capable Keplers and magnificent maiden Maxwell. Hence, now that I have succeeded in instilling the notion that $1000 has become the customary fee for a humble consumer to command peak pixel proliferation, I can innocently preserve my fast Fourier fineries as a special feature for the professional clientèle it has always been my nature to best serve. Praise be, for I have carved for myself a brand new genre of graphics card that guarantees nothing specific other than a $1000 toll. Welcome the age of the “Hyper Flagship”.
Giant Redbeard: You speak not the truth, my verdant varlet. The fact is, you withdrew the function furtively, and for no other reasons than your grotesque greed and lack of competence when challenged by the compute credentials of my own creations. Just look at the history. My Cayman cards attained legendary adulation amongst communities of Bitcoin miners.
Entire render farms have been devised by aspiring amateurs in their modest garden sheds, duly built upon a backbone of Tahitis. I made that possible; I alone brought the joys of industrial-strength compute to the masses. Recall, if you will, my HD 6990? A four-year-old veteran that will vanquish, with vehemence, your finest offerings when it comes to the excellence of 64-bit executions. And what of my 7990, a maligned masterpiece that, I’ll wager, would wipe the proverbial screen clean of tiresome corporate antagonism.
And not a moment too soon. One interesting fact to emerge from this round of pixel-predicated profiteering was Nvidia’s disclosure that it did not intend to complement its latest creation with a doubled-up counterpart, as it had done by fusing a pair of fully fortified Keplers to form the infamously exorbitant Titan-Z.
The official explanation pertained to the physical difficulties of accommodating two such expansive dies on the same stretch of circuitry, as well as the traditional complexities encountered when developing drivers to enable SLI on a single device. If taken in earnest, the result of our emerald ogre’s remarkable show of restraint was that AMD’s Volcanic virtuoso (AH! Smuggled it in) would continue to reign as the fastest card overall, until either Red-beard excelled his own monstrous ingenuity, or a certain Pascal prevailed against the winds of hell.
Turning to more immediate matters and the imminent threat of AMD’s Fijian counter-attack. Nvidia’s strategy when launching its inaugural Titan was to voluntarily over-pitch, shrewdly survey the competition, then backtrack and refine accordingly. Thus, just as the 780 Ti had heroically defended the honour of its elder in light of the R9 290X’s vicious attack, a 980 Ti that forfeited stream processors in favour of core speeds and value was, to any journalist in the techno loop, a veritable formality. The question was, would it be worth another 2000 wilfully chosen words, 1999 of which contained the letter X?