Letter to the Luminaries


The "Luminaries" are simply the folks who were the invitees to some meetings at Siggraph 2003.  All are senior academic figures in Virtual Reality and other real time 3D interaction research.  Both workstation vendors (SGI, HP, IBM) and GPU vendors (nVidia) were also represented at the meetings.  The Luminaries meetings were sponsored by SGI.

Hello Luminaries,
Thanks once again for coming to the meetings at Siggraph.

A condensed rendering of the two main issues that emerged at the meetings:
 

1) What’s the future of big integrated graphics machines in the research environment?  Should vendors bother?

2) What’s the future of commodity GPUs in the research environment?   Is there a way to avoid spending too much time on what I was calling the “futzing epidemic”?


My two cents:

My starting hypothesis was usefully challenged, so I can now present a more refined version of it.  Of course I’m not certain the following is correct; I’m curious what you think:

Once upon a time, up until circa mid-1980s, we all spent a lot of time trying to improve the process of real time graphics rendering.  I remember working on a nutty analog domain z-compositor when it couldn’t be done digitally, for instance.  Those were great times, and I think we are all still fascinated by architecture and algorithm research, and can’t resist the search for new clever twists and tricks.

But eventually a body of knowledge accumulated into a shared sensibility that allowed a degree of standardization, so that many of us could work on other problems.  It became possible to define interfaces like GL.  This period was characterized by the appearance of products that were pre-tweaked for real time graphics, such as those from the workstation vendors represented at our meetings.  Many of us became able to spend more time on experiments that involved human subjects.  We tested ideas about how to best navigate complex worlds, support collaboration at a distance, and so on.

That we were able to focus on something other than real time graphics machine design did not mean that all of us were satisfied with the performance of existing machines.  I, for one, find that rendering is still a bottleneck.  I’d like to have lower latency, more scene complexity, and support for new display configurations with a LOT more pixels.

Fred Brooks pointed out that his research indicates that when it comes to virtual worlds, a variety of factors such as latency and haptic feedback are more important to human comprehension and performance than scene detail.  Henry Fuchs pointed out that field of view is still inadequate in all existing user interfaces, not just in virtual world designs.  I think we can all at least agree that existing off-the-shelf graphical computing resources are not adequate for the user interfaces we’d ideally like to be building and testing.

So we continue to seek better hardware.

It seems to me that in the last few years there’s been a decided reversal in the nature of our seeking.  The appearance of cheap GPUs with fascinating tweakable shaders has seduced many of us back to low level research.   If I’m not mistaken, I’m seeing a compensatory decline in human-centered research.  I’m certainly not suggesting that anyone is less interested in human-centered research than before, but rather that limited resources must be reprioritized when one becomes even more interested in something else.

Maybe I’m wrong.  Henry Fuchs suggested that it would be worth gathering data.  Perhaps a sociologist of science could take it on as a project?

I contend that my concern merits soul searching even in advance of a data gathering project, because there is an important aspect of our work that would escape such a study anyway.

Human centered research is a little different from other computer science research, in that it’s hard to fully describe goals in advance.  Of course we can identify certain specific goals.  For instance, can we help a surgeon understand volumetric data more quickly?  But beyond that, we’re exploring hidden human potential.  Who would have guessed that whole generations of kids would learn to play superfast video games with a joystick?  That was a surprise.  What gems might be hidden in the complicated intersection of human cognition and user interface design?  The drive to search for those unpredictable gems has to come from us, the researchers.  We usually have to justify our funding on other terms, but I suspect most of us make a standard practice of trying out “cool” interface ideas just in case something remarkable and wonderful might happen.

A related issue is that when labs roll their own GPU conglomerations, it becomes harder to share code, so new UI research can’t be compounded between labs as easily as it was during the era of common refrigerator-sized graphics products (not that it was ever easy enough!).

Not everyone shares my point of view, of course.  It’s fair to say, I think, that at the very least there’s a need to discuss the relationships between researchers and vendors.

Here is my compendium of potential action items that were suggested at the meetings.  Please let me know if I missed any.  I don’t feel any of these is mature yet, but they might lead to a plan that at least some of us are interested in following:
 

1) How to fund new human-centered research that happens to need expensive machines when funding agencies are strapped and have lost the habit of funding such things?  One idea (which was explored at length in the first meeting) is for vendors to form three-way collaborations by linking academic researchers with end-users who have technical problems.  The end user, such as an automotive company, would help finance the academic work and benefit from results.  Having administered unconventional academic research funding sources in the past, I would predict that it would be difficult to get universities and end users to agree on terms, and that a lot of time would be lost on negotiations.  Even so, this idea probably inspired the broadest interest of any that came up during either meeting.

2) Regarding the future of "refrigerator-like" high performance graphics machines:

a) Maybe vendors should offer an integrated refrigerator rack with enhanced inter-GPU connectivity and software, with emphasis on a service contract guaranteeing that the latest GPUs would always be swapped in with pre-futzed and tested software.  Researchers would be assured of ahead-of-the-curve performance, continued benefit from improving commodity products without having to futz, and no need to periodically throw away old refrigerators.  Vendors would gain predictability.

b) Maybe it’s time to define a cross-platform enhanced-GL, or GL-like thing, that more gracefully incorporates the current body of real time rendering techniques (such as image-based approaches and weird real time shaders).  That way futz-reduced research could use more of the latest techniques that aren’t accessible today without futzing.
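
Just to make the flavor of 2b concrete, here is a purely hypothetical C-style sketch.  None of these names exist in GL or in any vendor’s API; the point is only the shape of a portable, futz-reduced interface, not the particulars:

    /* Hypothetical interface sketch -- none of these names exist in GL or any
       shipping API.  "rtgl" is just a made-up prefix for illustration. */

    typedef unsigned int rtglShader;    /* a compiled, cross-vendor real time shader */
    typedef unsigned int rtglImageSet;  /* a bundle of calibrated source images      */

    /* Compile a shader written once in a portable language; the implementation
       maps it onto whatever GPU happens to be installed, without per-card futzing. */
    rtglShader rtglCompileShader(const char *source);

    /* Register photographs plus their camera poses as a renderable object, so that
       image-based rendering is as routine as drawing a textured polygon. */
    rtglImageSet rtglCreateImageSet(int imageCount,
                                    const unsigned char **pixels,
                                    const float *cameraPoses);

    /* Draw the image-based object through the shader from a new viewpoint. */
    void rtglDrawImageSet(rtglImageSet images, rtglShader shader,
                          const float viewMatrix[16]);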

3) Researchers are spending huge amounts of time on GPUs because of lack of documentation, little gotchas, and unpredictable performance in unusual configurations.  So it would be helpful if GPU makers would communicate more with researchers.  Everyone agrees GPU people are nice and wonderful, but they are time-crunched and understandably market-driven.  Maybe a central academic/GPU forum or process needs to be established.

4) It’s a shame that GPUs don’t allow access to all of their computational results.  In particular, it would be nice to be able to get floating point results out in addition to integer pixel values.  That would not only be extremely useful for researchers using GPUs in unusual ways, but would also encourage refrigerator vendors to create new and interesting multi-GPU architectures.  GPU makers say the only force that will make them prioritize such things is the DirectX spec.  Should we therefore bug our colleagues at Microsoft?
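
To make concrete the kind of access I mean in 4, here is a minimal C sketch of rendering into a floating point target and reading the unquantized values back.  It assumes GLFW and GLEW just to obtain a context, and a driver that exposes float-format render targets; the calls follow the framebuffer-object style rather than anything you can count on in standard GL today, so treat it purely as an illustration of the capability being asked for:

    /* Illustration only: render into a floating point target and read the raw
       float values back to the CPU, instead of settling for 8-bit pixel values. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <GL/glew.h>
    #include <GLFW/glfw3.h>

    int main(void)
    {
        const int W = 256, H = 256;

        /* An invisible window, just to get a GL context for offscreen work. */
        if (!glfwInit()) return 1;
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
        GLFWwindow *win = glfwCreateWindow(W, H, "offscreen", NULL, NULL);
        if (!win) return 1;
        glfwMakeContextCurrent(win);
        glewInit();

        /* A 32-bit float RGBA texture as the render target. */
        GLuint tex, fbo;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, W, H, 0, GL_RGBA, GL_FLOAT, NULL);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            return 1;

        /* ...draw the scene or run a shader pass here; a clear stands in for it... */
        glClearColor(0.1234567f, 0.7654321f, 0.5f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        /* Read the results back at full float precision, not quantized pixels. */
        float *result = malloc((size_t)W * H * 4 * sizeof(float));
        glReadPixels(0, 0, W, H, GL_RGBA, GL_FLOAT, result);
        printf("first texel: %f %f %f %f\n",
               result[0], result[1], result[2], result[3]);

        free(result);
        glfwTerminate();
        return 0;
    }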

Best,

Jaron

