From the CES 2014 interview with Palmer Luckey and Nate Mitchell.
http://www.youtube.com/watch?feature...&v=3YoUV7uty40


Quote:

Interviewer(Ben): Why don't you start out with a quick explanation of what low persistence is, why you are using it, and why it is better.

Oculus Devs: "I'll start back to front. We are using low persistence because it allows us to eliminate motion blur, reduce latency, and make the scene appear very stable for the user. The best way to think about it is: with a full-persistence frame, you render a frame, you put it on the screen, and it stays on the screen until the next frame comes. Then it starts all over again. The problem with that is a frame is only correct, in the right place, for a brief moment <motions with hands together to indicate a short middle period>. For the rest of the time, it's kind of like garbage data. It's like a broken clock - a broken clock is right occasionally, when the hands happen to be in the right place - but most of the time it's showing an old image, an old piece of data. What we're doing with our low persistence display is rendering the image, sending it to the screen, and showing it for a tiny period of time - then we blank the display. So it's black until we have another image. We're only showing the image when we have a correct, up-to-date frame from the computer to show. If you do that at a high enough frame rate, you don't perceive it as multiple discrete frames, you perceive it as continuous motion. But because there's no garbage data - nothing for your retina to try to focus on except correct data - you end up with a crystal clear image.
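The "broken clock" point above can be sketched with a toy calculation. All numbers here are illustrative assumptions (not Oculus's figures): a 90 Hz panel, a 120 deg/s head turn, and a ~2 ms low-persistence pulse. The displayed frame is correct the instant it goes up and grows stale while it stays lit, so the mean angular error scales with how long the image is on screen:

```python
# Hypothetical numbers for illustration only.
REFRESH_HZ = 90
FRAME_MS = 1000 / REFRESH_HZ          # ~11.1 ms per frame
HEAD_SPEED_DEG_PER_MS = 120 / 1000    # 0.12 deg/ms head rotation

def mean_error_deg(persistence_ms):
    # The frame is correct at t=0 and drifts while it stays lit;
    # mean error over the lit interval is speed * persistence / 2.
    return HEAD_SPEED_DEG_PER_MS * persistence_ms / 2

full = mean_error_deg(FRAME_MS)   # image lit for the whole frame
low = mean_error_deg(2.0)         # lit for a ~2 ms pulse, then blanked

print(f"full persistence: {full:.3f} deg mean error")
print(f"low persistence:  {low:.3f} deg mean error")
```

With these assumed numbers the full-persistence image is off by several times more, on average, than the briefly-flashed one - which is the "garbage data" the retina otherwise tries to track.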
And part of that - one of the missing pieces required to do low persistence is fast pixel switching time. We needed sub-millisecond pixel switching, which we get from OLED technology, to allow us to do all of this.
- And to be clear, pixel switching time is a big factor in motion blur. In fact, we used to think it was an even bigger factor, and we drove pixel switching time down, down, down. But once we started experimenting with displays that let us switch almost instantly, completely getting rid of the pixel switching time, it turned out there are a lot more artifacts, like judder, that look like motion blur even when the panel is perfect. When you put our panel under a high-speed camera, every single frame is perfectly crystal clear, whereas on an LCD you would see a smeared, blurry image because the pixels are still switching. For us, it's always crystal clear - that motion blur is all in your brain.

That's probably the biggest update we've made to this prototype. It's a major breakthrough in terms of immersion, comfort, and the visual stability of the scene. Now you can actually read text - and not only because of the higher resolution. Before, with text in the world, even if you were moving your head just a little bit, which most of us naturally do as we look around a scene, the text would smear very heavily. Now, with low persistence, all of the objects feel a lot more visually stable and locked in place.
It's worth noting that this technology will continue to be important for VR for a very long time. It's not a hack that gets around some issue we have right now. Until we get to displays and engines that can render at 1000 frames a second and display at 1000 Hz - essentially showing full-persistence frames that are as short as our low-persistence frames - there's going to be no other way to get a good VR experience. It's really the only way that's known. Valve's Michael Abrash posted a blog entry about a year ago talking about the potential for low persistence to solve these issues. Right now there is no other way that we know of."
Interviewer: "And although everyone is talking about the positional tracking - and that's awesome, everybody's been looking forward to that - you guys were telling us earlier that you think low persistence is perhaps a bigger, more important breakthrough for now than positional tracking."

A: "I mean, positional tracking is really good and it's important, but it's something we've always known we needed to have, so we were going to have to build it - it was expected. That's obvious for any VR system. In any VR system where you're trying to simulate reality, you want to simulate motion as accurately as possible. We weren't able to do it in the past, but we knew it was going to happen for consumers. Low persistence is a breakthrough in that it was unexpected - we did not expect to see the kind of jump in quality that we saw. We said, this isn't just one of those 'every little bit helps' things, it's a killer - it completely changes the experience, fundamentally."
Interviewer: "Now I want to backtrack slightly to low persistence. Earlier I think you guys had mentioned that low persistence, in addition to bringing up the visual fidelity, reduces latency - is that correct?"

A: "Kind of. Well, it's that they all work together. You can't do low persistence without really fast pixel switching time, and fast pixel switching time also allows us to have really low latency, because as soon as the panel gets the frame, we're displaying it and it's instantly showing the correct image."
"So if you look at the motion-to-photons latency pipeline that we've talked about a lot, pixel switching time has always been one of the key elements in there - there's a major delay as the pixels change color. Now that we've eliminated the pixel switching time because of the OLED technology - it's not that low persistence by itself is getting us even lower latency, it's all of it together. I think what's interesting is that at E3, when we showed the HD prototypes, those demos were running between 50 and 70 ms of latency for the UE4 Elemental demo. Here at CES 2014 we're showing the Epic strategy VR demo and EVE: Valkyrie, and both of those demos are running between 30 and 40 ms of latency. So that's a pretty dramatic reduction in terms of the target goal, which is really delivering consumer V1 under 20 ms of latency."
"That is a goal we'll be able to pull off."
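The motion-to-photons accounting described in the interview can be sketched as a toy stage-by-stage budget. The per-stage numbers below are hypothetical assumptions chosen only so the totals land inside the 50-70 ms and 30-40 ms ranges quoted above; Oculus did not publish this breakdown:

```python
# Illustrative motion-to-photons budgets in milliseconds (assumed, not official).
def motion_to_photons(stages):
    # Total latency is the sum of every stage between head motion
    # and the photons leaving the panel.
    return sum(stages.values())

lcd_hd_prototype = {       # E3-era HD prototype, LCD panel (hypothetical split)
    "sensor_fusion": 3,
    "render": 16,
    "scanout": 16,
    "pixel_switch": 15,    # slow LCD response smears across frames
}
oled_prototype = {         # CES 2014 prototype, OLED (hypothetical split)
    "sensor_fusion": 2,
    "render": 13,
    "scanout": 16,
    "pixel_switch": 0.1,   # sub-millisecond OLED switching
}

print(motion_to_photons(lcd_hd_prototype))  # falls in the 50-70 ms range quoted
print(motion_to_photons(oled_prototype))    # falls in the 30-40 ms range quoted
```

The point of the sketch: removing one large stage (pixel switching) cuts the total substantially, but hitting the sub-20 ms consumer target requires shrinking the remaining stages too, which is why the devs describe it as everything working together.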