
Although Google is wowing the world with its new AI-driven technology, some of its recent announcements about virtual reality rendering are potentially just as revolutionary. When Google announced that HTC and Lenovo are developing all-in-one headsets based on its Daydream platform, many were left wondering how these headsets would have the power to run graphically intensive 3D games, simulations and more. Google's answer is a new software rendering approach, which it has named Seurat after the French painter Georges Seurat.

Google’s Seurat technology is being hailed as a way to take cinematic realism in CGI and crunch it down for mobile processors. In other words, it allows super-detailed, highly complex scenes that would not normally run on mobile processors to be displayed on mobile VR headsets like the upcoming HTC and Lenovo Daydream headsets announced at Google I/O this week. Google reps describe the results of the new tech as real-time visuals, which indicates that fully interactive simulations and gameplay experiences will also be possible with this technology.

To show the detail and high degree of realism that is possible, Google is showcasing Seurat with help from ILMxLAB, using scenes from Rogue One. ILMxLAB's creative director John Gaeta says "[Seurat] potentially opens the door to cinematic realism in VR." Their demonstration worked as follows.

ILMxLAB used high-quality assets from a scene that took an hour to render on a beefy PC. After being run through Seurat, the same scene took only 13 milliseconds to render on a mobile CPU. As I describe below, Seurat works by reducing asset size: it takes snapshots of the scene, which in turn are used to produce lower-resolution textures and lower-polygon-count models. Seurat can reduce texture size by a factor of 300 and polygon counts by a factor of 1000. The new assets are not as detailed, but they still look amazing.
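To get a feel for what reduction factors of that size mean in practice, here is a quick back-of-the-envelope sketch. The reduction factors are the ones Google quoted; the starting asset sizes are my own illustrative assumptions, not figures from Google or ILMxLAB:

```python
# Back-of-the-envelope math for Seurat-style asset reduction.
# The starting sizes below are illustrative guesses, not real figures.

TEXTURE_REDUCTION = 300   # texture-size reduction factor quoted by Google
POLYGON_REDUCTION = 1000  # polygon-count reduction factor quoted by Google

source_texels = 8192 * 8192 * 40   # e.g. forty 8K textures in a film-grade scene
source_polygons = 50_000_000       # e.g. 50 million triangles in that scene

mobile_texels = source_texels // TEXTURE_REDUCTION
mobile_polygons = source_polygons // POLYGON_REDUCTION

print(f"texels:   {source_texels:,} -> {mobile_texels:,}")
print(f"polygons: {source_polygons:,} -> {mobile_polygons:,}")
```

Even with made-up starting numbers, the takeaway is clear: tens of millions of triangles shrink to tens of thousands, which is well within what a mobile GPU can push at VR frame rates.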

Seurat works differently than many other graphics techniques, like normal mapping or polygon tessellation, that are commonly used in games. Beginning with high-quality or cinema-grade 3D graphics, developers first set a bounded region of interaction within which all virtual reality perspectives, or viewsheds, will be rendered. At this point, it appears that developers must choose a fairly small and constrained region, though it is hard to tell for sure since Google is providing so few details at this time.

After the developer chooses a region of interaction, the software goes to work. Using automated algorithms, it takes a series of snapshots of the high-quality objects or scenes, capturing every angle from which the viewer will be able to look. The software then uses these snapshots to assemble a lightweight version of the original scene or object.
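The steps described above can be sketched in code. Google has not published implementation details, so the function names, data layout, and grid-sampling strategy here are purely my own assumptions about what such a pipeline might look like:

```python
# Hypothetical sketch of a Seurat-style capture pipeline, based only on
# Google's public description. All names and details are assumptions.

from dataclasses import dataclass
from itertools import product

@dataclass
class Snapshot:
    position: tuple  # camera position inside the region of interaction
    # A real pipeline would also store an RGBD (color + depth) image here.

def sample_headbox(bounds, steps):
    """Sample camera positions on a grid inside the bounded region of
    interaction (the 'headbox') that the developer chose."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    axis = lambda a, b: [a + (b - a) * i / (steps - 1) for i in range(steps)]
    return list(product(axis(x0, x1), axis(y0, y1), axis(z0, z1)))

def capture(position):
    """Stand-in for rendering the full-quality scene from one viewpoint."""
    return Snapshot(position=position)

def build_lightweight_scene(snapshots):
    """Stand-in for merging the snapshots into low-polygon, low-resolution
    textured geometry that reproduces every view inside the headbox."""
    return {"snapshots": len(snapshots)}

# A small 1 m x 1 m x 1 m headbox, sampled 3 steps per axis = 27 viewpoints.
headbox = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
shots = [capture(p) for p in sample_headbox(headbox, steps=3)]
scene = build_lightweight_scene(shots)
print(scene)  # {'snapshots': 27}
```

Note how the bounded region makes the problem tractable: the software only has to reproduce views from inside that box, not from anywhere in the world, which is why the resulting assets can be so much lighter.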

Google says that information about implementing Seurat is coming later this year, which probably means we will not see any Seurat-rendered games or virtual reality experiences for quite some time. HTC's and Lenovo's headsets will therefore have to rely on conventional rendering techniques when they are released this fall. This is not so bad, because these headsets will probably run on Snapdragon 835 processors, which are plenty beefy and designed with virtual reality rendering in mind.

Today at Google I/O, Google let loose a whole smorgasbord of tech announcements. One of the most notable was that Google Daydream is going to get bigger and more independent: Google is allowing manufacturers to make virtual reality systems that do not rely on cellphones and instead act as full all-in-one headsets. This is great news, because the Daydream platform has been lagging since its release last year. Perhaps this is the kick in the buttocks that Daydream needs to become a dominant virtual reality platform.

The first two companies working with Google are HTC and Lenovo. This is great news, as both companies have experience in virtual reality. HTC created the awesome HTC Vive, which has gone on to become the premier room-scale virtual reality headset that everyone envies. Lenovo, on the other hand, which now owns Motorola's phone division, was among the first adopters and developers of Google Daydream last year. Although it was not widely publicized, the super-flat Moto Z had Google Daydream support.

HTC is teasing us with a few preview images of its headset (see above and below). Although these images are dark and mysterious (boo), a few important tech details can be ascertained. The headset in the pictures appears to have a double-strap mechanism that is quite different from the head straps on the Gear VR, the HTC Vive and even the Oculus Rift. It looks like one or both of the straps can be adjusted for extra comfort.

The pictures do not provide enough information to tell whether HTC's new headset will have a flip-down feature like the PlayStation VR. It would be great if it did, because the PSVR's flip-down system is widely regarded as the most comfortable among VR headsets. Providing a few hints, HTC says:

“We have been working closely with developers and consumers to define the best VR experiences over the past few years, and we are perfectly positioned to deliver the most premium standalone headset and user experience. Vive’s standalone VR headset will provide a deeper and more immersive portable VR experience than ever before.”

Although HTC is keeping tight-lipped on the details, a few important bits of info have been provided to the press and public. Google is supporting HTC in making a fully standalone headset. Importantly, this means no phone and no wires! Without a phone, the headset will contain a complete all-in-one portable system, including a CPU, GPU, batteries and so on. Besides making it easy to just slip on and wear, this will be great for the Daydream platform because it will not be limited by the design decisions used to make phones ultra-portable.

This means the headset will be able to sport a much larger and heavier battery than a phone, which should lead to far superior battery life. Beyond batteries, the headset will be able to include a much more powerful CPU and GPU combo. With the larger battery feeding more powerful silicon, the headset will be able to provide higher-quality 3D visuals.

Furthermore, the extra space in the headset will leave more room for advanced motion and spatial-awareness sensors. On this point, HTC says that a “WorldSense” tracking system using optics, sensors, and optimized displays will track your location while also keeping track of your environment. This sounds like inside-out tracking to me.

When will this headset hit the market? HTC is being coy, saying only “later this year,” which probably means fall/Christmas. As for the price, HTC isn't saying, but I would venture we will see something in the $400-500 range. HTC never releases cheap products, but I can't imagine them going toe-to-toe with Oculus. That said, they have made some weird pricing decisions in the past few years.

Oculus has some pretty amazing technological breakthroughs that are trickling their way into consumer virtual reality headsets. One of its newest approaches is a Focal Surface Display, which makes some elements of what users see blurry while keeping other parts sharp and in focus, mimicking focal depth.

Why would you want part of your screen to be blurry?

Human vision works this way. When we focus on objects in the foreground or background, some elements of the world come into focus while others become blurry. Oculus’ new approach aims to mirror this perceptual phenomenon to provide a more natural experience that mimics human depth perception.
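The amount of blur an out-of-focus point produces can be described with the standard thin-lens "circle of confusion" model. This is textbook optics for illustration only, not Oculus' actual Focal Surface algorithm, and the eye parameters below are rough illustrative values:

```python
# Thin-lens circle-of-confusion model: how blurry a point at distance d
# appears when the eye is focused at distance s. Standard optics sketch,
# not Oculus' Focal Surface Display algorithm.

def circle_of_confusion(d, s, focal_length=0.017, aperture=0.004):
    """Blur-spot diameter (meters on the retina) for an object at distance
    d when focused at distance s. Defaults roughly model a human eye
    (17 mm focal length, 4 mm pupil); all values are illustrative."""
    f, A = focal_length, aperture
    return A * (f / (s - f)) * abs(d - s) / d

in_focus = circle_of_confusion(d=2.0, s=2.0)    # object at the focus distance
near_blur = circle_of_confusion(d=0.5, s=2.0)   # object closer than focus
far_blur = circle_of_confusion(d=10.0, s=2.0)   # object beyond focus

print(in_focus, near_blur, far_blur)
```

An object at the focus distance produces zero blur, while objects nearer or farther produce progressively larger blur spots; a display that reproduces this falloff is what makes depth look natural.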

Oculus believes that its new tech, the Focal Surface Display, is “ground breaking.” A spokesperson for Oculus said, "Focal Surface Displays mimic the way our eyes naturally focus on objects of varying depths. Rather than trying to add more and more focus areas to get the same degree of depth, this new approach changes the way light enters the display using spatial light modulators (SLMs) to bend the headset’s focus around 3D objects—increasing depth and maximizing the amount of space represented simultaneously. All of this adds up to improved image sharpness and a more natural viewing experience in VR."

This means quite a few different things. First of all, Oculus’ new approach uses a mechanism called a spatial light modulator (SLM), which bends light. In this case, the SLM bends the 2D images produced by the computer, keeping some elements in focus and blurring others. The spatial light modulator appears to be the meat of Oculus’ new approach. Spatial light modulators have been around for a while; overhead projectors are a simple form of the device.

Adding spatial light modulators to Oculus’ headsets will be tricky and will require some engineering feats. Beyond shrinking such devices to fit into a headset, the software that controls the SLM must be programmed in a very specific and complex manner. Additionally, all of this required overcoming significant optical distortion hurdles. Describing its approach, Oculus says,

"Focal surface displays continue down the path set by varifocal and multifocal concepts, further customizing virtual images to scene content. We have demonstrated that emerging phase-modulation SLMs are well-prepared to realize this concept, having benefited from decades of research into closely-related adaptive imaging applications. We have demonstrated high-resolution focal stack reproductions with a proof-of-concept prototype, as well as presented a complete optimization framework addressing the joint focal surface and color image decomposition problems. By unifying concepts in goal-based caustics, retinal scanning displays, and other accommodation-supporting HMDs, we hope to inspire other researchers to leverage emerging display technologies that may address vergence-accommodation conflict in HMDs."

This means Oculus has been combining a huge variety of research for its Focal Surface Display. The SLM-based approach has allowed Oculus to get past many of the decomposition problems that commonly plague lenses in virtual reality headsets. Beyond adding depth and a natural focus mechanism, the new approach should have added benefits for people who wear glasses. Discussing the wide-ranging engineering approach, Oculus said:

“By combining leading hardware engineering, scientific and medical imaging, computer vision research, and state-of-the-art algorithms to focus on next-generation VR, this project takes a highly interdisciplinary approach—one that, to the best of our knowledge, has never been tried before. It may even let people who wear corrective lenses comfortably use VR without their glasses."

When will we see this tech in Oculus’ headsets? As can be seen in the video above, the image quality looks pretty poor, so I would imagine consumer versions of this tech are fairly far off. That said, at the rapid pace technology moves these days, we could see it sooner rather than later, perhaps in a third-generation Oculus if not sooner.
