Google Seurat VR Rendering

Although Google is wowing the world with its new AI-driven technology, some of its recent announcements about virtual reality rendering are potentially just as revolutionary. With the news that HTC and Lenovo are developing all-in-one headsets based on Google’s Daydream platform, many were left wondering how these headsets would have the power to run graphically intensive 3D games, simulations and more. Google’s answer is a new software rendering approach named Seurat, after the French painter Georges Seurat.

Google’s Seurat technology is being hailed as a way to take cinematic CGI realism and crunch it down for mobile processors. In other words, it allows super-detailed, highly complex scenes that could not normally run on mobile processors to be rendered on standalone VR headsets like the HTC and Lenovo Daydream devices announced at Google I/O this week. Google reps are calling the result real-time visuals, which indicates that fully interactive simulations and gameplay experiences will also be possible with this technology.

To show the level of detail and realism that is possible, Google is showcasing Seurat with help from ILMxLAB, using scenes from Rogue One. ILMxLAB creative director John Gaeta says "[Seurat] potentially opens the door to cinematic realism in VR." Their demonstration worked as follows.

ILMxLAB started with high-quality assets in a scene that took an hour to render on a beefy PC. After running it through Seurat, the same scene took only 13 milliseconds to render on a mobile processor. As I describe below, Seurat works by taking snapshots of the scene, which it uses to produce lower-resolution textures and lower-polygon-count models. Seurat can reduce texture data by a factor of 300 and polygon counts by a factor of 1,000. The new assets are not as detailed, but they still look amazing.
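To get a feel for what those numbers mean, here is a quick back-of-envelope sketch. The reduction factors (300x textures, 1,000x polygons) and the 13 ms frame time come from Google's demo; the source asset sizes in this snippet are my own assumptions purely for illustration.

```python
# Assumed film-quality source scene (illustrative numbers only).
source_triangles = 40_000_000   # assumption: a cinema-grade scene
source_texture_mb = 30_000      # assumption: ~30 GB of texture data

# Reduction factors quoted by Google for Seurat.
TEXTURE_FACTOR = 300
POLYGON_FACTOR = 1_000

simplified_triangles = source_triangles // POLYGON_FACTOR
simplified_texture_mb = source_texture_mb / TEXTURE_FACTOR

# Why 13 ms matters: a 60 Hz display gives you about 16.7 ms per
# frame, so a 13 ms render fits the budget with room to spare.
frame_budget_60hz_ms = 1000 / 60

print(simplified_triangles)                 # 40000
print(round(simplified_texture_mb))         # 100
print(13 <= frame_budget_60hz_ms)           # True
```

The takeaway is that the simplified scene lands in a range (tens of thousands of triangles, a couple hundred megabytes of textures) that a mobile chip can comfortably push every frame.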

Seurat works differently from many other graphics techniques, like normal mapping or polygon tessellation, that are commonly used in games. Beginning with high-quality or cinema-grade 3D graphics, developers first set a bounded region of interaction within which all virtual reality viewpoints will be rendered. At this point, it appears that developers must choose a fairly small, constrained region, though it is hard to tell for sure since Google is providing so few details at this time.
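The bounded region idea can be sketched in a few lines. Google has not published Seurat's actual API, so the `Headbox` class and its method here are hypothetical; the point is just that the simplified scene only has to look correct from positions inside this box.

```python
from dataclasses import dataclass

@dataclass
class Headbox:
    """Hypothetical bounded region of interaction (axis-aligned box)."""
    center: tuple        # (x, y, z) in scene units, assumed meters
    half_extents: tuple  # box half-sizes along each axis

    def contains(self, point):
        # A viewpoint is valid if it lies within the box on every axis.
        return all(abs(p - c) <= h
                   for p, c, h in zip(point, self.center, self.half_extents))

# A deliberately small region: the seated viewer can lean around
# inside roughly a one-meter cube but cannot walk across the scene.
headbox = Headbox(center=(0.0, 1.6, 0.0), half_extents=(0.5, 0.5, 0.5))

print(headbox.contains((0.2, 1.7, -0.3)))  # True: a slight lean
print(headbox.contains((3.0, 1.6, 0.0)))   # False: outside the region
```

Constraining viewpoints this way is what lets the tool throw away detail the viewer can never get close enough to notice.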

In any case, after the developer chooses a region of interaction, the software goes to work. Using automated algorithms, it takes a series of snapshots of the high-quality objects or scenes, capturing every angle from which the viewer will be able to see them. The software then uses these snapshots to assemble a lightweight version of the original scene or object.
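The snapshot step above can be sketched as sampling camera positions throughout the viewing region. Seurat's real sampling strategy is not public, so using the corners and center of the box as viewpoints is purely my assumption for illustration.

```python
import itertools

def sample_viewpoints(center, half_extents):
    """Return sample camera positions covering a box-shaped viewing
    region: its 8 corners plus its center (a hypothetical strategy)."""
    cx, cy, cz = center
    hx, hy, hz = half_extents
    corners = [(cx + sx * hx, cy + sy * hy, cz + sz * hz)
               for sx, sy, sz in itertools.product((-1, 1), repeat=3)]
    return corners + [center]

viewpoints = sample_viewpoints((0.0, 1.6, 0.0), (0.5, 0.5, 0.5))
print(len(viewpoints))  # 9 snapshot positions

# In the real pipeline, a snapshot rendered from each viewpoint would
# feed the step that rebuilds a lightweight mesh and texture set
# approximating the original scene from anywhere inside the box.
```

Because the snapshots only cover views reachable from inside the region, the assembled lightweight scene never needs geometry for angles the viewer cannot reach.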

Google says that information about implementing Seurat is coming later this year, which probably means we will not see any Seurat-rendered games or virtual reality experiences for quite some time. HTC and Lenovo’s headsets will therefore have to rely on conventional rendering techniques when they launch this fall. That is not so bad, because these headsets will probably run on the Snapdragon 835, a processor that is plenty beefy and designed with virtual reality in mind.
