News Stories

The future of TV as seen in Super Hi-Vision – NHK

[The Verge]

… Also known as Ultra HDTV, 8K, or just plain 7680 x 4320, Super Hi-Vision is NHK’s proposed future high-definition TV format. To give you an idea of what those names and numbers amount to, a “Full HD” 1080p picture would take up just a sixteenth of a Super Hi-Vision screen. Last month the broadcaster demonstrated the format running on the world’s largest plasma TV, a 145-inch behemoth from Panasonic that would entirely cover many living room walls. Viewing Super Hi-Vision content on the screen is a surreal experience — any closer than six feet or so and it’s almost too much information to take in. …
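The "sixteenth of a screen" claim follows directly from the pixel counts. A minimal arithmetic check (resolutions from the article; the comparison itself is just multiplication):

```python
# Why a "Full HD" 1080p picture fills only a sixteenth of a
# Super Hi-Vision screen: 4x the width times 4x the height.
shv_w, shv_h = 7680, 4320   # Super Hi-Vision (8K) resolution
hd_w, hd_h = 1920, 1080     # "Full HD" 1080p resolution

ratio = (shv_w * shv_h) / (hd_w * hd_h)
print(ratio)  # 16.0
```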

Super Hi-Vision has been successfully broadcast at 184Mbps using a dual-channel terrestrial signal, and NHK also demonstrated an IP transmission system that employs eight hardware H.264 encoders for use at live events. This will be put to the test during this year’s Olympic Games in London, with NHK and the BBC collaborating on public Super Hi-Vision screens to be set up in Japan, the UK, and the US. …

The format supports up to 22.2-channel surround audio, and NHK is working on ways to produce content complex enough to justify such a system. These range from a single-point microphone that can capture audio from 22 directions to a “reverberation device” that can control 3D sound. It’s difficult to imagine significant uptake of 22.2 sound in the home, but it’s pretty amazing to actually hear. …

Read the full story here: http://www.theverge.com/2012/5/29/3042847/super-hi-vision-tv-8k-nhk-future

NHK – Integral 3D

[Philip Lelyveld Comment: this could be a new 3D viewing paradigm, but isn’t expected until 2030.]

[The Verge]

… Integral 3D, in short, is one of the most astonishing display technologies we’ve ever seen, even in its presently embryonic state. Instead of stereoscopic screens that use shutter glasses to direct alternate images into each eye, NHK’s integral display uses thousands of microlenses to reconstruct a spatial 3D image filmed from multiple angles — the upshot is that you’re able to adjust your perspective on the subject based on your position. For example, in a demonstration video of a baseball game, we were able to see offscreen fielders simply by tilting our head a few inches to the side, and footage of a sumo bout allowed for greater comprehension of the wrestlers’ grapples and movement. It solves the problem of viewing angles on traditional 3DTVs, and transforms the experience of watching TV into something more involved.

Right now, the main drawback is the resolution. NHK’s current integral 3D prototype actually employs Super Hi-Vision technology, but that’s because each effective onscreen pixel requires hundreds more to render the multiple perspectives. As such, the image currently appears very grainy even on a screen of around 30 inches. While this could eventually be solved by a finer array of microlenses, each lens would then require more pixels in order to add the appropriate depth perspective. For now, integral displays aren’t much more than a tantalizing glimpse into the far-off future — NHK doesn’t expect the technology to be commercialized until around 2030.
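The resolution penalty described above can be sketched with some back-of-the-envelope arithmetic. The pixels-per-lens figure here is an assumption for illustration, not an NHK specification; the article says only that each effective pixel requires "hundreds more":

```python
# Hedged illustration (pixels_per_lens is an assumed figure, not NHK's):
# if each microlens consumes hundreds of panel pixels to encode the
# different viewing angles, effective 3D resolution collapses.
panel_w, panel_h = 7680, 4320   # Super Hi-Vision panel behind the lens array
pixels_per_lens = 400           # assumed panel pixels per effective 3D pixel

effective_pixels = (panel_w * panel_h) // pixels_per_lens
print(effective_pixels)  # 82944 -- roughly a 384 x 216 image
```

Under that assumption an entire 8K panel yields an image coarser than standard definition, which is consistent with the "very grainy" picture the article describes.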

See the original post here: http://www.theverge.com/2012/5/29/3042847/super-hi-vision-tv-8k-nhk-future



Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification of best practices for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs and outputs of the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
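A structured naming scheme of this kind is typically machine-parseable. The pattern below is hypothetical, invented purely to illustrate the idea; it is not the actual ETC@USC scheme, and the field names (show, shot, element, version, frame) are assumptions:

```python
import re

# Hypothetical frame-naming pattern -- NOT the published ETC@USC spec,
# just a sketch of the kind of structure such specifications standardize:
#   <show>_<shot>_<element>_v<version>.<frame>.<ext>
PATTERN = re.compile(
    r"(?P<show>[A-Za-z0-9]+)_"
    r"(?P<shot>[A-Za-z0-9]+)_"
    r"(?P<element>[A-Za-z0-9]+)_"
    r"v(?P<version>\d{3})\."
    r"(?P<frame>\d{4,})\."
    r"(?P<ext>exr|dpx)$"
)

m = PATTERN.match("myshow_sh010_plate_v002.1001.exr")
print(m.group("shot"), m.group("frame"))  # sh010 1001
```

The point of standardizing such a pattern is that every partner in the pipeline can parse and validate incoming sequences with one parser instead of one per vendor.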

To ensure all requirements were represented, the working group included over two dozen participants representing studios, VFX houses, tool creators, creatives, and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. Chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
