
Iris CPU vs. GPU

Does Iris utilize the CPU or GPU more?

GrayMeta Iris does not use the GPU for hardware acceleration. None of the third-party codecs and software we use make use of the GPU, and we pride ourselves on making a professional player without a dependency on expensive GPU hardware.

Iris QC Pro has not used the GPU for the last 10 years, and even the latest Dolby, ProRes, and JPEG 2000 codecs we use only require enhanced CPU instruction sets such as SSE1 through SSE4, AVX, AVX2, and AVX-512, which nearly all modern CPUs support. Iris is quite thread- and memory-intensive, so the more threads and the faster the memory you have, the better.
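As an illustration of the kind of instruction-set check involved, here is a minimal sketch (not part of Iris itself) that matches a CPU's advertised feature flags against the extension sets listed above. The flag names follow common Linux `/proc/cpuinfo` conventions, with the caveat that the kernel reports some features under other names (for example, SSE3 appears as `pni`):

```python
def supported_extensions(flags_line,
                         wanted=("sse", "sse2", "sse4_1", "sse4_2",
                                 "avx", "avx2", "avx512f")):
    """Return which of the wanted instruction-set extensions appear
    in a space-separated CPU flags string (Linux-style naming).
    Note: /proc/cpuinfo reports SSE3 as 'pni', so it is omitted here."""
    flags = set(flags_line.split())
    return [ext for ext in wanted if ext in flags]

# Hypothetical, abbreviated flags line from a modern x86-64 CPU:
sample = "fpu vme sse sse2 pni sse4_1 sse4_2 avx avx2"
print(supported_extensions(sample))
# → ['sse', 'sse2', 'sse4_1', 'sse4_2', 'avx', 'avx2']
```

On a real Linux machine, the flags line can be read from `/proc/cpuinfo` and passed to this function directly.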

The only GPU requirement is DirectX 9 or above with YCbCr and RGB surfaces and hardware overlay support, which almost all modern NVIDIA and ATI graphics cards provide. A few SoC chipsets on server motherboards do not support YCbCr surfaces; on those, Iris must convert YCbCr to RGB in software, which is too slow for UHD/4K playback. Iris does not use the CUDA or Stream programmable hardware on graphics cards.

 

The Iris Anywhere product is based on the back-end processing of Iris QC Pro, so it is no different. One possible way we could use GPU hardware in the future would be to use the on-board H.264, HEVC, VP8, or VP9 encoders available on certain families of chipsets. Unfortunately, the open-source WebRTC libraries we use do not support hardware encoding at present, though there are requests from the WebRTC community to add it.