Deep Render is a London-based machine learning startup that is redesigning image compression from the ground up by mimicking how the human eye analyses images and videos.
I first came across Deep Render a few months ago and was immediately interested in their technology. This led me to learn more about video codecs and compression, admittedly something I previously knew very little about. I realized that image compression is at the heart of everything we do online, yet we're stuck with antiquated, ill-equipped algorithms that barely meet current needs and certainly won't meet future ones.
Global demand for data doubles roughly every two years; by some estimates, 90% of all the data humanity has ever created was generated in the past two years alone. At this pace, the entire digital universe is set to reach an almost inconceivable 163 zettabytes by 2025: the equivalent of watching the entire Netflix catalog almost 500 million times.
The impact of COVID-19 confinement on global data consumption is a striking example of how easy it is to strain our data and networking infrastructure. Over the last few weeks, total internet traffic has surged between 50% and 70%, according to preliminary statistics. Online streaming platforms such as YouTube, AppleTV+, Amazon Prime Video, and Disney+ have seen 10–15% growth, and are reducing the video resolution of their services to ease the strain on networks.
If these platforms were using Deep Render's algorithms today, they wouldn't need to scale back their services. It also shows how quickly state-of-the-art compression technology will become essential as our data consumption continues to grow exponentially in the coming years.
The issue stems partly from the limited bandwidth and reliability of the networks delivering these services, and partly from wildly inefficient compression infrastructure. Existing codecs are built on foundations from the 1970s; they have improved over the decades, but the gains shrink with each iterative update.
This is where Deep Render's Biological Compression Technology can help. Whereas current compression software relies on a pipeline of separately designed, disconnected modules, Deep Render's AI-powered learned image compression trains the entire algorithm and all of its modules jointly. This holistic approach means the stages work in harmony, achieving a much better end result more efficiently.
Deep Render's algorithm also pays special attention to the parts of an image the human eye cares about most, producing more visually pleasing results. This allows visual quality to be maintained while achieving an 8x reduction in file size compared to JPEG, the most widely used image compression format. Early lab results already deliver a bandwidth improvement of up to 75%, a 4x compression improvement over the current state of the art.
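To make the contrast with classical pipelines concrete, here is a minimal, hypothetical sketch of the end-to-end idea; nothing below is Deep Render's actual code or architecture. A tiny linear autoencoder's encoder and decoder are trained jointly against a single reconstruction loss, unlike classical codecs whose transform, quantization, and entropy-coding stages are designed and tuned separately:

```python
# A minimal sketch of end-to-end learned compression (illustrative only,
# not Deep Render's algorithm): encoder and decoder share one loss and
# improve together, rather than being engineered as disconnected modules.
import numpy as np

rng = np.random.default_rng(0)

# Toy "image patches": 64-dimensional vectors lying in an 8-D subspace,
# standing in for the correlated structure of natural images.
basis = rng.normal(size=(64, 8))
X = rng.normal(size=(256, 8)) @ basis.T        # 256 patches, 64 dims each

d, k = 64, 8                                   # 8-D bottleneck: an 8x smaller code
W_enc = rng.normal(scale=0.1, size=(d, k))     # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))     # decoder weights

init_loss = ((X @ W_enc @ W_dec - X) ** 2).mean()

lr = 1e-3
for _ in range(2000):
    Z = X @ W_enc                              # encode: 64 numbers -> 8
    X_hat = Z @ W_dec                          # decode: 8 numbers -> 64
    err = X_hat - X
    # One loss, joint gradients: both stages are optimized in parallel.
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

final_loss = ((X @ W_enc @ W_dec - X) ** 2).mean()
print(f"reconstruction error: {init_loss:.3f} -> {final_loss:.3f}")
```

Real learned codecs use deep nonlinear networks, quantization, and perceptual loss terms rather than plain squared error, but the principle is the same: every stage is trained against one shared objective.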
Deep Render's technology is a leap forward in media compression, and now is the right time to invest resources and effort in connecting large market demand with the solution Deep Render is building.
And we fell in love with the team! Having spun out of Imperial College London's leading robotics lab, Deep Render's founders Arsalan Zafar and Chri Besenbruch believe their technology not only has the power to transform how everyday people consume data — an issue highlighted in the wake of the COVID-19 outbreak — but also to revolutionize entire industries and organizations across every sector.
At each meeting we had with Chri and Arsalan, they managed to excite us more and more about Deep Render's opportunity. They do the same with prospective employees, which will stand them in good stead from a hiring perspective.
On the other hand, patent wars in video codecs have raged off and on for years. HEVC contains hundreds (probably thousands) of patents from over 40 companies. This has long been the case with the MPEG codecs, and the way the industry deals with it is patent pools: rather than negotiating licenses with every company that holds IP in the standard, pools like MPEG-LA offer a one-stop shop for paying all the needed licenses in one place. The problem is that some patent holders aren't in the MPEG-LA pool. Some are in a rival pool, HEVC Advance. Then a third one popped up, and Technicolor SA holds HEVC patents that aren't in any of the pools. The crisis got so bad that MPEG's veteran leader, Leonardo Chiariglione, lamented that "the old MPEG business model is now broke". What about Deep Render, you might ask? What Deep Render is building is more than new technology: it is a paradigm shift in both technology and business model. By building the tech from the ground up, they can sidestep the patent pools and the tedious business model that comes with them.
The quality of the team, the technical superiority of the product, the long-term vision of what Deep Render can be, and the overwhelmingly enthusiastic early customer feedback all combined to make Deep Render a must-do investment for the Speedinvest Deep Tech team.
Chri and Arsalan persuaded us that unless we get ahead of this data crisis, the free and open internet as we know it is at risk: entire industries will struggle and infrastructure will buckle. Image compression may seem remote to most people, but it underpins everything we do, from effective corporate communication, playing games, and watching movies to satellite imagery and how entire healthcare systems diagnose disease and save lives.
Over the next 12 months, the team will further develop its product and hire new talent to solve one of the most pressing problems in the storage, use, and sharing of visual data. On these foundations, Deep Render will have an array of monetization opportunities and, with a bit of experimentation, can dramatically change how we use, share, and create data from behind the scenes.
I am thrilled to announce that Speedinvest, along with Pentech, co-led the seed investment in Deep Render. We’re absolutely excited to partner with Arsalan, Chri and the team, and hold high hopes for what they can achieve over the coming months and years. The time for Deep Render is now.
Good luck and Godspeed to them!