DeepCube is an award-winning deep learning pioneer that provides the industry’s first software-based inference accelerator, drastically improving deep learning performance on existing hardware. Modeled after the way the human brain develops during childhood, DeepCube’s patented technology is the first purpose-built for deploying deep learning models in data centers and on intelligent edge devices.
Its proprietary framework can be deployed on top of any existing hardware, delivering drastic speed improvements and memory reductions. Led by a team of experienced deep learning researchers and developers, DeepCube has patented numerous innovations, including methods for faster and more accurate training of deep learning models and drastically improved inference performance.
Most AI breakthroughs are driven by deep learning. However, current models and deployment methods suffer from significant limitations, such as high energy and memory consumption, high costs, and dependence on hyper-specific hardware. Hardware advancements have carried deep learning deployments this far, but for AI to reach its full potential, a software accelerator approach is required.
Dr. Eli David, a pioneering researcher in deep learning and neural networks, has focused his research on developing deep learning technologies that improve the real-world deployment of AI systems, and he believes the key lies in software. Bringing that research to fruition, Eli developed DeepCube, a software-based inference accelerator that can be deployed on top of existing hardware (CPU, GPU, ASIC) in both data centers and edge devices to drastically improve deep learning speed, efficiency, and memory use.
Some of his results include:
• Increasing the inference speed on a regular CPU to match and surpass that of a GPU, which costs several times more
• Increasing the inference speed on a single GPU to match the performance of 10 GPUs
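DeepCube's techniques are proprietary, but the broad class of software-level acceleration they belong to can be illustrated with a generic example: magnitude pruning, which zeroes out the smallest weights of a trained layer so that it can be stored and computed sparsely. The sketch below (in NumPy, with an arbitrary 90% sparsity target) is only a minimal illustration of this general idea, not DeepCube's actual method.

```python
import numpy as np

def prune_weights(weights, sparsity=0.9):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.

    Generic magnitude-pruning sketch -- NOT DeepCube's proprietary method.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

# A synthetic 512x512 weight matrix standing in for one dense layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512)).astype(np.float32)
w_pruned = prune_weights(w, sparsity=0.9)

# Compare dense storage against a simple sparse (value + index) encoding
# of the surviving weights.
dense_bytes = w.nbytes
nonzero = np.count_nonzero(w_pruned)
sparse_bytes = nonzero * (4 + 4)  # float32 value + int32 index per entry

print(f"nonzero fraction: {nonzero / w.size:.2f}")
print(f"dense:  {dense_bytes} bytes")
print(f"sparse: {sparse_bytes} bytes (~{dense_bytes / sparse_bytes:.1f}x smaller)")
```

In practice, realizing a speedup from such sparsity also requires kernels that skip the zeroed weights, which is where a software inference stack does its work on top of unmodified CPUs or GPUs.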
Subscribe to the Tech Talks Daily Podcast