Artificial intelligence hardware accelerators are a powerful tool for scientific discovery, but they consume large amounts of energy. IBM Research, the company’s R&D arm, claims to have identified new ways of tackling that problem.
At the IEEE CAS/EDS AI Compute Symposium, held earlier today, IBM Research introduced new technology and partnerships that it says will allow businesses to dynamically run large-scale AI workloads in hybrid clouds. The company claims its energy-efficient AI hardware accelerators can increase compute power “by orders of magnitude” without a corresponding rise in energy demand.
The company made a number of announcements, including a new collaboration with Red Hat, which has resulted in IBM Digital AI Cores becoming compatible with Red Hat OpenShift and its ecosystem. IBM Research also said that its toolkit for Analog AI Cores will now be open source.
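The announcement does not detail the toolkit’s interface, but assuming it refers to IBM’s publicly released Analog Hardware Acceleration Kit (aihwkit), which exposes simulated analog in-memory computing tiles as drop-in PyTorch layers, a minimal training sketch might look like the following (the layer sizes and toy data are illustrative only):

```python
from torch import Tensor
from torch.nn.functional import mse_loss

from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD

# Toy inputs and targets for a small regression problem.
x = Tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = Tensor([[1.0, 0.5], [0.7, 0.3]])

# A fully connected layer whose weights live on a simulated analog tile.
model = AnalogLinear(4, 2)

# Analog-aware SGD applies weight updates through the tile's update rules.
opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)

for epoch in range(10):
    pred = model(x)
    loss = mse_loss(pred, y)
    loss.backward()
    opt.step()
    opt.zero_grad()
```

Because the analog layers mirror their PyTorch counterparts, an existing model can in principle be retargeted to the simulator by swapping layer classes rather than rewriting the training loop.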
The AI Hardware Center will also team up with Synopsys to develop improved AI chip architectures. Meanwhile, IBM claims to be tackling memory bandwidth bottlenecks by investing in infrastructure to accelerate the development of new chip packaging.
AI will raise scientific exploration to new heights, IBM Research said, adding that the large data processing workloads needed to make new discoveries will demand unprecedented levels of processing power, memory and bandwidth.
“Working with Red Hat, Synopsys and other partners, our advancements in AI hardware and hybrid cloud management software integration will enable models and methods that will forever change the way we solve problems,” said the firm.