Microsoft has unveiled a suite of four new artificial intelligence compilers designed to optimize the performance of various AI models. The “heavy metal quartet” of cutting-edge compilation tools bears the names Rammer, Roller, Welder and Grinder.
The tools were developed by Microsoft Research in collaboration with a number of academic institutions. They provide advanced solutions for compiling mainstream AI models (that is, translating human-readable source code into machine code, the ones and zeroes a computer can actually execute) and running them more efficiently on hardware accelerators like GPUs.
In a Microsoft Research blog post highlighting their capabilities, the company says the compilers build on Microsoft's extensive research and development in artificial intelligence.
“The AI compilers we developed have demonstrated a substantial improvement in AI compilation efficiency, thereby facilitating the training and deployment of AI models,” wrote Jilong Xue, Principal Researcher at MSR Asia. “In the future, these large-scale models themselves may inherently assist in achieving optimization and compilation.”
The four new compilers each tackle distinct challenges in optimizing AI workloads.
Rammer focuses on maximizing hardware parallelism, the capacity of the hardware to do many things simultaneously. This is a key factor in performance, and Rammer minimizes runtime scheduling overhead through better utilization of parallel resources.
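To make that concrete, here is a toy Python sketch (not Rammer's actual interface; the operators and timings are invented, and threads merely stand in for a GPU's parallel units) contrasting one-at-a-time execution, which pays scheduling overhead per operator, with co-scheduling independent operators so the parallel hardware stays busy.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import numpy as np

# Two independent operators from a toy model; neither depends on the other's output.
def op_a(x):
    return np.tanh(x @ x.T)

def op_b(x):
    return np.maximum(x * 2.0, 0.0)  # a ReLU-like elementwise op

x = np.random.rand(512, 512)

# One-at-a-time execution: the runtime schedules each operator separately,
# paying launch/scheduling overhead per operator while parallel units sit idle.
start = time.perf_counter()
a = op_a(x)
b = op_b(x)
print(f"sequential:   {time.perf_counter() - start:.4f}s")

# Co-scheduled execution (threads stand in for a GPU's parallel execution
# units): the schedule is decided up front, so overhead is paid once and the
# independent operators can run side by side.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    fa, fb = pool.submit(op_a, x), pool.submit(op_b, x)
    a, b = fa.result(), fb.result()
print(f"co-scheduled: {time.perf_counter() - start:.4f}s")
```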
Roller takes a different approach to accelerate compilation, using a fast construction algorithm to find solutions, ultimately generating optimized kernels in seconds rather than hours. In other words, Roller helps create efficient computer programs for AI faster by simplifying the design process.
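The gap between searching and constructing can be sketched in a few lines of Python. The hardware numbers and the `pick_tile` helper below are hypothetical, but they show the idea: derive a kernel configuration directly from hardware limits instead of benchmarking thousands of candidates.

```python
# Hypothetical hardware description; real accelerators expose analogous limits.
SHARED_MEMORY_BYTES = 48 * 1024   # fast on-chip memory per compute unit
BYTES_PER_ELEMENT = 4             # float32

def pick_tile(matrix_rows: int, matrix_cols: int) -> tuple[int, int]:
    """Construct a tile shape that fits on-chip memory, biggest first.

    A search-based compiler would time many candidate tiles on the device;
    a construction-based one just walks down from the hardware limit.
    """
    candidates = [(r, c) for r in (128, 64, 32, 16) for c in (128, 64, 32, 16)]
    candidates.sort(key=lambda rc: rc[0] * rc[1], reverse=True)  # try biggest tiles first
    for rows, cols in candidates:
        footprint = rows * cols * BYTES_PER_ELEMENT
        if footprint <= SHARED_MEMORY_BYTES and rows <= matrix_rows and cols <= matrix_cols:
            return rows, cols
    return 1, 1  # fallback: a single element always fits

print(pick_tile(4096, 4096))  # e.g. (128, 64) under the limits above
```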
Welder reduces expensive memory access traffic by connecting operators into a single pipeline, so intermediate results spend less time shuttling back and forth to memory. It unifies these memory optimizations into one framework for greater efficiency.
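A rough way to see why that matters, using plain NumPy rather than Welder itself: the unfused version below writes full-size intermediate arrays back to memory, while the fused version processes the data in cache-sized blocks, the kind of traffic reduction Welder automates for GPU kernels.

```python
import numpy as np

x = np.random.rand(1_000_000).astype(np.float32)

# Unfused: each operator reads its input and writes a full-size intermediate
# back out, so the data makes several round trips through memory.
def unfused(x):
    a = x * 2.0          # intermediate 1, written out in full
    b = a + 1.0          # intermediate 2, written out in full
    return np.sqrt(b)    # final result

# "Fused": process the data in blocks small enough to stay in cache, applying
# all three operators before moving on, so intermediates stay in fast memory.
def fused(x, block=8192):
    out = np.empty_like(x)
    for start in range(0, x.size, block):
        chunk = x[start:start + block]
        out[start:start + block] = np.sqrt(chunk * 2.0 + 1.0)
    return out

assert np.allclose(unfused(x), fused(x))
```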
Finally, Grinder enables control-flow execution on accelerators by integrating it with data flow, which allows optimization across control-flow boundaries. In practice, that means a model's loops and branches can run alongside its number-crunching instead of the accelerator repeatedly pausing to ask the CPU what to do next.
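As a rough illustration in ordinary Python and NumPy (not Grinder's machinery), consider an iterative computation whose stopping condition depends on the data: checking that condition on the CPU every step forces a round trip to the accelerator each time, while keeping more of the decision-making with the computation cuts those stalls.

```python
import numpy as np

A = np.random.rand(256, 256)
A = A @ A.T  # symmetric positive semi-definite, so power iteration converges

def power_iteration_chatty(A, tol=1e-6, max_iter=1000):
    """Check convergence every step: on a real accelerator each check pulls a
    scalar back to the CPU and stalls the device (the pattern Grinder targets)."""
    v = np.ones(A.shape[0])
    prev = 0.0
    for _ in range(max_iter):
        v = A @ v
        v /= np.linalg.norm(v)
        eig = v @ A @ v               # host-side control flow reads this value
        if abs(eig - prev) < tol:     # branch decided on the CPU every iteration
            break
        prev = eig
    return eig

def power_iteration_batched(A, tol=1e-6, max_iter=1000, chunk=25):
    """Run a fixed chunk of iterations between checks: far fewer device-to-host
    round trips, a hand-rolled version of the kind of control-flow/data-flow
    co-optimization Grinder applies automatically."""
    v = np.ones(A.shape[0])
    prev = 0.0
    for _ in range(max_iter // chunk):
        for _ in range(chunk):        # this inner work stays on the accelerator
            v = A @ v
            v /= np.linalg.norm(v)
        eig = v @ A @ v
        if abs(eig - prev) < tol:
            break
        prev = eig
    return eig

print(power_iteration_chatty(A), power_iteration_batched(A))
```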
As one of the leading technology giants, Microsoft has been at the forefront of AI advancement. The company has partnered closely with AI research firm OpenAI on large language models like GPT-3.5 and GPT-4, which power ChatGPT and Bing Chat. More recently, Microsoft partnered with Meta to integrate LLaMA-2 into its cloud computing platform and introduced a technique called the Algorithm of Thoughts to enhance reasoning in models like ChatGPT.
Testing found the compilers significantly outperformed existing solutions on benchmarks. Rammer exceeded other compilers by up to 20x on GPUs. Roller matched or exceeded state-of-the-art performance while lowering compilation time by orders of magnitude. Welder surpassed frameworks like PyTorch by up to 21x on GPUs. Grinder accelerated models with control flow by up to 8x.
The heavy metal quartet demonstrates Microsoft’s continued leadership in designing breakthrough AI systems (and in coming up with fun names for its products). While big partnerships in the AI space like the one with OpenAI grab headlines, the company also actively develops the vital software infrastructure that powers AI behind the scenes.
With sizable performance gains over existing solutions, Rammer, Roller, Welder and Grinder could provide key competitive advantages as more complex AI workloads emerge.