MLGO Framework brings machine learning into compiler optimizations
Google’s new machine learning-guided optimization (MLGO) framework is an industrial-grade, general framework for systematically integrating machine learning (ML) techniques into a compiler, and in particular into LLVM, a ubiquitous open-source industrial compiler framework for building high-performance, mission-critical software.
The Google AI Blog recently explored the framework.
Compiling faster, smaller code can significantly reduce the operational cost of large data center applications. Compiled code size is most important for mobile and embedded systems or software deployed on secure boot partitions, where the compiled binary must fit within tight code size budgets.
In a standard compiler, optimization decisions are made by heuristics, which become increasingly difficult to improve over time. Heuristics are algorithms that empirically produce reasonably optimal results for difficult problems, within pragmatic constraints (e.g. “reasonably fast”). In a compiler, heuristics are widely used in optimization passes, even those that take advantage of profile information, such as inlining and register allocation. These passes have a significant impact on the performance of a wide variety of programs. The underlying problems are often NP-hard, and finding optimal solutions may require an exponential amount of time or memory.
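To make the idea of a hand-tuned heuristic concrete, here is a minimal hypothetical sketch of a size-threshold inlining heuristic. The function name, cost model, and tuning constants are illustrative assumptions, not LLVM's actual inliner logic:

```python
def should_inline(callee_size: int, call_count: int, size_budget: int) -> bool:
    """Hypothetical heuristic: inline when the estimated size cost of copying
    the callee into the caller stays within a fixed budget."""
    # Hand-tuned cost model (illustrative constants): frequently executed
    # call sites get a small bonus that makes inlining more likely.
    cost = callee_size - min(call_count, 10)
    return cost <= size_budget
```

Constants like the call-count cap and the budget are exactly the kind of empirically tuned values that become hard to improve as programs and hardware evolve.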
Recent research has shown that ML can help with these tasks and unlock more code optimization than complicated heuristics can. In real code, during the inlining phase, the compiler walks through a huge call graph, because there are thousands of functions calling each other. This operation is performed on all caller-callee pairs, and the compiler decides whether or not to inline each pair. This is a sequential decision process: previous inlining decisions modify the call graph, affecting subsequent decisions and the final outcome.
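The sequential nature of this process can be sketched as follows. This is a simplified model assuming a call graph represented as a dictionary of caller-to-callee edges, not LLVM's actual inliner; the point is that each inline decision rewrites the graph seen by later decisions:

```python
from collections import deque

def inline_pass(call_graph, policy):
    """Walk caller-callee pairs; each inline decision mutates the graph,
    so later decisions see the result of earlier ones (a sequential process).
    call_graph: dict mapping caller -> list of callees (simplified model)."""
    work = deque((caller, callee)
                 for caller, callees in call_graph.items()
                 for callee in list(callees))
    decisions = []
    while work:
        caller, callee = work.popleft()
        if callee not in call_graph.get(caller, []):
            continue  # edge may have disappeared after an earlier inline
        if policy(caller, callee, call_graph):
            # Inlining copies the callee's own calls into the caller,
            # creating new caller-callee pairs to consider later.
            call_graph[caller].remove(callee)
            for grandchild in call_graph.get(callee, []):
                call_graph[caller].append(grandchild)
                work.append((caller, grandchild))
            decisions.append((caller, callee))
    return decisions
```

For an acyclic graph `{'a': ['b'], 'b': ['c'], 'c': []}` with a policy that always inlines, the pass first decides on ('a', 'b'), which creates the new pair ('a', 'c') that only exists because of the earlier decision.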
Reinforcement learning (RL) is a family of ML techniques that can be applied to find increasingly optimal solutions through an automated iterative exploration and training process. MLGO uses RL to train neural networks to make decisions that can replace heuristics in LLVM. The MLGO framework currently supports two kinds of optimizations: inline-for-size and regalloc-for-performance.
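To illustrate the RL training idea, here is a minimal REINFORCE-style sketch: a logistic policy over call-site features is nudged toward decisions that earned a high episode reward (e.g. reduced code size). This is a toy assumption-laden sketch, not MLGO's actual trainer, which uses full neural network policies:

```python
import math

def policy_prob(weights, features):
    """Probability of choosing 'inline' under a logistic (Bernoulli) policy."""
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def update(weights, trajectory, reward, lr=0.1):
    """Policy-gradient update: scale each log-probability gradient by the
    reward observed at the end of a compilation episode (hypothetical setup)."""
    for features, action in trajectory:
        p = policy_prob(weights, features)
        grad = action - p  # d log pi / dz for a Bernoulli policy
        for i, f in enumerate(features):
            weights[i] += lr * reward * grad * f
    return weights
```

Repeating this loop over many compilation episodes gradually shifts the policy toward decisions that, on average, produce smaller or faster binaries, with no hand-written cost model involved.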
The MLGO framework is trained on Google’s internal codebase and tested on code from Fuchsia, a general-purpose open-source operating system designed to power a diverse ecosystem of hardware and software, where binary size is critical. For inlining-for-size optimization, MLGO achieves a 3% to 7% size reduction. With the register allocation (regalloc-for-performance) policy, MLGO achieves up to a 1.5% improvement in queries per second (QPS) on a set of large-scale internal data center applications.
The framework is still in the research phase. Google says its future goals are to increase the number of supported optimizations and to apply better ML algorithms.