This IISc-Incubated Startup Offers Compiler Building Blocks For AI
PolyMage Labs is a software startup building advanced compiler infrastructure to accelerate computations in the domains of artificial intelligence and machine learning.
Compilers are complex software systems that translate programming languages into instructions that hardware can execute. PolyMage Labs offers compiler building blocks that are highly reusable for the purposes of optimization, analysis, and mapping to parallel hardware, including custom accelerators.
PolyMage Labs is incubated as a deep science startup at the Society for Innovation and Development (SID), Indian Institute of Science (IISc). We caught up with Uday Bondhugula, Founder and CTO, to understand more about compiler building blocks and their underlying technologies.
AIM: What are the compiler building blocks offered by PolyMage Labs?
Uday Bondhugula: PolyMage Labs offers compiler building blocks that allow the rapid creation of new compilers. These blocks also act as code generators for several domains served by dense tensor computations, including deep learning, image processing pipelines, and stencil computations used in science and engineering.
These building blocks are in the form of Multi-level Intermediate Representation (MLIR) operations (explained further below) and their transformation utilities. Highly optimized code for these operations is generated using several advanced techniques. The same building blocks are reusable across a variety of programming models and target hardware.
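For concreteness, the sketch below shows what such a building block can look like: a single reusable MLIR operation (here, `linalg.matmul` from upstream MLIR's linalg dialect) expressing a dense tensor computation that transformation utilities can then tile, fuse, and lower. This is an illustrative example, not PolyMage Labs' code, and the exact syntax varies across MLIR releases.

```mlir
// A dense tensor computation expressed with a reusable MLIR operation.
// The op itself is target-neutral: passes retarget it to CPUs, GPUs,
// or accelerators by swapping the lowering pipeline, not the op.
func.func @dense_block(%A: tensor<128x256xf32>, %B: tensor<256x64xf32>,
                       %C: tensor<128x64xf32>) -> tensor<128x64xf32> {
  %0 = linalg.matmul
         ins(%A, %B : tensor<128x256xf32>, tensor<256x64xf32>)
         outs(%C : tensor<128x64xf32>) -> tensor<128x64xf32>
  return %0 : tensor<128x64xf32>
}
```

Because the op carries its semantics (a matrix multiplication over tensors), the same building block can be reused by any compiler pipeline that understands the dialect.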
AIM: What led you and your team to explore this field?
Uday Bondhugula: The advent of specialized AI chips and programming models for the ML/AI domain has brought new challenges in designing compiler infrastructure and in translating programming models to hardware to realise high performance.
Automatic code generation, transformation, and parallelization of these computations are highly desirable for making such hardware usable and for exploiting it to deliver the intended high performance. In addition, the necessary transformation techniques are often driven by deep science available with advanced research teams specialised in compilers.
AIM: What is MLIR? What is its role in your product?
Uday Bondhugula: MLIR stands for Multi-level Intermediate Representation, and the MLIR project is an open-source compiler infrastructure project. MLIR was announced and open-sourced by Google in April 2019 but is now a community-driven compiler infrastructure project that is part of the LLVM project.
The project was initiated to deliver the next-generation optimizing compiler infrastructure to serve the computational demands of AI/ML programming models. At Google itself, one of the project’s goals was to address the compiler challenges associated with the TensorFlow ecosystem.
MLIR is a new intermediate representation designed to provide a unified, modular, and extensible infrastructure that progressively lowers dataflow compute graphs, through loop nests, to high-performance target-specific code. MLIR shares similarities with traditional control-flow-graph-based three-address static single assignment (SSA) representations (such as LLVM IR or the Swift Intermediate Language), and it introduces notions from the polyhedral compiler framework as first-class concepts to allow powerful analysis and transformation in the presence of loop nests and multi-dimensional arrays.
MLIR supports multiple front- and back-ends and uses LLVM IR as one of its primary code generation targets. Thus, it is a very useful infrastructure for developing new compilers, especially to solve the compilation challenges involved in targeting emerging AI and machine learning programming languages/models to the plethora of specialized accelerator chips.
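As a hedged illustration of the "loop nest" level that MLIR lowers through, the sketch below shows the same kind of matmul computation after it has been lowered from a graph-level op to explicit loops in MLIR's affine dialect (illustrative only; the printed form differs across MLIR versions):

```mlir
// The matmul from the earlier sketch, lowered to explicit affine
// loops over memrefs (multi-dimensional buffers). At this level,
// loop-nest analyses and transformations become straightforward.
func.func @matmul(%A: memref<128x256xf32>, %B: memref<256x64xf32>,
                  %C: memref<128x64xf32>) {
  affine.for %i = 0 to 128 {
    affine.for %j = 0 to 64 {
      affine.for %k = 0 to 256 {
        %a = affine.load %A[%i, %k] : memref<128x256xf32>
        %b = affine.load %B[%k, %j] : memref<256x64xf32>
        %c = affine.load %C[%i, %j] : memref<128x64xf32>
        %p = arith.mulf %a, %b : f32
        %s = arith.addf %c, %p : f32
        affine.store %s, %C[%i, %j] : memref<128x64xf32>
      }
    }
  }
  return
}
```

From here, further lowering can proceed to LLVM IR or to a target-specific dialect, which is what makes MLIR useful across so many front- and back-ends.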
I was also a founding team member of the MLIR project during my stint as a visiting researcher at Google in 2018.
All of PolyMage Labs’ technology is based on (i.e., built on top of) the MLIR infrastructure. This also allows us to benefit from and contribute back to the MLIR open-source community. We believe that certain parts of the infrastructure can only thrive in the open by being readily available for reuse by all stakeholders. Expertise in building new things with the infrastructure, as well as other advanced and deep technology that leverages it, can be exploited commercially.
AIM: You referred to the polyhedral compiler framework as the unique feature of PolyMage compiler building blocks. What is it, and why is it significant?
Uday Bondhugula: The polyhedral framework is a compiler representation that abstracts loops in code as dimensions of a polyhedron. The representation is convenient for the transformation of multi-dimensional loop nests and arrays.
Various sophisticated transformation and parallelization techniques have been developed using the polyhedral framework. For example, several advanced iteration-space tiling techniques are available with polyhedral technology that can automatically enhance the reuse of data in caches (faster memory) or lead to efficient parallelization. Realising such techniques by hand is often difficult and, in some cases, infeasible; doing so also leads to a repeated duplication of engineering effort each time a new generation of hardware is rolled out.
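To make the tiling idea concrete, here is a hedged sketch of what the affine-dialect matmul shown earlier might look like after polyhedral-style tiling by 32 in each dimension. Upstream MLIR ships a comparable affine loop-tiling pass; the exact output below is illustrative, and the map aliases `#lb` and `#ub` are hypothetical names.

```mlir
// Bounds of the intra-tile loops are affine maps of the
// inter-tile loop variables (hypothetical aliases #lb and #ub).
#lb = affine_map<(d0) -> (d0)>
#ub = affine_map<(d0) -> (d0 + 32)>

// After tiling, each 32x32 block of the matrices is reused from
// cache instead of being re-fetched from main memory.
func.func @matmul_tiled(%A: memref<128x256xf32>, %B: memref<256x64xf32>,
                        %C: memref<128x64xf32>) {
  affine.for %ii = 0 to 128 step 32 {
    affine.for %jj = 0 to 64 step 32 {
      affine.for %kk = 0 to 256 step 32 {
        affine.for %i = #lb(%ii) to #ub(%ii) {
          affine.for %j = #lb(%jj) to #ub(%jj) {
            affine.for %k = #lb(%kk) to #ub(%kk) {
              %a = affine.load %A[%i, %k] : memref<128x256xf32>
              %b = affine.load %B[%k, %j] : memref<256x64xf32>
              %c = affine.load %C[%i, %j] : memref<128x64xf32>
              %p = arith.mulf %a, %b : f32
              %s = arith.addf %c, %p : f32
              affine.store %s, %C[%i, %j] : memref<128x64xf32>
            }
          }
        }
      }
    }
  }
  return
}
```

Writing and validating such transformations by hand for every loop nest and every hardware generation is exactly the duplicated effort the polyhedral machinery automates.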
AIM: Tell us about the reusability aspect of your model.
Uday Bondhugula: We believe that a business surrounding compiler infrastructure will only scale if the infrastructure is modular and reusable, i.e., building more compilers shouldn’t proportionally or linearly lead to a duplication of effort and a reinvention of the wheel.
This is also why PolyMage Labs is not building a specific compiler but only parts that can be used to quickly put together compiler stacks or automatic code generation pipelines from the same building blocks. Using an infrastructure like MLIR makes a good part of our approach both language-neutral and target-hardware-neutral.
AIM: How can the model be repurposed for various applications?
Uday Bondhugula: A large number of applications in the domains we target employ dense tensors, which are regularly shaped multi-dimensional data structures. While the computation patterns differ, they all exhibit data reuse, a large amount of parallelism, and high-dimensional data and computation spaces.
As such, they require the same abstractions and techniques for representation, analysis, and transformation. For example, the same polyhedral representation has been used to automatically optimize and parallelize computations from image processing pipelines, deep learning, partial differential equation solvers, and general-purpose computations relying on dense matrices and arrays.
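As an illustrative example of this cross-domain reuse, the hedged sketch below writes a simple 1-D Jacobi-style stencil, the kind of computation that appears in PDE solvers, in the same affine representation used for the matmul sketches above (illustrative only, not PolyMage Labs' code):

```mlir
// A 1-D three-point stencil: out[i] = (in[i-1] + in[i] + in[i+1]) / 3.
// The same affine abstractions that optimize matmul apply unchanged.
func.func @jacobi_1d(%in: memref<1024xf32>, %out: memref<1024xf32>) {
  %third = arith.constant 0.3333333 : f32
  affine.for %i = 1 to 1023 {
    %l = affine.load %in[%i - 1] : memref<1024xf32>
    %c = affine.load %in[%i] : memref<1024xf32>
    %r = affine.load %in[%i + 1] : memref<1024xf32>
    %s0 = arith.addf %l, %c : f32
    %s1 = arith.addf %s0, %r : f32
    %avg = arith.mulf %s1, %third : f32
    affine.store %avg, %out[%i] : memref<1024xf32>
  }
  return
}
```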
AIM: PolyMage contributes to several open-source compiler projects. Could you list the prominent examples?
Uday Bondhugula: The two prominent compiler projects we contribute to are MLIR and TensorFlow, which are increasingly being adopted by academia and industry. In addition, we also contribute to the Pluto project.
AIM: What enhancements and developments can we expect from PolyMage Labs in the future?
Uday Bondhugula: We would like to play a key role in accelerating the most crucial computations driving the AI revolution, increasing the productivity of the scientists, researchers, and practitioners in this field who are constantly improving or developing new models in areas such as vision and translation.
In general, the impact compiler infrastructure has on end-users is indirect and broad. High performance enables higher productivity and faster discoveries for societal impact.
AIM: What are the immediate areas of focus in the development of AI/ML compilers?
Uday Bondhugula: Hardware is becoming increasingly specialised to meet the AI/ML domain’s performance demands. For such hardware to be programmable, there have to be easy ways to build compilers for it: complex software stacks that can translate high-level languages or programming models to that hardware.
It is important for vendors to converge on and adopt a common infrastructure. The number of compilers can be `N`, but it only helps if they share a single Intermediate Representation (IR). In the past five years, there has been a proliferation of IRs due to the lack of the right compiler infrastructure for today’s high-interest domains. We believe that things will soon converge here with the arrival of MLIR and other technology built on it.
Source: https://analyticsindiamag.com/this-iisc-incubated-startup-offers-compiler-building-blocks-for-ai/