The ONNC Compiler is a suite of C++ libraries and tools that accelerates the development of compilers for AI-on-Chip deployments. Designed for the intricate requirements of heterogeneous multicore system-on-chips (SoCs), it translates neural network models into machine instructions for diverse targets. It accepts models from prominent deep learning frameworks such as PyTorch and TensorFlow, ensuring broad applicability across AI system architectures.
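To make the idea of "translating models into target-specific instructions" concrete, here is a minimal lowering sketch. It is purely illustrative and uses our own names (`Op`, `lower`, the `NPU_*`/`CPU_*` mnemonics), not ONNC's real C++ API: each operator in a tiny graph is mapped to an instruction mnemonic chosen by the selected backend.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical illustration (not ONNC's actual classes): a graph is a
// sequence of named operators, and lowering picks a target-specific
// instruction for each one.
struct Op { std::string kind; };  // e.g. "Conv", "Relu"

std::vector<std::string> lower(const std::vector<Op>& graph,
                               const std::string& target) {
    std::vector<std::string> insts;
    for (const Op& op : graph) {
        if (op.kind == "Conv")
            insts.push_back(target == "npu" ? "NPU_CONV2D" : "CPU_GEMM");
        else if (op.kind == "Relu")
            insts.push_back(target == "npu" ? "NPU_RELU" : "CPU_MAX0");
        else
            insts.push_back("UNSUPPORTED");  // real backends report errors here
    }
    return insts;
}
```

The same graph lowered with `target == "npu"` versus `target == "cpu"` yields two different instruction streams, which is the essence of supporting multiple backends from one front end.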
One of the ONNC Compiler's standout features is its support for multiple backend modes, catering to different SoC configurations, from PCIe accelerators to smartphone processors. The compiler's design makes efficient use of multiple-view address maps, enabling sound memory allocation and bus utilization across fragmented memory spaces. This approach reduces peak RAM demand, yielding performance gains and resource savings in complex AI systems.
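A key reason memory planning can shrink RAM demand is liveness: two tensors whose live ranges never overlap can occupy the same address range. The following sketch, with assumed names (`Tensor`, `planPeakMemory`) rather than ONNC's real planner, shows a greedy allocator that exploits this.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of liveness-based memory planning (our own
// simplification, not ONNC's implementation). Each tensor has a size
// and a live range expressed in operator indices.
struct Tensor {
    std::size_t size;       // bytes
    int firstUse, lastUse;  // inclusive live range
};

// Place each tensor at the lowest offset that does not collide with
// any already-placed tensor whose live range overlaps; return the
// resulting peak memory demand.
std::size_t planPeakMemory(const std::vector<Tensor>& tensors) {
    struct Placed { std::size_t off, size; int first, last; };
    std::vector<Placed> placed;
    std::size_t peak = 0;
    for (const Tensor& t : tensors) {
        std::size_t off = 0;
        bool moved = true;
        while (moved) {  // bump past conflicts until a free slot is found
            moved = false;
            for (const Placed& p : placed) {
                bool liveOverlap = t.firstUse <= p.last && p.first <= t.lastUse;
                bool addrOverlap = off < p.off + p.size && p.off < off + t.size;
                if (liveOverlap && addrOverlap) {
                    off = p.off + p.size;
                    moved = true;
                }
            }
        }
        placed.push_back({off, t.size, t.firstUse, t.lastUse});
        peak = std::max(peak, off + t.size);
    }
    return peak;
}
```

With two 100-byte tensors whose live ranges are disjoint, the planner reuses offset 0 for both and peak demand stays at 100 bytes; if their live ranges overlap, it rises to 200.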
Furthermore, the ONNC Compiler co-optimizes hardware and software interactions, reducing bottlenecks in data transfers between storage and processing units. Through strategies such as software pipelining and careful DMA allocation, it raises system throughput and efficiency, making it a valuable asset in the AI development toolkit.
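The throughput benefit of software pipelining can be seen with a simple cost model. This is our own back-of-the-envelope sketch, not ONNC's scheduler: a serial schedule runs each tile's DMA load and compute back-to-back, while a double-buffered pipeline overlaps the DMA for tile i+1 with the compute of tile i.

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical cost model (our simplification, not ONNC's scheduler).
// Serial: every tile pays its full load time plus its compute time.
int serialCycles(int tiles, int dma, int compute) {
    return tiles * (dma + compute);
}

// Pipelined (double-buffered): the first DMA load cannot overlap
// anything; in steady state each tile costs only the longer of the
// two overlapped phases, and the final compute drains the pipeline.
int pipelinedCycles(int tiles, int dma, int compute) {
    if (tiles == 0) return 0;
    return dma + (tiles - 1) * std::max(dma, compute) + compute;
}
```

For 4 tiles with a 10-cycle DMA and a 30-cycle compute, the serial schedule takes 160 cycles while the pipelined one takes 130, because three of the four DMA transfers hide entirely behind compute.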