The AI Processor category within our semiconductor IP catalog is dedicated to state-of-the-art technologies that empower artificial intelligence applications across various industries. AI processors are specialized computing engines designed to accelerate machine learning tasks and perform complex algorithms efficiently. This category includes a diverse collection of semiconductor IPs that are built to enhance both performance and power efficiency in AI-driven devices.
AI processors play a critical role in the emerging world of AI and machine learning, where fast processing of vast datasets is crucial. These processors can be found in a range of applications from consumer electronics like smartphones and smart home devices to advanced robotics and autonomous vehicles. By facilitating rapid computations necessary for AI tasks such as neural network training and inference, these IP cores enable smarter, more responsive, and capable systems.
In this category, developers and designers will find semiconductor IPs that provide various levels of processing power and architectural designs to suit different AI applications, including neural processing units (NPUs), tensor processing units (TPUs), and other AI accelerators. The availability of such highly specialized IPs ensures that developers can integrate AI functionalities into their products swiftly and efficiently, reducing development time and costs.
As AI technology continues to evolve, the demand for robust and scalable AI processors increases. Our semiconductor IP offerings in this category are designed to meet the challenges of rapidly advancing AI technologies, ensuring that products are future-ready and equipped to handle the complexities of tomorrow’s intelligence-driven tasks. Explore this category to find cutting-edge solutions that drive innovation in artificial intelligence systems today.
The KL730 is a third-generation AI chip that integrates an advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This cutting-edge technology enhances computational efficiency across a range of applications, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also boasts enhanced video processing capabilities, supporting 4K 60FPS outputs. Built on more than a decade of ISP expertise, the KL730 stands out for its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.
The Akida 2nd Generation represents a leap forward in the realm of AI processing, enhancing upon its predecessor with greater flexibility and improved efficiency. This advanced neural processor core is tailored for modern applications demanding real-time response and ultra-low power consumption, making it ideal for compact and battery-operated devices. Akida 2nd Generation supports various programming configurations, including 8-, 4-, and 1-bit weights and activations, thus providing developers with the versatility to optimize performance versus power consumption to meet specific application needs. Its architecture is fully digital and silicon-proven, ensuring reliable deployment across diverse hardware setups. With features such as programmable activation functions and support for sophisticated neural network models, Akida 2nd Generation enables a broad spectrum of AI tasks. From object detection in cameras to sophisticated audio sensing, this iteration of the Akida processor is built to handle the most demanding edge applications while sustaining BrainChip's hallmark efficiency in processing power per watt.
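To make the bit-width trade-off concrete, here is a small Python sketch computing the weight-storage footprint of one convolution layer at the 8-, 4-, and 1-bit widths mentioned above; the layer dimensions are invented for illustration and are not a BrainChip reference design.

```python
def weight_kib(num_weights: int, bits: int) -> float:
    """Storage for num_weights parameters at the given bit width, in KiB."""
    return num_weights * bits / 8 / 1024

# Hypothetical example layer: 3x3 kernel, 64 input and 128 output channels.
n = 3 * 3 * 64 * 128  # 73,728 weights

for bits in (8, 4, 1):
    print(f"{bits}-bit weights: {weight_kib(n, bits):.0f} KiB")
# 8-bit: 72 KiB, 4-bit: 36 KiB, 1-bit: 9 KiB -- dropping the bit width
# trades accuracy for the memory and power savings described above.
```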
Addressing the need for high-performance AI processing, the Metis AIPU PCIe AI Accelerator Card from Axelera AI offers an outstanding blend of speed, efficiency, and power. Designed to boost AI workloads significantly, this PCIe card leverages the prowess of the Metis AI Processing Unit (AIPU) to deliver unparalleled AI inference capabilities for enterprise and industrial applications. The card excels in handling complex AI models and large-scale data processing tasks, significantly enhancing the efficiency of computational tasks within various edge settings. The Metis AIPU embedded within the PCIe card delivers a high TOPS (tera operations per second) rating, allowing it to execute multiple AI tasks concurrently with remarkable speed and precision. This makes it exceptionally suitable for applications such as video analytics, autonomous driving simulations, and real-time data processing in industrial environments. The card's robust architecture reduces the load on general-purpose processors by offloading AI tasks, resulting in optimized system performance and lower energy consumption. With easy integration capabilities supported by the state-of-the-art Voyager SDK, the Metis AIPU PCIe AI Accelerator Card ensures seamless deployment of AI models across various platforms. The SDK facilitates efficient model optimization and tuning, supporting a wide range of neural network models and enhancing overall system capabilities. Enterprises leveraging this card can see significant improvements in their AI processing efficiency, leading to faster, smarter, and more efficient operations across different sectors.
KPIT Technologies leads in the development of Advanced Driver Assistance Systems (ADAS) and autonomous driving solutions, building systems that enhance vehicle safety, comfort, and performance. These innovations extend across various aspects of vehicle automation, leveraging AI-driven data analytics and sensor fusion technologies to enable intelligent driving functions. KPIT's ADAS offerings are designed to assist drivers in complex traffic situations, reduce collision risks, and enhance the overall driving experience through adaptive, high-precision control systems. Central to KPIT's efforts in this space is the integration of state-of-the-art technologies, including machine learning algorithms and real-time data processing capabilities. These complement their extensive industry knowledge to deliver robust, scalable, and interoperable solutions that adhere to the latest automotive safety standards. Emphasizing modular design, KPIT ensures that automakers can easily integrate these technologies into existing and new vehicle platforms. KPIT's expertise extends to collaborating with automakers on developing sophisticated autonomous systems that promise to redefine the future of personal and commercial mobility. By partnering with leading automotive companies, KPIT continues to pioneer advancements in vehicular autonomy, ensuring greater safety and efficiency on roads worldwide.
The Akida IP is an advanced processor core designed to mimic the efficient processing characteristics of the human brain. Inspired by neuromorphic engineering principles, it delivers real-time AI performance while maintaining a low power profile. The architecture of the Akida IP is sophisticated, allowing seamless integration into existing systems without the need for continuous external computation. Equipped with capabilities for processing vision, audio, and sensor data, the Akida IP stands out by being able to handle complex AI tasks directly on the device. This is done by utilizing a flexible mesh of nodes that efficiently distribute cognitive computing tasks, enabling a scalable approach to machine learning applications. Each node supports hundreds of MAC operations and can be configured to adapt to various computational requirements, making it a versatile choice for AI-centric endeavors. Moreover, the Akida IP is particularly beneficial for edge applications where low latency, high efficiency, and security are paramount. With capabilities for event-based processing and on-chip learning, it enhances response times and reduces data transfer needs, thereby bolstering device autonomy. This solidifies its position as a leading solution for embedding AI into devices across multiple industries.
The Yitian 710 Processor is a landmark server chip released by T-Head Semiconductor, representing a breakthrough in high-performance computing. The chip is built on the advanced Armv9 architecture, accommodating a range of demanding applications. Engineered by T-Head's dedicated research team, Yitian 710 integrates high efficiency and bandwidth properties into a unique 2.5D package, housing two dies and a staggering 60 billion transistors. The Yitian 710 encompasses 128 Armv9 high-performance cores, each equipped with 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 cache, further amplified by a collective on-chip system cache of 128MB. These configurations enable optimal data processing and retrieval speeds, making it suitable for data-intensive tasks. Furthermore, the memory subsystem stands out with its 8-channel DDR5 support, reaching peak bandwidths of 281GB/s. In terms of connectivity, the Yitian 710's I/O system includes 96 PCIe 5.0 lanes with a bidirectional theoretical total bandwidth of 768GB/s, streamlining the high-speed data transfer critical for server operations. Its architecture is not only poised to meet the current demands of data centers and cloud services but also adaptable for future advancements in AI inference and multimedia processing tasks.
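The two headline bandwidth figures can be sanity-checked with back-of-envelope arithmetic. The per-channel DDR5 transfer rate below is inferred from the quoted total (it corresponds to DDR5-4400), and the PCIe figure uses the raw 32 GT/s line rate; neither number comes from a T-Head datasheet.

```python
# DDR5: 8 channels, each 8 bytes wide, at an assumed 4400 MT/s.
ddr_gb_s = 8 * 4400 * 8 / 1000
print(f"DDR5 peak: {ddr_gb_s:.1f} GB/s")      # 281.6 -> the quoted ~281 GB/s

# PCIe 5.0: 32 GT/s per lane = 4 GB/s raw per lane per direction.
pcie_per_dir = 96 * 4
print(f"PCIe 5.0: {2 * pcie_per_dir} GB/s bidirectional")  # 768 GB/s
```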
The AI Camera Module from Altek is a versatile, high-performance component designed to meet the increasing demand for smart vision solutions. This module integrates sophisticated imaging lens design and combines both hardware and software capabilities to create a seamless operational experience. Its design is reinforced by Altek's deep collaboration with leading global brands, ensuring a top-tier product capable of handling diverse market requirements. Equipped to cater to AI and IoT interplays, the module delivers outstanding capabilities that align with the expectations for high-resolution imaging, making it suitable for edge computing applications. The AI Camera Module addresses diverse end-user needs by offering customization of device functionality to support advanced processing requirements such as 2K and 4K video quality. This module showcases Altek's prowess in providing comprehensive, all-in-one camera solutions which leverage sophisticated imaging and rapid processing to handle challenging conditions and demands. The AI Camera's technical blueprint supports complex AI algorithms, enhancing not just image quality but also the device's interactive capacity through facial recognition and image tracking technology.
The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available as both IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it includes full compliance with RISC-V RVA23 specifications, showcasing features like high instructions per clock (IPC) and power-efficient architectures. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.
Chimera GPNPU is engineered to revolutionize AI/ML computational capabilities on single-core architectures. It efficiently handles matrix, vector, and scalar code, unifying AI inference and traditional C++ processing under one roof. By alleviating the need for partitioning AI workloads between different processors, it streamlines software development and drastically speeds up AI model adaptation and integration. Ideal for SoC designs, the Chimera GPNPU champions an architecture that is both versatile and powerful, handling complex parallel workloads with a single unified binary. This configuration not only boosts software developer productivity but also ensures an enduring flexibility capable of accommodating novel AI model architectures on the horizon. The architectural fabric of the Chimera GPNPU seamlessly blends the high matrix performance of NPUs with C++ programmability found in traditional processors. This core is delivered in a synthesizable RTL form, with scalability options ranging from a single-core to multi-cluster designs to meet various performance benchmarks. As a testament to its adaptability, the Chimera GPNPU can run any AI/ML graph from numerous high-demand application areas such as automotive, mobile, and home digital appliances. Developers seeking optimization in inference performance will find the Chimera GPNPU a pivotal tool in maintaining cutting-edge product offerings. With its focus on simplifying hardware design, optimizing power consumption, and enhancing programmer ease, this processor ensures a sustainable and efficient path for future AI/ML developments.
aiWare is a high-performance NPU designed to meet the rigorous demands of automotive AI inference, providing a scalable solution for ADAS and AD applications. This hardware IP core is engineered to handle a wide array of AI workloads, including the most advanced neural network structures like CNNs, LSTMs, and RNNs. By integrating cutting-edge efficiency and scalability, aiWare delivers industry-leading neural processing power tailored to automotive-grade specifications.

The NPU's architecture emphasizes hardware determinism and offers ISO 26262 ASIL-B certification, ensuring that aiWare meets stringent automotive safety standards. Its efficient design also supports up to 256 effective TOPS per core, and can scale to handle thousands of TOPS through multicore integration, minimizing power consumption effectively. aiWare's system-level optimizations reduce reliance on external memory by leveraging local memory for data management, boosting performance efficiency across varied input data sizes and complexities.

aiWare's development toolkit, aiWare Studio, is distinguished by its innovative ability to optimize neural network execution without the need for manual intervention by software engineers. This empowers AI engineers to focus on refining NNs for production, significantly accelerating iteration cycles. Coupled with aiMotive's aiDrive software suite, aiWare provides an integrated environment for creating highly efficient automotive AI applications, ensuring seamless integration and rapid deployment across multiple vehicle platforms.
SAKURA-II AI Accelerator represents EdgeCortix's latest advancement in edge AI processing, offering unparalleled energy efficiency and extensive capabilities for generative AI tasks. This accelerator is designed to manage demanding AI models, including Llama 2, Stable Diffusion, DETR, and ViT, within a slim power envelope of about 8W. With capabilities extending to multi-billion parameter models, SAKURA-II meets a wide range of edge applications in vision, language, and audio. The SAKURA-II's architecture maximizes AI compute efficiency, delivering more than twice the utilization of competitive solutions. It boasts remarkable DRAM bandwidth, essential for large language and vision models, while maintaining low power consumption. The hardware supports real-time Batch=1 processing, demonstrating its edge in performance even in constrained environments, making it a choice solution for diverse industrial AI applications. With 60 TOPS (INT8) and 30 TFLOPS (BF16) in performance metrics, this accelerator is built to exceed expectations in demanding conditions. It features robust memory configurations supporting up to 32GB of DRAM, ideal for processing intricate AI workloads. By leveraging sparse computing techniques, SAKURA-II optimizes its memory and bandwidth usage effectively, ensuring reliable performance across all deployed applications.
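From the figures above, the implied efficiency is straightforward to derive (illustrative arithmetic only; the roughly 8 W envelope is the text's own figure):

```python
tops_int8, tflops_bf16, power_w = 60, 30, 8
print(f"{tops_int8 / power_w:.1f} TOPS/W (INT8)")      # 7.5 TOPS/W
print(f"{tflops_bf16 / power_w:.2f} TFLOPS/W (BF16)")  # 3.75 TFLOPS/W
```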
xcore.ai is XMOS Semiconductor's innovative programmable chip designed for advanced AI, DSP, and I/O applications. It enables developers to create highly efficient systems without the complexity typical of multi-chip solutions, offering capabilities that integrate AI inference, DSP tasks, and I/O control seamlessly. The chip architecture boasts parallel processing and ultra-low latency, making it ideal for demanding tasks in robotics, automotive systems, and smart consumer devices. It provides the toolset to deploy complex algorithms efficiently while maintaining robust real-time performance. With xcore.ai, system designers can leverage a flexible platform that supports the rapid prototyping and development of intelligent applications. Its performance allows for seamless execution of tasks such as voice recognition and processing, industrial automation, and sensor data integration. The adaptable nature of xcore.ai makes it a versatile solution for managing various inputs and outputs simultaneously, while maintaining high levels of precision and reliability. In automotive and industrial applications, xcore.ai supports real-time control and monitoring tasks, contributing to smarter, safer systems. For consumer electronics, it enhances user experience by enabling responsive voice interfaces and high-definition audio processing. The chip's architecture reduces the need for external components, thus simplifying design and reducing overall costs, paving the way for innovative solutions where technology meets efficiency and scalability.
The Metis AIPU M.2 Accelerator Module by Axelera AI is a compact and powerful solution designed for AI inference at the edge. This module delivers remarkable performance, comparable to that of a PCIe card, all while fitting into the streamlined M.2 form factor. Ideal for demanding AI applications that require substantial computational power, the module enhances processing efficiency while minimizing power usage. With its robust infrastructure, it is geared toward integrating into applications that demand high throughput and low latency, making it a perfect fit for intelligent vision applications and real-time analytics. The AIPU, or Artificial Intelligence Processing Unit, at the core of this module provides industry-leading performance by offloading AI workloads from traditional CPU or GPU setups, allowing for dedicated AI computation that is faster and more energy-efficient. This not only boosts the capabilities of the host systems but also drastically reduces the overall energy consumption. The module supports a wide range of AI applications, from facial recognition and security systems to advanced industrial automation processes. By utilizing Axelera AI’s innovative software solutions, such as the Voyager SDK, the Metis AIPU M.2 Accelerator Module enables seamless integration and full utilization of AI models and applications. The SDK offers enhancements like compatibility with various industry tools and frameworks, thus ensuring a smooth deployment process and quick time-to-market for advanced AI systems. This product represents Axelera AI’s commitment to revolutionizing edge computing with streamlined, effective AI acceleration solutions.
The Jotunn8 AI Accelerator is engineered for lightning-fast AI inference at unprecedented scale. It is designed to meet the demands of modern data centers by providing exceptional throughput, low latency, and optimized energy efficiency. The Jotunn8 outperforms traditional setups by allowing large-scale deployment of trained models, ensuring robust performance while reducing operational costs. Its capabilities make it ideal for real-time applications such as chatbots, fraud detection, and advanced search algorithms. What sets the Jotunn8 apart is its adaptability to various AI algorithms, including reasoning and generative models, alongside agentic AI frameworks. This seamless integration achieves near-theoretical performance, allowing the chip to excel in applications that require logical rigor and creative processing. With a focus on minimizing carbon footprint, the Jotunn8 is meticulously designed to enhance both performance per watt and overall sustainability. The Jotunn8 supports large memory capacities through HBM, sustaining the very high data throughput that demanding AI workloads require. Its architecture is purpose-built for speed, efficiency, and the ability to scale with technological advances, providing a solid foundation for AI infrastructure looking to keep pace with evolving computational demands.
The Talamo SDK from Innatera serves as a comprehensive software development toolkit designed to maximize the capabilities of its Spiking Neural Processor (SNP) lineup. Tailored for developers and engineers, Talamo offers in-depth access to configure and deploy neuromorphic processing solutions effectively. The SDK supports the development of applications that utilize Spiking Neural Networks (SNNs) for diverse sensory processing tasks. Talamo provides a user-friendly interface that simplifies the integration of neural processing capabilities into a wide range of devices and systems. By leveraging the toolkit, developers can customize applications for specific use cases such as real-time audio analysis, touch-free interactions, and biometric data processing. The SDK comes with pre-built models and a model zoo, which helps in rapidly prototyping and deploying sensor-driven solutions. This SDK stands out by offering enhanced tools for developing low-latency, energy-efficient applications. By harnessing the temporal processing strength of SNNs, Talamo allows for the robust development of applications that can operate under strict power and performance constraints, enabling the creation of intelligent systems that can autonomously process data in real-time.
The RV12 RISC-V Processor is a highly configurable, single-core CPU that adheres to the RV32I and RV64I specifications. It's engineered for the embedded market, offering a robust structure based on the RISC-V instruction set. The processor's architecture allows simultaneous instruction and data memory accesses, lending itself to a broad range of applications and maintaining high operational efficiency. This flexibility makes it an ideal choice for diverse execution requirements, supporting efficient data processing through an optimized CPU framework. Known for its adaptability, the RV12 processor can support multiple configurations to suit various application demands. It is capable of providing the necessary processing power for embedded systems, boasting a reputation for stability and reliability. It is well suited to designs that must sustain performance without sacrificing configurability, meeting the rigorous needs of modern embedded computing. The processor's support of the open RISC-V architecture ensures its capability to integrate into existing systems seamlessly. It lends itself well to both industrial and academic applications, offering a resource-efficient platform that developers and researchers can easily access and utilize.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating dependence on external host processors or an internet connection. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
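The emphasis on tokens per unit of memory bandwidth reflects a basic property of LLM decoding: generating each token reads roughly the entire set of weights once, so bandwidth divided by model size bounds throughput. The sketch below illustrates that bound; the model size, quantization, and LPDDR4-class bandwidth are assumptions for illustration, not RaiderChip specifications.

```python
params = 3e9            # assumed Llama-3.2-3B-class model
bytes_per_weight = 0.5  # 4-bit quantized weights
bandwidth_gb_s = 17.0   # assumed LPDDR4X-class channel bandwidth

model_gb = params * bytes_per_weight / 1e9  # 1.5 GB of weights
ceiling = bandwidth_gb_s / model_gb         # each token re-reads the weights
print(f"model: {model_gb:.1f} GB, decode ceiling: {ceiling:.1f} tok/s")  # ~11
```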
The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across variegated hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference velocity, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
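The quoted 75% memory reduction follows directly from the bit widths: 4-bit weights occupy one quarter of the space of 16-bit ones. Q4_K/Q5_K-style block quantization stores a small per-block scale on top of the raw weights, so real model files land slightly above the ideal ratio; the arithmetic below shows the ideal case.

```python
fp16_bits, q4_bits = 16, 4
reduction = 1 - q4_bits / fp16_bits
print(f"ideal memory reduction: {reduction:.0%}")  # 75%
```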
The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, which is the first to support INT4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SoC is designed to handle both high and low light conditions optimally and is perfectly suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.
The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.
The Hanguang 800 AI Accelerator by T-Head Semiconductor is a powerful AI acceleration chip designed to enhance machine learning tasks. It excels in providing the computational power necessary for intensive AI workloads, effectively reducing processing times for large-scale data frameworks. This makes it an ideal choice for organizations aiming to infuse AI capabilities into their operations with maximum efficiency. Built with an emphasis on speed and performance, the Hanguang 800 is optimized for applications requiring vast amounts of data crunching. It supports a diverse array of AI models and workloads, ensuring flexibility and robust performance across varying use cases. This accelerates the deployment of AI applications in sectors such as autonomous driving, natural language processing, and real-time data analysis. The Hanguang 800's architecture is complemented by proprietary algorithms that enhance processing throughput, competing against traditional processors by providing significant gains in efficiency. This accelerator is indicative of T-Head's commitment to advancing AI technologies and highlights their capability to cater to specialized industry needs through innovative semiconductor developments.
The KL530 represents a significant advancement in AI chip technology with a new NPU architecture optimized for both INT4 precision and transformer networks. This SoC is engineered to provide high processing efficiency and low power consumption, making it suitable for AIoT applications and other innovative scenarios. It features an ARM Cortex M4 CPU designed for low-power operation and offers a robust computational power of up to 1 TOPS. The chip's ISP enhances image quality, while its codec ensures efficient multimedia compression. Notably, the chip's cold start time is under 500 ms with an average power draw of less than 500 mW, establishing it as a leader in energy efficiency.
The Maverick-2 Intelligent Compute Accelerator represents the pinnacle of Next Silicon's innovative approach to computational resources. This state-of-the-art accelerator leverages the Intelligent Compute Architecture for software-defined adaptability, enabling it to autonomously tailor its real-time operations across various HPC and AI workloads. By optimizing performance using insights gained through real-time telemetry, Maverick-2 ensures superior computational efficiency and reduced power consumption, making it an ideal choice for demanding computational environments.

Maverick-2 brings transformative performance enhancements to large-scale scientific research and data-heavy industries by dispensing with the need for codebase modifications or specialized software stacks. It supports a wide range of familiar development tools and frameworks, such as C/C++, FORTRAN, and Kokkos, simplifying the integration process for developers and reducing time-to-discovery significantly.

Engineered with advanced features like high bandwidth memory (HBM3E) and built on TSMC's 5nm process technology, this accelerator provides not only unmatched adaptability but also an energy-efficient, eco-friendly computing solution. Whether embedded in single-die PCIe cards or dual-die OCP Accelerator Modules, the Maverick-2 is positioned as a future-proof solution capable of evolving with technological advancements in AI and HPC.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
Micro Magic's Ultra-Low-Power 64-Bit RISC-V Core is a highly efficient design that operates with remarkably low power consumption, requiring only 10mW at 1GHz. This core exemplifies Micro Magic’s commitment to power efficiency, as it integrates advanced techniques to maintain high performance even at lower voltages. The core is engineered for applications where energy conservation is crucial, making it ideal for modern, power-sensitive devices. The architectural design of this RISC-V core utilizes innovative technology to ensure high-speed processing capabilities while minimizing power draw. This balance is achieved through precise engineering and the use of state-of-the-art design methodologies that reduce operational overhead without compromising performance. As a result, this core is particularly suited for applications in portable electronics, IoT devices, and other areas where low-power operation is a necessity. Micro Magic's experience in developing high-speed, low-power solutions is evident in this core's design, ensuring that it delivers reliable performance under various operational conditions. The Ultra-Low-Power 64-Bit RISC-V Core represents a significant advancement in processor efficiency, providing a robust solution for designers looking to enhance their products' capabilities while maintaining a low power footprint.
The WiseEye2 AI Solution by Himax is a highly efficient processor tailored for AI applications, combining an ultra-low power CMOS image sensor with the HX6538 microcontroller. Designed for battery-powered devices requiring continuous operation, it significantly lowers power consumption while boosting performance. This processor leverages the Arm Cortex M55 CPU and Ethos U55 NPU to enhance inference speed and energy efficiency substantially, allowing execution of complex models with precision. Perfectly suited for applications like user-presence detection in laptops, WiseEye2 heightens security through sophisticated facial recognition and adaptive privacy settings. It automatically wakes or locks the device based on user proximity, thereby conserving power and safeguarding sensitive information. WiseEye2’s power management and neural processing units ensure constant device readiness, augmenting AI capabilities ranging from occupancy detection to smart security. This reflects Himax's dedication to providing versatile AI solutions that anticipate and respond to user needs seamlessly. Equipped with sensor fusion and sophisticated security engines, WiseEye2 maintains privacy and operational efficiency, marking a significant step forward in the integration of AI into consumer electronics, especially where energy conservation is pivotal.
StarFive's Tianqiao-70 is engineered to deliver superior performance in a power-efficient package. This 64-bit RISC-V CPU core is designed for commercial-grade applications, where consistent and reliable performance is mandatory, yet energy consumption must be minimized. The core's architecture integrates low power design principles without compromising its ability to execute complex instructions efficiently. It is particularly suited for mobile applications, desktop clients, and intelligent gadgets requiring sustained battery life. The Tianqiao-70's design focuses on extending the operational life of devices by ensuring minimal power draw during both active and idle states. It supports an array of advanced features that cater to the latest computational demands. As an ideal solution for devices that combine portability with intensive processing demands, the Tianqiao-70 offers an optimal balance of performance and energy conservation. Its capability to adapt to various operating environments makes it a versatile option for developers looking to maximize efficiency and functionality.
The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is well suited to automotive systems, autonomous mobile robots (AMRs), UAVs, and AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.
aiSim 5 stands as a cutting-edge simulation tool specifically crafted for the automotive sector, with a strong focus on validating ADAS and autonomous driving solutions. It distinguishes itself with an AI-powered digital twin creation capability, offering a meticulously optimized sensor simulation environment that guarantees reproducibility and determinism. The adaptable architecture of aiSim allows seamless integration with existing industry toolchains, significantly minimizing the need for costly real-world testing.

One of the key features of aiSim is its capability to simulate various challenging weather conditions, enhancing testing accuracy across diverse environments. This includes scenarios like snowstorms, heavy fog, and rain, with physics-based sensor simulation that reflects changing conditions in real time. Its ISO 26262 ASIL-D certification attests to its automotive-grade quality and reliability, setting a new standard for testing high-fidelity sensor data in varied operational design domains.

The flexibility of aiSim is further highlighted through its comprehensive SDKs and APIs, which facilitate smooth integration into various systems under test. Additionally, users can leverage its extensive 3D asset library to establish detailed, realistic testing environments. AI-based rendering technologies underpin aiSim's data simulation, achieving both high efficiency and accuracy, thereby enabling rapid and effective validation of advanced driver assistance and autonomous driving systems.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
RAIV represents Siliconarts' general-purpose GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV's flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.
The EW6181 GPS and GNSS solution from EtherWhere is tailored for applications requiring high integration levels, offering licenses in RTL, gate-level netlist, or GDS formats. This highly adaptable IP can be ported across various technology nodes, provided an RF frontend is available. Designed to be one of the smallest and most power-efficient cores, it optimizes battery life significantly in devices such as tags and modules, making it ideal for challenging environments. The IP's strengths lie in its digital processing capabilities, utilizing cutting-edge DSP algorithms for precision and reliability in location tracking. With a digital footprint of approximately 0.05mm² on a 5nm node, the EW6181 boasts a remarkably compact size, aiding in minimal component use and a streamlined Bill of Materials (BoM). Its stable firmware ensures accurate and reliable position fixes. In terms of implementation, this IP offers a combination of compact design and extreme power efficiency, providing substantial advantages in battery-operated environments. The EW6181 delivers critical support and upgrades, facilitating seamless high-reliability tracking for an array of applications demanding precise navigation.
The KL720 AI SoC is designed for optimal performance-to-power ratios, achieving 0.9 TOPS per watt. This makes it one of the most efficient chips available for edge AI applications. The SoC is crafted to meet high processing demands, suitable for high-end devices including smart TVs, AI glasses, and advanced cameras. With an ARM Cortex M4 CPU, it enables superior 4K imaging, full HD video processing, and advanced 3D sensing capabilities. The KL720 also supports natural language processing (NLP), making it ideal for emerging AI interfaces such as AI assistants and gaming gesture controls.
The Codasip RISC-V BK Core Series offers versatile, low-power, and high-performance solutions tailored for various embedded applications. These cores ensure efficiency and reliability by incorporating RISC-V compliance and are verified through advanced methodologies. Known for their adaptability, these cores can cater to applications needing robust performance while maintaining stringent power and area requirements.
The Ncore Cache Coherent Interconnect is designed to tackle the complexities inherent in multicore SoC environments. By maintaining coherence across heterogeneous cores, it enables efficient data sharing and optimizes cache use. This in turn enhances the throughput of the system, ensuring reliable performance with reduced latency. The architecture supports a wide range of cores, making it a versatile option for many applications in high-performance computing. With Ncore, designers can address the challenges of maintaining data consistency across different processor cores without incurring significant power or performance penalties. The interconnect's capability to handle multicore scenarios means it is perfectly suited for advanced computing solutions where data integrity and speed are paramount. Additionally, its configuration options allow customization to meet specific project needs, maintaining flexibility in design applications. Its efficiency in multi-threading environments, coupled with robust data handling, marks it as a crucial component in designing state-of-the-art SoCs. By supporting high data throughput, Ncore keeps pace with the demands of modern processing needs, ensuring seamless integration and operation across a variety of sectors.
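For readers unfamiliar with cache coherence, the toy MESI state machine below shows the kind of per-line bookkeeping a coherent interconnect must order consistently across cores. It is a simplified textbook protocol for illustration only; the text does not describe Ncore's actual protocol or states.

```python
# (state, event) -> next state for one cache line in one core's cache.
# Simplified: a read miss fills Shared here, though real MESI may fill
# Exclusive when no other cache holds the line.
MESI = {
    ("I", "local_read"):   "S",   # fetch a copy
    ("I", "local_write"):  "M",   # fetch ownership, line becomes dirty
    ("S", "local_write"):  "M",   # upgrade: other sharers are invalidated
    ("S", "remote_write"): "I",   # another core took ownership
    ("E", "local_write"):  "M",   # silent upgrade, no interconnect traffic
    ("E", "remote_read"):  "S",   # downgrade to shared
    ("M", "remote_read"):  "S",   # write back dirty data, then share
    ("M", "remote_write"): "I",   # write back, then invalidate
}

state = "I"
for event in ("local_read", "remote_write", "local_write", "remote_read"):
    state = MESI.get((state, event), state)  # unlisted pairs keep state
    print(f"{event:13s} -> {state}")         # S, I, M, S
```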
The C100 IoT chip by Chipchain is engineered to meet the diverse needs of modern IoT applications. It integrates a powerful 32-bit RISC-V CPU capable of reaching speeds up to 1.5GHz, with built-in RAM and ROM to facilitate efficient data processing and computational capabilities. This sophisticated single-chip solution is known for its low power consumption, making it ideal for a variety of IoT devices. This chip supports seamless connectivity through embedded Wi-Fi and multiple transmission interfaces, allowing it to serve broad application areas with minimal configuration complexity. Additionally, it boasts integrated ADCs, LDOs, and temperature sensors, offering a comprehensive toolkit for developers looking to innovate across fields like security, healthcare, and smart home technology. Notably, the C100 simplifies the development process with its high level of integration and performance. It stands as a testament to Chipchain's commitment to providing reliable, high-performance solutions for the rapidly evolving IoT landscape. The chip's design focuses on ensuring stability and security, which are critical in IoT installations.
The Neural Processing Unit (NPU) from OPENEDGES is geared towards advancing AI applications, providing a dedicated processing unit for neural network computations. Engineered to alleviate the computational load from CPUs and GPUs, this NPU optimizes AI workloads, enhancing deep learning tasks and inference processes. Capable of accelerating neural network inference, the NPU supports various machine learning frameworks and is compatible with industry-standard AI models. Its architecture focuses on delivering high throughput for deep learning operations while maintaining low power consumption, making it suitable for a range of applications from mobile devices to data centers. This NPU integrates seamlessly with existing AI frameworks, supporting scalability and flexibility in design. Its dedicated resource management ensures swift data processing and execution, thereby translating into superior AI performance and efficiency in a multitude of application scenarios.
A2e's H.264 FPGA Encoder and CODEC Micro Footprint Cores provide a customizable solution targeting FPGAs. Known for its small size and rapid execution, the core supports 1080p60 H.264 Baseline with a single core, making it one of the industry's swiftest and most efficient FPGA offerings. The core is ITAR-compliant, offering options to adjust pixel depths and resolutions according to specific needs. Its high-performance capability includes a latency of just 1ms at 1080p30, which is crucial for applications demanding rapid processing speeds. This licensable core is ideal for developers needing robust video compression capabilities in a compact form factor. The H.264 cores can be finely tuned to meet unique project specifications, enabling developers to implement varied pixel resolutions and depths, further enhancing the core's versatility for different application requirements. With a licensable evaluation option available, prospective users can explore the core's functionalities before opting for full integration. This flexibility makes it suitable for projects demanding customizable compression solutions without the burden of full-scale initial commitment. Furthermore, A2e provides comprehensive integration and custom design services, allowing these cores to be seamlessly absorbed into existing systems or developed into new solutions. This support ensures minimized risk and accelerated project timelines, allowing developers to focus on innovation and efficiency in their video-centric applications.
The Spiking Neural Processor T1 is an ultra-low power processor developed specifically for enhancing sensor capabilities at the edge. By leveraging advanced Spiking Neural Networks (SNNs), the T1 efficiently deciphers patterns in sensor data with minimal latency and power usage. This processor is especially beneficial in real-time applications, such as audio recognition, where it can discern speech from audio inputs with sub-millisecond latency and within a strict power budget, typically under 1mW. Its mixed-signal neuromorphic architecture ensures that pattern recognition functions can be continually executed without draining resources. In terms of processing capabilities, the T1 resembles a dedicated engine for sensor tasks, offering functionalities like signal conditioning, filtering, and classification independent of the main application processor. This means tasks traditionally handled by general-purpose processors can now be offloaded to the T1, conserving energy and enhancing performance in always-on scenarios. Such functionality is crucial for pervasive sensing tasks across a range of industries. With an architecture that balances power and performance impeccably, the T1 is prepared for diverse applications spanning from audio interfaces to the rapid deployment of radar-based touch-free interactions. Moreover, it supports presence detection systems, activity recognition in wearables, and on-device ECG processing, showcasing its versatility across various technological landscapes.
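A leaky integrate-and-fire neuron, sketched below, is the basic primitive behind spiking processors like the T1: the membrane potential leaks over time, integrates incoming current, and emits a spike only when it crosses a threshold, so a quiet input stream does almost no work. This is a generic SNN illustration; Innatera's mixed-signal implementation and parameters are not described in the text.

```python
def lif_run(currents, decay=0.9, threshold=1.0):
    """Leaky integrate-and-fire: leak, integrate, spike on threshold."""
    v, spikes = 0.0, []
    for i in currents:
        v = v * decay + i      # membrane leaks, then integrates input
        if v >= threshold:     # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Sparse input only occasionally crosses threshold -> sparse output spikes.
print(lif_run([0.3, 0.0, 0.5, 0.6, 0.0, 0.0, 0.9, 0.4]))
# [0, 0, 0, 1, 0, 0, 0, 1]
```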
The RISC-V Core from AheadComputing is a state-of-the-art application processor, designed to drive next-generation computing solutions. Built on the open RISC-V architecture, this processor core emphasizes enhanced instructions-per-cycle (IPC) performance, setting the stage for highly efficient computing capabilities. As part of the company's commitment to delivering world-leading performance, the RISC-V Core provides a reliable backbone for advanced computing tasks across various applications. This core's design harnesses the power of 64-bit architecture, providing significant improvements in data handling and processing speed. The focus on 64-bit processing facilitates better computational tasks, ensuring robust performance in data-intensive applications. With AheadComputing's emphasis on superior compute solutions, the RISC-V Core exemplifies their commitment to power, performance, and flexibility. As a versatile computing component, the RISC-V Core suits a range of applications from consumer electronics to enterprise-level computing. It is designed to integrate seamlessly into diverse systems, meeting complex computational demands with finesse. This core stands out in the industry, underpinned by AheadComputing's dedication to pushing the boundaries of what a processor can achieve.
The Tyr AI Processor Family revolutionizes edge AI by executing data processing directly where data is generated, instead of relying on cloud solutions. This empowers industries with real-time decision-making capabilities by bringing intelligence closer to devices, machines, and sensors. The Tyr processors integrate cutting-edge AI capabilities into compact, efficient designs, achieving data-center-class performance at far lower power. The edge processors from the Tyr line ensure reduced latency and enhanced privacy, making them suitable for autonomous vehicles, smart factories, and other real-time applications demanding immediate, secure insights. They feature robust local data processing options, ensuring minimal reliance on cloud services, which contributes to lower costs and improved compliance with privacy standards. With a focus on multi-modal input handling and sustainability, the Tyr processors provide balanced compute power, memory utilization, and intelligent features that align with the needs of highly dynamic and bandwidth-restricted environments. Using RISC-V cores, they facilitate versatile AI model deployment across edge devices, ensuring high adaptability to the latest technological advances and market demands.
The SEMIFIVE AI Inference Platform is engineered to facilitate rapid development and deployment of AI inference solutions within custom silicon environments. Utilizing seamless integration with silicon-proven IPs, this platform delivers a high-performance framework optimized for AI and machine learning tasks. By providing a strategic advantage in cost reduction and efficiency, the platform shortens time-to-market through pre-configured model layers and extensive IP libraries tailored for AI applications. It also offers enhanced scalability through its support for various computational and network configurations, making it adaptable to both high-volume and specialized market segments. This platform supports complex AI workloads on scalable AI engines, ensuring optimized performance in data-intensive operations. The integration of advanced processors and memory solutions within the platform further enhances processing efficiency, positioning it as an ideal solution for enterprises focusing on breakthroughs in AI technologies.
The Codasip L-Series DSP Core is designed to handle demanding signal processing tasks, offering an exemplary balance of computational power and energy efficiency. This DSP core is particularly suitable for applications involving audio processing and sensor data fusion, where performance is paramount. Codasip enriches this product with their extensive experience in RISC-V architectures, ensuring robust and optimized performance.
The SCR7 application core is built for high-performance computing environments, featuring a 12-stage dual-issue out-of-order pipeline. It supports both cache coherency and symmetric multiprocessing, with the capability to handle up to 8 cores. This core is tailored for applications that require robust data processing capabilities, such as those found in data centers and enterprise networks. Its architecture supports extensive multitasking and advanced memory management, making it a powerful addition to any high-demand computing environment.
eSi-ADAS is a radar IP suite designed to enhance the performance and responsiveness of automotive and unmanned vehicle systems. It includes a complete Radar co-processor engine, facilitating rapid situational awareness necessary for safety-critical applications. The scalable nature of this IP makes it adaptable for various automotive and drone-based projects.
The Dynamic Neural Accelerator II (DNA-II) from EdgeCortix is an advanced neural network IP core tailored for high efficiency and parallelism at the edge. Incorporating a run-time reconfigurable interconnect system between compute units, DNA-II effectively manages both convolutional and transformer workloads. This architecture ensures scalable performance beginning with 1K MACs, suitable for a wide range of SoC implementations. EdgeCortix's patented architecture significantly optimizes data paths between DNA engines, enhancing parallelism while reducing on-chip memory usage. As a core component of the SAKURA-II platform, DNA-II supports state-of-the-art generative AI models with industry-leading energy efficiency. DNA-II's design addresses the typical inefficiencies of conventional IP cores, substantially improving compute utilization and power consumption metrics. By adopting innovative reconfigurable datapath technologies, EdgeCortix sets a new benchmark for low-power, high-performance edge AI applications.
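The relationship between the 1K-MAC entry configuration and headline throughput is simple to estimate, counting each MAC as two operations (multiply plus accumulate); the 1 GHz clock below is an assumed figure, not an EdgeCortix specification.

```python
macs, freq_hz = 1024, 1.0e9      # 1K MACs, assumed 1 GHz clock
tops = macs * 2 * freq_hz / 1e12  # 2 ops per MAC per cycle
print(f"peak: {tops:.2f} TOPS")   # ~2.05 TOPS for the entry configuration
```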
Targeted at high-end applications, the SCR9 processor core boasts a 12-stage dual-issue out-of-order pipeline, adding vector processing units (VPUs) to manage intensive computational tasks. It offers hypervisor support, making it suitable for diverse enterprise-grade applications. Configured for up to 16 cores, it exhibits excellent memory management and cache coherency required for state-of-the-art computing platforms such as HPC, AI, and machine learning environments. This core embodies efficiency and performance, catering to industries that leverage high-throughput data processing.
NeuroMosAIc Studio serves as a comprehensive software platform that simplifies the process of developing and deploying AI models. Designed to optimize edge AI applications, this platform assists users through model conversion, mapping, and simulation, ensuring optimal use of resources and efficiency. It offers capabilities like network quantization and compression, allowing developers to push the limits in terms of performance while maintaining compact model sizes. The studio also supports precision adjustments, providing deep insights into hardware optimization, and aiding in the generation of precise outputs tailored to specific application needs. AiM Future's NeuroMosAIc Studio boosts the efficiency of training stages and quantization, ultimately facilitating the delivery of high-quality AI solutions for both existing and emerging technologies. It's an indispensable tool for those looking to enhance AI capabilities in embedded systems without compromising on power or performance.
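As a rough picture of what a quantization pass in such a tool does, the sketch below applies generic symmetric INT8 post-training quantization to a weight matrix; it illustrates the technique only and is not AiM Future's algorithm.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 with one symmetric per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - q.astype(np.float32) * scale).max()
print(f"scale={scale:.4f}, max reconstruction error={err:.4f}")
# Storage drops 4x vs fp32; per-weight error is bounded by ~scale/2.
```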
The NMP-350 is an endpoint accelerator designed to deliver the lowest power and cost efficiency in its class. Ideal for applications such as driver authentication and health monitoring, it excels in automotive, AIoT/sensors, and wearable markets. The NMP-350 offers up to 1 TOPS performance with 1 MB of local memory, and is equipped with a RISC-V or Arm Cortex-M 32-bit CPU. It supports multiple use-cases, providing exceptional value for integrating AI capabilities into various devices. NMP-350's architectural design ensures optimal energy consumption, making it particularly suited to Industry 4.0 applications where predictive maintenance is crucial. Its compact nature allows for seamless integration into systems requiring minimal footprint yet substantial computational power. With support for multiple data inputs through AXI4 interfaces, this accelerator facilitates enhanced machine automation and intelligent data processing. This product is a testament to AiM Future's expertise in creating efficient AI solutions, providing the building blocks for smart devices that need to manage resources effectively. The combination of high performance with low energy requirements makes it a go-to choice for developers in the field of AI-enabled consumer technology.
The Polar ID is an innovative biometric security system that elevates facial recognition technology through its use of sophisticated meta-optics. By capturing and analyzing the unique polarization signature of a face, Polar ID delivers a new standard in security. This system can detect spoofing attempts that incorporate 3D masks or other similar deceptive tactics, ensuring high security through accurate human authentication. Polar ID minimizes the complexity typically associated with face recognition systems by integrating essential optical functions into a single optic. Its compact design results in cost savings and reduces the space required for optical modules in devices like smartphones. Operating in near-infrared light, Polar ID can consistently deliver secure face unlock capabilities even under challenging lighting conditions, dramatically outperforming traditional systems that may fail in bright or dark environments. The platform does not rely on time-of-flight sensors or structured light projectors, which are costly and complex. Instead, Polar ID leverages the simplicity and efficiency of its single-shot identification process to deliver immediate authentication results. This makes it a potent tool for securing mobile transactions and providing safer user experiences in consumer technology.