

AI Processor Semiconductor IPs

The AI Processor category within our semiconductor IP catalog is dedicated to state-of-the-art technologies that power artificial intelligence applications across industries. AI processors are specialized computing engines designed to accelerate machine learning workloads and execute complex algorithms efficiently. This category includes a diverse collection of semiconductor IPs built to enhance both performance and power efficiency in AI-driven devices.

AI processors play a critical role in the emerging world of AI and machine learning, where fast processing of vast datasets is crucial. These processors can be found in a range of applications from consumer electronics like smartphones and smart home devices to advanced robotics and autonomous vehicles. By facilitating rapid computations necessary for AI tasks such as neural network training and inference, these IP cores enable smarter, more responsive, and capable systems.

In this category, developers and designers will find semiconductor IPs that provide various levels of processing power and architectural designs to suit different AI applications, including neural processing units (NPUs), tensor processing units (TPUs), and other AI accelerators. The availability of such highly specialized IPs ensures that developers can integrate AI functionalities into their products swiftly and efficiently, reducing development time and costs.

As AI technology continues to evolve, the demand for robust and scalable AI processors increases. Our semiconductor IP offerings in this category are designed to meet the challenges of rapidly advancing AI technologies, ensuring that products are future-ready and equipped to handle the complexities of tomorrow’s intelligence-driven tasks. Explore this category to find cutting-edge solutions that drive innovation in artificial intelligence systems today.

All semiconductor IP: 146 IPs available

Akida 2nd Generation

Building on the principles of its predecessor, the Akida 2nd Generation IP further enhances AI processing at the edge by integrating additional capabilities tailored for spatio-temporal and temporal event-based neural networks. This second iteration doubles down on programmability and includes expanded activation functions and enhanced Skip Connections, offering significant flexibility for complex applications involving dynamic data streams.

A key feature of the Akida 2nd Generation is its innovative approach to sparsity, optimizing the AI model's overall efficiency. The scalable fabric of nodes in this version can adeptly handle various weight and activation bit depths, adapting the computational requirements to suit application needs effectively. This capability ensures that the Akida 2nd Generation can manage sophisticated algorithms with a heightened level of precision and power efficiency.

Furthermore, this IP iteration embraces fully digital neuromorphic implementations, allowing for predictable, cost-effective design and deployment. It minimizes the computational demands and bandwidth consumption of traditional AI models by focusing compute power precisely where needed, ensuring a seamless experience with lower latency and enhanced processing accuracy. Its flexibility in configuration and scalability at the post-silicon stage make it an essential tool for future-ready AI applications, particularly those that require real-time interaction and decision-making capabilities.

BrainChip
AI Processor, CPU, Digital Video Broadcast, GPU, Input/Output Controller, IoT Processor, Multiprocessor / DSP, Network on Chip, Security Protocol Accelerators, Vision Processor
View Details

KL730 AI SoC

The KL730 is a sophisticated AI System on Chip (SoC) that embodies Kneron's third-generation reconfigurable NPU architecture. This SoC delivers a substantial 8 TOPS of computing power, designed to efficiently handle CNN network architectures and transformer applications. Its innovative NPU architecture significantly optimizes DDR bandwidth, providing powerful video processing capabilities, including support for 4K resolution at 60 FPS. Furthermore, the KL730 demonstrates formidable performance in noise reduction and low-light imaging, positioning it as a versatile solution for intelligent security, video conferencing, and autonomous applications.
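To see why the NPU's DDR-bandwidth optimization matters, consider the raw pixel rate of a 4K stream at 60 FPS, the format the KL730 supports. The sketch below is a rough estimate only; the 8-bit YUV 4:2:0 byte-per-pixel figure is an assumption, not a KL730 specification.

```python
# Back-of-the-envelope pixel rate and uncompressed bandwidth for 4K @ 60 FPS.
width, height, fps = 3840, 2160, 60
bytes_per_pixel = 1.5  # 8-bit YUV 4:2:0 (assumption, not a KL730 spec)

pixels_per_second = width * height * fps
raw_bandwidth_gbps = pixels_per_second * bytes_per_pixel / 1e9

print(f"{pixels_per_second / 1e6:.1f} Mpixel/s")      # ~497.7 Mpixel/s
print(f"{raw_bandwidth_gbps:.2f} GB/s uncompressed")  # ~0.75 GB/s
```

Every frame that can stay on-chip or be fetched once rather than twice directly reduces this sustained DDR traffic, which is the point of the reconfigurable NPU's bandwidth optimization.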

Kneron
TSMC
28nm
2D / 3D, A/D Converter, AI Processor, Amplifier, Audio Interfaces, Camera Interface, Clock Generator, CPU, CSC, GPU, Image Conversion, JPEG, USB, VGA, Vision Processor
View Details

Metis AIPU PCIe AI Accelerator Card

Axelera AI's Metis AIPU PCIe AI Accelerator Card is designed to tackle demanding vision applications with its powerful processing capabilities. The card embeds a single Metis AIPU which can deliver up to 214 TOPS, providing the necessary throughput for concurrent processing of high-definition video streams and complex AI inference tasks.

This PCIe card is supported by the Voyager SDK, which enhances the user experience by allowing easy integration into existing systems for efficient deployment of AI inference networks. It suits developers and integrators looking for an upgrade to existing infrastructure without extensive modifications, optimizing performance and accelerating AI model deployment.

The card’s design prioritizes performance and efficiency, making it suitable for diverse applications across industries like security, transportation, and smart city environments. Its capacity to deliver high frames per second on popular AI models ensures it meets modern digital processing demands with reliability and precision.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

xcore.ai

The xcore.ai platform is designed to power the intelligent Internet of Things (IoT) by combining flexibility and performance efficiency. With its distinctive multi-threaded micro-architecture, it allows for low-latency and predictable performance, crucial for IoT applications. Each xcore.ai device is equipped with 16 logical cores distributed over two tiles, each with integrated 512kB SRAM and a vector unit capable of handling both integer and floating-point operations. Communication between processors is facilitated by a robust interprocessor communication infrastructure, enabling scalability for systems requiring multiple xcore.ai SoCs.

This platform supports a multitude of applications by integrating DSP, AI, and I/O processing within a cohesive development environment. For audio and voice processing needs, it offers adaptable, software-defined I/O that aligns with specific application requirements, ensuring efficient and targeted performance. The xcore.ai is also equipped for AI and machine learning tasks with a 256-bit VPU that supports 32-bit, 16-bit, and 8-bit vector operations, offering peak AI performance. The inclusion of a comprehensive development kit allows developers to explore its capabilities through ready-made solutions or custom-built applications.
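The stated 256-bit VPU width translates directly into per-operation lane counts at each supported element size. The sketch below derives only those lane counts from the vector width given above; no clock speed or per-cycle throughput is assumed, so no peak-TOPS figure is claimed.

```python
# Lane counts for a 256-bit vector unit at the element widths the
# xcore.ai VPU supports (32-, 16-, and 8-bit vector operations).
VECTOR_WIDTH_BITS = 256

for element_bits in (32, 16, 8):
    lanes = VECTOR_WIDTH_BITS // element_bits
    print(f"{element_bits:2d}-bit elements -> {lanes} lanes per vector op")
```

Narrower elements quadruple the parallelism relative to 32-bit operation, which is why 8-bit inference workloads see the platform's peak AI performance.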

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module from Axelera AI is engineered for applications requiring edge AI computing power in a compact form factor. Leveraging the quad-core Metis AIPU, this module provides efficient AI processing capabilities tailored for real-time analysis and data-intensive tasks in areas like computer vision. Designed to fit into standard NGFF (Next Generation Form Factor) M.2 sockets, it supports a wide range of AI models with dedicated 1GB DRAM memory for optimized performance. This module is especially suitable for systems needing enhanced image and video processing capabilities while maintaining minimal power consumption. The Metis AIPU M.2 Accelerator Module enhances computing architectures by enabling seamless integration of AI for a multitude of industrial and commercial applications. Its efficient design makes it ideal for environments where space is limited, but computational demand is high, ensuring that solutions are both powerful and cost-effective.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

Yitian 710 Processor

The Yitian 710 Processor is T-Head's flagship ARM-based server chip that represents the pinnacle of their technological expertise. Designed with a pioneering architecture, it is crafted for high efficiency and superior performance metrics. This processor is built using a 2.5D packaging method, integrating two dies and boasting a substantial 60 billion transistors.

The core of the Yitian 710 consists of 128 high-performance Armv9 CPU cores, each accompanied by advanced memory configurations that streamline instruction and data caching processes. Each CPU integrates 64 KB of L1 instruction cache, 64 KB of L1 data cache, and 1 MB of L2 cache, supplemented by a robust 128 MB system-level cache on the chip.

To support expansive data operations, the processor is equipped with an 8-channel DDR5 memory system, enabling peak memory bandwidth of up to 281 GB/s. Its I/O subsystem is formidable, featuring 96 PCIe 5.0 lanes capable of achieving dual-direction bandwidth up to 768 GB/s. With its multi-layered design, the Yitian 710 Processor is positioned as a leading solution for cloud services, data analytics, and AI operations.
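The quoted 281 GB/s peak is consistent with eight 64-bit DDR5 channels running at 4400 MT/s. The per-channel data rate below is an inferred assumption that matches the quoted figure, not a number taken from the description.

```python
# Sanity check on the Yitian 710's quoted ~281 GB/s peak memory bandwidth.
channels = 8
transfers_per_second = 4400e6  # DDR5-4400 (assumption consistent with the spec)
bytes_per_transfer = 8         # 64-bit channel width

peak_gb_per_s = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"{peak_gb_per_s:.1f} GB/s")  # 281.6 GB/s, matching the ~281 GB/s figure
```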

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

AI Camera Module

The AI Camera Module by Altek stands out for its tight integration of imaging lens design and software capabilities. Altek collaborates with major global brands, producing AI cameras that satisfy a wide range of client demands. The module supports differentiated AI+IoT solutions and meets high-resolution standards like 2K and 4K for advanced edge computing applications. This synergy between hardware and software makes the module versatile in addressing dynamic customer requirements across environments.

Altek Corporation
AI Processor, Audio Interfaces, Image Conversion, IoT Processor, JPEG, Receiver/Transmitter, SATA, Vision Processor
View Details

Veyron V2 CPU

Veyron V2 represents the next generation of Ventana's high-performance RISC-V CPU. It significantly enhances compute capabilities over its predecessor and is designed specifically for data center, automotive, and edge deployment scenarios. This CPU maintains compatibility with the RVA23 RISC-V specification, making it a powerful alternative to the latest ARM and x86 counterparts within similar domains. Focusing on seamless integration, the Veyron V2 offers clean, portable RTL implementations with a standardized interface, optimizing its use for custom SoCs with high core counts.

With a robust 512-bit vector unit, it efficiently supports workloads requiring both INT8 and BF16 precision, making it highly suitable for AI and ML applications. The Veyron V2 is adept at handling cloud-native and virtualized workloads due to its full architectural virtualization support. The architectural advancements offer significant performance-per-watt improvements, and advanced cache and virtualization features ensure a secure and reliable computing environment.

The Veyron V2 is available as both a standalone IP and a complete hardware platform, facilitating diverse integration pathways for customers aiming to harness Ventana’s innovative RISC-V solutions.

Ventana Micro Systems
TSMC
16nm, 28nm
AI Processor, CPU, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing.

This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks.

The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

AUTOSAR & Adaptive AUTOSAR Solutions

KPIT's AUTOSAR & Adaptive AUTOSAR Solutions drive innovation in automotive software architecture, promoting standardization and efficiency across the industry. AUTOSAR (AUTomotive Open System ARchitecture) is essential for creating interconnected automotive systems, providing a flexible platform that supports software reuse, scalability, and product variation. These solutions ensure that automotive software is developed with a focus on interoperability and future scalability.

By utilizing a modular architecture, KPIT allows manufacturers to integrate various software components seamlessly, simplifying complex systems. In addition to Classic AUTOSAR, KPIT offers Adaptive AUTOSAR aimed at facilitating advanced functionalities for new-generation automotive applications, such as autonomous driving and advanced infotainment systems. This adaptability ensures that developers can keep pace with rapid technological advances and evolving industry standards.

KPIT further provides robust support and consultancy services, helping clients navigate the complexities of implementing AUTOSAR standards. By doing so, KPIT strengthens its position as a trusted partner for automakers committed to leveraging standardization and cutting-edge software solutions to enhance vehicle functionalities.

KPIT Technologies
AI Processor, W-CDMA
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

Designed for ultra-low power consumption, the Tianqiao-70 delivers commercial-grade processing in a 64-bit RISC-V architecture. It's particularly geared towards mobile computing, desktop applications, and environments requiring efficient performance within a constrained power budget. The Tianqiao-70 leverages advanced RISC-V features to ensure it meets modern demands for low-power yet high-performance computing, especially in mobile and intelligent systems. With scalability designed into its core, this CPU can adapt to a range of applications, from mobile devices to edge AI processing. StarFive emphasizes power efficiency without compromising performance, making the Tianqiao-70 a strategic choice for developers focusing on sustainable and intelligent technologies. This CPU core not only extends battery life but also enhances the capability of devices to handle sophisticated tasks with minimal energy use.

StarFive
AI Processor, CPU, Multiprocessor / DSP, Processor Cores
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems.

RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
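The point about tokens generated per unit of memory bandwidth can be made concrete: autoregressive decoding must stream the full weight set for each generated token, so memory bandwidth sets a hard ceiling on throughput. The model size and LPDDR4 bandwidth below are illustrative assumptions, not GenAI v1 specifications.

```python
# Bandwidth-bound ceiling on LLM decode throughput: each token requires
# reading every weight once, so tokens/s <= bandwidth / weight_bytes.
params = 3e9             # e.g. a Llama 3.2 3B-class model (assumption)
bytes_per_weight = 0.5   # 4-bit quantization, as described
bandwidth_gb_s = 12.8    # hypothetical LPDDR4 configuration (assumption)

weight_bytes = params * bytes_per_weight
max_tokens_per_s = bandwidth_gb_s * 1e9 / weight_bytes

print(f"weights: {weight_bytes / 1e9:.1f} GB")
print(f"bandwidth-bound ceiling: {max_tokens_per_s:.1f} tokens/s")
```

This is why 4-bit quantization matters on commodity LPDDR4: halving bytes per weight roughly doubles the achievable token rate at fixed bandwidth, which is the efficiency lever the description emphasizes over HBM-based designs.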

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Jotunn8 AI Accelerator

Jotunn8 represents VSORA's pioneering leap into the world of AI inference technology, aimed at data centers that require high-speed, cost-efficient, and scalable systems. The Jotunn8 chip is engineered to serve trained models with unparalleled speed, minimizing latency and optimizing power usage, thereby guaranteeing that high-demand applications such as recommendation systems or large language model APIs operate at optimal efficiency.

The Jotunn8 is celebrated for its near-theoretical performance, specifically designed to meet the demands of real-time services like chatbots and fraud detection. With a focus on reducing cost per inference, a critical factor for operating at massive scale, the chip ensures business viability through its power-efficient architecture, which significantly trims operational expenses and reduces carbon footprints.

Innovative in its approach, the Jotunn8 supports complex AI computing needs by integrating various AI models seamlessly. It provides the foundation for scalable AI, ensuring that infrastructure can keep pace with growing consumer and business demands, and represents a robust solution that prepares businesses for the future of AI-driven applications.

VSORA
AI Processor, CPU, DSP Core, Interleaver/Deinterleaver, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Akida IP

The Akida IP is a revolutionary neural processor platform that brings real-time AI processing capabilities to the edge. Inspired by the brain's cognitive functions, Akida IP employs neuromorphic principles to deliver low-power AI solutions specifically crafted for applications like vision, audio, and sensor fusion. It features a scalable architecture composed of up to 128 neural nodes, each supporting an efficient allocation of MACs (multiply-accumulate operations) and configurable SRAM for enhanced processing capacity.

This IP is designed to operate independently, integrating seamlessly with any existing microcontroller or application processor. The emphasis on event-based hardware acceleration allows it to minimize computational and communication loads, significantly reducing the need for host CPU intervention. Additionally, Akida IP supports on-chip learning, including one-shot and few-shot learning capabilities, which limits the transmission of sensitive data, hence bolstering security and privacy measures.

With a silicon-proven design that prioritizes cost-effectiveness and predictability, BrainChip’s Akida IP enables fully digital neuromorphic implementations. It leverages flexible configurations that can be adjusted post-silicon, ensuring adaptability in deployment. The support for multiple layers and varied bit weights and activations facilitates the development of sophisticated neural network models, accommodating complex AI solutions with increased scalability and configurability.

BrainChip
AI Processor, Coprocessor, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Vision Processor
View Details

Chimera GPNPU

The Chimera GPNPU from Quadric stands as a versatile processing unit designed to accelerate machine learning models across a wide range of applications. Uniquely integrating the strengths of neural processing units and digital signal processors, the Chimera GPNPU simplifies heterogeneous workloads by running traditional C++ code and complex AI networks such as large language models and vision transformers in a unified processor architecture. This scalability, tailored from 1 to 864 TOPs, allows it to meet the diverse requirements of markets, including automotive and network edge computing.

A key feature of the Chimera GPNPU is its ability to handle matrix and vector operations alongside scalar control code within a single pipeline. Its fully software-driven nature enables developers to fine-tune model performance over the processor's lifecycle, adapting to evolving AI techniques without needing hardware updates. The system's design minimizes off-chip memory access, thereby enhancing efficiency through its L2 memory management and compiler-driven optimizations.

Moreover, the Chimera GPNPU provides an extensive instruction set, finely tuned for AI inference tasks with intelligent memory management, reducing power consumption and maximizing processing efficiency. Its ability to maintain high performance with deterministic execution across various processes underlines its standing as a leading choice for AI-focused chip design.

Quadric
15 Categories
View Details

KL630 AI SoC

The KL630 chip stands out with its pioneering NPU architecture, making it the industry's first to support Int4 precision alongside transformer networks. This unique capability enables it to achieve exceptional computational efficiency and low energy consumption, suitable for a wide variety of applications. The chip incorporates an ARM Cortex A5 CPU, providing robust support for all major AI frameworks and delivering superior ISP capabilities for handling low light conditions and HDR applications, making it ideal for security, automotive, and smart city uses.

Kneron
TSMC
28nm
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, USB, VGA, Vision Processor
View Details

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration.

Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices.

Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.
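As a rough illustration of what 1 MB of local memory implies for keeping a model entirely on-chip, the sketch below checks which weight precisions let a hypothetical 500k-parameter model fit. The parameter count is an example, not an NMP-350 figure, and in practice activations and working buffers would also need room.

```python
# Which weight precisions let a 500k-parameter model fit in 1 MB of
# local memory? (Model size is hypothetical; weights only, no activations.)
LOCAL_MEMORY_BYTES = 1 * 1024 * 1024
params = 500_000  # hypothetical endpoint-sized model

for bits in (32, 16, 8, 4):
    weight_bytes = params * bits // 8
    fits = "fits" if weight_bytes <= LOCAL_MEMORY_BYTES else "does not fit"
    print(f"{bits:2d}-bit weights: {weight_bytes / 1024:.0f} KiB -> {fits}")
```

At full 32-bit precision such a model overflows the local memory, while 16-bit and below fit, which is why quantization is the norm for endpoint accelerators in this class.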

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

KL520 AI SoC

The KL520 was Kneron's first foray into AI SoCs, characterized by its small size and energy efficiency. This chip integrates a dual ARM Cortex M4 CPU architecture, which can function both as a host processor and as a supportive AI co-processor for diverse edge devices. Ideal for smart devices such as door locks and cameras, it is compatible with various 3D sensor technologies, offering a balance of compact design and high performance. As a result, this SoC has been adopted by multiple products in the smart home and security sectors.

Kneron
TSMC
28nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Receiver/Transmitter, Vision Processor
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator by EdgeCortix provides a cutting-edge solution for efficient AI processing at the edge. Engineered for optimum energy efficiency, it supports real-time Batch=1 AI inferencing and manages extensive parameter models effectively, making it ideal for complex generative AI applications.

The core of SAKURA-II, the Dynamic Neural Accelerator (DNA), is reconfigurable at runtime, which allows for simultaneous execution of multiple neural network models while maintaining high performance metrics. With its advanced neural architecture, SAKURA-II meets the challenging requirements of edge AI applications like image, text, and audio processing.

This AI accelerator is distinguished by its ability to support large AI models within a low power envelope, typically operating at around 8 watts, and accommodates large models such as Llama 2 and Stable Diffusion. SAKURA-II modules are crafted for speedy integration into various systems, offering up to 60 TOPS of performance for INT8 operations. Additionally, its robust design allows handling of high-bandwidth memory scenarios, delivering up to 68 GB/sec of DRAM bandwidth, ensuring superior performance for large language models (LLMs) and vision applications across multiple industries.

As a key component of EdgeCortix's edge AI solution platform, the SAKURA-II excels not only in computational efficiency but also in adaptability across various hardware systems like Raspberry Pi. The accelerator system includes options for both small form factor modules and PCIe cards, granting flexibility for different application needs and allowing easy deployment in space-constrained or resource-sensitive environments, thus maximizing the utility of existing infrastructures for AI tasks.
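Two headline ratios follow directly from the figures quoted above (60 TOPS INT8, roughly 8 W typical power, up to 68 GB/s DRAM bandwidth); the arithmetic below simply combines them and introduces no new specifications.

```python
# Efficiency and arithmetic-intensity ratios from the quoted SAKURA-II figures.
tops_int8 = 60       # peak INT8 performance, as quoted
typical_watts = 8    # ~8 W typical power envelope, as quoted
dram_gb_s = 68       # peak DRAM bandwidth, as quoted

tops_per_watt = tops_int8 / typical_watts
ops_per_byte = tops_int8 * 1e12 / (dram_gb_s * 1e9)

print(f"{tops_per_watt:.1f} TOPS/W")                    # 7.5 TOPS/W
print(f"{ops_per_byte:.0f} int8 ops per DRAM byte at peak")
```

The second number is the arithmetic intensity a workload needs to keep the compute units busy at peak; bandwidth-hungry workloads like LLM decoding fall well below it, which is why the quoted DRAM bandwidth matters as much as the TOPS figure.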

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

SCR9 Processor Core

The SCR9 Processor Core represents the height of processor sophistication with a 12-stage dual-issue out-of-order pipeline complemented by a vector processing unit (VPU). This 64-bit, 16-core configuration supports hypervisor capabilities, making it a powerhouse for enterprise and high-performance applications. Designed to meet the rigorous demands of AI, ML, and computational-heavy environments, the SCR9 core delivers exceptional data throughput and processing power. Its comprehensive architecture includes robust memory and cache management, ensuring efficiency and speed in processing. This application-class core is supported by an extensive ecosystem of development tools and platforms, ensuring that developers can exploit its full potential for innovative solutions. With its focus on high efficiency and advanced capabilities, the SCR9 core is a definitive choice in fields demanding top-tier processing power.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

EW6181 GPS and GNSS Silicon

EW6181 is an IP solution crafted for applications demanding extensive integration levels, offering flexibility by being licensable in various forms such as RTL, gate-level netlist, or GDS. Its design methodology focuses on delivering the lowest possible power consumption within the smallest footprint. The EW6181 effectively extends battery life for tags and modules due to its efficient component count and optimized Bill of Materials (BoM). Additionally, it is backed by robust firmware ensuring highly accurate and reliable location tracking while offering support and upgrades. The IP is particularly suitable for challenging application environments where precision and power efficiency are paramount, making it adaptable across different technology nodes given the availability of its RF frontend.

etherWhere Corporation
TSMC
7nm
3GPP-5G, AI Processor, Bluetooth, CAN, CAN XL, CAN-FD, Fibre Channel, FlexRay, GPS, Optical/Telecom, Photonics, RF Modules, USB, W-CDMA
View Details

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interfaces ensure robust interconnection and data flow.

This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management.

Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

Talamo SDK

Talamo SDK is a comprehensive software development toolkit designed to expedite the building and deployment of neuromorphic AI applications. The SDK facilitates the development of spiking neural networks by extending the familiar PyTorch environment, allowing developers to easily integrate AI models into Innatera’s Spiking Neural Processor platform.

Talamo simplifies AI development by providing a standard PyTorch framework while incorporating additional tools necessary for crafting and training spiking neural networks. It features compilation and mapping capabilities that assign trained models to the heterogeneous computing resources available within the processor, ensuring efficient operation and deployment.

The SDK includes an architecture simulator to streamline development, offering fast iteration cycles without requiring a deep understanding of spiking neural networks. This integration drastically reduces the entry barrier for developers, enabling more innovators to create and optimize powerful AI applications for edge devices. By offering pre-existing models and customizable solutions, Talamo SDK gives developers the flexibility to handle complex processes and tailor applications to their specific needs, increasing the overall efficiency and capability of Innatera’s AI-driven products.

Innatera Nanosystems
AI Processor, Vision Processor
View Details

Polar ID Biometric Security System

Polar ID is revolutionizing biometric security by using meta-optic technology to read the unique polarization signature of human faces. This innovative approach significantly improves security, effectively differentiating between real and fake faces primarily through its precise polarization detection capabilities. The system operates efficiently in all lighting conditions thanks to its near-infrared illumination at 940nm, making it versatile enough for both indoor and outdoor settings. It's designed to be compact, suitable even for smartphones with limited space, and significantly more cost-effective than conventional structured light solutions. Polar ID not only enhances security by preventing unauthorized access through spoofing with masks or photos, but it also elevates user convenience through its seamless integration into mobile devices. The absence of bulky notch requirements further underscores its design excellence. Its technological makeup stems from Metalenz's proprietary meta-optics, which allows it to fuse advanced functionality into a single compact system. Additionally, Polar ID eliminates the need for additional optical modules, integrating itself as a single image-based recognition and authentication solution. By adopting a complete system approach, Polar ID is set to redefine digital security across a vast array of consumer electronics, including smartphones and IoT devices. This meta-optic advancement is also projected to enhance future applications, likely extending into secure digital transactions and possibly medical diagnostics, broadening the horizons for secure biometric technology in personal and professional spheres.

Metalenz Inc.
TSMC
28nm
13 Categories
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is engineered to propel artificial intelligence tasks to new heights with its cutting-edge architecture. This accelerator enhances machine learning tasks by speeding up neural network processing, making it a key player in the burgeoning AI sector. Its innovative design is optimized for low latency and high throughput, facilitating real-time AI application performance and enabling advanced machine learning model implementations. Harnessing an extensive array of computing cores, the Hanguang 800 ensures parallel processing capabilities that significantly reduce training times for large-scale AI models. Its application scope covers diverse sectors, including autonomous driving, smart city infrastructure, and intelligent robotics, underscoring its versatility and adaptability. Built with energy efficiency in mind, this AI accelerator prioritizes minimal power consumption, making it ideal for data centers looking to maximize computational power without overextending their energy footprint. By integrating seamlessly with existing frameworks, the Hanguang 800 offers a ready-to-deploy solution for enterprises seeking to enhance their AI-driven services and operations.

T-Head Semiconductor
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

aiWare

aiWare represents a high-performance neural processing solution aimed at driving efficiency in AI-powered automotive applications. At its core, aiWare is designed to deliver robust inference capabilities necessary for complex neural network operations within the automotive domain. This IP features scalable performance fitting a broad spectrum of use-cases, from sensor-edge processors to high-performance centralized models, spanning L2 to L4 automated driving applications. The aiWare NPU offers unrivaled efficiency and deterministic flexibility, having achieved ISO 26262 ASIL B certification, which accentuates its safety and reliability for automotive environments. It supports a multitude of advanced neural architectures, including CNNs and RNNs, empowering developers to effectively deploy AI models within constrained automotive ecosystems. Its data pathways ensure high throughput with minimal energy consumption, aligning with automotive standards for efficiency and operational dependability. Accompanied by the aiWare Studio SDK, aiWare simplifies the development process by offering an offline performance estimator that accurately predicts system performance. This tool, celebrated by OEMs globally, allows developers to refine neural networks with minimal hardware requirements, significantly shortening time-to-market while preserving high-performance standards. aiWare's architecture focuses on enhancing efficiency, ensuring robust performance for applications spanning multi-modality sensing and complex data analytics.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

KL530 AI SoC

The KL530 is built with an advanced heterogeneous AI chip architecture, designed to enhance computing efficiency while reducing power usage. Notably, it is recognized as the first in the market to support INT4 precision and transformers for commercial applications. The chip, featuring a low-power ARM Cortex M4 CPU, delivers impressive performance with 1 TOPS@INT4 computing power, providing up to 70% higher processing efficiency compared to INT8 architectures. Its integrated smart ISP optimizes image quality, supporting AI models like CNN and RNN, suitable for IoT and AIoT ecosystems.
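
The INT4 precision highlighted above can be illustrated with a generic symmetric quantization sketch. This is illustrative arithmetic only, not Kneron's actual quantization scheme; the weight values are made up for the example:

```python
import numpy as np

def quantize_int4(w):
    """Symmetric per-tensor quantization of float weights to INT4 ([-8, 7])."""
    scale = np.max(np.abs(w)) / 7.0          # map the largest magnitude to 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.31, -0.42, 0.05, 0.24], dtype=np.float32)
q, s = quantize_int4(w)
w_hat = dequantize(q, s)
# INT4 packs two weights per byte, so storage is 1/8 of FP32 and 1/2 of INT8
print(q, np.max(np.abs(w - w_hat)))
```

Halving the bit width again relative to INT8 is where the efficiency gain comes from, at the cost of the coarser rounding visible in `w_hat`.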

Kneron
TSMC
28nm
AI Processor, Camera Interface, Clock Generator, CPU, CSC, GPU, Peripheral Controller, Vision Processor
View Details

Ultra-Low-Power 64-Bit RISC-V Core

This core is a remarkably energy-efficient processor designed by Micro Magic, Inc., offering a powerful solution for modern computing needs. Operating at 1GHz and consuming a mere 10mW of power, it leverages low voltage operation to achieve high-speed performance ideal for embedded and portable applications. The RISC-V core integrates design techniques that allow it to function efficiently across a range of voltages while maintaining superior performance levels often demanded by today's technology. This blend of power efficiency and performance makes it a prime candidate for application in cutting-edge consumer electronics and IoT devices where battery life and processing speed are critical. This core's architecture suits a wide variety of implementations, providing flexibility to developers aiming to create custom solutions tailored to specific market requirements. By enabling engineers to reduce power consumption without compromising on speed, the 64-bit RISC-V core demonstrates Micro Magic's commitment to pushing the boundaries of semiconductor technology.

Micro Magic, Inc.
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

SiFive Intelligence X280

The SiFive Intelligence X280 delivers best-in-class vector processing capabilities powered by the RISC-V architecture, specifically targeting AI and ML applications. This core is designed to cater to advanced AI workloads, equipped with extensive compute capabilities that include wide vector processing units and scalable matrix computation. With its distinctive software-centric design, the X280 facilitates easy integration and offers adaptability to complex AI and ML processes. Its architecture is built to handle modern computational demands with high efficiency, thanks to its robust bandwidth and scalable execution units that accommodate evolving machine learning algorithms. Ideal for edge applications, the X280 supports sophisticated AI operations, resulting in fast and energy-efficient processing. The design flexibility ensures that the core can be optimized for a wide range of applications, promising unmatched performance scalability and intelligence in edge computing environments.

SiFive, Inc.
AI Processor, CPU, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

SCR7 Application Core

As a pinnacle of high-performance Linux-capable processor cores, the SCR7 Application Core incorporates a 12-stage dual-issue out-of-order pipeline for enhanced execution efficiency. This core is engineered for applications that require robust data handling and processing, with support for cache coherency and virtual memory management. The SCR7 core caters to advanced markets such as AI, ML, and data centers, benefiting from its scalability in 64-bit 8-core configurations. This formidable architecture is designed to handle complex computational loads, especially where rapid real-time data processing is vital. With a complete software development toolkit, this core empowers developers to integrate and optimize seamlessly into their systems, facilitating innovation and delivering top-tier performance in demanding application landscapes.

Syntacore
AI Processor, CPU, IoT Processor, Microcontroller, Processor Core Independent, Processor Cores
View Details

C100 IoT Control and Interconnection Chip

The C100 is a highly integrated SoC designed for IoT applications, boasting efficient control and connectivity features. It is powered by an enhanced 32-bit RISC-V CPU running at up to 1.5GHz, making it capable of tackling demanding processing tasks while maintaining low power consumption. The inclusion of embedded RAM and ROM further enriches its computational prowess and operational efficiency. Equipped with integrated Wi-Fi, the C100 facilitates seamless wireless communication, making it ideal for varied IoT applications. It supports multiple types of transmission interfaces and features key components such as an ADC and LDO, enhancing its versatility. The C100 also offers built-in temperature sensors, providing higher integration levels for simplified product designs across security systems, smart homes, toys, healthcare, and more. Aiming to offer a compact form factor without compromising on performance, the C100 is engineered to help developers rapidly prototype and bring to market devices that are safe, stable, and efficient. Whether for audio, video, or edge computing tasks, this single-chip solution embodies the essence of Chipchain's commitment to pioneering in the IoT domain.

Shenzhen Chipchain Technologies Co., Ltd.
TSMC
7nm LPP, 10nm
19 Categories
View Details

eSi-ADAS

The eSi-ADAS is a cutting-edge radar processing solution tailored for Advanced Driver Assistance Systems, boosting the performance of MIMO radar systems in automotive and drone applications. This versatile IP suite includes a comprehensive radar co-processor engine designed to enhance tracking and processing capabilities, crucial for rapid situational awareness. Licensed by leading automotive Tier 1 and Tier 2 suppliers, eSi-ADAS supports a variety of radar operations. It features an array of hardware accelerators for FFT, CFAR detection, Kalman filtering, and more, all optimized for real-time performance and ISO 26262 compliance. Whether catering to short- or long-range radar modes, it excels at real-time tracking of numerous objects, significantly offloading the main ADAS system. The co-processor's low-latency, low-power architecture effectively processes fast chirp modulations, making it ideal for complex radar environments requiring robust digital processing techniques. This enables operations such as range and Doppler measurements to be rapidly and efficiently managed, enhancing overall ADAS capabilities and safety measures in automotive systems.
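
The CFAR detection that the suite accelerates in hardware can be sketched in software. The following is a minimal cell-averaging CFAR (CA-CFAR) run over a synthetic range profile; all parameters and the test signal are illustrative and bear no relation to the eSi-ADAS implementation:

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: flag cells whose power exceeds the local
    noise estimate (mean of surrounding training cells) times a factor."""
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        hits[i] = power[i] > scale * noise
    return hits

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 256)   # synthetic noise floor
profile[100] += 40.0                  # injected target return at bin 100
detections = np.flatnonzero(ca_cfar(profile))
print(detections)
```

Because the threshold adapts to the local noise estimate, the detector keeps a roughly constant false-alarm rate as the noise floor varies, which is exactly why CFAR is preferred over a fixed threshold in radar front ends.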

EnSilica
AI Processor, CAN, CAN XL, CAN-FD, Content Protection Software, Flash Controller, LIN, Multiprocessor / DSP, Processor Core Independent, Security Processor, Security Protocol Accelerators
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
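
The quoted 75% memory reduction is consistent with moving weights from 16-bit to roughly 4-bit storage. A back-of-envelope check, assuming a hypothetical 7B-parameter model and ignoring the small overhead of quantization scales:

```python
def model_weight_bytes(n_params, bits_per_weight):
    """Approximate weight storage, ignoring scale/zero-point overhead."""
    return n_params * bits_per_weight / 8

n = 7_000_000_000                      # hypothetical 7B-parameter LLM
fp16 = model_weight_bytes(n, 16)       # 16-bit baseline: 14 GB
q4 = model_weight_bytes(n, 4)          # 4-bit quantized: 3.5 GB
reduction = 1 - q4 / fp16
print(f"{fp16/1e9:.1f} GB -> {q4/1e9:.1f} GB ({reduction:.0%} smaller)")
```

In practice K-quant formats such as Q4_K store slightly more than 4 bits per weight once block scales are counted, so the real figure lands close to, rather than exactly at, this idealized 75%.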

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Topaz FPGAs - Volume Production Ready

Topaz FPGAs are engineered to be a go-to solution for industries requiring a swift scale-up to volume production without compromising on performance or efficiency. Centered around an efficient architectural framework, these FPGAs deliver the power and functionality needed to address mainstream applications. They are renowned for their innovative fabric, which optimizes both die area and performance metrics. As such, Topaz FPGAs are indispensable for projects ranging from consumer electronics to automotive solutions, ensuring adaptability and scalability along evolving technological paths. Furthermore, with their seamless system integration capability, these FPGAs significantly shorten the development cycle, facilitating a faster go-to-market strategy while maintaining the high standards Efinix is known for.

Efinix, Inc.
GLOBALFOUNDRIES, Samsung, TSMC
130nm, 150nm, 180nm
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, Embedded Memories, MIPI, Processor Core Independent, Processor Cores, USB, V-by-One
View Details

CTAccel Image Processor on Intel Agilex FPGA

The CTAccel Image Processor on Intel Agilex FPGA is designed to handle high-performance image processing by capitalizing on the robust capabilities of Intel's Agilex FPGAs. These FPGAs, leveraging the 10 nm SuperFin process technology, are ideal for applications demanding high performance, power efficiency, and compact sizes. Featuring advanced DSP blocks and high-speed transceivers, this IP thrives in accelerating image processing tasks that are typically computationally intensive when executed on CPUs. One of the main advantages is its ability to significantly enhance image processing throughput, achieving up to 20 times the speed while maintaining reduced latency. This performance prowess is coupled with low power consumption, leading to decreased operational and maintenance costs due to fewer required server instances. Additionally, the solution is fully compatible with mainstream image processing software, facilitating seamless integration and leveraging existing software investments. The adaptability of the FPGA allows for remote reconfiguration, ensuring that the IP can be tailored to specific image processing scenarios without necessitating a server reboot. This ease of maintenance, combined with a substantial boost in compute density, underscores the IP's suitability for high-demand image processing environments, such as those encountered in data centers and cloud computing platforms.

CTAccel Ltd.
Intel Foundry
12nm
AI Processor, DLL, Graphics & Video Modules, Image Conversion, JPEG, JPEG 2000, Processor Core Independent, Vision Processor
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

NeuroMosAIc Studio

NeuroMosAIc Studio is a comprehensive software platform designed to maximize AI processor utilization through intuitive model conversion, mapping, simulation, and profiling. This advanced software suite supports Edge AI models by optimizing them for specific application needs. It offers precision analysis, network compression, and quantization tools to streamline the process of deploying AI models across diverse hardware setups. The platform is notably adept at integrating multiple AI functions and facilitating edge training processes. With tools like the NMP Compiler and Simulator, it allows developers to optimize functions at different stages, from quantization to training. The Studio's versatility is crucial for developers seeking to enhance AI solutions through customized model adjustments and optimization, ensuring high performance across AI systems. NeuroMosAIc Studio is particularly valuable for its edge training support and comprehensive optimization capabilities, paving the way for efficient AI deployment in various sectors. It offers a robust toolkit for AI model developers aiming to extract the maximum performance from hardware in dynamic environments.

AiM Future
AI Processor, CPU, IoT Processor
View Details

Veyron V1 CPU

The Veyron V1 is a high-performance RISC-V CPU aimed at data centers and similar applications that require robust computing power. It integrates with various chiplet and IP cores, making it a versatile choice for companies looking to create customized solutions. The Veyron V1 is designed to offer competitive performance against x86 and ARM counterparts, providing a seamless transition between different node process technologies. This CPU benefits from Ventana's innovation in RISC-V technology, where efforts are placed on providing an extensible architecture that facilitates domain-specific acceleration. With capabilities stretching from hyperscale computing to edge applications, the Veyron V1 supports extensive instruction sets for high-throughput operations. It also boasts leading-edge chiplet interfaces, opening up numerous opportunities for rapid productization and cost-effective deployment. Ventana's emphasis on open standards ensures that the Veyron V1 remains an adaptable choice for businesses aiming at bespoke solutions. Its compatibility with system IP and its provision in multiple platform formats—including chiplets—enable businesses to leverage the latest technological advancements in RISC-V. Additionally, the ecosystem surrounding the Veyron series ensures support for both modern software frameworks and cross-platform integration.

Ventana Micro Systems
TSMC
10nm, 16nm
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series represents a family of processor cores that bring advanced customization to the forefront of embedded designs. These cores are optimized for power and performance, striking a fine balance that suits an array of applications, from sensor controllers in IoT devices to sophisticated automotive systems. Their modular design allows developers to tailor instructions and performance levels directly to their needs, providing a flexible platform that enhances both existing and new applications. Featuring high degrees of configurability, the BK Core Series facilitates designers in achieving superior performance and efficiency. By supporting a broad spectrum of operating requirements, including low-power and high-performance scenarios, these cores stand out in the processor IP marketplace. The series is verified through industry-leading practices, ensuring robust and reliable operation in various application environments. Codasip has made it straightforward to use and adapt the BK Core Series, with an emphasis on simplicity and productivity in customizing processor architecture. This ease of use allows for swift validation and deployment, enabling quicker time to market and reducing costs associated with custom hardware design.

Codasip
AI Processor, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

KL720 AI SoC

The KL720 is engineered for high efficiency, achieving up to 0.9 TOPS per Watt, setting it apart in the edge AI marketplace. Designed for real-world scenarios where power efficiency is paramount, this chip supports high-end IP cameras, smart TVs, and AI-enabled devices like glasses and headsets. Its ARM Cortex M4 CPU facilitates the processing of complex tasks like 4K image handling, full HD video, and 3D sensing, making it versatile for applications that include gaming and AI-assisted interactions.
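
The 0.9 TOPS/W figure translates directly into a per-frame compute budget. A quick illustration, where the power budget and frame rate are assumed values chosen purely for the example:

```python
tops_per_watt = 0.9            # efficiency figure quoted for the KL720
power_w = 2.0                  # assumed power budget (illustrative)
fps = 30                       # assumed camera frame rate (illustrative)

ops_per_second = tops_per_watt * power_w * 1e12   # operations per second
ops_per_frame = ops_per_second / fps              # budget per video frame
print(f"{ops_per_frame/1e9:.0f} GOPs available per frame at {fps} fps")
```

This kind of arithmetic is how TOPS/W ratings are typically mapped onto a target workload: fix the thermal budget, divide by the frame rate, and check that the model's per-inference operation count fits.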

Kneron
TSMC
28nm
2D / 3D, AI Processor, Audio Interfaces, AV1, Camera Interface, CPU, GPU, Image Conversion, TICO, Vision Processor
View Details

RAIV General Purpose GPU

The RAIV General Purpose GPU (GPGPU) epitomizes versatility and cutting-edge technology in the realm of data processing and graphics acceleration. It serves as a crucial technology enabler for various prominent sectors that are central to the fourth industrial revolution, such as autonomous driving, IoT, virtual reality/augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries are able to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Characterized by its advanced architectural design, the RAIV GPU excels in managing substantial computational loads, which is essential for AI-driven processes and complex data analytics. Its adaptability makes it suitable for a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU assists in realizing smoother and more responsive application workflows. The strategic design of the RAIV GPGPU focuses on enabling integrative solutions that enhance performance without compromising on power efficiency. Its functionality is built to meet the high demands of today’s tech ecosystems, fostering advancements in computational efficiency and intelligent processing capabilities. As such, the RAIV stands out not only as a tool for improved graphical experiences but also as a significant component in driving innovation within tech-centric industries worldwide. Its pioneering architecture thus supports a multitude of applications, ensuring it remains a versatile and indispensable asset in diverse technological landscapes.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

CodaCache Last-Level Cache

CodaCache optimizes system-on-chip (SoC) performance by reducing memory latency with its highly configurable shared cache. By enhancing data flow and improving power efficiency, CodaCache provides a notable advantage in handling the key challenges of SoC design such as performance, data access, and layout congestion. The product is engineered to support seamless integration and maximize the design's efficiency and throughput.
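
The latency benefit of a last-level cache can be seen in a toy software model. The following LRU simulation is purely illustrative (the latency figures, capacities, and replacement policy are assumptions for the example, not CodaCache's actual design); it shows how average access latency collapses when the working set fits in the cache and degrades when it does not:

```python
from collections import OrderedDict

def avg_latency(addresses, capacity, hit_ns=5, miss_ns=100):
    """Average access latency for an LRU cache holding `capacity` lines."""
    cache, total = OrderedDict(), 0
    for addr in addresses:
        if addr in cache:
            cache.move_to_end(addr)     # mark as most recently used
            total += hit_ns
        else:
            total += miss_ns
            cache[addr] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return total / len(addresses)

trace = [a % 64 for a in range(10_000)]      # cyclic 64-line working set
print(avg_latency(trace, capacity=64))       # working set fits: mostly hits
print(avg_latency(trace, capacity=32))       # too small: LRU thrashes
```

The second call is the pathological case: cyclic sequential access over a working set larger than the cache makes LRU evict each line just before it is needed again, so every access misses.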

Arteris
AI Processor, AMBA AHB / APB/ AXI, Embedded Memories, Flash Controller, I/O Library, NAND Flash, ONFI Controller, SDRAM Controller, SRAM Controller, Standard cell, WMV
View Details

H.264 FPGA Encoder and CODEC Micro Footprint Cores

The H.264 FPGA Encoder and CODEC Micro Footprint Cores is an ITAR-compliant, licensable core aimed at FPGAs and offers 1080p60 H.264 Baseline support. Renowned for being the industry's smallest and fastest FPGA core, it boasts minimal latency of just 1ms at 1080p30. This core can be customized for various pixel depths and resolutions, making it versatile for a wide range of applications. Besides encoding, it also handles H.264 codecs and I-frame-only encoding efficiently. The design targets high compatibility with numerous FPGA families, demonstrating robust adaptability and high performance in embedded systems. Its minimalistic footprint does not compromise on speed or quality, and users are afforded the flexibility to adapt the core to their project requirements seamlessly. Through its customization features, the H.264 FPGA Encoder and CODEC is perfect for applications that demand high resolution and low delay processing in video compression, offering scalable solutions to enhance system compatibility. An evaluation license is offered for interested parties, encouraging exploration and integration in diverse project settings.

A2e Technologies
All Foundries
1000nm
AI Processor, AMBA AHB / APB/ AXI, Arbiter, Audio Controller, H.264, H.265, Multiprocessor / DSP, Other, TICO, USB, Wireless Processor
View Details

Trion FPGAs - Edge and IoT Solution

Trion FPGAs by Efinix cater specifically to the edge computing and Internet of Things (IoT) market. These versatile FPGAs are designed with a flexible architecture that serves a wide range of logic applications, making them an excellent choice for general-purpose computing needs. With support for applications demanding both high performance and low power, Trion FPGAs provide an ideal environment for innovative designs that push the boundaries of IoT technology. Designed to integrate seamlessly within existing systems, they're ideal for applications needing agile solutions capable of processing at the network's edge, bringing new capabilities to IoT deployments and smart technology installations. From 4K to 120K logic elements, Trion offers scalable solutions to cater to varied project needs, empowering designers to tailor solutions specifically for their unique requirements.

Efinix, Inc.
HHGrace, Intel Foundry, Tower
55nm, 65nm, 90nm
2D / 3D, 3GPP-5G, AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, Embedded Memories, Multi-Protocol PHY, Processor Core Independent, RLDRAM Controller, SDRAM Controller, Sensor, USB
View Details

ISPido

ISPido represents a fully configurable RTL Image Signal Processing Pipeline, adhering to the AMBA AXI4 standards and tailored through the AXI4-LITE protocol for seamless integration with systems such as RISC-V. This advanced pipeline supports a variety of image processing functions like defective pixel correction, color filter interpolation using the Malvar-He-Cutler algorithm, and auto-white balance, among others. Designed to handle resolutions up to 7680x7680, ISPido provides compatibility for both 4K and 8K video systems, with support for 8, 10, or 12-bit depth inputs. Each module within this pipeline can be fine-tuned to fit specific requirements, making it a versatile choice for adapting to various imaging needs. The architecture's compatibility with flexible standards ensures robust performance and adaptability in diverse applications, from consumer electronics to professional-grade imaging solutions. Through its compact design, ISPido optimizes area and energy efficiency, providing high-quality image processing while keeping hardware demands low. This makes it suitable for battery-operated devices where power efficiency is crucial, without sacrificing the processing power needed for high-resolution outputs.
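
Auto-white balance is one of the pipeline stages listed above. A minimal gray-world sketch in software conveys the idea; this is the textbook algorithm, not ISPido's RTL, and the synthetic image is invented for the example:

```python
import numpy as np

def gray_world_awb(img):
    """Scale each channel so all channel means match the green mean,
    per the gray-world assumption that a scene averages to neutral gray."""
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel means
    gains = means[1] / means                          # green as reference
    balanced = np.clip(img * gains, 0, 255)
    return balanced.astype(np.uint8)

# Synthetic image with a warm (reddish) color cast
rng = np.random.default_rng(1)
img = rng.uniform(60, 200, (8, 8, 3))
img[..., 0] *= 1.3                                    # exaggerate red channel
out = gray_world_awb(img)
print(out.reshape(-1, 3).mean(axis=0))                # means now roughly equal
```

A hardware ISP stage does the same correction with fixed-point gains computed from windowed statistics rather than a full-frame mean, but the principle is identical.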

DPControl
21 Categories
View Details

RISC-V Core IP

AheadComputing's RISC-V Core IP delivers unmatched performance designed for intensive computing applications. This core utilizes a powerful 64-bit architecture, embodying the latest advancements in RISC-V processing technology to ensure exceptional operational efficiency. The core's design facilitates a significant boost in per-core performance, which is ideal for applications that require high computational power and reliability. With an inherent ability to support a variety of computing environments, this RISC-V Core IP is perfect for developers looking to enhance application processors' processing power across diverse industries. Crafted with scalability in mind, it offers developers the flexibility needed to adapt to evolving technological demands. By leveraging advanced RISC-V capabilities, this core ensures a future-proof solution that seamlessly fits into existing and emerging product lines.

AheadComputing Inc.
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

Spiking Neural Processor T1 - Ultra-Low-Power Microcontroller for Sensing

The Spiking Neural Processor T1 from Innatera is a groundbreaking microcontroller optimized for ultra-low power always-on sensing applications. Integrating a RISC-V core with an SNN-based processing engine, the T1 operates at sub-milliwatt power levels, enabling advanced signal processing and AI capabilities close to the sensor. This microcontroller effectively offloads sensor data processing, allowing for rapid pattern recognition and efficient power usage in latency-sensitive and power-constrained devices. Key among its features is a nimble 32-bit RISC-V core, supported by 384 KB of embedded SRAM. The innovative spiking neural network engine allows for the real-time inference of complex patterns, mirroring brain-like behavior in processing tasks while keeping power dissipation minimal. Its capabilities facilitate applications such as activity recognition in wearables and high-accuracy signal processing in acoustic sensors. The T1 is packaged in a compact WLCSP form factor and supports diverse interfaces including QSPI, I2C, UART, JTAG, and GPIO, making it adaptable to various sensor configurations. Additionally, developers can leverage the T1 Evaluation Kit and Talamo SDK, which provide a robust platform for developing and optimizing applications harnessing the T1’s unique processing strengths.
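
The neurons in a spiking engine behave broadly like leaky integrate-and-fire units. A minimal software sketch (illustrative only, not Innatera's actual neuron model; the leak and threshold constants are arbitrary) shows how such a neuron encodes input intensity as spike rate rather than as sampled values:

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: the membrane potential decays each step,
    integrates the input current, and emits a spike (then resets) when it
    crosses the threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0                     # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant weak drive produces periodic firing: stronger input would
# fire more often, which is the rate coding exploited by SNN hardware
print(lif_neuron([0.3] * 12))
```

This event-driven style is also why SNN hardware can run at sub-milliwatt levels: between spikes there is essentially nothing to compute.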

Innatera Nanosystems
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details
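The "brain-like, event-driven" processing behind spiking neural networks can be illustrated with the classic leaky integrate-and-fire (LIF) neuron model: a membrane potential leaks toward rest, integrates input, and emits a discrete spike only when it crosses a threshold. The sketch below is purely illustrative of the general SNN concept — it is not Innatera's T1 implementation, and all parameter values are assumptions chosen for demonstration.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch, illustrating the
# event-driven computation that SNN hardware exploits for low power.
# Illustrative only; parameters (threshold, leak) are assumed values.

def lif_neuron(input_current, threshold=1.0, leak=0.9, v_rest=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    The membrane potential decays toward rest (the "leak"), accumulates
    each input sample, and emits a spike + resets when it crosses the
    threshold. Returns the list of time steps at which spikes occurred.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in          # leaky integration of the input
        if v >= threshold:           # threshold crossing -> spike event
            spikes.append(t)
            v = v_rest               # reset after spiking
    return spikes

# A brief stimulus produces sparse spike events; quiet stretches between
# events cost almost nothing to process, which is the key to SNN efficiency.
print(lif_neuron([0.0, 0.6, 0.6, 0.0, 0.0, 0.6, 0.6, 0.6]))  # → [2, 6]
```

Because computation happens only at spike events rather than on every clock cycle, this style of processing maps naturally onto always-on sensing workloads where the signal is quiet most of the time.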

RISC-V SoC - Quad-Core Server Class

Dyumnin's RISC-V SoC is a versatile platform built around a 64-bit quad-core, server-class RISC-V CPU with extensive subsystems covering AI/ML, automotive, multimedia, memory, cryptography, and communications. The test chip is available in FPGA form, allowing broad evaluation and adaptation.

The AI/ML subsystem is particularly noteworthy: it pairs a custom CPU configuration with a tensor flow unit that significantly accelerates AI operations, setting the platform apart in a competitive processor landscape. The automotive subsystem addresses in-vehicle connectivity with CAN, CAN-FD, and SafeSPI IPs, while the multimedia subsystem offers a complete range of IPs for HDMI, DisplayPort, MIPI, and more, enabling rich audio and visual experiences across devices.

Dyumnin Semiconductors
26 Categories
View Details

RWM6050 Baseband Modem

The RWM6050 Baseband Modem is designed for high-efficiency wireless communication in dense data-transmission environments. A fundamental building block of Blu Wireless's product portfolio, it integrates into a variety of network architectures, optimizing data flow and connectivity within mmWave deployments.

The design targets high-speed data processing and signal integrity. It supports multiple communication standards for compatibility across diverse operational settings, and its architecture is built to handle substantial data payloads, providing reliable, high-bandwidth communication in sectors ranging from telecommunications to IoT.

The RWM6050 simplifies the deployment of communication networks and improves performance in crowded signal environments. Its robust design accommodates the challenges of demanding applications while anticipating future advances in wireless communication, delivering a scalable and efficient solution for the industry's evolving requirements.

Blu Wireless Technology Ltd.
3GPP-5G, 3GPP-LTE, 802.11, 802.16 / WiMAX, AI Processor, AMBA AHB / APB/ AXI, CPRI, Ethernet, HBM, Multi-Protocol PHY, Optical/Telecom, Receiver/Transmitter, UWB, W-CDMA, Wireless Processor
View Details