
Vision Processor Semiconductor IPs

Vision processors are a specialized subset of semiconductor IPs designed to efficiently handle and process visual data. These processors are pivotal in applications that require intensive image analysis and computer vision capabilities, such as artificial intelligence, augmented reality, virtual reality, and autonomous systems. The primary purpose of vision processor IPs is to accelerate the performance of vision processing tasks while minimizing power consumption and maximizing throughput.

In the world of semiconductor IP, vision processors stand out due to their ability to integrate advanced functionalities such as object recognition, image stabilization, and real-time analytics. These processors often leverage parallel processing, machine learning algorithms, and specialized hardware accelerators to perform complex visual computations efficiently. As a result, products ranging from high-end smartphones to advanced driver-assistance systems (ADAS) and industrial robots benefit from improved visual understanding and processing capabilities.

The semiconductor IPs for vision processors can be found in a wide array of products. In consumer electronics, they enhance the capabilities of cameras, enabling features like face and gesture recognition. In the automotive industry, vision processors are crucial for delivering real-time data processing needed for safety systems and autonomous navigation. Additionally, in sectors such as healthcare and manufacturing, vision processor IPs facilitate advanced imaging and diagnostic tools, improving both precision and efficiency.

As technology advances, the demand for vision processor IPs continues to grow. Developers and designers seek IPs that offer scalable architectures and can be customized to meet specific application requirements. By providing enhanced performance and reducing development time, vision processor semiconductor IPs are integral to pushing the boundaries of what's possible with visual data processing and expanding the capabilities of next-generation products.

90 IPs available

Akida 2nd Generation

Building on the principles of its predecessor, the Akida 2nd Generation IP further enhances AI processing at the edge by integrating additional capabilities tailored for spatio-temporal and temporal event-based neural networks. This second iteration doubles down on programmability and includes expanded activation functions and enhanced Skip Connections, offering significant flexibility for complex applications involving dynamic data streams. A key feature of the Akida 2nd Generation is its innovative approach to sparsity, optimizing the AI model's overall efficiency. The scalable fabric of nodes in this version can adeptly handle various weights and activation bit depths, adapting the computational requirements to suit the application needs effectively. This capability ensures that the Akida 2nd Generation can manage sophisticated algorithms with a heightened level of precision and power efficiency. Furthermore, this IP iteration embraces fully digital neuromorphic implementations, allowing for predictable, cost-effective design and deployment. It minimizes the computational demands and bandwidth consumption of traditional AI models by focusing compute power precisely where needed, ensuring a seamless experience with lower latency and enhanced processing accuracy. Its flexibility in configuration and scalability at the post-silicon stage makes it an essential tool for future-ready AI applications, particularly those that require real-time interaction and decision-making capabilities.

BrainChip
AI Processor, CPU, Digital Video Broadcast, GPU, Input/Output Controller, IoT Processor, Multiprocessor / DSP, Network on Chip, Security Protocol Accelerators, Vision Processor
View Details

KL730 AI SoC

The KL730 is a sophisticated AI System on Chip (SoC) that embodies Kneron's third-generation reconfigurable NPU architecture. This SoC delivers a substantial 8 TOPS of computing power, designed to efficiently handle CNN network architectures and transformer applications. Its innovative NPU architecture significantly optimizes DDR bandwidth, providing powerful video processing capabilities, including supporting 4K resolution at 60 FPS. Furthermore, the KL730 demonstrates formidable performance in noise reduction and low-light imaging, positioning it as a versatile solution for intelligent security, video conferencing, and autonomous applications.
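
As a rough illustration of what that compute budget means for the 4K video path described above, the back-of-the-envelope sketch below relates the listed 8 TOPS to the incoming pixel rate. It assumes a 3840×2160 stream, peak utilization, and dense INT8 operations; the per-pixel figure is an estimate under those assumptions, not a vendor specification.

```python
# Back-of-the-envelope budget: operations available per pixel of a 4K stream at 60 FPS,
# given the listed 8 TOPS peak (assumes full utilization, dense INT8 ops).
pixels_per_second = 3840 * 2160 * 60               # ~497.7 million pixels/s
peak_ops_per_second = 8e12                         # 8 TOPS, per the listing
print(f"~{peak_ops_per_second / pixels_per_second:,.0f} ops available per pixel")  # ~16,000
```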

Kneron
TSMC
28nm
2D / 3D, A/D Converter, AI Processor, Amplifier, Audio Interfaces, Camera Interface, Clock Generator, CPU, CSC, GPU, Image Conversion, JPEG, USB, VGA, Vision Processor
View Details

Metis AIPU PCIe AI Accelerator Card

Axelera AI's Metis AIPU PCIe AI Accelerator Card is designed to tackle demanding vision applications with its powerful processing capabilities. The card embeds a single Metis AIPU which can deliver up to 214 TOPS, providing the necessary throughput for concurrent processing of high-definition video streams and complex AI inference tasks. This PCIe card is supported by the Voyager SDK, which enhances the user experience by allowing easy integration into existing systems for efficient deployment of AI inference networks. It suits developers and integrators looking for an upgrade to existing infrastructure without extensive modifications, optimizing performance and accelerating AI model deployment. The card’s design prioritizes performance and efficiency, making it suitable for diverse applications across industries like security, transportation, and smart city environments. Its capacity to deliver high frames per second on popular AI models ensures it meets modern digital processing demands with reliability and precision.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, Building Blocks, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

xcore.ai

The xcore.ai platform is designed to power the intelligent Internet of Things (IoT) by combining flexibility and performance efficiency. With its distinctive multi-threaded micro-architecture, it allows for low-latency and predictable performance, crucial for IoT applications. Each xcore.ai device is equipped with 16 logical cores distributed over two tiles, each with integrated 512kB SRAM and a vector unit capable of handling both integer and floating-point operations. Communication between processors is facilitated by a robust interprocessor communication infrastructure, enabling scalability for systems requiring multiple xcore.ai SoCs. This platform supports a multitude of applications by integrating DSP, AI, and I/O processing within a cohesive development environment. For audio and voice processing needs, it offers adaptable, software-defined I/O that aligns with specific application requirements, ensuring efficient and targeted performance. The xcore.ai is also equipped for AI and machine learning tasks with a 256-bit VPU that supports 32-bit, 16-bit, and 8-bit vector operations, offering peak AI performance. The inclusion of a comprehensive development kit allows developers to explore its capabilities through ready-made solutions or custom-built applications.

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module from Axelera AI is engineered for applications requiring edge AI computing power in a compact form factor. Leveraging the quad-core Metis AIPU, this module provides efficient AI processing capabilities tailored for real-time analysis and data-intensive tasks in areas like computer vision. Designed to fit into standard NGFF (Next Generation Form Factor) M.2 sockets, it supports a wide range of AI models with dedicated 1GB DRAM memory for optimized performance. This module is especially suitable for systems needing enhanced image and video processing capabilities while maintaining minimal power consumption. The Metis AIPU M.2 Accelerator Module enhances computing architectures by enabling seamless integration of AI for a multitude of industrial and commercial applications. Its efficient design makes it ideal for environments where space is limited, but computational demand is high, ensuring that solutions are both powerful and cost-effective.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

Yitian 710 Processor

The Yitian 710 Processor is T-Head's flagship ARM-based server chip that represents the pinnacle of their technological expertise. Designed with a pioneering architecture, it is crafted for high efficiency and superior performance metrics. This processor is built using a 2.5D packaging method, integrating two dies and boasting a substantial 60 billion transistors. The core of the Yitian 710 consists of 128 high-performance Armv9 CPU cores, each accompanied by advanced memory configurations that streamline instruction and data caching processes. Each CPU integrates 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 cache, supplemented by a robust 128MB system-level cache on the chip. To support expansive data operations, the processor is equipped with an 8-channel DDR5 memory system, enabling peak memory bandwidth of up to 281GB/s. Its I/O subsystem is formidable, featuring 96 PCIe 5.0 channels capable of achieving dual-direction bandwidth up to 768GB/s. With its multi-layered design, the Yitian 710 Processor is positioned as a leading solution for cloud services, data analytics, and AI operations.
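
The quoted 281 GB/s peak memory bandwidth is consistent with straightforward channel arithmetic. The sketch below assumes DDR5-4400 and 64-bit channels; neither figure is stated in the listing, so treat it as an illustrative check rather than vendor data.

```python
# Peak DDR5 bandwidth check (assumptions: DDR5-4400 speed grade, 64-bit / 8-byte channels).
channels = 8
transfers_per_second = 4400e6        # MT/s, assumed
bytes_per_transfer = 8               # 64-bit channel width
peak_bw_gbps = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"~{peak_bw_gbps:.0f} GB/s")   # ~282 GB/s, consistent with the quoted 281 GB/s
```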

T-Head Semiconductor
AI Processor, AMBA AHB / APB / AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

AI Camera Module

The AI Camera Module by Altek stands out for its tight integration of imaging lens design and software capabilities. Altek develops it in collaboration with major global brands, producing AI cameras that satisfy a range of client demands. The module supports differentiated AI+IoT solutions and meets high-resolution standards such as 2K and 4K for advanced edge computing applications. Its synergy between hardware and software makes it versatile in addressing dynamic customer requirements across varied environments.

Altek Corporation
AI Processor, Audio Interfaces, Image Conversion, IoT Processor, JPEG, Receiver/Transmitter, SATA, Vision Processor
View Details

Jotunn8 AI Accelerator

Jotunn8 represents VSORA's pioneering leap into the world of AI Inference technology, aimed at data centers that require high-speed, cost-efficient, and scalable systems. The Jotunn8 chip is engineered to deliver trained models with unparalleled speed, minimizing latency and optimizing power usage, thereby guaranteeing that high-demand applications such as recommendation systems or large language model APIs operate at optimal efficiency. The Jotunn8 is celebrated for its near-theoretical performance, specifically designed to meet the demands of real-time services like chatbots and fraud detection. With a focus on reducing costs per inference – a critical factor for operating at massive scale – the chip ensures business viability through its power-efficient architecture, which significantly trims operational expenses and reduces carbon footprints. Innovative in its approach, the Jotunn8 supports complex AI computing needs by integrating various AI models seamlessly. It provides the foundation for scalable AI, ensuring that infrastructure can keep pace with growing consumer and business demands, and represents a robust solution that prepares businesses for the future of AI-driven applications.

VSORA
AI Processor, CPU, DSP Core, Interleaver/Deinterleaver, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Akida IP

The Akida IP is a revolutionary neural processor platform that brings real-time AI processing capabilities to the edge. Inspired by the brain's cognitive functions, Akida IP employs neuromorphic principles to deliver low-power AI solutions specifically crafted for applications like vision, audio, and sensor fusion. It features a scalable architecture composed of up to 128 neural nodes, each supporting an efficient allocation of MACs (multiply-accumulate operations) and configurable SRAM for enhanced processing capacity. This IP is designed to operate independently, integrating seamlessly with any existing microcontroller or application processor. The emphasis on event-based hardware acceleration allows it to minimize computational and communicational loads, significantly reducing the need for host CPU intervention. Additionally, Akida IP supports on-chip learning, including one-shot and few-shot learning capabilities, which limits the transmission of sensitive data, hence bolstering security and privacy measures. With a silicon-proven design that prioritizes cost-effectiveness and predictability, BrainChip’s Akida IP enables fully digital neuromorphic implementations. It leverages flexible configurations that can be adjusted post-silicon, ensuring adaptability in deployment. The support for multiple layers and varied bit weights and activations facilitates the development of sophisticated neural network models, accommodating complex AI solutions with increased scalability and configurability.

BrainChip
AI Processor, Coprocessor, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Vision Processor
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

Chimera GPNPU

The Chimera GPNPU from Quadric stands as a versatile processing unit designed to accelerate machine learning models across a wide range of applications. Uniquely integrating the strengths of neural processing units and digital signal processors, the Chimera GPNPU simplifies heterogeneous workloads by running traditional C++ code and complex AI networks such as large language models and vision transformers in a unified processor architecture. This scalability, tailored from 1 to 864 TOPS, allows it to meet the diverse requirements of markets, including automotive and network edge computing.

A key feature of the Chimera GPNPU is its ability to handle matrix and vector operations alongside scalar control code within a single pipeline. Its fully software-driven nature enables developers to fine-tune model performance over the processor's lifecycle, adapting to evolving AI techniques without needing hardware updates. The system's design minimizes off-chip memory access, thereby enhancing efficiency through its L2 memory management and compiler-driven optimizations.

Moreover, the Chimera GPNPU provides an extensive instruction set, finely tuned for AI inference tasks with intelligent memory management, reducing power consumption and maximizing processing efficiency. Its ability to maintain high performance with deterministic execution across various processes underlines its standing as a leading choice for AI-focused chip design.

Quadric
15 Categories
View Details

KL630 AI SoC

The KL630 chip stands out with its pioneering NPU architecture, making it the industry's first to support Int4 precision alongside transformer networks. This unique capability enables it to achieve exceptional computational efficiency and low energy consumption, suitable for a wide variety of applications. The chip incorporates an ARM Cortex A5 CPU, providing robust support for all major AI frameworks and delivering superior ISP capabilities for handling low light conditions and HDR applications, making it ideal for security, automotive, and smart city uses.

Kneron
TSMC
28nm
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, USB, VGA, Vision Processor
View Details

KL520 AI SoC

The KL520 was Kneron's first foray into AI SoCs, characterized by its small size and energy efficiency. This chip integrates a dual ARM Cortex M4 CPU architecture, which can function both as a host processor and as a supportive AI co-processor for diverse edge devices. Ideal for smart devices such as door locks and cameras, it is compatible with various 3D sensor technologies, offering a balance of compact design and high performance. As a result, this SoC has been adopted by multiple products in the smart home and security sectors.

Kneron
TSMC
28nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Receiver/Transmitter, Vision Processor
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator by EdgeCortix provides a cutting-edge solution for efficient AI processing at the edge. Engineered for optimum energy efficiency, it supports real-time Batch=1 AI inferencing and manages extensive parameter models effectively, making it ideal for complex Generative AI applications. The core of SAKURA-II, the Dynamic Neural Accelerator (DNA), is reconfigurable at runtime, which allows for simultaneous execution of multiple neural network models while maintaining high performance metrics. With its advanced neural architecture, SAKURA-II meets the challenging requirements of edge AI applications like image, text, and audio processing. This AI accelerator is distinguished by its ability to support large AI models within a low power envelope, typically operating at around 8 watts, and accommodates demanding models such as Llama 2 and Stable Diffusion. SAKURA-II modules are crafted for speedy integration into various systems, offering up to 60 TOPS of performance for INT8 operations. Additionally, its robust design allows handling of high-bandwidth memory scenarios, delivering up to 68 GB/sec of DRAM bandwidth, ensuring superior performance for large language models (LLMs) and vision applications across multiple industries. As a key component of EdgeCortix's edge AI solution platform, the SAKURA-II excels not only in computational efficiency but also in adaptability across various hardware systems like Raspberry Pi. The accelerator system includes options for both small form factor modules and PCIe cards, granting flexibility for different application needs and allowing easy deployment in space-constrained or resource-sensitive environments, thus maximizing the utility of existing infrastructures for AI tasks.
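
Dividing the two headline figures above gives the implied efficiency; this is simple arithmetic on the listed numbers, treating them as the same operating point, and is not an EdgeCortix benchmark.

```python
# Implied efficiency from the listed figures (assumes both numbers describe one operating point).
peak_int8_tops = 60        # up to 60 TOPS INT8, per the listing
typical_power_w = 8        # "around 8 watts", per the listing
print(f"~{peak_int8_tops / typical_power_w:.1f} TOPS/W")   # ~7.5 TOPS/W
```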

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Talamo SDK

Talamo SDK is a comprehensive software development toolkit designed to expedite the building and deployment of neuromorphic AI applications. The SDK facilitates the development of spiking neural networks by extending the familiar PyTorch environment, allowing developers to easily integrate AI models into Innatera’s Spiking Neural Processor platform. Talamo simplifies AI development by providing a standard PyTorch framework while incorporating additional tools necessary for crafting and training spiking neural networks. It features compilation and mapping capabilities that assign trained models to the heterogenous computing resources available within the processor, ensuring efficient operation and deployment. The SDK includes an architecture simulator to streamline development, offering fast iteration cycles without requiring a deep understanding of spiking neural networks. This integration drastically reduces the entry barrier for developers, enabling more innovators to create and optimize powerful AI applications for edge devices. By offering pre-existing models and customizable solutions, Talamo SDK gives developers the flexibility to handle complex processes and tailor applications to their specific needs, increasing the overall efficiency and capability of Innatera’s AI-driven products.
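
Since the SDK is described as extending standard PyTorch, the sketch below shows the kind of plain PyTorch model definition that workflow starts from. Everything above the final comments is ordinary PyTorch; the `talamo` import and `compile` call are hypothetical placeholders, not Innatera's actual API.

```python
# A model is defined with ordinary torch.nn building blocks; the SDK's job is then to
# compile and map it onto the Spiking Neural Processor. The talamo lines are hypothetical.
import torch
import torch.nn as nn

class AlwaysOnClassifier(nn.Module):
    """Tiny network of the sort one might train for always-on sensor classification."""
    def __init__(self, n_features: int = 16, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                 nn.Linear(32, n_classes))

    def forward(self, x):
        return self.net(x)

model = AlwaysOnClassifier()
print(model(torch.randn(1, 16)).shape)          # torch.Size([1, 4]) -- standard PyTorch so far

# import talamo                                   # hypothetical module name
# mapped = talamo.compile(model, target="t1")     # hypothetical: map onto SNP resources
```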

Innatera Nanosystems
AI Processor, Vision Processor
View Details

Polar ID Biometric Security System

Polar ID is revolutionizing biometric security by using meta-optic technology to read the unique polarization signature of human faces. This innovative approach significantly improves security, effectively differentiating between real and fake faces primarily through its precise polarization detection capabilities. The system operates efficiently in all lighting conditions thanks to its near-infrared illumination at 940nm, making it versatile enough for both indoor and outdoor settings. It's designed to be compact, suitable even for smartphones with limited space, and significantly more cost-effective compared to conventional structured light solutions. Polar ID not only enhances security by preventing unauthorized access through spoofing with masks or photos, but it also elevates user convenience through its seamless integration into mobile devices. The absence of bulky notch requirements further underscores its design excellence. Its technological makeup stems from Metalenz's proprietary meta-optics, which allows it to fuse advanced functionality into a single compact system. Additionally, Polar ID eliminates the need for additional optical modules, integrating itself as a single image-based recognition and authentication solution. By adopting a complete system approach, Polar ID is set to redefine digital security across a vast array of consumer electronics, including smartphones and IoT devices. This meta-optic advancement is also projected to enhance future applications, likely extending into secure digital transactions and possibly medical diagnostics, broadening the horizons for secure biometric technology in personal and professional spheres.

Metalenz Inc.
TSMC
28nm
13 Categories
View Details

3D Imaging Chip

Altek's 3D Imaging Chip offers cutting-edge depth sensing technology, suitable for applications requiring medium- to long-range recognition. This technology enhances precision in surveillance systems and transport robots. Drawing on years of experience in 3D sensing, Altek provides comprehensive, integrated module-to-chip solutions that ensure high accuracy. The chip's advanced capabilities allow for robust identification performance across diverse environments, reinforcing Altek's reputation in the imaging sector.

Altek Corporation
A/D Converter, Coprocessor, Graphics & Video Modules, Oversampling Modulator, Photonics, Sensor, Vision Processor
View Details

2D FFT

Dillon Engineering's 2D FFT core is specifically engineered for two-dimensional digital signal processing, offering critical enhancements in data throughput and resource efficiency. This core is crafted to process two-dimensional data inputs efficiently, making it a perfect fit for image processing and other applications requiring extensive multidimensional data manipulation. The design capitalizes on medium speed and resource usage, using internal or external memory between FFT engines to optimize the data-flow pipeline. This configuration allows the core to handle complex data structures with precision, crucial for industries relying on heavy data processing, such as image analysis and computational graphics. Backed by Dillon’s innovative ParaCore Architect™ technology, the 2D FFT IP ensures adaptable and precise implementation in various FPGA or ASIC contexts. It offers users the flexibility to efficiently address complex data processing challenges, cementing Dillon Engineering's reputation as a leader in advanced signal processing solutions.
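
The row-column decomposition described above, with 1D FFT passes and an intermediate buffer between engines, can be checked against a software reference. The sketch below is a NumPy illustration of that method, not Dillon's core.

```python
# Row-column 2D FFT: 1D FFTs along rows, an intermediate buffer (in hardware, internal or
# external memory between FFT engines), then 1D FFTs along columns.
import numpy as np

def fft2_row_column(x: np.ndarray) -> np.ndarray:
    rows_done = np.fft.fft(x, axis=1)       # first pass: per-row FFTs
    return np.fft.fft(rows_done, axis=0)    # second pass: per-column FFTs

img = np.random.rand(256, 256)
assert np.allclose(fft2_row_column(img), np.fft.fft2(img))   # matches the direct 2D FFT
```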

Dillon Engineering, Inc.
2D / 3D, GPU, Image Conversion, Multiprocessor / DSP, Network on Chip, PLL, Processor Core Independent, Vision Processor, Wireless Processor
View Details

aiWare

aiWare represents a high-performance neural processing solution aimed at driving efficiency in AI-powered automotive applications. At its core, aiWare is designed to deliver robust inference capabilities necessary for complex neural network operations within the automotive domain. This IP features scalable performance fitting a broad spectrum of use-cases, from sensor-edge processors to high-performance centralized platforms, spanning L2 to L4 automated driving applications. The aiWare NPU offers unrivaled efficiency and deterministic flexibility, having achieved ISO 26262 ASIL B certification, which underscores its safety and reliability for automotive environments. It supports a multitude of advanced neural architectures, including CNNs and RNNs, empowering developers to effectively deploy AI models within constrained automotive ecosystems. Its data pathways sustain high throughput with minimal energy consumption, aligning with automotive standards for efficiency and operational dependability. Accompanied by the aiWare Studio SDK, aiWare simplifies the development process by offering an offline performance estimator that accurately predicts system performance. This tool, celebrated by OEMs globally, allows developers to refine neural networks with minimal hardware requirements, significantly shortening time-to-market while preserving high-performance standards. The aiWare architecture focuses on enhancing efficiency, ensuring robust performance for applications spanning multi-modality sensing and complex data analytics.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is engineered to propel artificial intelligence tasks to new heights with its cutting-edge architecture. This accelerator enhances machine learning tasks by speeding up neural network processing, making it a key player in the burgeoning AI sector. Its innovative design is optimized for low latency and high throughput, facilitating real-time AI application performance and enabling advanced machine learning model implementations. Harnessing an extensive array of computing cores, the Hanguang 800 ensures parallel processing capabilities that significantly reduce training times for large-scale AI models. Its application scope covers diverse sectors, including autonomous driving, smart city infrastructure, and intelligent robotics, underscoring its versatility and adaptability. Built with energy efficiency in mind, this AI accelerator prioritizes minimal power consumption, making it ideal for data centers looking to maximize computational power without overextending their energy footprint. By integrating seamlessly with existing frameworks, the Hanguang 800 offers a ready-to-deploy solution for enterprises seeking to enhance their AI-driven services and operations.

T-Head Semiconductor
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

KL530 AI SoC

The KL530 is built with an advanced heterogeneous AI chip architecture, designed to enhance computing efficiency while reducing power usage. Notably, it is recognized as the first in the market to support INT4 precision and transformers for commercial applications. The chip, featuring a low-power ARM Cortex M4 CPU, delivers impressive performance with 1 TOPS@INT4 computing power, providing up to 70% higher processing efficiency compared to INT8 architectures. Its integrated smart ISP optimizes image quality, supporting AI models like CNN and RNN, suitable for IoT and AIoT ecosystems.

Kneron
TSMC
28nm
AI Processor, Camera Interface, Clock Generator, CPU, CSC, GPU, Peripheral Controller, Vision Processor
View Details

SiFive Intelligence X280

The SiFive Intelligence X280 delivers best-in-class vector processing capabilities powered by the RISC-V architecture, specifically targeting AI and ML applications. This core is designed to cater to advanced AI workloads, equipped with extensive compute capabilities that include wide vector processing units and scalable matrix computation. With its distinctive software-centric design, the X280 facilitates easy integration and offers adaptability to complex AI and ML processes. Its architecture is built to handle modern computational demands with high efficiency, thanks to its robust bandwidth and scalable execution units that accommodate evolving machine learning algorithms. Ideal for edge applications, the X280 supports sophisticated AI operations, resulting in fast and energy-efficient processing. The design flexibility ensures that the core can be optimized for a wide range of applications, promising unmatched performance scalability and intelligence in edge computing environments.

SiFive, Inc.
AI Processor, CPU, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

CTAccel Image Processor on Intel Agilex FPGA

The CTAccel Image Processor on Intel Agilex FPGA is designed to handle high-performance image processing by capitalizing on the robust capabilities of Intel's Agilex FPGAs. These FPGAs, leveraging the 10 nm SuperFin process technology, are ideal for applications demanding high performance, power efficiency, and compact sizes. Featuring advanced DSP blocks and high-speed transceivers, this IP thrives in accelerating image processing tasks that are typically computational-intensive when executed on CPUs. One of the main advantages is its ability to significantly enhance image processing throughput, achieving up to 20 times the speed while maintaining reduced latency. This performance prowess is coupled with low power consumption, leading to decreased operational and maintenance costs due to fewer required server instances. Additionally, the solution is fully compatible with mainstream image processing software, facilitating seamless integration and leveraging existing software investments. The adaptability of the FPGA allows for remote reconfiguration, ensuring that the IP can be tailored to specific image processing scenarios without necessitating a server reboot. This ease of maintenance, combined with a substantial boost in compute density, underscores the IP's suitability for high-demand image processing environments, such as those encountered in data centers and cloud computing platforms.

CTAccel Ltd.
Intel Foundry
12nm
AI Processor, DLL, Graphics & Video Modules, Image Conversion, JPEG, JPEG 2000, Processor Core Independent, Vision Processor
View Details

eSi-3264

The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

RAIV General Purpose GPU

The RAIV General Purpose GPU (GPGPU) epitomizes versatility and cutting-edge technology in the realm of data processing and graphics acceleration. It serves as a crucial technology enabler for various prominent sectors that are central to the fourth industrial revolution, such as autonomous driving, IoT, virtual reality/augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries are able to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Characterized by its advanced architectural design, the RAIV GPU excels in managing substantial computational loads, which is essential for AI-driven processes and complex data analytics. Its adaptability makes it suitable for a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU assists in realizing smoother and more responsive application workflows. The strategic design of the RAIV GPGPU focuses on enabling integrative solutions that enhance performance without compromising on power efficiency. Its functionality is built to meet the high demands of today’s tech ecosystems, fostering advancements in computational efficiency and intelligent processing capabilities. As such, the RAIV stands out not only as a tool for improved graphical experiences but also as a significant component in driving innovation within tech-centric industries worldwide. Its pioneering architecture thus supports a multitude of applications, ensuring it remains a versatile and indispensable asset in diverse technological landscapes.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

KL720 AI SoC

The KL720 is engineered for high efficiency, achieving up to 0.9 TOPS per Watt, setting it apart in the edge AI marketplace. Designed for real-world scenarios where power efficiency is paramount, this chip supports high-end IP cameras, smart TVs, and AI-enabled devices like glasses and headsets. Its ARM Cortex M4 CPU facilitates the processing of complex tasks like 4K image handling, full HD video, and 3D sensing, making it versatile for applications that include gaming and AI-assisted interactions.

Kneron
TSMC
28nm
2D / 3D, AI Processor, Audio Interfaces, AV1, Camera Interface, CPU, GPU, Image Conversion, TICO, Vision Processor
View Details

SiFive Performance

The SiFive Performance series is designed to deliver the highest levels of computing power while maintaining energy efficiency. These cores are optimized for a variety of demanding applications, offering a balance of high throughput, scalability, and customization. Tailored for industries that require maximum performance, the series boasts both scalar and vector processing capabilities, equipped with advanced features like out-of-order execution and optional vector compute engines for enhanced versatility. The design of the SiFive Performance series allows for flexible deployment across diverse applications, from high-performance computing environments to embedded systems. The major emphasis on customization enables users to optimize their solutions for specific needs, ensuring that performance metrics are closely aligned with operational demands. These cores provide support for the latest RISC-V profiles, offering improved computation power, energy efficiency, and integration flexibility. The result is a highly capable core IP solution that is ready to power next-generation technology in sectors like data centers, AI, and beyond.

SiFive, Inc.
CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

WiseEye2 AI Solution

The WiseEye2 AI Processor from Himax is a groundbreaking ultra-low power AI sensing solution designed for always-on applications. Comprising a CMOS image sensor and the high-performance microcontroller HX6538, this solution excels in energy efficiency, operating with minimal power to support battery-powered AI tasks. With advancements such as the Arm-based Cortex M55 CPU and Ethos U55 NPU, the device offers improved inferencing speed and energy efficiency compared to its predecessors, enabling it to handle more complex AI models with precision.

Himax Technologies, Inc.
Vision Processor
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 from Innatera is a groundbreaking microcontroller optimized for ultra-low power always-on sensing applications. Integrating a RISC-V core with an SNN-based processing engine, the T1 operates at sub-milliwatt power levels, enabling advanced signal processing and AI capabilities close to the sensor. This microcontroller effectively offloads sensor data processing, allowing for rapid pattern recognition and efficient power usage in latency-sensitive and power-constrained devices. Key among its features is a nimble 32-bit RISC-V core, supported by 384 KB of embedded SRAM. The innovative spiking neural network engine allows for the real-time inference of complex patterns, mirroring brain-like behavior in processing tasks while keeping power dissipation minimal. Its capabilities facilitate applications such as activity recognition in wearables and high-accuracy signal processing in acoustic sensors. The T1 is packaged in a compact WLCSP form factor and supports diverse interfaces including QSPI, I2C, UART, JTAG, and GPIO, making it adaptable to various sensor configurations. Additionally, developers can leverage the T1 Evaluation Kit and Talamo SDK, which provide a robust platform for developing and optimizing applications harnessing the T1’s unique processing strengths.

Innatera Nanosystems
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

Ceva-SensPro2 - Vision AI DSP

The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
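
The quoted MAC counts and TOPS figures line up under the usual convention that one multiply-accumulate counts as two operations. The sketch below assumes a roughly 1 GHz clock, which the listing does not state, so the result is an illustrative cross-check only.

```python
# Peak-throughput sanity check for the Ceva-SP1000 figures
# (assumption: ~1 GHz clock; convention: 1 MAC = 2 operations).
macs_per_cycle = 1024          # 8-bit MACs, per the listing
ops_per_mac = 2
clock_hz = 1.0e9               # assumed
print(f"~{macs_per_cycle * ops_per_mac * clock_hz / 1e12:.1f} TOPS")  # ~2.0 TOPS, as listed
```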

Ceva, Inc.
DSP Core, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

Tyr AI Processor Family

The Tyr AI Processor series by VSORA is revolutionizing Edge AI by bringing real-time intelligence and decision-making power directly to edge devices. This family of processors delivers the compute power equivalent to data centers but in a compact, energy-efficient form factor ideal for edge environments. Tyr processors are specifically designed to process data on the device itself, reducing latency, preserving bandwidth, and maintaining data privacy without the need for cloud reliance. This localized processing translates to split-second analytics and decision capabilities critical for technologies like autonomous vehicles and industrial automation. With Tyr, industries can achieve superior performance while minimizing operational costs and energy consumption, fostering greener AI deployments. The processors’ design accommodates the demanding requirements of modern edge applications, ensuring they can support the evolving needs of future edge intelligence systems.

VSORA
AI Processor, CAN XL, DSP Core, Interleaver/Deinterleaver, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

CTAccel Image Processor on Alveo U200

The CTAccel Image Processor for Xilinx's Alveo U200 is an FPGA-based accelerator aimed at enhancing image processing workloads in server environments. Utilizing the powerful capabilities of the Alveo U200 FPGA, this processor dramatically boosts throughput and reduces processing latency for data centers. The accelerator can increase image processing speed by 4 to 6 times over traditional CPUs and reduce latency by a similar factor, significantly boosting compute density in a server setting. This performance uplift enables data centers to lower maintenance and operational costs due to reduced hardware requirements. Furthermore, this IP maintains full compatibility with popular image processing software like OpenCV and ImageMagick, ensuring smooth adaptation for existing workflows. The advanced FPGA partial reconfiguration technology allows for dynamic updates and adjustments, increasing the IP's practicality for a wide array of image-related applications and improving overall performance without the need for server reboots.

CTAccel Ltd.
LFoundry
22nm
AI Processor, DLL, Graphics & Video Modules, Image Conversion, JPEG, JPEG 2000, Processor Core Independent, Vision Processor
View Details

Origin E1

The Origin E1 is engineered for AI applications demanding minimal power and space, commonly deployed in home appliances, smartphones, and security devices. Tailored for efficiency, the E1 provides a compact AI processing unit optimized to handle always-on tasks with a low power footprint. Its architecture targets sub-1 TOPS performance, making it well-suited for applications where conserving power and memory is paramount. By utilizing Expedera's packet-based schema, the E1 achieves parallel processing across layers, enhancing speed and reducing energy and space requirements, crucial for maintaining the performance of everyday smart devices. In terms of utility, the E1 is exemplified in always-listening technologies, allowing for seamless user experiences by keeping the power necessary for continuous AI analysis to a minimum, ensuring privacy as all data remains processed within the subsystem.

Expedera
13 Categories
View Details

MIPI™ V-NLM-01

The MIPI™ V-NLM-01 is a specialized non-local means image noise reduction product designed to enhance image quality through sophisticated noise reduction techniques. This hardware core features a parameterized search-window size and adjustable bits per pixel, ensuring a high degree of customization and efficiency. Supporting HDMI with resolutions up to 2048×1080 at 30 to 60 fps, it is ideally suited for applications requiring image enhancement and processing.
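
For readers unfamiliar with the algorithm class, the snippet below runs the same kind of non-local means denoising in software using OpenCV's CPU implementation. The filter strengths and window sizes are illustrative values, not the core's parameters.

```python
# Software reference for non-local means denoising; the search-window size is the kind of
# parameter the hardware core exposes. All values below are illustrative only.
import cv2

frame = cv2.imread("input_frame.png")          # e.g. one 2048x1080 frame dump
denoised = cv2.fastNlMeansDenoisingColored(
    frame, None,
    h=7, hColor=7,                             # filter strength for luma / chroma
    templateWindowSize=7, searchWindowSize=21,
)
cv2.imwrite("denoised_frame.png", denoised)
```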

VLSI Plus Ltd
H.265, HDMI, JPEG, Receiver/Transmitter, RF Modules, USB, Vision Processor
View Details

SoC Platform

The SoC Platform by SEMIFIVE is designed to streamline the system-on-chip (SoC) development process, boasting rapid creation capabilities with minimal effort. Developed using silicon-proven IPs, the platform is attuned to specific domain applications and incorporates optimized design methodologies. This results in reduced costs, minimized risks, and faster design cycles. Fundamental features include a domain-specific architecture, pre-verified IP components, and hardware/software bring-up tools ready for activation, ensuring seamless integration and high performance. Distinct attributes of the SoC Platform involve leveraging a pre-configured and thoroughly validated IP pool. This preparation fosters swift adaptation to varying requirements and presents customers with rapid time-to-market opportunities. Additionally, users can benefit from a reduction in engineering risk, supported by silicon-proven elements integrated into the platform's design. Whether it's achieving lower development costs or maximizing component reusability, the platform ensures a comprehensive and tailored engagement model for diverse project needs. Capabilities such as dynamic configuration choices and integration of non-platform IPs further enhance flexibility, accommodating specialized customer requirements. Target applications range from AI inference systems and AIoT environments to high-performance computing (HPC) uses. By managing every aspect of the design and manufacturing lifecycle, the platform positions SEMIFIVE as a one-stop partner for achieving innovative semiconductor breakthroughs.

SEMIFIVE
15 Categories
View Details

AIoT Platform

SEMIFIVE's AIoT Platform integrates artificial intelligence with the Internet of Things (IoT), offering a comprehensive solution for smart and connected devices. Tailored for edge computing applications, the platform combines the latest RISC-V cores with energy-efficient connectivity and processing capabilities, enabling innovations in smart home technologies, cybersecurity, and robotics. The platform is structured to deliver seamless connectivity and versatile functionality, including support for advanced peripherals such as USB 3.0 and MIPI interfaces, among others. Its infrastructure promotes rapid deployment in diverse IoT sectors, ensuring the fusion of intelligence into everyday objects effectively and conveniently. Designed to uphold the requirements of modern smart environments, the AIoT Platform encourages efficiency and simplicity in development, reducing the complexity and cost associated with custom IoT solutions. Whether utilized in industrial IoT or consumer electronics, this platform is essential for organizations aiming to harness the full potential of AI-enhanced IoT.

SEMIFIVE
12 Categories
View Details

RayCore MC Ray Tracing GPU

The RayCore MC Ray Tracing GPU is a cutting-edge GPU IP known for its real-time path and ray tracing capabilities. Designed to expedite the rendering process efficiently, this GPU IP stands out for its balance of high performance and low power consumption. This makes it ideal for environments requiring advanced graphics processing with minimal energy usage. Capitalizing on world-class ray tracing technology, the RayCore MC ensures seamless, high-quality visual outputs that enrich user experiences across gaming and metaverse applications. Equipped with superior rendering speed, the RayCore MC integrates sophisticated algorithms that handle intricate graphics computations effortlessly. This GPU IP aims to redefine the norms of graphics performance by combining agility in data processing with high fidelity in visual representation. Its real-time rendering finesse significantly enhances user interaction by offering a flawless graphics environment, conducive for both immersive gaming experiences and professional metaverse developments. The RayCore MC GPU IP is also pivotal for developers aiming to push the boundaries of graphics quality and efficiency. With an architecture geared towards optimizing both visual output and power efficiency, it stands as a benchmark for future GPU innovations in high-demand industries. The IP's ability to deliver rapid rendering with superior graphic integrity makes it a preferred choice among developers focused on pioneering graphics-intensive applications.

Siliconarts, Inc.
2D / 3D, Audio Processor, CPU, GPU, Graphics & Video Modules, Vision Processor
View Details

IMG DXT GPU for Mobile Devices

Tailored for mobile platforms, the IMG DXT GPU pushes the envelope in mobile graphics by incorporating innovative ray tracing capabilities that enhance visual rendering within confined power budgets. By implementing a unique ray acceleration cluster, DXT ensures advanced lighting effects at reduced silicon area costs, making high-quality ray tracing accessible to mobile devices. This GPU is particularly significant for developers aiming to deliver console-like graphics on the go, enabling new levels of realism and visual impact in mobile gaming and augmented reality applications. Its versatility and performance make it a key asset for mobile OEMs looking to elevate their product offerings.

Imagination Technologies
2D / 3D, Audio Interfaces, Ethernet, GPU, H.265, Security Subsystems, Vision Processor
View Details

aiSim 5

aiSim 5 stands as the world's first simulator for automotive applications to achieve ISO 26262 ASIL-D certification, ensuring that it meets rigorous safety and reliability standards. This advanced simulator enables high-fidelity simulation for ADAS and autonomous driving systems, helping engineers validate automotive technologies in a virtual setting. A key feature of aiSim 5 is its use of a proprietary rendering engine that handles both the physics-based simulation of environments and detailed sensor simulation, presenting a nuanced representation of real-world conditions including varied weather and complex sensor setups. The simulator offers a modular architecture that integrates effortlessly with existing automotive toolchains through open APIs in C++ and Python. Extensive 3D asset libraries and aiFab scenario randomization functionalities add further versatility, allowing developers to simulate a multitude of roadway conditions and events to rigorously test autonomous capabilities. By utilizing AI-based rendering, aiSim 5 optimizes computing resources for efficient operation across multi-sensor configurations, providing unparalleled utility in scenarios extending from highways to urban environments. This tool is being adopted in high-mileage testing setups to enable a new standard of safety in automotive simulation, effectively reducing the reliance on costly real-road trials. Through its advanced simulation capabilities, aiSim 5 minimizes development time and yields a more streamlined and reliable validation process.

aiMotive
15 Categories
View Details

CTAccel Image Processor on AWS

CTAccel's Image Processor for AWS offers a powerful image processing acceleration solution as part of Amazon's cloud infrastructure. This FPGA-based processor is available as an Amazon Machine Image (AMI) and enables customers to significantly enhance their image processing capabilities within the cloud environment. The AWS-based accelerator provides a remarkable tenfold increase in image processing throughput and similar reductions in computational latency, positively impacting Total Cost of Ownership (TCO) by reducing infrastructure needs and improving operational efficiency. These enhancements are crucial for applications requiring intensive image analysis and processing. Moreover, the processor supports a variety of image enhancement functions such as JPEG thumbnail generation and color adjustments, making it suitable for diverse cloud-based processing scenarios. Its integration within the AWS ecosystem ensures that users can easily deploy and manage these advanced processing capabilities across various imaging workflows with minimal disruption.
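
As a functional point of reference for the workloads named above (JPEG thumbnail generation, color adjustment), the snippet below shows a plain CPU pipeline in OpenCV. It illustrates the kind of job the accelerator offloads; it is not CTAccel's own software or API.

```python
# CPU baseline for a typical cloud image job: JPEG decode -> thumbnail -> color tweak -> encode.
import cv2

img = cv2.imread("photo.jpg")                                         # JPEG decode
thumb = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)     # thumbnail generation
adjusted = cv2.convertScaleAbs(thumb, alpha=1.1, beta=5)              # simple brightness/contrast adjustment
cv2.imwrite("thumb.jpg", adjusted, [cv2.IMWRITE_JPEG_QUALITY, 85])    # JPEG encode
```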

CTAccel Ltd.
All Foundries
All Process Nodes
AI Processor, DLL, Graphics & Video Modules, Image Conversion, JPEG, JPEG 2000, Processor Core Independent, Vision Processor
View Details

RISC-V CPU IP NS Class

The NS Class RISC-V CPU IP by Nuclei is ingeniously designed for applications emphasizing security and financial technologies, as well as IoT security. This IP offers a balanced blend of high-performance processing and specialized features aimed at safeguarding data integrity and promoting secure transaction environments. By leveraging a 32-bit architecture, the NS Class is particularly well-suited to applications requiring reliable secure processing capabilities. Equipped with state-of-the-art security extensions, the NS Class IP features trusted execution environments that are crucial for maintaining data security and integrity during operations. Developers can also exploit user-defined instruction extensions to customize security protocols according to specific applications, ensuring the highest level of data protection possible. The NS Class also accommodates a variety of RISC-V extensions, further enhancing the IP's adaptability to modern security-centric applications. With strong support in terms of an integrated toolchain and developmental infrastructure, Nuclei positions this IP as a solution that not only meets current IoT and fintech security requirements but is also adaptable for future developments in secure processing technologies.

Nuclei System Technology
CPU, Cryptography Cores, Embedded Security Modules, Microcontroller, Platform Security, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

Calibrator for AI-on-Chips

The Calibrator for AI-on-Chips utilizes sophisticated post-training quantization techniques to ensure high accuracy and precision in AI models deployed on heterogeneous multicore SoCs. By employing architecture-aware quantization, it maintains accuracy even when switching to fixed-point architectures such as INT8, crucial for reducing computational load and energy consumption on embedded systems. The calibrator leverages comprehensive entropy calculation methods, including KLD, L1, and L2, to minimize precision loss, commonly keeping it under 1% for well-defined microarchitectures. This tool provides seamless interoperability with popular AI frameworks, such as ONNX and TensorFlow, enhancing compatibility and ensuring a wide range of applications can benefit from precision enhancements. Designed to be flexible, the calibrator supports mixed precision across multiple engines and chip architectures, delivering effective solutions in both standalone applications and as part of a broader compiler optimization pass. This versatility ensures it can effectively address the unique constraints of various hardware platforms, including CIM and analog computing architectures.
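
To make the KLD method named above concrete, the sketch below selects an INT8 clipping threshold by minimizing the KL divergence between the full activation histogram and its quantized approximation. It is a simplified illustration of entropy calibration in general, not Skymizer's implementation.

```python
# Simplified entropy (KL-divergence) calibration: pick the clipping threshold whose
# 128-level quantized histogram best matches the original activation distribution.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))

def kld_threshold(activations, n_bins=2048, n_levels=128):
    hist, edges = np.histogram(np.abs(activations), bins=n_bins)
    best_t, best_kl = edges[-1], np.inf
    for i in range(n_levels, n_bins + 1):
        ref = hist[:i].astype(float).copy()
        ref[-1] += hist[i:].sum()                     # fold the clipped tail into the last bin
        quant = np.zeros_like(ref)
        for chunk in np.array_split(np.arange(i), n_levels):   # re-quantize to n_levels bins
            nz = ref[chunk] > 0
            if nz.any():
                quant[chunk[nz]] = ref[chunk].sum() / nz.sum()
        kl = kl_divergence(ref, quant)
        if kl < best_kl:
            best_kl, best_t = kl, edges[i]
    return best_t                                     # symmetric INT8 scale = best_t / 127

acts = np.random.randn(100_000) * 0.5                 # stand-in for captured activations
print(f"clip threshold ~= {kld_threshold(acts):.3f}")
```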

Skymizer
Audio Processor, DSP Core, Vision Processor
View Details

i.MX RT700 Crossover MCU

The i.MX RT700 Crossover MCU is designed to bridge the needs between high-performance processors and microcontrollers, offering a versatile solution for modern embedded systems. It combines powerful processing cores and extensive memory resources, making it an excellent choice for edge applications requiring AI capabilities and real-time data processing. With five computing cores, this MCU provides the computational power needed to drive sophisticated algorithms and machine learning applications directly at the edge. This MCU focuses on low-power operations, making it ideal for battery-powered applications requiring extended operational times without sacrificing performance. Its comprehensive support for security features ensures data protection and device integrity, which is critical for IoT applications where data privacy is paramount. The i.MX RT700 is adept at handling multimedia processing, thus expanding its utility in applications ranging from smart home devices to industrial controls. The flexibility of the i.MX RT700 is further underscored by its compatibility with a wide array of connectivity and interface options, facilitating easy integration into various systems. Whether developing smart appliances or deploying in an industrial setting, this crossover MCU offers a blend of performance, power efficiency, and connectivity to meet diverse application demands.

NXP Semiconductors
2D / 3D, CPU, IoT Processor, Microcontroller, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

Neural Network Accelerator

Designed to cater to the needs of edge computing, the Neural Network Accelerator by Gyrus AI is a powerhouse of performance and efficiency. With a focus on graph processing capabilities, this product excels in implementing neural networks by providing native graph processing. The accelerator achieves an impressive 30 TOPS/W while requiring 10 to 30 times fewer clock cycles than traditional models. Thanks to its low memory usage, power consumption is kept 10-20 times lower than comparable designs. Beyond its power efficiency, this accelerator minimizes die area, achieving an 8-10 times reduction in size while maintaining utilization rates of over 80% across various model structures. Such design optimizations make it an ideal choice for applications requiring a compact, high-performance solution capable of delivering fast computations without compromising on energy efficiency. The Neural Network Accelerator is a testament to Gyrus AI's commitment to enabling smarter edge computing solutions. Additionally, Gyrus AI has paired this technology with software tools that facilitate the execution of neural networks on the IP, simplifying integration and use in various applications. This seamless integration is part of their broader strategy to augment human intelligence, providing solutions that enhance and expand the capabilities of AI-driven technologies across industries.

Gyrus AI
AI Processor, Coprocessor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details
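As a back-of-envelope reading of the 30 TOPS/W and >80% utilization figures above, the sketch below estimates throughput and inference rate for an assumed power budget and model size; the 2 W budget and the 10-GOP-per-inference workload are illustrative assumptions, not Gyrus AI data.

```python
# Back-of-envelope estimate from the 30 TOPS/W efficiency figure above.
# Only the efficiency and utilization numbers come from the product
# description; the power budget and per-inference workload are assumptions.
EFFICIENCY_TOPS_PER_W = 30.0
UTILIZATION = 0.80
POWER_BUDGET_W = 2.0        # assumption: edge power budget
TOPS_PER_INFERENCE = 0.01   # assumption: 10 GOPs per inference

peak_tops = EFFICIENCY_TOPS_PER_W * POWER_BUDGET_W          # 60 TOPS peak
effective_tops = peak_tops * UTILIZATION                     # ~48 TOPS sustained
inferences_per_second = effective_tops / TOPS_PER_INFERENCE  # ~4,800 inferences/s

print(f"Peak: {peak_tops:.0f} TOPS, effective: {effective_tops:.0f} TOPS")
print(f"Estimated inference rate: {inferences_per_second:,.0f} inferences/s")
```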

Chimera SDK

The Chimera Software Development Kit (SDK) by Quadric provides an extensive environment for developing complex application code for the Chimera GPNPU. The SDK bridges conventional C++ code with machine learning algorithms, enabling seamless integration and optimization of AI models across platforms. By supporting data-parallel algorithm development in familiar frameworks like TensorFlow and PyTorch, it helps developers transform machine learning graphs into executable C++ code (a framework-export sketch follows this entry).

Central to the SDK's functionality is the Chimera Graph Compiler (CGC), a tool that imports AI inference models and optimizes them for the Chimera architecture. It covers the optimization process end to end, from operator transformation to memory bandwidth utilization, to ensure peak performance and integration efficiency.

Developers can deploy the Chimera SDK in private cloud environments or on local systems, leveraging its cycle-approximate Instruction Set Simulator to profile and tune application workloads. This allows in-depth analysis of cycle counts, data bandwidth usage, and power consumption, supporting precise optimization of diverse machine learning workloads beyond initial deployment.

Quadric
AI Processor, Audio Processor, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor, Vision Processor
View Details
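As a hedged illustration of the front half of the workflow described above, the sketch below builds a small PyTorch model and exports the kind of framework-level graph a graph compiler such as CGC imports. The model, input shape, and filename are made up for the example, and the actual CGC import and C++ code-generation steps are Quadric-specific, so they are not shown.

```python
import torch
import torch.nn as nn

# A small example network standing in for "an AI inference model"; the model,
# input shape, and output filename are illustrative assumptions.
class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

model = TinyClassifier().eval()
example_input = torch.randn(1, 3, 224, 224)

# Export a framework-level graph (ONNX) of the kind a graph compiler ingests.
torch.onnx.export(model, example_input, "tiny_classifier.onnx", opset_version=13)
print("Exported tiny_classifier.onnx")
```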

AON1100

The AON1100 is a leading AI chip built for efficient voice and sensor processing, delivering its performance at under 260 μW of power. It achieves a 90% accuracy level even at sub-0 dB signal-to-noise ratios, where the noise is as loud as or louder than the target signal (the sketch after this entry illustrates what such an SNR means in practice). Ideal for devices that need continuous sensory input without significant power drain, it is designed to operate reliably in demanding acoustic environments while balancing performance and power.

AONDevices, Inc.
11 Categories
View Details
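To make the sub-0 dB SNR claim above concrete, the sketch below mixes a clean signal with noise scaled to a target SNR; at 0 dB the noise power equals the signal power, and negative values mean the noise dominates. The waveforms and the -3 dB target are illustrative assumptions, not AONDevices test conditions.

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR in dB.
    SNR(dB) = 10 * log10(P_signal / P_noise); 0 dB means equal power,
    negative values mean the noise is louder than the signal."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_signal / (10 ** (snr_db / 10))
    return signal + noise * np.sqrt(target_p_noise / p_noise)

# Illustrative 1-second, 16 kHz tone standing in for speech, buried at -3 dB SNR.
fs = 16_000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 440 * t)
noise = np.random.randn(fs) * 0.05
noisy = mix_at_snr(speech, noise, snr_db=-3.0)

p_s = np.mean(speech ** 2)
p_n = np.mean((noisy - speech) ** 2)
print(f"Measured SNR: {10 * np.log10(p_s / p_n):.2f} dB")   # ~ -3.00 dB
```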

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator is redefining high-performance computing with its software-defined hardware architecture. Designed for HPC and AI environments, it optimizes performance in real time, adapting to the demands of specific workloads. By leveraging a novel Intelligent Compute Architecture (ICA), Maverick-2 eliminates the inefficiencies of traditional computing and delivers exceptional computational performance across diverse applications. The accelerator pairs a highly efficient runtime software suite with a developer toolchain that supports popular languages and programming models, including C/C++, FORTRAN, OpenMP, and Kokkos. With forthcoming integration of CUDA and leading AI frameworks, Maverick-2 significantly reduces development time and enhances productivity by avoiding extensive application-porting efforts. Maverick-2 stands out for its energy efficiency, offering more than four times the performance-per-watt of conventional GPUs, which makes it an ideal choice for researchers and data centers pursuing sustainable innovation. With its adaptability and forward-looking design, it supports current workloads effectively and is prepared to handle future computing challenges, securing its role as a critical tool for scientific breakthroughs.

Next Silicon Ltd.
TSMC
7nm
AI Processor, Audio Processor, Coprocessor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

AI Inference Platform

The AI Inference Platform is crafted to address the growing demand for efficient AI processing in cutting-edge applications. Built on silicon-proven IPs and a sophisticated architecture, the platform accelerates the development of AI-centric solutions and reduces prototype costs and development risks through its comprehensive, pre-verified component library. Central to its advantages is optimization for AI workloads, delivering high computational performance together with efficient power consumption and fast processing. By utilizing advanced RISC-V cores and integrating state-of-the-art memory interfaces, the platform supports complex AI functionality such as vision processing and data analytics. Furthermore, its engagement model provides a streamlined pathway for bringing AI innovations to market quickly. The AI Inference Platform is ideal for enterprises seeking to navigate the complexities of AI development swiftly without compromising on quality or innovation.

SEMIFIVE
AI Processor, Cell / Packet, CPU, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

Vega eFPGA

The Vega eFPGA is a flexible programmable solution crafted to enhance SoC designs with ease and efficiency. The IP offers multiple advantages, including increased performance, reduced costs, secure IP handling, and ease of integration, and its versatile architecture allows tailored configurations to suit varying application requirements. It provides configurable tiles such as CLBs (Configurable Logic Blocks), BRAM (Block RAM), and DSP (Digital Signal Processing) units. Each CLB includes eight 6-input lookup tables with dual outputs, plus an optional fast adder with a carry chain. The BRAM supports 36 Kb dual-port memory with flexible configurations, while the DSP unit targets complex arithmetic with its 18x20 multipliers and a wide 64-bit accumulator (a short accumulator-headroom example follows this entry). Focused on easing system design and acceleration, the Vega eFPGA integrates and verifies cleanly into any SoC design. It is backed by a robust EDA toolset and extensive customization features, making it adaptable to any semiconductor fabrication process. This flexibility and technological robustness make the Vega eFPGA a standout choice for developing innovative and complex programmable logic solutions.

Rapid Silicon
CPU, Embedded Memories, Multiprocessor / DSP, Processor Core Independent, Vision Processor, WMV
View Details
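As a small worked example of the DSP tile figures above, the sketch below checks how many 18x20-bit products a 64-bit accumulator can absorb before it could overflow. The operand and accumulator widths come from the description; treating both operands as signed two's-complement values is our assumption.

```python
# Accumulator headroom for an 18x20 multiplier feeding a 64-bit accumulator.
mult_a_bits = 18
mult_b_bits = 20
acc_bits = 64

product_bits = mult_a_bits + mult_b_bits   # full-precision signed product: 38 bits
guard_bits = acc_bits - product_bits       # 26 spare bits in the accumulator

# Worst-case |a*b| = 2^17 * 2^19 = 2^36 (both operands at their most negative).
max_product = (2 ** (mult_a_bits - 1)) * (2 ** (mult_b_bits - 1))
max_acc = 2 ** (acc_bits - 1) - 1          # largest signed 64-bit value

safe_accumulations = max_acc // max_product
print(f"Product width: {product_bits} bits, guard bits: {guard_bits}")
print(f"Worst-case MACs before overflow: {safe_accumulations:,}")   # ~134 million
```

In other words, even a very long dot product can stay in the 64-bit accumulator without intermediate rounding, which is the practical benefit of the wide-accumulator design.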

NoISA Processor

The NoISA Processor by Hotwright Inc. stands out by eliminating the rigidity of a conventional Instruction Set Architecture (ISA): it is built on the Hotstate machine, a microcoded state machine. This architecture avoids the drawbacks of fixed hardware controllers by allowing unusual flexibility in how processor tasks are executed. As an alternative to softcore CPUs, the NoISA Processor delivers superior energy efficiency, making it an ideal choice where power is a critical constraint, particularly for edge computing and Internet of Things (IoT) devices where low power usage is paramount. By discarding traditional ISA constraints, the NoISA Processor can change its behavior simply by reloading microcode, rather than relying on the fixed, immutable instruction set of ISA-bound processors (a generic microcode sketch follows this entry). This flexibility lets it act as a highly effective controller, adaptable to various tasks without changes to dedicated hardware such as FPGAs. Through this adaptability, the NoISA Processor offers performance benefits not typically available from fixed-instruction processors, allowing developers to achieve optimal efficiency and capability in their designs. Furthermore, the NoISA Processor supports the creation of efficient, small controllers and C-programmable state machines, capitalizing on fast systolic arrays and rapid development cycles. Device functionality can be modified and improved without altering the underlying hardware, keeping solutions future-proof as technology demands evolve.

Hotwright Inc.
AI Processor, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details
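To illustrate the general idea of a microcoded state machine that is retargeted by reloading its microcode, the sketch below drives a tiny fixed "datapath" from a microcode table; swapping the table changes behavior without touching the datapath model. This is a generic illustration under our own assumptions, not Hotwright's Hotstate implementation.

```python
# Generic microcoded controller sketch: the datapath (run) stays fixed, while
# the microcode table decides what happens in each state. Reloading the table
# changes the controller's behavior without modifying the datapath itself.

# Each microcode word: (operation, operand, next_state); state -1 halts.
ACCUMULATE_MICROCODE = {
    0: ("load", 0, 1),                   # acc <- 0
    1: ("add_input", None, 2),           # acc <- acc + next input value
    2: ("branch_if_more_input", 1, 3),   # loop back to state 1 while inputs remain
    3: ("halt", None, -1),
}

def run(microcode, inputs):
    acc, state, inputs = 0, 0, list(inputs)
    while state != -1:
        op, arg, nxt = microcode[state]
        if op == "load":
            acc = arg
        elif op == "add_input":
            acc += inputs.pop(0)
        elif op == "branch_if_more_input":
            nxt = arg if inputs else nxt
        elif op == "halt":
            pass
        state = nxt
    return acc

print(run(ACCUMULATE_MICROCODE, [1, 2, 3, 4]))   # 10

# Loading a different table (e.g., one that stops after the first two inputs)
# changes the behavior with no change to run() -- the "hardware" stays fixed.
```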