
Vision Processor Semiconductor IPs

Vision processors are a specialized subset of semiconductor IPs designed to efficiently handle and process visual data. These processors are pivotal in applications that require intensive image analysis and computer vision capabilities, such as artificial intelligence, augmented reality, virtual reality, and autonomous systems. The primary purpose of vision processor IPs is to accelerate the performance of vision processing tasks while minimizing power consumption and maximizing throughput.

In the world of semiconductor IP, vision processors stand out due to their ability to integrate advanced functionalities such as object recognition, image stabilization, and real-time analytics. These processors often leverage parallel processing, machine learning algorithms, and specialized hardware accelerators to perform complex visual computations efficiently. As a result, products ranging from high-end smartphones to advanced driver-assistance systems (ADAS) and industrial robots benefit from improved visual understanding and processing capabilities.

The semiconductor IPs for vision processors can be found in a wide array of products. In consumer electronics, they enhance the capabilities of cameras, enabling features like face and gesture recognition. In the automotive industry, vision processors are crucial for delivering real-time data processing needed for safety systems and autonomous navigation. Additionally, in sectors such as healthcare and manufacturing, vision processor IPs facilitate advanced imaging and diagnostic tools, improving both precision and efficiency.

As technology advances, the demand for vision processor IPs continues to grow. Developers and designers seek IPs that offer scalable architectures and can be customized to meet specific application requirements. By providing enhanced performance and reducing development time, vision processor semiconductor IPs are integral to pushing the boundaries of what's possible with visual data processing and expanding the capabilities of next-generation products.


Akida Neural Processor IP

Akida's Neural Processor IP represents a leap in AI architecture design, tailored to provide exceptional energy efficiency and processing speed for an array of edge computing tasks. At its core, the processor mimics the synaptic activity of the human brain, efficiently executing tasks that demand high-speed computation and minimal power usage. The processor is equipped with configurable neural nodes that support convolutional and fully connected neural network layers. Each node accommodates a range of MAC operations, enabling scalability from basic to complex deployment requirements. This scalability enables the development of lightweight AI solutions suited for consumer electronics as well as robust systems for industrial use. Onboard features like event-based processing and low-latency data communication significantly decrease the strain on host processors, enabling faster and more autonomous system responses. Akida's versatile functionality and ability to learn on the fly make it a cornerstone for next-generation technology solutions that aim to blend cognitive computing with practical, real-world applications.

BrainChip
AI Processor, Coprocessor, CPU, Digital Video Broadcast, Network on Chip, Platform Security, Processor Core Independent, Vision Processor
View Details

KL730 AI SoC

The KL730 is a third-generation AI chip that integrates an advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This cutting-edge technology enhances computational efficiency across a range of applications, including CNNs and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also boasts enhanced video processing capabilities, supporting 4K 60FPS outputs. Backed by over a decade of Kneron's ISP expertise, the KL730 stands out with its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.

Kneron
TSMC
12nm
16 Categories
View Details

Akida 2nd Generation

The second-generation Akida platform builds upon the foundation of its predecessor with enhanced computational capabilities and increased flexibility for a broader range of AI and machine learning applications. This version supports 8-bit weights and activations in addition to the flexible 4- and 1-bit operations, making it a versatile solution for high-performance AI tasks. Akida 2 introduces support for programmable activation functions and skip connections, further enhancing the efficiency of neural network operations. These capabilities are particularly advantageous for implementing sophisticated machine learning models that require complex, interconnected processing layers. The platform also features support for Spatio-Temporal and Temporal Event-Based Neural Networks, advancing its application in real-time, on-device AI scenarios. Built as a silicon-proven, fully digital neuromorphic solution, Akida 2 is designed to integrate seamlessly with various microcontrollers and application processors. Its highly configurable architecture offers post-silicon flexibility, making it an ideal choice for developers looking to tailor AI processing to specific application needs. Whether for low-latency video processing, real-time sensor data analysis, or interactive voice recognition, Akida 2 provides a robust platform for next-generation AI developments.
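
To make the low-bit-width support above concrete, here is a minimal, vendor-neutral sketch of symmetric uniform quantization to 8-, 4-, or 1-bit signed weights. It is purely illustrative and does not use BrainChip's APIs or reflect Akida's actual quantization scheme.

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int):
    """Illustrative symmetric uniform quantization to `bits`-wide signed integers.

    A generic sketch of what "4-bit weights" means in practice,
    not BrainChip's actual quantization scheme.
    """
    qmax = 2 ** (bits - 1) - 1 if bits > 1 else 1   # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(weights)) / qmax           # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.random.randn(64, 64).astype(np.float32)
for b in (8, 4, 1):
    q, s = quantize_symmetric(w, b)
    err = np.abs(w - q * s).mean()
    print(f"{b}-bit: mean abs quantization error = {err:.4f}")
```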

BrainChip
11 Categories
View Details

Metis AIPU PCIe AI Accelerator Card

Axelera AI's Metis AIPU PCIe AI Accelerator Card is engineered to deliver top-tier inference performance for computationally heavy AI workloads. Designed to the industry's highest standards, the card offers exceptional processing power in a versatile PCIe form factor, ideal for integration into various computing systems including workstations and servers.

Equipped with a quad-core Metis AI Processing Unit (AIPU), the card delivers unmatched capabilities for handling complex AI models and extensive data streams. It efficiently processes multiple camera inputs and supports independent parallel neural network operations, making it indispensable for dynamic fields such as industrial automation, surveillance, and high-performance computing.

The card's performance is significantly enhanced by the Voyager SDK, which facilitates a seamless AI model deployment experience, allowing developers to focus on model logic and innovation. It offers extensive compatibility with mainstream AI frameworks, ensuring flexibility and ease of integration across diverse use cases. With a power-efficient design, this PCIe AI Accelerator Card bridges the gap between traditional GPU solutions and today's advanced AI demands.

Axelera AI
13 Categories
View Details

Akida IP

The Akida IP is a groundbreaking neural processor designed to emulate the cognitive functions of the human brain within a compact and energy-efficient architecture. This processor is specifically built for edge computing applications, providing real-time AI processing for vision, audio, and sensor fusion tasks. The scalable neural fabric, ranging from 1 to 128 nodes, features on-chip learning capabilities, allowing devices to adapt and learn from new data with minimal external inputs, enhancing privacy and security by keeping data processing localized. Akida's unique design supports 4-, 2-, and 1-bit weight and activation operations, maximizing computational efficiency while minimizing power consumption. This flexibility in configuration, combined with a fully digital neuromorphic implementation, ensures a cost-effective and predictable design process. Akida is also equipped with event-based acceleration, drastically reducing the demands on the host CPU by facilitating efficient data handling and processing directly within the sensor network. Additionally, Akida's on-chip learning supports incremental learning techniques like one-shot and few-shot learning, making it ideal for applications that require quick adaptation to new data. These features collectively support a broad spectrum of intelligent computing tasks, including object detection and signal processing, all performed at the edge, thus eliminating the need for constant cloud connectivity.

BrainChip
AI Processor, Audio Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Processor Core Independent, Vision Processor
View Details

Yitian 710 Processor

The Yitian 710 Processor is a groundbreaking component in processor technology, designed with cutting-edge architecture to enhance computational efficiency. This processor is tailored for cloud-native environments, offering robust support for high-demand computing tasks. It is engineered to deliver significant improvements in performance, making it an ideal choice for data centers aiming to optimize their processing power and energy efficiency. With its advanced features, the Yitian 710 stands at the forefront of processor innovation, ensuring seamless integration with diverse technology platforms and enhancing the overall computing experience across industries.

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

AI Camera Module

The AI Camera Module from Altek is a versatile, high-performance component designed to meet the increasing demand for smart vision solutions. The module integrates imaging lens design with tightly coupled hardware and software to create a seamless operational experience. Its design is reinforced by Altek's deep collaboration with leading global brands, ensuring a top-tier product capable of handling diverse market requirements. Built for the intersection of AI and IoT, the module delivers the high-resolution imaging expected of edge computing applications. The AI Camera Module addresses diverse end-user needs by offering customizable device functionality, supporting advanced processing requirements such as 2K and 4K video quality. This module showcases Altek's strength in providing comprehensive, all-in-one camera solutions that combine sophisticated imaging with rapid processing to handle challenging conditions and demands. The AI Camera's technical blueprint supports complex AI algorithms, enhancing not just image quality but also the device's interactive capabilities through facial recognition and image tracking.

Altek Corporation
Samsung
22nm
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Audio Interfaces, GPU, Image Conversion, IoT Processor, JPEG, Receiver/Transmitter, SATA, Vision Processor
View Details

MetaTF

MetaTF is BrainChip's premier development tool platform designed to complement its neuromorphic technology solutions. This platform is a comprehensive toolkit that empowers developers to convert and optimize standard machine learning models into formats compatible with BrainChip's Akida technology. One of its key advantages is its ability to adjust models into sparse formats, enhancing processing speed and reducing power consumption. The MetaTF framework provides an intuitive interface for integrating BrainChip’s specialized AI capabilities into existing workflows. It supports streamlined adaptation of models to ensure they are optimized for the unique characteristics of neuromorphic processing. Developers can utilize MetaTF to rapidly iterate and refine AI models, making the deployment process smoother and more efficient. By providing direct access to pre-trained models and tuning mechanisms, MetaTF allows developers to capitalize on the benefits of event-based neural processing with minimal configuration effort. This platform is crucial for advancing the application of machine learning across diverse fields such as IoT devices, healthcare technology, and smart infrastructure.
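
The conversion flow described above looks roughly like the sketch below: train a standard Keras model, quantize it to low bit-widths, then convert it to an Akida-compatible model. The cnn2snn function names appear only as hedged comments, since exact APIs and signatures vary between MetaTF releases and should be checked against the installed documentation.

```python
import tensorflow as tf

# Standard Keras model -- the starting point of the workflow described above.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# ... train keras_model with the usual Keras fit() loop ...

# Illustrative MetaTF-style steps (verify names/signatures against the MetaTF docs):
# from cnn2snn import quantize, convert
# quantized = quantize(keras_model, weight_quantization=4, activ_quantization=4)
# akida_model = convert(quantized)      # runs on Akida hardware or the simulator
# akida_model.predict(samples)
```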

BrainChip
AI Processor, Coprocessor, Processor Core Independent, Vision Processor
View Details

Chimera GPNPU

Quadric's Chimera GPNPU is an adaptable processor core designed to respond efficiently to the demand for AI-driven computations across multiple application domains. Offering up to 864 TOPS, this licensable core seamlessly integrates into system-on-chip designs needing robust inference performance. By maintaining compatibility with all forms of AI models, including cutting-edge large language models and vision transformers, it ensures long-term viability and adaptability to emerging AI methodologies. Unlike conventional architectures, the Chimera GPNPU excels by permitting complete workload management within a singular execution environment, which is vital in avoiding the cumbersome and resource-intensive partitioning of tasks seen in heterogeneous processor setups. By facilitating a unified execution of matrix, vector, and control code, the Chimera platform elevates software development ease, and substantially improves code maintainability and debugging processes. In addition to high adaptability, the Chimera GPNPU capitalizes on Quadric's proprietary Compiler infrastructure, which allows developers to transition rapidly from model conception to execution. It transforms AI workflows by optimizing memory utilization and minimizing power expenditure through smart data storage strategies. As AI models grow increasingly complex, the Chimera GPNPU stands out for its foresight and capability to unify AI and DSP tasks under one adaptable and programmable platform.

Quadric
16 Categories
View Details

xcore.ai

xcore.ai by XMOS is a groundbreaking solution designed to bring intelligent functionality to the forefront of semiconductor applications. It enables powerful real-time execution of AI, DSP, and control functionalities, all on a single, programmable chip. The flexibility of its architecture allows developers to integrate various computational tasks efficiently, making it a fitting choice for projects ranging from smart audio devices to automated industrial systems. With xcore.ai, XMOS provides the technology foundation necessary for swift deployment and scalable application across different sectors, delivering high performance in demanding environments.

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module from Axelera AI provides an exceptional balance of performance and size, perfectly suited for edge AI applications. Designed for high-performance tasks, this module is powered by a single Metis AI Processing Unit (AIPU), which offers cutting-edge inference capabilities. With this M.2 card module, developers can easily integrate AI processing power into compact devices.

This module accommodates demanding AI workloads, enabling applications to perform complex computations with efficiency. Thanks to its low power consumption and versatile integration capabilities, it opens new possibilities for use in edge devices that require robust AI processing power. The Metis AIPU M.2 module supports a wide range of AI models and pipelines, facilitated by Axelera's Voyager SDK software platform which ensures seamless deployment and optimization of AI models.

The module's versatile design allows for streamlined concurrent multi-model processing, significantly boosting the device's AI capabilities without the need for external data centers. Additionally, it supports advanced quantization techniques, providing users with increased prediction accuracy for high-stakes applications.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CAN, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, VGA, Vision Processor, WMV
View Details

aiWare

The aiWare Neural Processing Unit (NPU) is an advanced hardware solution engineered for the automotive sector, highly regarded for its efficiency in neural network acceleration tailored for automated driving technologies. This NPU is designed to handle a broad scope of AI applications, including complex neural network models like CNNs and RNNs, offering scalability across diverse performance tiers from L2 to more demanding L4 systems. With its industry-leading efficiency, the aiWare hardware IP achieves up to 98% efficiency across various automotive neural networks. It supports the large sensor configurations typical of automotive systems, maintaining reliable performance under rigorous conditions validated by ISO 26262 ASIL B certification. aiWare is not only power-efficient but designed with a scalable architecture, providing up to 1024 TOPS, ensuring that it meets the demands of high-performance processing requirements. Furthermore, aiWare is meticulously crafted to facilitate integration into safety-critical environments, operating with a high degree of determinism. It minimizes external memory dependencies through an innovative dataflow approach, maximizing on-chip memory utilization and minimizing system power. Featuring extensive documentation for integration and customization, aiWare stands out as a crucial component for OEMs and Tier 1s looking to optimize advanced driver-assist functionalities.
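
As a quick interpretation of the figures quoted above (simple arithmetic on the stated peak throughput and efficiency, not benchmark data):

```python
# Effective throughput = peak throughput x achieved NPU efficiency.
peak_tops = 1024          # upper end of the scalable range quoted above
efficiency = 0.98         # "up to 98%" efficiency on automotive networks
effective_tops = peak_tops * efficiency
print(f"Effective throughput: {effective_tops:.0f} TOPS out of {peak_tops} peak")
# -> roughly 1004 TOPS of usable throughput at the top of the range
```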

aiMotive
12 Categories
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator by EdgeCortix is an advanced processor designed for energy-efficient, real-time AI inferencing. It supports complex generative AI models such as Llama 2 and Stable Diffusion with an impressive power envelope of just 8 watts, making it ideal for applications requiring swift, on-the-fly Batch=1 AI processing. While maintaining critical performance metrics, it can simultaneously run multiple deep neural network models, facilitated by its unique DNA core. The SAKURA-II stands out with its high utilization of AI compute resources, robust memory bandwidth, and sizable DRAM capacity options of up to 32GB, all in a compact form factor. With market-leading energy efficiency, the SAKURA-II supports diverse edge AI applications, from vision and language to audio, thanks to hardware-accelerated arbitrary activation functions and advanced power management features. Designed for ARM and other platforms, the SAKURA-II can be easily integrated into existing systems for deploying AI models and leveraging low power for demanding workloads. EdgeCortix's AI Accelerator excels with innovative features like sparse computing to optimize DRAM bandwidth and real-time data streaming for Batch=1 operations, ensuring fast and efficient AI computations. It offers unmatched adaptability in power management, enabling ultra-high efficiency modes for processing complex AI tasks while maintaining high precision and low latency operations.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Talamo SDK

The Talamo Software Development Kit (SDK) is a comprehensive toolset designed to streamline the development and deployment of neuromorphic AI applications. Leveraging a PyTorch-integrated environment, Talamo simplifies the creation of powerful AI models for deployment on the Spiking Neural Processor. It provides developers with a user-friendly workflow, reducing the complexity usually associated with spiking neural networks. This SDK facilitates the construction of end-to-end application pipelines through a familiar PyTorch framework. By grounding development in this standard workflow, Talamo removes the need for deep expertise in spiking neural networks, offering pre-built models that are ready to use. The SDK also includes capabilities for compiling and mapping trained models onto the processor's hardware, ensuring efficient integration and utilization of computing resources. Moreover, Talamo supports an architecture simulator which allows developers to emulate hardware performance during the design phase. This feature enables rapid prototyping and iterative design, which is crucial for optimizing applications for performance and power efficiency. Thus, Talamo not only empowers developers to build sophisticated AI solutions but also ensures these solutions are practical for deployment across various devices and platforms.
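
The PyTorch-based workflow described above can be pictured roughly as in the sketch below. The torch portion is standard; the talamo calls are hypothetical placeholders for the SDK's compile, map, and simulate steps and are not Innatera's actual API.

```python
import torch
import torch.nn as nn

# Standard PyTorch model definition -- the familiar part of the workflow.
class AudioClassifier(nn.Module):
    def __init__(self, n_features: int = 40, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = AudioClassifier()
# ... train with a normal PyTorch training loop ...

# Hypothetical Talamo-style steps (placeholder names, not the real API):
# import talamo
# mapped = talamo.compile(model, target="spiking-neural-processor")  # map onto hardware
# talamo.simulate(mapped, example_input)                             # architecture simulator
```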

Innatera Nanosystems
All Foundries
All Process Nodes
AI Processor, Content Protection Software, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

KL630 AI SoC

The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, which is the first to support INT4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SoC is designed to handle both high and low light conditions optimally and is perfectly suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.

Kneron
TSMC
12nm LP/LP+
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, Processor Core Independent, USB, VGA, Vision Processor
View Details

KL520 AI SoC

The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.

Kneron
TSMC
65nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is a high-performance AI processor developed to meet the complex demands of artificial intelligence workloads. This accelerator is engineered with cutting-edge AI processing capabilities, enabling rapid data analysis and machine learning model inference. Designed for flexibility, the Hanguang 800 delivers superior computation speed and energy efficiency, making it an optimal choice for AI applications in a variety of sectors, from data centers to edge computing. By supporting high-volume data throughput, it enables organizations to achieve significant advantages in speed and efficiency, facilitating the deployment of intelligent solutions.

T-Head Semiconductor
AI Processor, CPU, IoT Processor, Processor Core Dependent, Security Processor, Vision Processor
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

KL530 AI SoC

The KL530 represents a significant advancement in AI chip technology with a new NPU architecture optimized for both INT4 precision and transformer networks. This SoC is engineered to provide high processing efficiency and low power consumption, making it suitable for AIoT applications and other innovative scenarios. It features an ARM Cortex M4 CPU designed for low-power operation and offers robust computational power of up to 1 TOPS. The chip's ISP enhances image quality, while its codec ensures efficient multimedia compression. Notably, the chip's cold start time is under 500 ms with an average power draw of less than 500 mW, establishing it as a leader in energy efficiency.
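
Taking the cold-start figures quoted above at face value, a rough upper bound on the energy cost of each wake-up can be computed as follows (arithmetic on the quoted numbers only):

```python
# Energy per cold start <= average power x cold-start time.
cold_start_s = 0.5        # "under 500 ms"
avg_power_w = 0.5         # "less than 500 mW" average draw
energy_j = avg_power_w * cold_start_s
print(f"<= {energy_j * 1000:.0f} mJ per cold start")   # <= 250 mJ
```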

Kneron
TSMC
28nm SLP
AI Processor, Camera Interface, Clock Generator, CPU, CSC, GPU, IoT Processor, Peripheral Controller, Vision Processor
View Details

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator revolutionizes computing with its Intelligent Compute Architecture (ICA), delivering unparalleled performance and efficiency for HPC and AI applications. This innovative product leverages real-time adaptability, enabling it to optimize hardware configurations dynamically to match the specific demands of various software workloads. Its standout feature is the elimination of domain-specific languages, offering a universal solution for scientific and technical computing. Equipped with a robust developer toolchain that supports popular languages like C, C++, FORTRAN, and OpenMP, the Maverick-2 seamlessly integrates into existing workflows. This minimizes the need for code rewrites while maximizing developer productivity. By providing extensive support for emerging technologies such as CUDA and HIP/ROCm, Maverick-2 ensures that it remains a viable and potent solution for current and future computing challenges. Built on TSMC's advanced 5nm process, the accelerator incorporates HBM3E memory and high-bandwidth PCIe Gen 5 interfaces, supporting demanding computations with remarkable efficiency. The Maverick-2 achieves a significant power performance advantage, making it ideal for data centers and research facilities aiming for greater sustainability without sacrificing computational power.

Next Silicon Ltd.
TSMC
5nm
11 Categories
View Details

3D Imaging Chip

Altek's 3D Imaging Chip is a breakthrough in the field of vision technology. Designed with an emphasis on depth perception, it enhances the accuracy of 3D scene capturing, making it ideal for applications requiring precise distance gauging such as autonomous vehicles and drones. The chip integrates seamlessly within complex systems, boasting superior recognition accuracy that ensures reliable and robust performance. Building upon years of expertise in 3D imaging, this chip supports multiple 3D modes, offering flexible solutions for devices from surveillance robots to delivery mechanisms. It facilitates medium-to-long-range detection needs thanks to its refined depth sensing capabilities. Altek's approach ensures a comprehensive package from modular design to chip production, creating a cohesive system that marries both hardware and software effectively. Deployed within various market segments, it delivers adaptable image solutions with dynamic design agility. Its imaging prowess is further enhanced by state-of-the-art algorithms that refine image quality and facilitate facial detection and recognition, thereby expanding its utility across diverse domains.

Altek Corporation
TSMC
16nm FFC/FF+
A/D Converter, Analog Front Ends, Coprocessor, Graphics & Video Modules, Image Conversion, JPEG, Oversampling Modulator, Photonics, PLL, Sensor, Vision Processor
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

Crafted to deliver significant power savings, the Tianqiao-70 is a low-power RISC-V CPU that excels in commercial-grade scenarios. This 64-bit CPU core is primarily designed for applications where power efficiency is critical, such as mobile devices and computationally intensive IoT solutions. The core's architecture is specifically optimized to perform under stringent power budgets without compromising on the processing power needed for complex tasks. It provides an efficient solution for scenarios that demand reliable performance while maintaining a low energy footprint. Through its refined design, the Tianqiao-70 supports a broad spectrum of applications, including personal computing, machine learning, and mobile communications. Its versatility and power-awareness make it a preferred choice for developers focused on sustainable and scalable computing architectures.

StarFive Technology
AI Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

WiseEye2 AI Solution

Himax's WiseEye2 AI solution is a pioneering technology aimed at ultra-low power sensor fusion for AI on-device applications. This innovative solution integrates artificial intelligence capabilities within consumer electronics, offering smart solutions for homes, cities, and various industrial applications. The WiseEye2 technology excels in enabling devices to perform complex AI tasks onsite without relying heavily on remote data centers. This feature not only minimizes latency but also enhances privacy aspects by processing data locally. It supports a range of applications from smart home appliances and intelligent security systems to cutting-edge consumer electronics. Designed with efficiency in mind, the WiseEye2 AI solution is built to operate under minimal power conditions, extending the battery life of devices it powers. This makes it ideal for portable and remote applications where energy conservation is critical.

Himax Technologies, Inc.
AI Processor, Vision Processor
View Details

Ceva-SensPro2 - Vision AI DSP

The Ceva-SensPro DSP family unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for areas such as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and is supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. Learn more about the Ceva-SensPro2 solution: https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page
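
The relationship between the MAC counts and the TOPS figures quoted above can be sanity-checked with simple arithmetic. The sketch below assumes 2 operations per MAC per cycle and solves for the implied clock rate; the per-MAC operation count and the resulting clock are assumptions, not published specifications.

```python
# TOPS ≈ MACs x 2 ops/MAC x clock (Hz) / 1e12
def implied_clock_ghz(macs: int, tops: float) -> float:
    return tops * 1e12 / (macs * 2) / 1e9

print(f"Ceva-SP100:  {implied_clock_ghz(128, 0.2):.2f} GHz implied for 0.2 TOPS")
print(f"Ceva-SP1000: {implied_clock_ghz(1024, 2.0):.2f} GHz implied for 2 TOPS")
# -> roughly 0.78 GHz and 0.98 GHz respectively, i.e. both quoted figures
#    are consistent with a clock in the ~0.8-1 GHz range.
```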

Ceva, Inc.
DSP Core, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

aiSim 5

aiSim 5 is at the forefront of automotive simulation, providing a comprehensive environment for the validation and verification of ADAS and AD systems. This innovative simulator integrates AI and physics-based digital twin technology, creating an adaptable and realistic testing ground that accommodates diverse and challenging environmental scenarios. It leverages advanced sensor simulation capabilities to reproduce high fidelity data critical for testing and development. The simulator's architecture is designed for modularity, allowing seamless integration with existing systems through C++ and Python APIs. This facilitates a wide range of testing scenarios while ensuring compliance with ISO 26262 ASIL-D standards, which is a critical requirement for automotive industry trust. aiSim 5 offers developers significant improvements in testing efficiency, allowing for runtime performance adjustments with deterministic outcomes. Some key features of aiSim 5 include the ability to simulate varied weather conditions with real-time adaptable environments, a substantial library of 3D assets, and built-in domain randomization features through aiFab for synthetic data generation. Additionally, its innovative rendering engine, aiSim AIR, enhances simulation realism while optimizing computational resources. This tool serves as an ideal solution for companies looking to push the boundaries of ADAS and AD testing and deployment.

aiMotive
25 Categories
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

KL720 AI SoC

The KL720 AI SoC is designed for optimal performance-to-power ratios, achieving 0.9 TOPS per watt. This makes it one of the most efficient chips available for edge AI applications. The SoC is crafted to meet high processing demands, suitable for high-end devices including smart TVs, AI glasses, and advanced cameras. With an ARM Cortex M4 CPU, it enables superior 4K imaging, full HD video processing, and advanced 3D sensing capabilities. The KL720 also supports natural language processing (NLP), making it ideal for emerging AI interfaces such as AI assistants and gaming gesture controls.

Kneron
TSMC
16nm FFC/FF+
2D / 3D, AI Processor, Audio Interfaces, AV1, Camera Interface, CPU, GPU, Image Conversion, TICO, Vision Processor
View Details

RayCore MC Ray Tracing GPU

The RayCore MC is a revolutionary real-time path and ray-tracing GPU designed to enhance rendering with minimal power consumption. This GPU IP is tailored for real-time applications, offering a rich graphical experience without compromising on speed or efficiency. By utilizing advanced ray-tracing capabilities, RayCore MC provides stunning visual effects and lifelike animations, setting a high standard for quality in digital graphics. Engineered for scalability and performance, RayCore MC stands out in the crowded field of GPU technologies by delivering seamless, low-latency graphics. It is particularly suited for applications in gaming, virtual reality, and the burgeoning metaverse, where realistic rendering is paramount. The architecture supports efficient data management, ensuring that even the most complex visual tasks are handled with ease. RayCore MC's architecture supports a wide array of applications beyond entertainment, making it a vital tool in areas such as autonomous vehicles and data-driven industries. Its blend of power efficiency and graphical prowess ensures that developers can rely on RayCore MC for cutting-edge, resource-light graphic solutions.

Siliconarts, Inc.
2D / 3D, Audio Processor, CPU, GPU, Graphics & Video Modules, Vision Processor
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 is an advanced microcontroller engineered for highly efficient always-on sensing tasks. Integrating a low-power spiking neural network engine with a RISC-V processor core, the T1 provides a compact solution for rapid sensor data processing. Its design supports next-generation AI applications and signal processing while maintaining a minimal power footprint. The processor excels in scenarios requiring both high power efficiency and fast response. By employing a tightly-looped spiking neural network algorithm, the T1 can execute complex pattern recognition and signal processing tasks directly on-device. This autonomy enables battery-powered devices to operate intelligently and independently of cloud-based services, ideal for portable or remote applications. A notable feature includes its low-power operation, making it suitable for use in portable devices like wearables and IoT-enabled gadgets. Embedded with a RISC-V CPU and 384KB of SRAM, the T1 can interface with a variety of sensors through diverse connectivity options, enhancing its versatility in different environments.

Innatera Nanosystems
UMC
28nm
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

AI Inference Platform

Designed to cater to AI-specific needs, SEMIFIVE’s AI Inference Platform provides tailored solutions that seamlessly integrate advanced technologies to optimize performance and efficiency. This platform is engineered to handle the rigorous demands of AI workloads through a well-integrated approach combining hardware and software innovations matched with AI acceleration features. The platform supports scalable AI models, delivering exceptional processing capabilities for tasks involving neural network inference. With a focus on maximizing throughput and efficiency, it facilitates real-time processing and decision-making, which is crucial for applications such as machine learning and data analytics. SEMIFIVE’s platform simplifies AI implementation by providing an extensive suite of development tools and libraries that accelerate design cycles and enhance comprehensive system performance. The incorporation of state-of-the-art caching mechanisms and optimized data flow ensures the platform’s ability to handle large datasets efficiently.

SEMIFIVE
Samsung
5nm, 12nm, 14nm
AI Processor, Cell / Packet, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

2D FFT

The 2D FFT core is designed to efficiently handle two-dimensional FFT processing, ideal for applications in image and video processing where data is inherently two-dimensional. This core is engineered to integrate both internal and external memory configurations, which optimize data handling for complex multimedia processing tasks, ensuring a high level of performance is maintained throughout. Utilizing sophisticated algorithms, the 2D FFT core processes data through two FFT engines. This dual approach maximizes throughput, typically limiting bottlenecks to memory bandwidth constraints rather than computational delays. This efficiency is critical for applications handling large volumes of multimedia data where real-time processing is a requisite. The capacity of the 2D FFT core to adapt to varying processing environments marks its versatility in the digital processing landscape. By ensuring robust data processing capabilities, it addresses the challenges of dynamic data movement, providing the reliability necessary for multimedia systems. Its strategic design supports the execution of intensive computational tasks while maintaining the operational flow integral to real-time applications.
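
The two-engine arrangement described above corresponds to the standard row-column decomposition of a 2D FFT: one pass of 1D FFTs along the rows, a reorganization of the intermediate data through memory, then a second pass along the columns. The NumPy sketch below illustrates that equivalence; it is a conceptual model, not the Dillon core's implementation.

```python
import numpy as np

image = np.random.rand(256, 256)

# Pass 1: 1D FFTs along every row (what the first FFT engine would compute).
rows = np.fft.fft(image, axis=1)
# Pass 2: 1D FFTs along every column of the intermediate result
# (what the second engine would compute after the data is reorganized in memory).
full_2d = np.fft.fft(rows, axis=0)

# The two passes together equal a direct 2D FFT.
assert np.allclose(full_2d, np.fft.fft2(image))
```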

Dillon Engineering, Inc.
Tower, VIS
80nm, 180nm
Coprocessor, Ethernet, Image Conversion, Network on Chip, Receiver/Transmitter, Vision Processor
View Details

RISCV SoC - Quad Core Server Class

The RISCV SoC developed by Dyumnin Semiconductors is engineered around a 64-bit quad-core server-class RISCV CPU, aiming to bridge various application needs with an integrated, holistic system design. Each subsystem of this SoC, from AI/ML capabilities to automotive and multimedia functionalities, is constructed to deliver optimal performance and streamlined operations. Designed as a reference model, this SoC enables quick adaptation and deployment, significantly reducing time-to-market for clients. The AI Accelerator subsystem enhances AI operations by pairing a custom central processing unit with a specialized tensor flow unit. In the multimedia domain, the SoC integrates HDMI, DisplayPort, MIPI, and other advanced graphics and audio technologies, ensuring versatile application across various multimedia requirements. Memory handling is another strength of this SoC, with support for protocols ranging from DDR and MMC to more advanced interfaces like ONFI and SD/SDIO, ensuring seamless connectivity with a wide array of memory modules. Moreover, the communication subsystem encompasses a broad spectrum of connectivity protocols, including PCIe, Ethernet, USB, and SPI, providing a well-rounded solution for modern communication challenges. The automotive subsystem, offering CAN and CAN-FD protocols, further extends its utility into automotive connectivity.

Dyumnin Semiconductors
28 Categories
View Details

Polar ID Biometric Security System

The Polar ID Biometric Security System offers an advanced, secure face unlock capability for smartphones, utilizing groundbreaking meta-optics technology to capture the full polarization state of light. Unlike traditional biometric systems, Polar ID distinguishes the unique polarization signature of human facial features, which adds an additional security layer by detecting the presence of non-human elements like sophisticated 3D masks. This system eliminates the need for multiple complex optical modules, thus simplifying smartphone design while enhancing security. Designed to fit the most compact form factors, Polar ID uses a near-infrared polarization camera at 940nm paired with active illumination. This configuration ensures functionality across various lighting conditions, from bright outdoor environments to complete darkness, and operates effectively even when users wear sunglasses or face masks. Smartphone OEMs can integrate this secure and cost-effective solution onto a wide range of devices, surpassing traditional fingerprint sensors in reliability. Polar ID not only offers a higher resolution than existing solutions but does so at a reduced cost compared to structured light setups, democratizing access to secure biometric authentication across consumer devices. The system's efficiency and compactness are achieved through Metalenz's meta-optic innovations, offering consistent performance regardless of external impediments such as lighting changes.

Metalenz Inc.
13 Categories
View Details

ELFIS2 Image Sensor

ELFIS2 is a sophisticated visible light imaging ASIC designed to deliver superior performance under the extreme conditions typical in space. Known for its radiation hard design, it withstands both total ionizing dose (TID) and single event effects (SEU/SEL), ensuring dependable operation even when exposed to high radiation levels in outer space. This image sensor features HDR (High Dynamic Range) capabilities, which allow it to capture clear, contrast-rich images in environments with varying light intensities, without motion artifacts thanks to its Motion Artifact Free (MAF) technology. Additionally, its global shutter ensures that every pixel is exposed simultaneously, preventing distortion in high-speed imaging applications. Utilizing back-side illumination (BSI), the ELFIS2 achieves superior sensitivity and quantum efficiency, making it an ideal choice for challenging lighting conditions. This combination of advanced features makes the ELFIS2 particularly well-suited for scientific and space-based imaging applications requiring exacting standards.

Caeleste
All Foundries
All Process Nodes
A/D Converter, Analog Front Ends, Analog Subsystems, GPU, Graphics & Video Modules, LCD Controller, Oversampling Modulator, Photonics, Sensor, Temperature Sensor, Vision Processor
View Details

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator II (DNA-II) by EdgeCortix is a versatile and powerful neural network IP core tailored for edge AI applications. Featuring run-time reconfigurable interconnects, it achieves high parallelism and efficiency essential for convolutional and transformer networks. DNA-II can be integrated with a variety of host processors, rendering it adaptable for a wide range of edge-based solutions that demand efficient processing capabilities at the core of AI advancements. This architecture allows real-time reconfiguration of data paths between DNA engines, optimizing parallelism while reducing on-chip memory bandwidth via a patented reconfigurable datapath. The architecture significantly enhances utilization rates and ensures fast processing through model parallelism, making it suitable for mission-critical tasks where low power consumption is paramount. DNA-II serves as the technological backbone of the SAKURA-II AI Accelerator, enabling it to execute generative AI models proficiently. This innovative IP core is engineered to mesh effortlessly with the MERA software stack, optimizing neural network operations through effective scheduling and resource distribution, representing a paradigm shift in how neural network tasks are managed and executed in real-time.

EdgeCortix Inc.
AI Processor, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor, Wireless Processor
View Details

RISC-V CPU IP NA Class

Specially engineered for the automotive industry, the NA Class IP by Nuclei complies with the stringent ISO 26262 functional safety standard. This processor is crafted to handle complex automotive applications, offering the flexibility and rigorous safety protocols necessary for mission-critical transportation technologies. Incorporating a range of functional safety features, the NA Class IP is equipped to ensure not only performance but also reliability and safety in high-stakes vehicular environments.

Nuclei System Technology
AI Processor, CAN-FD, CPU, Cryptography Cores, FlexRay, Microcontroller, Platform Security, Processor Core Dependent, Processor Cores, Security Processor, Vision Processor
View Details

Trifecta-GPU

The Trifecta-GPU delivers exceptional computational power using an embedded NVIDIA RTX A2000 GPU. Focused on the modular test-and-measurement and electronic warfare markets, it is capable of delivering 8.3 FP32 TFLOPS of compute performance. It is tailored for advanced signal processing and machine learning, making it indispensable for modern, software-defined signal processing applications. This GPU is part of the COTS PXIe/CPCIe modular family, known for its flexibility and ease of use. The NVIDIA GPU integration means users can expect robust performance for AI inference applications, facilitating quick deployment in various scenarios requiring advanced data processing. Incorporating the latest in graphical performance, the Trifecta-GPU supports a broad range of applications, from high-end computing tasks to graphics-intensive processes. It is particularly beneficial for those needing a reliable and powerful GPU for modular T&M and EW projects.
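
The 8.3 TFLOPS figure can be cross-checked with the usual FP32 throughput formula; the CUDA core count and boost clock below are assumptions about the RTX A2000 configuration, not values taken from the listing.

```python
# FP32 TFLOPS ≈ CUDA cores x 2 (an FMA counts as 2 FLOPs) x clock (GHz) / 1000
cuda_cores = 3328        # assumed RTX A2000 core count
boost_clock_ghz = 1.25   # assumed boost clock
tflops = cuda_cores * 2 * boost_clock_ghz / 1000
print(f"~{tflops:.1f} FP32 TFLOPS")   # ~8.3 TFLOPS, in line with the quoted figure
```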

RADX Technologies, Inc.
AI Processor, CPU, DSP Core, GPU, Multiprocessor / DSP, Peripheral Controller, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

aiData

aiData is an automated data pipeline tailored for Advanced Driver-Assistance Systems (ADAS) and Autonomous Driving (AD). This system is crucial for processing and transforming extensive real-world driving data into meticulously annotated, training-ready datasets. Its primary focus is on efficiency and precision, significantly reducing the manual labor traditionally associated with data annotation. aiData dramatically speeds up the data preparation process, providing real-time feedback and minimizing data wastage. By employing the aiData Auto Annotator, the system offers superhuman precision in automatically identifying and labeling dynamic entities such as vehicles and pedestrians, achieving significant cost reductions. The implementation of AI-driven data curation and versioning ensures that only the most relevant data is used for model improvement, providing full traceability and customization throughout the data's lifecycle. The pipeline further includes robust metrics for automatically verifying new software outputs, ensuring that performance stays at an optimal level. With aiData, companies are empowered to streamline their ADAS and AD data workflows, ensuring rapid and reliable output from concept to application.

aiMotive
11 Categories
View Details

Prodigy Universal Processor

The Prodigy Universal Processor by Tachyum is a versatile chip that merges the capabilities of CPUs, GPGPUs, and TPUs into a single architecture. This innovation is designed to cater to the needs of AI, HPC, and hyperscale data centers by delivering improved performance, energy efficiency, and server utilization. The chip functions as a general-purpose processor, facilitating various applications from hyperscale data centers to high-performance computing and private clouds. It boasts a seamless integration model, allowing existing software packages to run flawlessly on its uniquely designed instruction set architecture. By providing up to 18.5x increased performance and enhanced performance per watt, Prodigy stands out in the industry, tackling common issues like high power consumption and limited processor performance that currently hamper data centers. It comprises a coherent multiprocessor architecture that supports a wide range of AI and computing workloads, ultimately transforming data centers into universal computing hubs. The design not only aims to lower the total cost of ownership but also contributes to reducing carbon emissions through decreased energy requirements. Prodigy’s architecture supports a diverse range of SKUs tailored to specific markets, making it adaptable to various applications. Its flexibility and superior performance capabilities position it as a significant player in advancing sustainable, energy-efficient computational solutions worldwide. The processor's ability to handle complex AI tasks with minimal energy use underlines Tachyum's commitment to pioneering green technology in the semiconductor industry.

Tachyum Inc.
13 Categories
View Details

TT-Ascalon™

TT-Ascalon™ stands out as a high-performance RISC-V CPU solution from Tenstorrent, tailored for general-purpose control and expansive computing tasks. This processor is distinguished by its scalable out-of-order architecture, which is co-designed and optimized with Tenstorrent's proprietary Tensix IP. The TT-Ascalon™ is engineered to deliver peak performance while maintaining the efficiency of area and power, crucial for modern computational demands. Built on the RISC-V RVA23 profile, TT-Ascalon™ provides a compelling combination of computational speed and energy efficiency, making it suitable for a wide range of applications from data centers to embedded systems. Its superscalar design facilitates the concurrent execution of multiple instructions, enhancing computing throughput and optimizing performance for demanding workloads. The processor’s architecture is further tailored to enable seamless integration into various systems. By complementing its high-efficiency design with comprehensive compatibility, TT-Ascalon™ ensures that users can implement sophisticated computing solutions that evolve with technological advancements and industry needs. This adaptability makes it an ideal choice for enterprises aiming to future-proof their technological infrastructure. Supporting a suite of developer tools and open-source initiatives, the TT-Ascalon™ allows users to freely innovate and tailor their computing solutions. This openness, combined with the processor’s unmatched performance, positions it as a vital component for those looking to maximize their computing efficiency and capabilities.

Tenstorrent
TSMC
22nm, 22nm FDX
AI Processor, CPU, Error Correction/Detection, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

RISC-V CPU IP NX Class

The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.

Nuclei System Technology
Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

CTAccel Image Processor on Intel Agilex FPGA

The CTAccel Image Processor on Intel Agilex FPGA is designed to handle high-performance image processing by capitalizing on the robust capabilities of Intel's Agilex FPGAs. These FPGAs, leveraging the 10 nm SuperFin process technology, are ideal for applications demanding high performance, power efficiency, and compact sizes. Featuring advanced DSP blocks and high-speed transceivers, this IP excels at accelerating image processing tasks that are typically computationally intensive when executed on CPUs. One of the main advantages is its ability to significantly enhance image processing throughput, achieving up to 20 times the speed while maintaining reduced latency. This performance prowess is coupled with low power consumption, leading to decreased operational and maintenance costs due to fewer required server instances. Additionally, the solution is fully compatible with mainstream image processing software, facilitating seamless integration and leveraging existing software investments. The adaptability of the FPGA allows for remote reconfiguration, ensuring that the IP can be tailored to specific image processing scenarios without necessitating a server reboot. This ease of maintenance, combined with a substantial boost in compute density, underscores the IP's suitability for high-demand image processing environments, such as those encountered in data centers and cloud computing platforms.

CTAccel Ltd.
Intel Foundry
12nm
AI Processor, DLL, Graphics & Video Modules, Image Conversion, JPEG, JPEG 2000, Processor Core Independent, Vision Processor
View Details

RISC-V CPU IP NI Class

The NI Class RISC-V CPU IP caters to communication, video processing, and AI applications, providing a balanced architecture for intensive data handling and processing capabilities. With a focus on high efficiency and flexibility, this processor supports advanced data crunching and networking applications, ensuring that systems run smoothly and efficiently even when managing complex algorithms. The NI Class upholds Nuclei's commitment to providing versatile solutions in the evolving tech landscape.

Nuclei System Technology
3GPP-LTE, AI Processor, CPU, Cryptography Cores, Microcontroller, Processor Core Dependent, Processor Cores, Security Processor, Vision Processor
View Details

RISC-V CPU IP NS Class

The NS Class is Nuclei's offering for applications that prioritize security, including fintech solutions. This RISC-V CPU IP targets IoT environments with a highly customizable, security-focused architecture. Supporting advanced security protocols and functional safety features, the NS Class is particularly suited to payment systems and other fintech applications where robust protection and reliable operation are essential. Its design follows the RISC-V standards and comes with configuration options that can be tailored to specific security requirements.

Nuclei System Technology
CPU, Cryptography Cores, Embedded Security Modules, Microcontroller, Platform Security, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

SiFive Performance

The SiFive Performance family offers high-throughput, low-power processor solutions suitable for a wide array of applications, from data centers to consumer devices. The family comprises 64-bit, out-of-order cores with options for vector computation, making it well suited to tasks that demand significant processing power alongside efficiency. Performance cores deliver strong energy efficiency while accommodating a broad range of workload requirements. Their architecture supports up to six-wide out-of-order execution, with tailored options that include multiple vector engines. The cores are designed for flexibility, enabling implementations in consumer electronics, network storage solutions, and complex multimedia processing. By balancing high performance with low power usage, the SiFive Performance family lets users match computational needs to power budgets effectively, reflecting SiFive's focus on flexible, scalable processing in compact packages.

SiFive, Inc.
CPU, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

Neural Network Accelerator

The Neural Network Accelerator from Gyrus AI is a state-of-the-art processing solution tailored for executing neural networks efficiently. It delivers high-performance computing with streamlined power consumption, operating at 30 TOPS/W and reducing clock cycles by 10-30x compared to traditional processors. This supports a variety of neural network structures, ensuring high operational efficiency while minimizing energy demands.

The architecture is optimized for low memory usage, which translates into significantly lower power needs and reduced operational costs. Its design targets optimal die area usage, achieving over 80% utilization across different model structures and supporting compact, effective chip designs. This provides the scalability and flexibility required for varied edge computing applications.

Accompanied by advanced software tools, the IP integrates smoothly into existing systems and makes executing neural networks straightforward. The tools help run complex models with ease, boosting both performance and resource efficiency, which makes the accelerator well suited to companies looking to enhance AI processing on edge devices and maintain a competitive edge in AI-driven markets.
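As a rough check on what the quoted 30 TOPS/W figure implies, the arithmetic below converts it into energy per operation. This is back-of-the-envelope math on the stated number only; real silicon figures depend on workload, precision, and process.

# Energy per operation implied by the quoted 30 TOPS/W (illustrative arithmetic only).
TOPS_PER_WATT = 30

ops_per_joule = TOPS_PER_WATT * 1e12       # 1 W sustained for 1 s delivers this many ops
energy_per_op_fj = 1e15 / ops_per_joule    # convert joules per op to femtojoules per op
print(f"~{energy_per_op_fj:.1f} fJ per operation")   # ~33.3 fJ/op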

Gyrus AI
AI Processor, Coprocessor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor, Vision Processor
View Details

eSi-3264

The eSi-3264 supports both 32- and 64-bit operation, including 64-bit fixed- and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications that require DSP functionality, it delivers this with a minimal silicon footprint. Its comprehensive instruction set includes specialized instructions for a variety of tasks, broadening its practicality across multiple sectors.
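To illustrate the kind of inner loop that 64-bit SIMD DSP extensions are meant to collapse into single instructions, here is a small lane-wise multiply-accumulate sketch in software. The lane count, lane width, and accumulator behaviour are illustrative assumptions, not the eSi-3264 instruction semantics.

# Software model of a 4-lane x 16-bit SIMD multiply-accumulate -- the kind of work
# a 64-bit fixed-point SIMD DSP extension performs in one instruction. Lane count,
# width, and accumulator behaviour are assumptions, not the eSi-3264 ISA.
def simd_mac(acc, a_lanes, b_lanes):
    """Accumulate the lane-wise products of two 4-lane vectors (64-bit accumulator in hardware)."""
    assert len(a_lanes) == len(b_lanes) == 4
    for a, b in zip(a_lanes, b_lanes):
        acc += a * b
    return acc

# Toy FIR-style step: without SIMD this costs four multiplies and four adds.
samples = [100, 200, -50, 25]
coeffs = [3, -1, 2, 5]
print(simd_mac(0, samples, coeffs))   # 300 - 200 - 100 + 125 = 125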

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

Origin E1

The Origin E1 is a streamlined neural processing unit designed for always-on applications in personal electronics and smart devices such as smartphones and security systems. The processor focuses on highly efficient AI performance, achieving around 18 TOPS per watt. Its low power requirements make the E1 ideally suited to tasks that demand continuous data sampling, such as camera pipelines in smart surveillance systems, where it runs on less than 20 mW. Its packet-based architecture ensures efficient resource utilization, maintaining high performance with lower power and area consumption. The E1's adaptability is enhanced through customizable options that allow it to meet specific PPA requirements, making it a go-to choice for applications seeking to improve user privacy and experience by minimizing external memory use.
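Taking the quoted figures at face value, the arithmetic below estimates the compute budget available within the stated always-on power envelope. This is simple multiplication of the numbers above, not a vendor specification.

# Compute budget implied by ~18 TOPS/W at a <20 mW always-on power envelope
# (illustrative arithmetic on the quoted figures only).
tops_per_watt = 18
power_watts = 0.020

available_tops = tops_per_watt * power_watts
print(f"~{available_tops:.2f} TOPS available at 20 mW")   # ~0.36 TOPS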

Expedera
14 Categories
View Details

Vega eFPGA

The Vega eFPGA is a flexible programmable solution crafted to enhance SoC designs with ease and efficiency. The IP offers several advantages, including increased performance, reduced costs, secure IP handling, and straightforward integration. Its versatile architecture allows tailored configurations to suit varying application requirements, with configurable tiles such as CLBs (Configurable Logic Blocks), BRAM (Block RAM), and DSP (Digital Signal Processing) units. Each CLB includes eight 6-input lookup tables with dual outputs, plus an optional fast adder with a carry chain. The BRAM provides 36 Kb of dual-port memory with flexible configuration options, while the DSP tile handles complex arithmetic with 18x20 multipliers and a wide 64-bit accumulator. Focused on easing system design and acceleration, the Vega eFPGA integrates and verifies cleanly in any SoC design. It is backed by a robust EDA toolset and extensive customization features, and it can be adapted to any semiconductor fabrication process. This flexibility and technological robustness make the Vega eFPGA a standout choice for building innovative and complex programmable logic solutions.
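As a behavioural picture of the DSP tile described above (an 18x20-bit signed multiplier feeding a 64-bit accumulator), here is a small software model. The signedness and wrap-around behaviour are assumptions for illustration, not the exact Vega eFPGA datapath.

# Behavioural sketch of an 18x20-bit multiply feeding a 64-bit accumulator, as in
# the DSP tile described above. Signedness and wrap behaviour are illustrative
# assumptions, not the exact hardware datapath.
def to_signed(value, bits):
    """Interpret the low `bits` of value as a two's-complement signed number."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

def dsp_mac(acc, a, b):
    """One multiply-accumulate step: acc += a (18-bit signed) * b (20-bit signed)."""
    product = to_signed(a, 18) * to_signed(b, 20)
    return to_signed(acc + product, 64)   # accumulator wraps at 64 bits

acc = 0
for a, b in [(1000, -2000), (131071, 524287), (-5, 7)]:
    acc = dsp_mac(acc, a, b)
print(acc)   # accumulated result of the three MAC steps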

Rapid Silicon
CPU, Embedded Memories, Multiprocessor / DSP, Processor Core Independent, Vision Processor, WMV
View Details

Pipelined FFT

The Pipelined FFT core delivers continuous data processing through an architecture designed for pipelined execution of FFT computations. It is well suited to environments where data arrives continuously and must be processed with minimal delay. The design minimizes memory footprint while sustaining high-speed data throughput, making it valuable for real-time signal processing applications. By arranging the computation as a pipeline of stages, the core keeps data flowing steadily: each frame is processed stage by stage while the next frame enters behind it. This pipelining reduces overall latency, ensuring that data is processed as quickly as it arrives, which is especially beneficial in time-sensitive applications where processing stalls would degrade system performance. The compact design integrates well into systems requiring consistent data flow and modest resource allocation, managing continuous data streams for applications such as real-time monitoring and control. By ensuring rapid data turnover, the core improves overall system efficiency.
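To make the streaming dataflow concrete, the sketch below models frame-by-frame FFT processing of a continuous sample stream in software: one frame is transformed while the next is being collected, which is the behaviour the core implements in hardware. The frame length and sample source are illustrative assumptions, not the core's actual interface.

import numpy as np

# Software model of streaming, frame-by-frame FFT processing -- the dataflow a
# pipelined FFT core provides in hardware. Frame length and the sample source
# are illustrative assumptions, not the core's actual interface.
FRAME = 1024   # assumed transform length

def sample_stream(num_frames):
    """Stand-in for a continuous ADC/sensor stream, delivered one frame at a time."""
    for _ in range(num_frames):
        yield np.random.randn(FRAME)

def streaming_fft(stream):
    """Transform each incoming frame as soon as it is complete."""
    for frame in stream:
        yield np.fft.fft(frame)

for spectrum in streaming_fft(sample_stream(4)):
    peak_bin = int(np.argmax(np.abs(spectrum[:FRAME // 2])))
    print(f"strongest bin this frame: {peak_bin}")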

Dillon Engineering, Inc.
Intel Foundry, Samsung
90nm, 800nm
Coprocessor, Ethernet, Network on Chip, Receiver/Transmitter, Vision Processor
View Details