
Processor Core Dependent Semiconductor IPs

In the realm of semiconductor IP, the Processor Core Dependent category encompasses a variety of intellectual properties specifically designed to enhance and support processor cores. These IPs are tailored to work in harmony with core processors to optimize their performance, adding value by reducing time-to-market and improving efficiency in modern integrated circuits. This category is crucial for the customization and adaptation of processors to meet specific application needs, addressing both performance optimization and system complexity management.

Processor Core Dependent IPs are integral components, typically found in applications that require robust data processing capabilities such as smartphones, tablets, and high-performance computing systems. They can also be implemented in embedded systems for automotive, industrial, and IoT applications, where precision and reliability are paramount. By providing foundational building blocks that are pre-verified and configurable, these semiconductor IPs significantly simplify the integration process within larger digital systems, enabling a seamless enhancement of processor capabilities.

Products in this category may include cache controllers, memory management units, security hardware, and specialized processing units, all designed to complement and extend the functionality of processor cores. These solutions enable system architects to leverage existing processor designs while incorporating cutting-edge features and optimizations tailored to specific application demands. Such customizations can significantly boost the performance, energy efficiency, and functionality of end-user devices, translating into better user experiences and competitive advantages.

In essence, Processor Core Dependent semiconductor IPs represent a strategic approach to processor design, providing a toolkit for customization and optimization. By focusing on interdependencies within processing units, these IPs allow for the creation of specialized solutions that cater to the needs of various industries, ensuring the delivery of high-performance, reliable, and efficient computing solutions. As the demand for sophisticated digital systems continues to grow, the importance of these IPs in maintaining competitive edge cannot be overstated.

All semiconductor IP: 103 IPs available

CXL 3.1 Switch

Panmnesia's CXL 3.1 Switch is a pivotal component in networking a vast array of CXL-enabled devices, setting the bar with its exceptional scalability and diverse connectivity. The switch supports seamless integration of hundreds of devices including memory, CPUs, and accelerators, facilitating flexible, high-performance configurations suited to demanding applications in data centers and beyond. Panmnesia's design enables easy scalability and efficient memory node expansion, reflecting their dedication to resource-efficient memory management. The CXL 3.1 Switch features a robust architecture that supports a wide array of network topologies, allowing for multi-level switching and complex node configurations. Its design addresses the unique challenges of composable server architecture, enabling fine-grained resource allocation. The switch leverages Panmnesia's proprietary CXL technology, underpinning its ability to perform management tasks across integrated memory spaces with minimal overhead, crucial for achieving high-speed, low-latency data exchange. Incorporating CXL standards, it is fully compatible with both legacy and next-generation devices, ensuring broad interoperability. The architecture allows servers to tailor resource availability by employing type-specific CXL features, such as port-based routing and multi-level switching. These features empower operators with the tools to configure extensive networks of diverse devices efficiently, thereby maximizing data center performance while minimizing costs.

Panmnesia
All Foundries
All Process Nodes
CXL, D2D, Multiprocessor / DSP, PCI, Processor Core Dependent, Processor Core Independent, RapidIO, SAS, SATA, V-by-One
View Details

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

Metis AIPU PCIe AI Accelerator Card

The PCIe AI Accelerator Card powered by Metis AIPU offers unparalleled AI inference performance suitable for intensive vision applications. Incorporating a single quad-core Metis AIPU, it provides up to 214 TOPS, efficiently managing high-volume workloads with low latency. The card is further enhanced by the Voyager SDK, which streamlines application deployment, offering an intuitive development experience and ensuring simple integration across various platforms. Whether for real-time video analytics or other demanding AI tasks, the PCIe Accelerator Card is designed to deliver exceptional speed and precision.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor, WMV
View Details

Origin E1

The Origin E1 is an optimized neural processing unit (NPU) targeting always-on applications in devices like home appliances, smartphones, and security cameras. It provides a compact, energy-efficient solution with performance tailored to 1 TOPS, making it ideal for systems needing low power and minimal area. The architecture is built on Expedera's unique packet-based approach, which enables enhanced resource utilization and deterministic performance, significantly boosting efficiency while avoiding the pitfalls of traditional layer-based architectures. It is fine-tuned to support standard and custom neural networks without requiring external memory, preserving privacy and ensuring fast processing. Its ability to process data in parallel across multiple layers results in predictable performance with low power and latency. Always-sensing cameras leveraging the Origin E1 can continuously analyze visual data, facilitating smoother and more intuitive user interactions. Successful field deployment in over 10 million devices highlights the Origin E1's reliability and effectiveness. Its flexible design allows for adjustments to meet the specific PPA requirements of diverse applications. Offered as Soft IP (RTL) or GDS, this engine is a blend of efficiency and capability, capitalizing on the full scope of Expedera's software tools and custom support features.

Expedera
13 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 accelerator module by Axelera AI is engineered for AI inference on edge devices with power and budget constraints. It leverages the quad-core Metis AIPU, delivering exceptional AI processing in a compact form factor. This solution is ideal for a range of applications, including computer vision in constrained environments, providing robust support for multiple camera feeds and parallel neural networks. With its easy integration and the comprehensive Voyager SDK, it simplifies the deployment of advanced AI models, ensuring high prediction accuracy and efficiency. This module is optimized for NGFF (Next Generation Form Factor) M.2 sockets, boosting the capability of any processing system with modest space and power requirements.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, CPU, Processor Core Dependent, Vision Processor, WMV
View Details

SCR9 Processor Core

Designed for high-demand applications in server and computing environments, the SCR9 Processor Core stands as a robust 64-bit RISC-V solution. It features a 12-stage superscalar, out-of-order pipeline to handle intensive processing tasks, further empowered by its versatile floating-point and vector processing units. The core is prepared to meet extensive computing needs with support for up to 16-core clustering and seamless AOSP or Linux operating system integration.

A powerful memory subsystem including L1, L2, and shared L3 caches enhances data handling, while features like memory coherency ensure fluid operation in multi-core settings. Extensions for cryptography and vector operations further diversify its application potential, establishing the SCR9 as an ideal candidate for cutting-edge data tasks.

From enterprise servers to personal computing devices, video processing, and high-performance computations for AI and machine learning, the SCR9 delivers across an array of demanding scenarios. Its design integrates advanced power and process technologies to cater to complex computing landscapes, embodying efficiency and innovation in processor core technology.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Cores
View Details

NuLink Die-to-Die PHY for Standard Packaging

Eliyan's NuLink Die-to-Die PHY technology represents a significant advancement in chiplet interconnect solutions. Designed for standard packaging, this innovative PHY IP delivers robust high-performance with low power consumption, a balance that is crucial for modern semiconductor designs. The NuLink PHY supports multiple industry standards, including the Universal Chiplet Interface Express (UCIe) and Bunch of Wires (BoW), ensuring it can cater to a wide range of applications. A standout feature of the NuLink PHY is its simultaneous bidirectional (SBD) signaling capability, which allows data to be sent and received over the same wire at the same time, effectively doubling bandwidth. This makes it an ideal solution for data-intensive applications such as AI training and inference, particularly those requiring ultra-low latency and high reliability. The technology is also adaptable for different substrates, including both silicon and organic, offering designers flexibility in their packaging approaches. NuLink's architecture stems from extensive industry insights and is informed by Eliyan’s commitment to innovation. The platform provides a power-efficient and cost-effective alternative to traditional advanced packaging solutions. It achieves interposer-like performance metrics without the complexity and cost associated with such methods, enabling operational efficiency and reduced time-to-market for new semiconductor products.

Eliyan
All Foundries
4nm, 7nm
AMBA AHB / APB/ AXI, CXL, D2D, MIPI, Network on Chip, Processor Core Dependent
View Details

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell
View Details

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. 
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
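
The memory-efficiency argument above follows from a well-known property of single-batch LLM decoding: it is bandwidth-bound, because generating each token requires streaming roughly the entire (quantized) model through memory. A back-of-the-envelope sketch, using purely illustrative numbers (a hypothetical 3B-parameter model and a single 12.8 GB/s LPDDR4 channel, not RaiderChip specifications):

```python
# Decoder-only LLM inference at batch size 1 is memory-bandwidth bound:
# each token requires reading roughly all model weights once, so
# tokens/s ~= memory bandwidth / model size in bytes.
# All numbers here are illustrative assumptions, not vendor specs.

def tokens_per_second(params: float, bits_per_weight: int, bandwidth_gbs: float) -> float:
    model_bytes = params * bits_per_weight / 8
    return bandwidth_gbs * 1e9 / model_bytes

# Hypothetical 3B-parameter model, 4-bit weights, one 12.8 GB/s LPDDR4 channel:
print(round(tokens_per_second(3e9, 4, 12.8), 1))  # → 8.5
```

The same bound shows why maximizing tokens per unit of bandwidth matters: at a fixed memory channel, 4-bit weights yield roughly four times the throughput of 16-bit weights.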

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Veyron V2 CPU

Ventana's Veyron V2 CPU represents the pinnacle of high-performance AI and data center-class RISC-V processors. Engineered to deliver world-class performance, it supports extensive data center workloads, offering superior computational power and efficiency. The V2 model is particularly focused on accelerating AI and ML tasks, ensuring compute-intensive applications run seamlessly. Its design makes it an ideal choice for hyperscale, cloud, and edge computing solutions where performance is non-negotiable. This CPU is instrumental for companies aiming to scale with the latest in server-class technology.

Ventana Micro Systems
AI Processor, CPU, Processor Core Dependent, Processor Cores
View Details

Origin E8

The Origin E8 NPU by Expedera is engineered for the most demanding AI deployments such as automotive systems and data centers. Capable of delivering up to 128 TOPS per core and scalable to PetaOps with multiple cores, the E8 stands out for its high performance and efficient processing. Expedera's packet-based architecture allows for parallel execution across varying layers, optimizing resource utilization, and minimizing latency, even under strenuous conditions. The E8 handles complex AI models, including large language models (LLMs) and standard machine learning frameworks, without requiring significant hardware-specific changes. Its support extends to 8K resolutions and beyond, ensuring coverage for advanced visualization and high-resolution tasks. With its low deterministic latency and minimized DRAM bandwidth needs, the Origin E8 is especially suitable for high-performance, real-time applications. The high-speed processing and flexible deployment benefits make the Origin E8 a compelling choice for companies seeking robust and scalable AI infrastructure. Through customized architecture, it efficiently addresses the power, performance, and area considerations vital for next-generation AI technologies.

Expedera
12 Categories
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

Vehicle Engineering & Design Solutions

KPIT's Vehicle Engineering & Design Solutions empower manufacturers to leverage cutting-edge design and simulation technologies for efficient vehicle development. Focusing on sustainability and cost-efficient production, these solutions include advanced CAD methodologies and digital twins that streamline design processes. KPIT provides a comprehensive suite of engineering services that help OEMs develop appealing and compliant vehicles, facilitating faster time-to-market and innovation across model ranges.

KPIT Technologies
Microcontroller, Processor Core Dependent
View Details

Digital Connected Solutions

The Digital Connected Solutions offered by KPIT are designed to bridge the gap between in-vehicle systems and the connected world. Emphasizing personalization and safety, these solutions feature a variety of interactive and data-driven technologies including augmented reality head-up displays, gesture controls, and AI-based customization. KPIT's framework ensures seamless integration of these technologies, transforming vehicles into intelligent, user-centric marketplaces.

KPIT Technologies
Processor Core Dependent, Processor Core Independent
View Details

Chimera GPNPU

The Chimera GPNPU is a general-purpose neural processing unit designed to address key challenges faced by system on chip (SoC) developers when deploying machine learning (ML) inference solutions. It boasts a unified processor architecture capable of executing matrix, vector, and scalar operations within a single pipeline. This architecture integrates the functions of a neural processing unit (NPU), digital signal processor (DSP), and other processors, which significantly simplifies code development and hardware integration. The Chimera GPNPU can manage various ML networks, including classical frameworks, vision transformers, and large language models, all within a single processor framework. Its flexibility allows developers to optimize performance across different applications, from mobile devices to automotive systems. The GPNPU family is fully synthesizable, making it adaptable to a range of performance requirements and process technologies, ensuring long-term viability and adaptability to changing ML workloads. The Chimera GPNPU's sophisticated design includes a hybrid Von Neumann and 2D SIMD matrix architecture, predictive power management, and sophisticated memory optimization techniques, including an L2 cache. These features help reduce power usage and enhance performance by enabling the processor to efficiently handle complex neural network computations and DSP algorithms. By merging the best qualities of NPUs and DSPs, the Chimera GPNPU establishes a new benchmark for performance in AI processing.

Quadric
All Foundries
All Process Nodes
AI Processor, AMBA AHB / APB/ AXI, CPU, DSP Core, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, VGA, Vision Processor
View Details

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interface support ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

AndeShape Platforms

The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.

Andes Technology
Embedded Memories, Microcontroller, Processor Core Dependent, Processor Core Independent, Standard cell
View Details

xcore.ai

xcore.ai is a versatile platform specifically crafted for the intelligent IoT market. It hosts a unique architecture with multi-threading and multi-core capabilities, ensuring low latency and high deterministic performance in embedded AI applications. Each xcore.ai chip contains 16 logical cores organized in two multi-threaded processor 'tiles' equipped with 512kB of SRAM and a vector unit for enhanced computation, enabling both integer and floating-point operations. The design accommodates extensive communication infrastructure within and across xcore.ai systems, providing scalability for complex deployments. Integrated with embedded PHYs for MIPI, USB, and LPDDR, xcore.ai is capable of handling a diverse range of application-specific interfaces. Leveraging its flexibility in software-defined I/O, xcore.ai offers robust support for AI, DSP, and control processing tasks, making it an ideal choice for enhancing IoT device functionalities. With its support for FreeRTOS, C/C++ development environment, and capability for deterministic processing, xcore.ai guarantees precision in performance. This allows developers to partition xcore.ai threads optimally for handling I/O, control, DSP, and AI/ML tasks, aligning perfectly with the specific demands of various applications. Additionally, the platform's power optimization through scalable tile clock frequency adjustment ensures cost-effective and energy-efficient IoT solutions.
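
As a purely illustrative sketch (this is not XMOS tooling or an xcore SDK API), the thread-partitioning idea above can be expressed as a simple static budget: 16 logical cores split across the two tiles, with each tile's 8 hardware threads assigned to the I/O, control, DSP, and AI/ML roles named in the description:

```python
# Hypothetical thread budget for one xcore.ai device: two tiles,
# 8 hardware threads each (16 logical cores total). The role names
# and counts are illustrative assumptions, not a vendor-defined mapping.
THREADS_PER_TILE = 8

partition = {
    "tile0": {"io": 3, "control": 2, "dsp": 3},
    "tile1": {"ai_ml": 6, "dsp": 2},
}

for tile, roles in partition.items():
    used = sum(roles.values())
    # A tile cannot run more threads than it has hardware contexts for.
    assert used <= THREADS_PER_TILE, f"{tile} oversubscribed"
    print(tile, f"{used}/{THREADS_PER_TILE} threads", roles)
```

Because each xcore thread executes with deterministic timing, a static budget like this can be validated at design time rather than left to a runtime scheduler.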

XMOS Semiconductor
TSMC
20nm
19 Categories
View Details

Veyron V1 CPU

The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for robust performance and efficiency, it handles a variety of tasks with precision. Utilizing RISC-V open architecture, the Veyron V1 is easily integrated into custom high-performance solutions. It aims to support the next-generation data center architectures, promising seamless scalability for various applications. The CPU is crafted to compete effectively against ARM and x86 data center CPUs, providing the same class-leading performance with added flexibility for bespoke integrations.

Ventana Micro Systems
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Cores
View Details

Avispado

The Avispado is a sleek and efficient 64-bit RISC-V in-order processing core tailored for applications where energy efficiency is key. It supports a 2-wide in-order issue, emphasizing minimal area and power consumption, which makes it ideal for energy-conscious system-on-chip designs. The core is equipped with direct support for unaligned memory accesses and is multiprocessor-ready, providing a versatile solution for modern AI needs. With its small footprint, Avispado is perfect for machine learning systems requiring little energy per operation. This core is fully compatible with RISC-V Vector Specification 1.0, interfacing seamlessly with Semidynamics' vector units to support vector instructions that enhance computational efficiency. The integration with Gazzillion Misses™ technology allows support for extensive memory latency workloads, ideal for key applications in data center machine learning and recommendation systems. The Avispado also features a robust set of RISC-V instruction set extensions for added capability and operates smoothly within Linux environments due to comprehensive memory management unit support. Multiprocessor-ready design ensures flexibility in embedding many Avispado cores into high-bandwidth systems, facilitating powerful and efficient processing architectures.

Semidynamics
AI Processor, AMBA AHB / APB/ AXI, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, WMA
View Details

Origin E2

The Origin E2 from Expedera is engineered to perform AI inference with a balanced approach, excelling under power and area constraints. This IP is strategically designed for devices ranging from smartphones to edge nodes, providing up to 20 TOPS performance. It features a packet-based architecture that enables parallel execution across layers, improving resource utilization and performance consistency. The engine supports a wide variety of neural networks, including transformers and custom networks, ensuring compatibility with the latest AI advancements. Origin E2 caters to high-resolution video and audio processing up to 4K, and is renowned for its low latency and enhanced performance. Its efficient structure keeps power consumption down, helping devices run demanding AI tasks more effectively than with conventional NPUs. This architecture ensures a sustainable reduction in the dark silicon effect while maintaining high operating efficiencies and accuracy thanks to its TVM-based software support. Deployed successfully in numerous smart devices, the Origin E2 guarantees power efficiency sustained at 18 TOPS/W. Its ability to deliver exceptional quality across diverse applications makes it a preferred choice for manufacturers seeking robust, energy-conscious solutions.
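
The efficiency figure quoted above implies a power envelope directly, since power is simply throughput divided by efficiency. A quick sanity check (the full-load workload size is an assumption for illustration):

```python
# P = throughput / efficiency. At the quoted 18 TOPS/W, a hypothetical
# full-load 20 TOPS workload would draw on the order of 1.1 W.
def power_watts(tops: float, tops_per_watt: float) -> float:
    return tops / tops_per_watt

print(round(power_watts(20, 18), 2))  # → 1.11
```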

Expedera
12 Categories
View Details

ISPido on VIP Board

ISPido on VIP Board is a specialized runtime solution designed for optimal performance with Lattice Semiconductor's Video Interface Platform. It features versatile configurations aimed at real-time image optimization, allowing users to choose between automatic best-setting selection or manual adjustments via menu-driven interfaces for precise gamma control. Compatible with two Sony IMX 214 image sensors, this setup ensures superior image clarity. The HDMI VIP Output Bridge Board and sophisticated calibration menus via serial ports offer further adaptability, accommodating unique project requirements effortlessly. This versatility, combined with efficient HDMI 1920 x 1080p output utilizing YCrCb 4:2:2, ensures that image quality remains consistently high. ISPido's modular design ensures seamless integration and easy calibration, facilitating custom user preferences through real-time menu interfaces. Whether choosing gamma tables, applying varied filters, or selecting other personalization options, ISPido on VIP Board provides robust support tailored to electronic visualization devices.

DPControl
15 Categories
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across variegated hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference velocity, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
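
The 75% memory-footprint reduction quoted above is consistent with simple weight-only quantization arithmetic: moving from 16-bit to 4-bit weights keeps 4/16 of the bytes. A sketch with an assumed 3B-parameter model (the model size is illustrative, not a vendor figure):

```python
# Weight-only quantization shrinks model memory in proportion to bit width:
# fp16 (16 bits) -> 4-bit keeps 4/16 of the bytes, i.e. a 75% reduction.
def memory_gb(params: float, bits_per_weight: int) -> float:
    return params * bits_per_weight / 8 / 1e9

fp16 = memory_gb(3e9, 16)  # assumed 3B-parameter model
q4 = memory_gb(3e9, 4)
print(fp16, q4, f"{1 - q4 / fp16:.0%}")  # → 6.0 1.5 75%
```

In practice, K-quant formats such as Q4_K carry small per-block scale metadata, so real footprints land slightly above this idealized bound.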

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Hanguang 800 AI Accelerator

Hanguang 800 AI Accelerator is a revolutionary AI processing powerhouse designed by T-Head to maximize computational efficiency for artificial intelligence applications. By leveraging state-of-the-art chip design, it achieves high processing speeds, significantly reducing inference times for machine learning tasks. Its architecture supports intricate AI algorithms, allowing swift model training and execution across billions of parameters. Optimized for neural network operations, the accelerator handles dense mathematical computations with minimal power consumption. Incorporating adaptive learning mechanisms, it refines its processing strategies in real time, yielding efficiency gains. This is complemented by an advanced cooling system that manages thermal output without compromising computational power. Hanguang 800's application spectrum is broad, encompassing cloud-based AI services, edge computing, and embedded AI solutions in devices, positioning it as an ideal solution for industries demanding high-speed and scalable AI processing. Its integration into T-Head's ecosystem underscores the company's commitment to delivering top-tier performance hardware for next-gen intelligent systems.

T-Head
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

aiWare

aiWare stands out as a premier hardware IP for high-performance neural processing, tailored for complex automotive AI applications. By offering exceptional efficiency and scalability, aiWare empowers automotive systems to harness the full power of neural networks across a wide variety of functions, from Advanced Driver Assistance Systems (ADAS) to fully autonomous driving platforms. It boasts an innovative architecture optimized for both performance and energy efficiency, making it capable of handling the rigorous demands of next-generation AI workloads. The aiWare hardware features an NPU designed to achieve up to 256 Effective Tera Operations Per Second (TOPS), delivering high performance at significantly lower power. This is made possible through a thoughtfully engineered dataflow and memory architecture that minimizes the need for external memory bandwidth, thus enhancing processing speed and reducing energy consumption. The design ensures that aiWare can operate efficiently across a broad range of conditions, maintaining its edge in both small and large-scale applications. A key advantage of aiWare is its compatibility with aiMotive's aiDrive software, facilitating seamless integration and optimizing neural network configurations for automotive production environments. aiWare's development emphasizes strong support for AI algorithms, ensuring robust performance in diverse applications, from edge processing in sensor nodes to high central computational capacity. This makes aiWare a critical component in deploying advanced, scalable automotive AI solutions, designed specifically to meet the safety and performance standards required in modern vehicles.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator II (DNA-II) is an advanced IP core that elevates neural processing capabilities for edge AI applications. It is adaptable to various systems, exhibiting remarkable efficiency through its runtime reconfigurable interconnects, which aid in managing both transformer and convolutional neural networks. Designed for scalability, DNA-II supports numerous applications ranging from 1k MACs to extensive SoC implementations. DNA-II's architecture enables optimal parallelism by dynamically managing data paths between compute units, ensuring minimized on-chip memory bandwidth and maximizing operational efficiency. Paired with the MERA software stack, it provides seamless integration and optimization of neural network tasks, significantly enhancing computation ordering and resource distribution. Its applicability extends across various industry demands, massively increasing the operational efficiency of AI tasks at the edge. DNA-II, the pivotal force in the SAKURA-II Accelerator, brings innovative processing strength in compact formats, driving forward the development of edge-based generative AI and other demanding applications.

EdgeCortix Inc.
AI Processor, Audio Processor, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

Tyr Superchip

The Tyr Superchip is engineered to tackle the most daunting computational challenges in edge AI, autonomous driving, and decentralized AIoT applications. It merges AI and DSP functionalities into a single, unified processing unit capable of real-time data management and processing. This all-encompassing chip solution handles the vast sensor data required for full autonomous driving and supports rapid AI computing at the edge. A key challenge it addresses is combining massive compute power with low-latency output, achieving levels of energy efficiency and speed that traditional architectures cannot. Tyr chips are built to robust safety standards, being ISO 26262 and ASIL-D ready, making them well suited to the critical requirements of automotive systems. Designed with high programmability, the Tyr Superchip accommodates the fast-evolving needs of AI algorithms and supports modern software-defined vehicles. Its low power consumption, under 50 W for higher-end tasks, paired with a small silicon footprint, ensures it meets eco-friendly demands while staying cost-effective. VSORA's Superchip is a testament to their innovative prowess, promising unmatched efficiency in processing real-time data streams. By providing both power and processing agility, it effectively supports the future of mobility and AI-driven automation, reinforcing VSORA's position as a forward-thinking leader in semiconductor technology.

VSORA
AI Processor, Audio Processor, CAN XL, CPU, Interleaver/Deinterleaver, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

RegSpec - Register Specification Tool

RegSpec is a comprehensive register specification tool that excels in generating Control Configuration and Status Register (CCSR) code. The tool is versatile, supporting various input formats like SystemRDL, IP-XACT, and custom formats via CSV, Excel, XML, or JSON. Its ability to output in formats such as Verilog RTL, System Verilog UVM code, and SystemC header files makes it indispensable for IP designers, offering extensive features for synchronization across multiple clock domains and interrupt handling. Additionally, RegSpec automates verification processes by generating UVM code and RALF files useful in firmware development and system modeling.
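The spec-in, RTL-out flow described above can be illustrated with a toy generator. The CSV columns, module name, and emitted Verilog below are hypothetical examples for illustration only, not RegSpec's actual formats or output:

```python
import csv
import io

# Hypothetical register spec in CSV form: name, byte offset, width, reset value.
# (Illustrative only -- not RegSpec's real input schema.)
SPEC_CSV = """name,offset,width,reset
CTRL,0x00,8,0x01
STATUS,0x04,16,0x0000
"""

def emit_verilog(spec_csv: str, module: str = "regs") -> str:
    """Emit a minimal Verilog register block from a CSV spec (toy RegSpec-style flow)."""
    rows = list(csv.DictReader(io.StringIO(spec_csv)))
    decls = "\n".join(
        f"  reg [{int(r['width']) - 1}:0] {r['name'].lower()}; // offset {r['offset']}"
        for r in rows
    )
    resets = "\n".join(
        f"      {r['name'].lower()} <= {int(r['reset'], 16)};" for r in rows
    )
    return (
        f"module {module}(input clk, input rst_n);\n{decls}\n"
        f"  always @(posedge clk or negedge rst_n)\n"
        f"    if (!rst_n) begin\n{resets}\n    end\nendmodule\n"
    )

rtl = emit_verilog(SPEC_CSV)
assert "reg [7:0] ctrl;" in rtl and "reg [15:0] status;" in rtl
```

In a real flow, the same parsed spec would drive the tool's other output views (UVM register models, RALF files, C headers) from one source of truth, which is what keeps RTL and verification collateral in sync.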

Dyumnin Semiconductors
TSMC
28nm, 65nm, 90nm
AMBA AHB / APB/ AXI, Disk Controller, DMA Controller, Flash Controller, GPU, I/O Library, I2C, Input/Output Controller, ONFI Controller, PCI, Processor Core Dependent, Receiver/Transmitter
View Details

ISPido

ISPido is a sophisticated Image Signal Processing pipeline designed for comprehensive image enhancement tasks. It is highly configurable via the AXI4-Lite protocol, supporting integration with processors such as RISC-V cores. The pipeline accommodates procedures such as defective pixel correction, color interpolation using the Malvar-He-Cutler algorithm, and various statistical adjustments for adaptive control. Furthermore, ISPido incorporates comprehensive color conversion, with support for HDR processing and chroma resampling to 4:2:2/4:2:0 formats. Supporting bit depths of 8, 10, or 12 bits and resolutions up to 7680x7680, ISPido delivers the high-resolution output crucial for next-generation image processing needs. This flexibility suits projects ranging from low-power devices to ultra-high-definition vision systems. Each component of ISPido aligns with AMBA AXI4 standards, ensuring broad compatibility and modular customization. These features make it an ideal choice for heterogeneous electronics ecosystems involving CPUs, GPUs, and specialized processors, further solidifying its practicality for widespread deployment.

DPControl
19 Categories
View Details

Azurite Core-hub

Azurite Core-hub is an innovative processor solution that excels in performance, catering to challenging computational tasks with efficiency and speed. Designed with the evolving needs of industries in mind, Azurite leverages cutting-edge RISC-V architecture to deliver high performance while maintaining scalability and flexibility in design. This processor core stands out for its ability to streamline tasks and simplify the complexities often associated with processor integration. The Azurite Core-hub's architecture is tailored to enhance computation-intensive applications, ensuring rapid execution and robust performance. Its open-source RISC-V base supports easy integration and freedom from vendor lock-in, providing users the liberty to customize their processors according to specific project needs. This adaptability makes Azurite an ideal choice for sectors like AI/ML where high performance is crucial. InCore Semiconductors has fine-tuned the Azurite Core-hub to serve as a powerhouse in the processor core market, ensuring that it meets rigorous performance benchmarks. It offers a seamless blend of high efficiency and user-friendly configurability, making it a versatile asset for any design environment.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series represents a lineup of processor cores that leverage the open standard architecture of RISC-V to deliver highly customizable computational solutions. These cores provide a balance between power efficiency and performance, making them ideal for a broad range of applications, including IoT devices and embedded systems. The BK series cores are designed to be versatile, supporting a variety of operating systems while allowing full customization to meet specific workload demands. This flexibility empowers designers to implement custom instructions and optimize the cores for particular applications without compromising on power budgets. The series is compliant with RISC-V standards, ensuring seamless integration with other RISC-V based solutions.

Codasip
CPU, IoT Processor, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator stands out as a high-performance, energy-efficient edge co-processor designed to handle advanced AI tasks. Tailored for real-time, batch-one AI inferencing, it supports multi-billion parameter models, such as Llama 2 and Stable Diffusion, while maintaining low power consumption. The core technology leverages a dynamic neural accelerator for runtime reconfigurability and exceptional parallel processing, making it ideal for edge-based generative AI applications. With its flexible architecture, SAKURA-II facilitates the seamless execution of diverse AI models concurrently, without compromising on efficiency or speed. Integrated with the MERA compiler framework, it ensures easy deployment across various hardware systems, supporting frameworks like PyTorch and TensorFlow Lite for seamless integration. This AI accelerator excels in AI models for vision, language, and audio, fostering innovative content creation across these domains. Moreover, SAKURA-II supports a robust DRAM bandwidth, far surpassing competitors, ensuring superior performance for large language and vision models. It offers support for significant neural network demands, making it a powerful asset for developers in the edge AI landscape.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Zhenyue 510 SSD Controller

The Zhenyue 510 SSD Controller exemplifies T-Head's cutting-edge design in enterprise-grade storage solutions. Engineered for exceptional I/O processing, the controller reaches benchmarks of 3400K IOPS and a data bandwidth of 14 GByte/s. Its architecture integrates tightly controlled power management units with adaptable read/write power allocations, delivering power efficiency of 420K IOPS per Watt. To guarantee data integrity, it uses T-Head's proprietary error-checking algorithms, which provide a high correction rate and significantly reduce error counts. Incorporating hardware-software co-designed algorithms, the Zhenyue 510 can precisely predict potential charge drift in flash memory at scale, optimizing storage reliability and longevity. The controller's versatility is enhanced by its 16 high-speed NAND channels, offering ample bandwidth for high-volume data demands while maintaining effective isolation in multi-tenant environments. Its SR-IOV support extends its utility across cloud-based and virtualized applications, underscoring its adaptability in modern computing scenarios, including online transactions, big data storage, and edge computing architectures.

T-Head
Flash Controller, NAND Flash, NVM Express, ONFI Controller, Processor Core Dependent, RLDRAM Controller, SAS, SATA, SDRAM Controller
View Details

Jotunn8 AI Accelerator

The Jotunn8 is engineered to redefine performance standards for AI datacenter inference, supporting prominent large language models. Fully programmable and algorithm-agnostic, it supports any algorithm and any host processor, and can execute generative AI models such as GPT-4 or Llama3 with high efficiency. The system excels in delivering cost-effective solutions, offering throughput of up to 3.2 petaflops (dense) without relying on CUDA, thus simplifying scalability and deployment. Optimized for cloud and on-premise configurations, Jotunn8 ensures maximum utility by integrating 16 cores and a high-level programming interface. Its innovative architecture addresses conventional processing bottlenecks, allowing constant data availability at each processing unit. With the potential to operate large and complex models at reduced query costs, this accelerator maintains performance while consuming less power, making it a strong choice for advanced AI tasks. The Jotunn8's hardware extends beyond AI-specific applications to general-purpose (GP) processing, showcasing its agility. By automatically selecting the most suitable processing path layer by layer, it optimizes both latency and power consumption. This provides users with a flexible platform that supports the deployment of vast AI models under efficient resource-utilization strategies. The product's configuration includes a peak power consumption of 180 W and 192 GB of on-chip memory, accommodating sophisticated AI workloads with ease. It aligns closely with theoretical limits for implementation efficiency, accentuating VSORA's commitment to high-performance computational capabilities.

VSORA
AI Processor, Interleaver/Deinterleaver, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

Time-Triggered Protocol

The Time-Triggered Protocol (TTP) is an advanced communication protocol designed for highly reliable and deterministic networks, primarily utilized in the aerospace and automotive sectors. It provides a framework for the synchronized execution of tasks within a network, facilitating precise timing and coordination. By ensuring that data transmission occurs at predetermined times, TTP enhances the predictability and reliability of network operations, making it vital for safety-critical applications. The protocol is engineered for environments where reliability and determinism are non-negotiable, offering robust fault tolerance and scalability. This makes it particularly suited for complex systems such as avionics, where precise timing and synchronization are crucial. The design of TTP allows for easy integration and scalability, providing flexibility that can accommodate evolving system requirements or new technological advancements. Moreover, TTP adheres rigorously to real-time communication standards, enabling seamless integration across platforms. Its deterministic nature ensures that network communications are predictable and maintain high standards of safety and fault tolerance. These features are crucial for maintaining operational integrity in critical applications like aerospace and automotive systems.
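The core idea of time-triggered communication, in which each node transmits only in its pre-assigned slot of a cyclic schedule agreed at design time, can be sketched generically. The node names, slot length, and schedule below are hypothetical, and this is an illustrative TDMA-style round, not TTTech's actual frame format or API:

```python
# Illustrative sketch of a time-triggered (TDMA-style) communication round.
# All nodes share the same static schedule, so transmissions never collide.

SLOT_LEN_US = 500  # fixed slot duration in microseconds (hypothetical value)

# Static schedule: slot index -> transmitting node, fixed at design time.
SCHEDULE = ["brake_ctrl", "steer_ctrl", "engine_ctrl", "gateway"]

def slot_for(time_us: int) -> tuple[int, str]:
    """Map a global time to (slot index, owning node) within the cyclic round."""
    round_len = SLOT_LEN_US * len(SCHEDULE)
    slot = (time_us % round_len) // SLOT_LEN_US
    return slot, SCHEDULE[slot]

def may_transmit(node: str, time_us: int) -> bool:
    """A node may send only inside its own pre-assigned slot (deterministic by design)."""
    return slot_for(time_us)[1] == node

# At t=0 the first slot belongs to brake_ctrl; every other node stays silent.
assert may_transmit("brake_ctrl", 0)
assert not may_transmit("steer_ctrl", 0)
assert slot_for(1200) == (2, "engine_ctrl")
```

Because the schedule is known to every node in advance, receivers can also detect a missing transmission immediately, which is the basis for the fast fault detection that time-triggered networks are valued for.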

TTTech Computertechnik AG
AMBA AHB / APB/ AXI, CAN, CAN-FD, Ethernet, FlexRay, MIPI, Processor Core Dependent, Safe Ethernet, Temperature Sensor
View Details

SiFive Intelligence X280

Tailored specifically for AI and machine learning requirements at the edge, the SiFive Intelligence X280 brings powerful capabilities to data-intensive applications. This processor line is part of the high-performance AI data flow processors from SiFive, designed to offer scalable vector computation capabilities. Key features include handling demanding AI workloads, efficient data flow management, and enhanced object detection and speech recognition processing. The X280 is equipped with vector processing capabilities that include a 512-bit vector length, single vector ALU VCIX (1024-bit), plus a host of new instructions optimized for machine learning operations. These features provide a robust platform for addressing energy-efficient inference tasks, driven by the need for high-performance yet low-power computing solutions. Key to the X280's appeal is its ability to interface seamlessly with popular machine learning frameworks, enabling developers to deploy models with ease and flexibility. Additionally, its compatibility with SiFive Intelligence Extensions and TensorFlow Lite enhances its utility in delivering consistent, high-quality AI processing in various applications, from automotive to consumer devices.

SiFive, Inc.
AI Processor, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

Monolithic Microsystems

Monolithic Microsystems from Imec are revolutionizing how electronic integration is perceived by offering a platform that seamlessly combines microelectronics and microsystems. These systems are engineered to provide high functionality while maintaining a compact footprint, making them ideal for applications in areas like sensing, actuation, and control across a variety of sectors including industrial automation, medical devices, and consumer electronics. The Monolithic Microsystems platform enables the integration of various subsystems onto a single semiconductor chip, thereby reducing the size, power consumption, and cost of complex electronic devices. This not only streamlines device architecture but also enhances reliability and performance by mitigating the interconnect challenges associated with multi-chip assemblies. Imec’s comprehensive resources and expertise in semiconductor manufacturing are harnessed to deliver solutions that meet the rigorous demands of cutting-edge applications. From design to production, the Monolithic Microsystems offer a leap in capability for next-generation devices, facilitating innovations that require robust, integrated microsystem technologies.

Imec
Tower
28nm
Analog Subsystems, Building Blocks, CPU, Embedded Security Modules, Input/Output Controller, Interrupt Controller, Processor Core Dependent, Processor Core Independent, Sensor, Standard cell
View Details

RISCV SoC - Quad Core Server Class

Dyumnin's RISC-V SoC is built around a robust 64-bit quad-core server-class RISC-V CPU, offering subsystems that cater to AI/ML, automotive, multimedia, memory, and cryptographic needs. The SoC is notable for its AI accelerator, comprising a custom CPU and tensor flow unit designed to expedite AI tasks. The communication subsystem supports a wide array of protocols including PCIe, Ethernet, and USB, ensuring versatile connectivity. For the automotive sector, it includes CAN and SafeSPI IPs, reinforcing its utility in automotive systems.

Dyumnin Semiconductors
TSMC
14nm, 28nm, 32nm
2D / 3D, 3GPP-5G, 802.11, AI Processor, CPU, DDR, LCD Controller, LIN, Mobile DDR Controller, Multiprocessor / DSP, Other, Processor Core Dependent, SAS, USB, V-by-One, VGA
View Details

Portable RISC-V Cores

Bluespec's Portable RISC-V Cores offer a versatile and adaptable solution for developers seeking cross-platform compatibility with support for FPGAs from Achronix, Xilinx, Lattice, and Microsemi. These cores come with support for operating systems like Linux and FreeRTOS, providing developers with a seamless and open-source toolset for application development. By leveraging Bluespec’s extensive compatibility and open-source frameworks, developers can benefit from efficient, versatile RISC-V application deployment.

Bluespec
AMBA AHB / APB/ AXI, CPU, Peripheral Controller, Processor Core Dependent, Safe Ethernet
View Details

SiFive Performance

The SiFive Performance family represents a new benchmark in computing efficiency and performance. These RISC-V processors are aimed at addressing the demands of modern workloads, including web servers, multimedia processing, networking, and storage in data centers. With its high throughput, out-of-order cores ranging from three-wide to six-wide configurations, and dedicated vector engines for AI tasks, the SiFive Performance family promises remarkable energy and area efficiency. This not only enables high compute density but also reduces costs and energy consumption, making it an optimal choice for contemporary data center applications. A hallmark of the Performance family is its scalability for various applications, including mobile, consumer, and edge infrastructure. The portfolio includes a range of models like the six-wide, out-of-order P870 core, capable of scaling up to a 256-core cluster, and the P650, known for its four-issue, out-of-order architecture supporting up to a 16-core cluster. Furthermore, the family includes the P550 series, which sets standards with its three-issue, out-of-order design, offering superior performance in an energy-efficient footprint. In addition to delivering exceptional computing power, the SiFive Performance processors excel in scenarios where power, footprint, and cost are crucial factors. With the potential for configurations up to 512 cores, these processors are designed to meet the growing demand for high-performance computing across multiple sectors.

SiFive, Inc.
CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

Yuzhen 600 RFID Chip

The Yuzhen 600 RFID Chip embodies T-Head's expertise in developing ultra-efficient integrated circuits tailored for RFID applications, where low-power and high-performance standards are paramount. This chip is engineered to streamline RFID processes, ensuring swift and accurate reading and writing of tags even in dense environments. By adopting compact design principles, the Yuzhen 600 minimizes energy consumption while maximizing throughput speeds, ensuring extended operational life for applications in supply chain management and logistics. Equipped with sophisticated RF processing capabilities, the chip supports various RFID standards, making it versatile for global applications. Its robust design guarantees resilient performance under diverse environmental conditions, thereby enhancing reliability in critical operations. This adaptability extends to encryption features, ensuring data security and integrity during transactions and data exchanges. T-Head's Yuzhen 600 is optimized for integration into a wide range of applications, from retail inventory management to industrial asset tracking, offering businesses a dependable tool to enhance operational efficiency and reduce costs. Its presence in T-Head's diverse product portfolio highlights a commitment to advancing connected technologies.

T-Head
AMBA AHB / APB/ AXI, Bluetooth, Cryptography Cores, IoT Processor, Photonics, PowerPC, Processor Core Dependent, RF Modules, Sensor, USB
View Details

Calibrator for AI-on-Chips

The ONNC Calibrator is engineered to ensure high precision in AI System-on-Chips using post-training quantization (PTQ) techniques. This tool enables architecture-aware quantization, which helps maintain 99.99% precision even with fixed-point architecture, such as INT8. Designed for diverse heterogeneous multicore setups, it supports multiple engines within a single chip architecture and employs rich entropy calculation techniques. A major advantage of the ONNC Calibrator is its efficiency; it significantly reduces the time required for quantization, taking only seconds to process standard computer vision models. Unlike re-training methods, PTQ is non-intrusive, maintains network topology, and adapts based on input distribution to provide quick and precise quantization suitable for modern neural network frameworks such as ONNX and TensorFlow. Furthermore, the Calibrator's internal precision simulator uses hardware control registers to maintain precision, demonstrating less than 1% precision drop in most computer vision models. It adapts flexibly to various hardware through its architecture-aware algorithms, making it a powerful tool for maintaining the high performance of AI systems.
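Post-training quantization of the kind described, deriving a fixed-point scale from the observed activation distribution with no re-training, can be sketched in a few lines. This is a generic symmetric INT8 max-calibration example, not the ONNC Calibrator's actual (entropy-based, architecture-aware) algorithm:

```python
import random

def calibrate_scale(activations, num_bits=8):
    """Pick a symmetric scale so the largest observed |value| maps onto the int range."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for INT8
    return max(abs(a) for a in activations) / qmax

def quantize_dequantize(x, scale):
    """Round one value onto the INT8 grid, then map back to float to measure the error."""
    q = max(-128, min(127, round(x / scale)))
    return q * scale

# Stand-in calibration data: real PTQ would capture activations from a model
# running representative inputs, exactly as described above.
random.seed(0)
acts = [random.gauss(0.0, 1.0) for _ in range(10_000)]
scale = calibrate_scale(acts)
mean_err = sum(abs(quantize_dequantize(a, scale) - a) for a in acts) / len(acts)
assert mean_err < scale  # mean rounding error stays below one quantization step
```

The key property PTQ relies on is visible here: the scale is chosen purely from observed input statistics, so the network topology and weights are untouched, which is why calibration takes seconds rather than the hours a re-training-based method would need.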

Skymizer
All Foundries
All Process Nodes
AI Processor, Coprocessor, Cryptography Cores, DDR, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

Atrevido

The Atrevido is a 64-bit RISC-V core designed for out-of-order processing, providing exceptional performance for applications needing high bandwidth and low latency. It features a 2/3/4-wide configurable out-of-order issue and completion mechanism, ensuring seamless handling of complex, memory-intensive operations. The core is multiprocessor-ready, equipped with direct hardware support for unaligned memory accesses, and supports various RISC-V extensions for enhanced functionality. This IP is particularly adept at handling machine learning workloads, key-value stores, and recommendation systems, thanks to its integration with Semidynamics' Gazzillion Misses™ technology. This technology enables the Atrevido core to sustain full memory bandwidth even with smaller processing cores, minimizing the need for a large silicon footprint. With support for the RISC-V Vector Specification 1.0, Atrevido is vector-ready, allowing dense encoding of computational instructions and efficient handling of sparse tensor weights. Additional features include Linux readiness with full MMU support and compatibility with cache-coherent multiprocessing environments. This makes it well suited for constructing systems-on-chip that require numerous cores, delivering scalability and performance tailored to extensive processing needs.

Semidynamics
AI Processor, AMBA AHB / APB/ AXI, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, WMA
View Details

iCan PicoPop® System on Module

The iCan PicoPop® System on Module (SOM) by Oxytronic is an ultra-compact computing solution designed for high-performance and space-constrained environments within the aerospace industry. Utilizing the Xilinx Zynq UltraScale+ MPSoC, this module delivers significant processing power ideal for complex signal processing and other demanding tasks. This module's design caters to embedded system applications, offering robust capabilities in avionics where size, weight, and power efficiency are critical considerations. It provides core functionalities that support advanced video processing, making it a pivotal component for those requiring cutting-edge technological support in minimal form factors. Oxytronic ensures that the iCan PicoPop® maintains compatibility with a wide range of peripherals, facilitating easy integration into existing systems. Its architectural innovation signifies Oxytronic's understanding of aviation challenges, providing solutions that are both technically superior and practically beneficial for modern aerospace applications.

Oxytronic
Building Blocks, CPU, DSP Core, Fibre Channel, LCD Controller, Processor Core Dependent, Processor Core Independent, Standard cell, Wireless Processor
View Details

RV32EC_P2 Processor Core

The RV32EC_P2 Processor Core is a compact, high-efficiency RISC-V processor designed for low-power, small-scale embedded applications. Featuring a 2-stage pipeline architecture, it efficiently executes trusted firmware. It supports the RISC-V RV32E base instruction set, complemented by compression and optional integer multiplication instructions, greatly optimizing code size and runtime efficiency. This processor accommodates both ASIC and FPGA workflows, offering tightly-coupled memory interfaces for robust design flexibility. With a simple machine-mode architecture, the RV32EC_P2 ensures swift data access. It boasts extended compatibility with AHB-Lite and APB interfaces, allowing seamless interaction with memory and I/O peripherals. Designed for enhanced power management, it features an interrupt system and clock-gating abilities, effectively minimizing idle power consumption. Developers can benefit from its comprehensive toolchain support, ensuring smooth firmware and virtual prototype development through platforms such as the ASTC VLAB. Further distinguished by its vectored interrupt system and support for application-specific instruction sets, the RV32EC_P2 is adaptable to various embedded applications. Enhancements include wait-for-interrupt commands for reduced power usage during inactivity and multiple timer interfaces. This versatility, along with integrated GNU and Eclipse tools, makes the RV32EC_P2 a prime choice for efficient, low-power technology integrations.

IQonIC Works
Audio Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Tensix Neo

Tensix Neo by Tenstorrent is a processor designed for demanding AI computations. Built to handle large-scale machine learning tasks, it balances performance with efficiency and gives developers a flexible platform for computationally intensive AI algorithms. It handles multiple data types and supports a variety of machine learning models, delivering high precision and speed for both inference and training. Its architecture is optimized for parallel computing, enabling concurrent data processing that accelerates workloads, while high memory bandwidth keeps data-intensive tasks from overwhelming system resources and maintains a balanced load across operations. Suitable for a broad range of deep learning and AI applications, Tensix Neo integrates into environments from data centers to cloud platforms, reflecting Tenstorrent's focus on versatile, next-generation solutions.

Tenstorrent
AI Processor, CPU, DSP Core, IoT Processor, Network on Chip, Processor Core Dependent, Processor Core Independent
View Details

VisualSim Architect

VisualSim Architect offers a rich modeling and simulation environment for exploring the performance, power, and functionality of entire systems before they are built. Through system-level design, users can simulate scenarios and configurations to understand potential limitations and opportunities, producing accurate assessments of system behavior under diverse conditions and reducing the risk of unforeseen complications during development. The tool enables rapid prototyping through virtual representations of hardware and software components and supports exploration across domains such as hardware-software partitioning and advanced driver assistance. Users can model scenarios such as credit-based arbitration in data movement or radar systems for aircraft interactions, and a graphical interface helps engineers visualize and manage system resources efficiently. A batch-mode simulation feature supports large-scale analysis for parameter optimization, ensuring robust designs under varied operational conditions. VisualSim Architect runs on Windows, Linux, and macOS; this platform independence, together with support for open-source XML libraries for building detailed simulations, keeps it versatile across hardware infrastructures and deployment requirements. It is complemented by a robust academic program that broadens access to system engineering exploration for future engineers.

Mirabilis Design
CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

Wormhole

The Wormhole processor from Tenstorrent is engineered for advanced AI applications, emphasizing performance and scalability. It is an integral part of the Galaxy system, in which 32 chips are interconnected in an Ethernet-based mesh topology that enables rapid data exchange in high-density computing environments. Equipped with Tensix cores, Wormhole attains up to 9.3 PetaFLOPS, well suited to AI model training and inference. It supports several floating-point formats, including FP8, FP16, and FP32, for flexibility and precision in calculations, and a substantial SRAM capacity ensures fast data access. Beyond raw compute, Wormhole integrates a network-on-chip architecture that improves communication between cores, letting developers maximize throughput while maintaining the power efficiency essential for large-scale AI deployments. High-speed interfaces such as Ethernet ease integration into existing infrastructure, and support for dynamic partitioning of compute resources adds deployment flexibility. Its scalability and efficiency make it a robust choice for next-generation AI infrastructure in both edge and cloud deployments.

Tenstorrent
AI Processor, CPU, IoT Processor, Network on Chip, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details