Processor Core Dependent
In the realm of semiconductor IP, the Processor Core Dependent category encompasses a variety of intellectual properties specifically designed to enhance and support processor cores. These IPs are tailored to work in harmony with core processors to optimize their performance, adding value by reducing time-to-market and improving efficiency in modern integrated circuits. This category is crucial for the customization and adaptation of processors to meet specific application needs, addressing both performance optimization and system complexity management.
Processor Core Dependent IPs are integral components, typically found in applications that require robust data processing capabilities such as smartphones, tablets, and high-performance computing systems. They can also be implemented in embedded systems for automotive, industrial, and IoT applications, where precision and reliability are paramount. By providing foundational building blocks that are pre-verified and configurable, these semiconductor IPs significantly simplify the integration process within larger digital systems, enabling a seamless enhancement of processor capabilities.
Products in this category may include cache controllers, memory management units, security hardware, and specialized processing units, all designed to complement and extend the functionality of processor cores. These solutions enable system architects to leverage existing processor designs while incorporating cutting-edge features and optimizations tailored to specific application demands. Such customizations can significantly boost the performance, energy efficiency, and functionality of end-user devices, translating into better user experiences and competitive advantages.
In essence, Processor Core Dependent semiconductor IPs represent a strategic approach to processor design, providing a toolkit for customization and optimization. By focusing on interdependencies within processing units, these IPs allow for the creation of specialized solutions that cater to the needs of various industries, ensuring the delivery of high-performance, reliable, and efficient computing solutions. As the demand for sophisticated digital systems continues to grow, the importance of these IPs in maintaining competitive edge cannot be overstated.
The Metis AIPU PCIe AI Accelerator Card is engineered for developers demanding superior AI performance. With its quad-core Metis AIPU, this card delivers up to 214 TOPS, tackling challenging vision applications with unmatched efficiency. The PCIe card is designed with user-friendly integration in mind, featuring the Voyager SDK software stack that accelerates application deployment. Offering impressive processing speeds, the card supports up to 3,200 FPS for ResNet-50 models, providing a competitive edge for demanding AI tasks. Its design ensures it meets the needs of a wide array of AI applications, allowing for scalability and adaptability in various use cases.
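As a rough sanity check on how the headline figures relate, here is a back-of-envelope calculation; the per-frame operation count for ResNet-50 is an assumption (~4.1 GMACs ≈ 8.2 GOPs at 224×224 input), not a vendor figure:

```python
# How the claimed 3,200 FPS on ResNet-50 relates to the 214 TOPS peak rating.
peak_tops = 214
fps = 3200
ops_per_frame = 8.2e9  # assumption: ~4.1 GMACs ~= 8.2 GOPs per ResNet-50 inference

effective_tops = fps * ops_per_frame / 1e12
print(f"sustained: {effective_tops:.1f} TOPS")                # ~26 TOPS on this model
print(f"fraction of peak: {effective_tops / peak_tops:.0%}")  # ~12%
```

Peak TOPS and model-level throughput measure different things, so a gap of this size between the two is expected rather than a red flag.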
The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available both as IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it is fully compliant with the RISC-V RVA23 profile and combines high instructions per clock (IPC) with a power-efficient architecture. Comprising multiple core clusters, this CPU delivers superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.
The Quadric Chimera General Purpose Neural Processing Unit (GPNPU) delivers unparalleled performance for AI workloads, characterized by its ability to handle diverse and complex tasks without requiring separate processors for different operations. Designed to unify AI inference and traditional computing processes, the GPNPU supports matrix, vector, and scalar tasks within a single, cohesive execution pipeline. This design not only simplifies the integration of AI capabilities into system-on-chip (SoC) architectures but also significantly boosts developer productivity by allowing them to focus on optimizing rather than partitioning code. The Chimera GPNPU is highly scalable, supporting a wide range of operations across all market segments, including automotive applications with its ASIL-ready versions. With a performance range from 1 to 864 TOPS, it excels in running the latest AI models, such as vision transformers and large language models, alongside classic network backbones. This flexibility ensures that devices powered by Chimera GPNPU can adapt to advancing AI trends, making them suitable for applications that require both immediate performance and long-term capability. A key feature of the Chimera GPNPU is its fully programmable nature, making it a future-proof solution for deploying cutting-edge AI models. Unlike traditional NPUs that rely on hardwired operations, the Chimera GPNPU takes a software-driven approach and is delivered in source RTL form, making it a versatile option for inference in mobile, automotive, and edge computing applications. This programmability allows for easy updating and adaptation to new AI model operators, maximizing the lifespan and relevance of chips that utilize this technology.
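To illustrate the partitioning burden that a unified pipeline removes, here is a small conceptual sketch; the operator names and the fixed-function "supported set" are hypothetical, not Quadric's actual toolchain behavior:

```python
# Conceptual contrast only: the graph-partitioning step a unified pipeline removes.
NPU_SUPPORTED = {"conv2d", "relu", "matmul"}          # a fixed-function NPU's op set
graph = ["conv2d", "relu", "deform_attn", "matmul"]   # model with one exotic operator

# Heterogeneous NPU + DSP/CPU flow: unsupported ops force a fallback boundary.
segments, current, on_npu = [], [], True
for op in graph:
    fits = op in NPU_SUPPORTED
    if fits != on_npu and current:
        segments.append((on_npu, current))
        current = []
    on_npu = fits
    current.append(op)
segments.append((on_npu, current))
print(segments)
# [(True, ['conv2d', 'relu']), (False, ['deform_attn']), (True, ['matmul'])]

# GPNPU flow: matrix, vector, and scalar ops share one pipeline -- a single segment.
print([(True, graph)])
```

Each fallback boundary in the heterogeneous flow costs a data transfer and a synchronization point; a fully programmable pipeline avoids creating them in the first place.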
The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems using software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation. This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems. Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.
The Metis AIPU M.2 Accelerator Module is designed for devices that require high-performance AI inference in a compact form factor. Powered by a quad-core Metis AI Processing Unit (AIPU), this module optimizes power consumption and integration, making it ideal for AI-driven applications. With a dedicated memory of 1 GB DRAM, it enhances the capabilities of vision processing systems, providing significant boosts in performance for devices with Next Generation Form Factor (NGFF) M.2 sockets. Ideal for use in computer vision systems and more, it offers hassle-free integration and evaluation with Axelera's Voyager SDK. This accelerator module is tailored for any application seeking to harness the power of AI processing efficiently. The Metis AIPU M.2 Module streamlines the deployment of AI applications, ensuring high performance with reduced power consumption.
The aiWare NPU (Neural Processing Unit) by aiMotive is a high-performance hardware solution tailored specifically for automotive AI applications. It is engineered to accelerate inference tasks for autonomous driving systems, ensuring excellent performance across a variety of neural network workloads. aiWare delivers significant flexibility and efficiency, capable of scaling from basic Level 2 applications to complex multi-sensor Level 3+ systems. Achieving up to 98% efficiency, aiWare's design focuses on minimizing power utilization while maximizing core performance. It supports a broad spectrum of neural network architectures, including convolutional neural networks, transformers, and recurrent networks, making it suitable for diverse AI tasks in the automotive sphere. The NPU's architecture allows for minimal external memory access, thanks to its highly efficient dataflow design that capitalizes on on-chip memory caching. With a robust toolkit known as aiWare Studio, engineers can efficiently optimize neural networks without in-depth knowledge of low-level programming, streamlining development and integration efforts. The aiWare hardware is also compatible with V2X communication and advanced driver assistance systems, adapting to varied operational needs with considerable flexibility. Its comprehensive support for automotive safety standards further cements its reputation as a reliable choice for integrating artificial intelligence into next-generation vehicles.
The SAKURA-II is a cutting-edge AI accelerator that combines high performance with low power consumption, designed to efficiently handle multi-billion-parameter models for generative AI applications. It is particularly suited for tasks that demand real-time AI inferencing with minimal batching, making it ideal for edge deployments. With a typical power draw of 8 watts and a compact footprint, the SAKURA-II achieves more than twice the AI compute efficiency of comparable solutions. This AI accelerator supports next-generation applications by providing up to 4x more DRAM bandwidth than alternatives, crucial for processing complex vision tasks and large language models (LLMs). The hardware offers software-enabled mixed precision that achieves near-FP32 accuracy, while a unique sparse computing feature optimizes memory usage. Its memory architecture supports up to 32 GB of DRAM, providing ample capacity for intensive AI workloads. The SAKURA-II's modular design allows it to be used in multiple form factors, addressing the diverse needs of modern computing tasks such as those found in smart cities, autonomous robotics, and smart manufacturing. Its adaptability is further enhanced by runtime-configurable data paths, allowing the device to optimize task scheduling and resource allocation dynamically. These features are powered by the Dynamic Neural Accelerator engine, ensuring efficient computation and energy management.
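For a sense of what 32 GB of DRAM accommodates, a quick weights-only footprint estimate; the 7B-parameter model size is an illustrative assumption, and activations plus KV-cache need additional room:

```python
# Weights-only footprint of an illustrative 7B-parameter model vs. the 32 GB ceiling.
params = 7e9  # assumption: a "multi-billion parameter" generative model
for fmt, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{fmt}: {params * bytes_per_param / 2**30:.1f} GiB")
# fp16: 13.0 GiB, int8: 6.5 GiB, int4: 3.3 GiB -- all comfortably within 32 GB
```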
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, with no dependence on external CPUs or an internet connection. With its ability to run models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, GenAI v1's adept memory usage reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that lets users balance performance against hardware cost. Its compatibility with a wide range of transformer-based models, including proprietary modifications, secures GenAI v1's place in sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip supports both vanilla and quantized AI models, sustaining the computation speeds real-time applications demand without compromising accuracy. By prioritizing ease of integration and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
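Token generation in LLM decoding is typically memory-bandwidth bound, which is why tokens per unit of bandwidth is the metric emphasized here. A hedged estimate, assuming a 3B-parameter Llama 3.2 variant and a single ~12.8 GB/s LPDDR4 channel (neither figure is from the product text):

```python
# Upper bound on decode speed when generation is memory-bandwidth bound:
# every token streams the full weight set from DRAM once.
model_params = 3e9      # assumption: Llama 3.2 3B variant
bits_per_param = 4      # 4-bit quantization, per the product description
bandwidth_gbs = 12.8    # assumption: one 32-bit LPDDR4-3200 channel

bytes_per_token = model_params * bits_per_param / 8
tokens_per_s = bandwidth_gbs * 1e9 / bytes_per_token
print(f"{tokens_per_s:.1f} tokens/s ceiling")   # ~8.5
```

Halving the bits per weight doubles this ceiling, which is why a tokens-per-bandwidth metric rewards aggressive quantization and careful memory scheduling.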
The SiFive Intelligence X280 is a high-performance CPU core tailored for advanced AI and ML workloads, featuring sophisticated vector and matrix compute capabilities. It offers developers a vast toolkit for crafting efficient AI solutions that can be deployed at the edge, thanks to a focus on providing both high performance and scalability. The X280’s architecture allows for the integration of custom accelerator engines, making it highly adaptable to shifting demands in AI technology. Boasting advanced vector compute engines, the X280 is designed to handle large datasets efficiently, ideal for inference tasks and model training. This processor stands out with its high bandwidth interconnects, facilitating seamless data flow and control across custom computing engines integrated using RISC-V custom instructions. Its capabilities mark it as a versatile choice for complex machine learning applications spanning various industry segments. The SiFive Intelligence X280 supports a broad range of tools and frameworks designed to simplify deployment and integration. By using a RISC-V architecture, it promotes open innovation and reduces dependency on traditional proprietary models. This freedom translates into increased optimization opportunities, enabling rapid iteration and development cycles critical for maintaining a competitive edge in AI technology.
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. This accelerator executes large language models in real time using advanced quantization techniques such as Q4_K and Q5_K, enhancing AI inference efficiency in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, with no reliance on external networks or cloud services. Its design combines strong computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring parameters like model scale, inference speed, and power consumption to exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to run multiple transformer-based models and keep confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
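The claimed 75% footprint reduction is consistent with moving from 16-bit weights to ~4-bit quantization, as the following arithmetic shows (the model size is illustrative, and Q4_K's small per-block scale overhead is ignored):

```python
# Where a 75% footprint reduction comes from: 16-bit weights -> ~4-bit weights.
params = 7e9                       # illustrative model size (assumption)
fp16_gb = params * 2.0 / 1e9       # 14.0 GB at 16 bits/weight
q4_gb   = params * 0.5 / 1e9       #  3.5 GB at ~4 bits/weight (Q4_K-style)
print(f"reduction: {1 - q4_gb / fp16_gb:.0%}")   # 75%
```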
Eliyan's NuLink Die-to-Die (D2D) PHY products are designed to provide high-performance, low-power connectivity between chips, or 'chiplets,' in a system. Using standard organic laminate packaging, these IP cores maintain power and performance levels that would traditionally require advanced packaging techniques like silicon interposers. This eliminates the need for such technology, allowing cost-effective system design and reducing thermal, test, and production challenges while maintaining performance. Eliyan’s approach enables flexibility, allowing a broad substrate area that supports more chiplets in the package, significantly boosting performance and power metrics. These D2D PHY cores accommodate various industry standards, including UCIe and BoW, providing configurations tailor-made for optimal bump map layout, thus enhancing overall system efficiency.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The Hanguang 800 AI Accelerator from T-Head Semiconductor delivers powerful capabilities aimed at boosting artificial intelligence applications across a range of industries. This accelerator is designed to manage complex neural network computations, facilitating real-time AI processing that meets today’s evolving technological demands. It is ideally suited for tasks such as machine learning model training and inferencing, providing significant improvements in both speed and efficiency. The Hanguang 800 features a sophisticated architecture that harnesses multiple processing units, optimized specifically for neural network computations. This structure allows for parallel processing, significantly accelerating data handling and computation tasks crucial to AI applications. This ability to rapidly process vast amounts of information makes it a preferred choice for data centers and enterprises specializing in AI technologies. Built to integrate smoothly with existing infrastructures, the Hanguang 800 accommodates various AI frameworks, enhancing development flexibility. Its robust design ensures high throughput and minimal latency in AI operations, making it highly suitable for intensive AI workloads. It provides an efficient, scalable solution for enterprises looking to enhance their AI capabilities and maintain a competitive edge in the dynamic tech landscape.
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.
RAIV is Siliconarts' general-purpose GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV's flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
Bluespec's Portable RISC-V Cores are designed to bring flexibility and extended functionality to FPGA platforms such as Achronix, Xilinx, Lattice, and Microsemi. They offer support for operating systems like Linux and FreeRTOS, making them versatile for various applications. These cores are accompanied by standard open-source development tools, which facilitate seamless integration and development processes. By utilizing these tools, developers can modify and enhance the cores to suit their specific needs, ensuring a custom fit for their projects. The portable cores are an excellent choice for developers looking to deploy RISC-V architecture across different FPGA platforms without being tied down to proprietary solutions. With Bluespec's focus on open-source, users can experience freedom in innovation and development without sacrificing performance or compatibility.
The Codasip RISC-V BK Core Series is designed to offer flexible and high-performance core options catering to a wide range of applications, from low-power tasks to intricate computational needs. This series achieves an optimal balance of power consumption and processing speed, making it suitable for applications demanding energy efficiency without compromising performance. These cores are fully RISC-V compliant and allow easy customization to specific needs by modifying the processor's architecture or instruction set through Codasip Studio. The BK Core Series streamlines the development of precise computing solutions, ideal for IoT edge devices and sensor controllers where both small area and low power are critical. Moreover, the BK Core Series supports architectural exploration, enabling users to optimize the core design specifically for their applications. This capability ensures that each core delivers the power, efficiency, and performance metrics required by modern technological solutions.
The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.
As an advanced solution, the SCR9 Processor Core is built to deliver top-of-the-line performance for enterprise and high-performance computing applications. Featuring a 12-stage dual-issue out-of-order pipeline, it caters to complex processing tasks with exceptional speed and accuracy. Supporting hypervisor capabilities and a virtual processing unit, the SCR9 core is designed for scalability, handling up to 16 cores. Its coherent cache system ensures swift and consistent data flow, enhancing overall system reliability and throughput. Targeted at applications across AI, data centers, and mobile devices, this core embodies flexibility and power. The combination of virtualization, comprehensive caching, and high computational performance makes SCR9 an optimal choice for environments where maintaining high efficiency and power balance is essential.
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interface support ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
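Taking the three NMP entries above together, here is a small sketch of how one might encode the listed family maxima and pick a part by TOPS budget; this is an illustrative selector, not vendor tooling:

```python
# Family maxima as listed in this section; all three parts expose three AXI4 interfaces.
NMP = {
    "NMP-350": {"tops": 1,  "sram_mb": 1,  "cpu": "RISC-V / Arm Cortex-M"},
    "NMP-550": {"tops": 6,  "sram_mb": 6,  "cpu": "RISC-V / Arm Cortex-M/A"},
    "NMP-750": {"tops": 16, "sram_mb": 16, "cpu": "RISC-V / Arm Cortex-R/A"},
}

def pick_nmp(required_tops: float) -> str:
    """Smallest family member meeting a TOPS budget (dict is ordered smallest-first)."""
    for name, spec in NMP.items():
        if spec["tops"] >= required_tops:
            return name
    raise ValueError("workload exceeds the NMP family range")

print(pick_nmp(4))   # NMP-550
```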
TTTech has developed an SAE AS6003 compliant chip core widely utilized in safety-critical systems. This TTP core facilitates reliable data communication across networks, meeting rigorous RTCA DO-254 / EUROCAE ED-80 standards. While initially designed for aviation, its capabilities extend to other safety-critical environments such as energy systems. The availability of this technology across various FPGA platforms illustrates its adaptability and importance in maintaining operational reliability in diverse industries.
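TTP is a time-triggered protocol: bus access follows a static TDMA schedule known to all nodes in advance, which is what makes its timing deterministic and analyzable for certification. A minimal sketch of the idea, with slot length and node order invented for illustration:

```python
# Static TDMA round of the kind time-triggered protocols are built on.
# Slot layout and timing are hypothetical, not the AS6003 schedule format.
SLOT_US = 250
ROUND = ["node_A", "node_B", "node_C", "node_A"]  # fixed transmit order per round

def transmitter_at(t_us: float) -> str:
    """Bus ownership is a pure function of global time -- no runtime arbitration."""
    slot = int(t_us // SLOT_US) % len(ROUND)
    return ROUND[slot]

print(transmitter_at(0))     # node_A
print(transmitter_at(600))   # node_C
```

Because every node can compute the same answer from the same clock, collisions and babbling transmitters are detectable by construction, a property safety-critical networks rely on.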
The Maverick-2 Intelligent Compute Accelerator (ICA) by Next Silicon represents a transformative leap in high-performance compute architecture. It seamlessly integrates into HPC systems with a pioneering software-defined approach that dynamically optimizes hardware configurations based on real-time application demands. This enables high efficiency and unparalleled performance across diverse workloads including HPC, AI, and other data-intensive applications. Maverick-2 harnesses a 5nm process technology, utilizing HBM3E memory for enhanced data throughput and efficient energy usage.

Built with developers in mind, Maverick-2 supports an array of programming languages such as C/C++, FORTRAN, and OpenMP without the necessity for proprietary stacks. This flexibility not only mitigates porting challenges but significantly reduces development time and costs. A distinguishing feature of Maverick-2 is its real-time telemetry capabilities that provide valuable insights into performance metrics, allowing for refined optimizations during execution.

The architecture supports versatile interfaces such as PCIe Gen 5 and offers configurations that accommodate complex workloads using either single or dual-die setups. Its intelligent algorithms autonomously identify computational bottlenecks to enhance throughput and scalability, thus future-proofing investments as computing demands evolve. Maverick-2's utility spans various sectors including life sciences, energy, and fintech, underlining its adaptability and high-performance capabilities.
The AI Inference Platform is designed to optimize the deployment of AI workloads across various applications. Built with cutting-edge IP and advanced design methodologies, it ensures cost efficiency and minimizes the risks associated with product development. The platform is engineered to capitalize on domain-specific architectures, offering a reduction in time and expense during the SoC creation process. This platform boasts pre-configured and robust components, refined through silicon-proven practices, delivering rapid hardware and software integration. Its flexible design is capable of addressing diverse AI applications, enhancing performance with lower power consumption and elevated reliability. By utilizing SEMIFIVE’s AI Inference Platform, developers can accelerate their projects with reduced time to market and enhanced design reusability. This is achieved through SEMIFIVE’s commitment to validating each component rigorously, ensuring high performance in AI-driven environments.
The Zhenyue 510 SSD Controller is a high-performance storage solution designed by T-Head Semiconductor for enterprise-grade applications. This controller is engineered to provide superior data management capabilities, enhancing solid-state drive (SSD) performance in demanding operational settings. The Zhenyue 510 integrates innovative algorithms to handle high-speed data processing efficiently, significantly reducing latency while increasing throughput. This SSD controller boasts robust compatibility with various SSD architectures, allowing for seamless integration into existing storage infrastructures. Its design includes advanced error correction and data integrity techniques, ensuring reliable data retention and transmission across storage networks. By optimizing read/write processes, the Zhenyue 510 enhances performance, making it ideal for use in data-intensive sectors such as cloud computing and enterprise storage. The controller is equipped with multiple channels, providing the flexibility to handle extensive data loads efficiently. Its architecture supports modern encryption standards for advanced data security, offering enterprises peace of mind when managing sensitive information. The Zhenyue 510's scalable design ensures it meets future storage demands, enabling businesses to expand their capabilities while maintaining operational efficiency.
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
ISPido on the VIP Board is tailored for Lattice Semiconductor's Video Interface Platform, providing a runtime solution optimized for delivering crisp, balanced images in real time. This solution offers two primary configurations: automatic deployment of optimal settings at startup, and a manual, menu-driven interface that lets users fine-tune settings such as gamma tables and convolution filters. Utilizing the CrossLink VIP Input Bridge with Sony IMX214 sensors and an ECP5-85 FPGA, it provides HD output in HDMI YCbCr format, ensuring high-quality image resolution and real-time calibration.
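The gamma tables and convolution filters exposed by the menu correspond to standard ISP operations; here is a minimal NumPy stand-in (not the ISPido API) showing both applied to a synthetic luma plane:

```python
import numpy as np

# Gamma correction via table lookup -- the software analogue of a hardware gamma LUT.
gamma = 2.2  # illustrative display gamma
lut = ((np.arange(256) / 255.0) ** (1 / gamma) * 255).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # fake luma plane
corrected = lut[frame]

# Naive 3x3 convolution (simple sharpen kernel), one multiply-accumulate per tap.
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=np.int32)
padded = np.pad(corrected.astype(np.int32), 1)
out = sum(kernel[i, j] * padded[i:i + 480, j:j + 640]
          for i in range(3) for j in range(3))
out = np.clip(out, 0, 255).astype(np.uint8)
```

In the hardware pipeline both stages run per pixel at line rate; the runtime menu effectively edits the LUT contents and kernel taps that such stages consume.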
The RISC-V SoC developed by Dyumnin Semiconductors is built around a 64-bit quad-core server-class RISC-V CPU, aiming to bridge various application needs with an integrated, holistic system design. Each subsystem of this SoC, from AI/ML capabilities to automotive and multimedia functionalities, is constructed to deliver optimal performance and streamlined operations. Designed as a reference model, this SoC enables quick adaptation and deployment, significantly reducing time-to-market for clients. The AI Accelerator subsystem pairs a custom central processing unit with a dedicated tensor flow unit to accelerate AI operations. In the multimedia domain, the SoC offers integration for HDMI, DisplayPort, MIPI, and other advanced graphics and audio technologies, ensuring versatile application across various multimedia requirements. Memory handling is another strength of this SoC, with support for protocols ranging from DDR and MMC to interfaces like ONFI and SD/SDIO, ensuring seamless connectivity with a wide array of memory modules. Moreover, the communication subsystem encompasses a broad spectrum of connectivity protocols, including PCIe, Ethernet, USB, and SPI, crafting a well-rounded solution for modern communication challenges. The automotive subsystem, offering CAN and CAN-FD protocols, further extends its utility into automotive connectivity.
AON1100 marks a significant leap in edge AI processing for voice and sensor operations. Meticulously engineered for energy efficiency, this chip operates on less than 260µW while achieving over 90% accuracy even in low signal-to-noise environments. It is perfect for always-on gadgets demanding constant reliability and precision, especially under challenging conditions.
The Veyron V1 is a high-performance RISC-V CPU designed to meet the rigorous demands of modern data centers and compute-intensive applications. This processor is tailored for cloud environments requiring extensive compute capabilities, offering substantial power efficiency while optimizing processing workloads. It provides comprehensive architectural support for virtualization and efficient task management with its robust feature set. Incorporating advanced RISC-V standards, the Veyron V1 ensures compatibility and scalability across a wide range of industries, from enterprise servers to high-performance embedded systems. Its architecture is engineered to offer seamless integration, providing an excellent foundation for robust, scalable computing designs. Equipped with state-of-the-art processing cores and enhanced vector acceleration, the Veyron V1 delivers unmatched throughput and performance management, making it suitable for use in diverse computing environments.
The Codasip L-Series DSP Core is tailored for applications that require efficient digital signal processing capabilities. Known for its adaptability and top-notch performance, this core is a prime choice for tasks demanding real-time processing and a high level of computational density. The L-Series embraces the RISC-V open standard with an enhanced design that allows for customizing the instruction set and leveraging unique microarchitectural features. Such adaptability is ideal for applications in industries where digital signal manipulation is critical, such as audio processing, telecommunications, and advanced sensor applications. Users are empowered through Codasip Studio to implement specific enhancements and modifications, aligning the core's capabilities with specialized operational requirements. This core not only promises high-speed processing but also ensures that resource allocation is optimized for each specific digital processing task.
The iCan PicoPop® is a miniaturized system on module (SOM) based on the Xilinx Zynq UltraScale+ Multi-Processor System-on-Chip (MPSoC). This advanced module is designed to handle sophisticated signal processing tasks, making it particularly suited for aeronautic embedded systems that require high-performance video processing capabilities. The module leverages the powerful architecture of the Zynq MPSoC, providing a robust platform for developing cutting-edge avionics and defense solutions. With its compact form factor, the iCan PicoPop® SOM offers unparalleled flexibility and performance, allowing it to seamlessly integrate into various system architectures. The high level of integration offered by the Zynq UltraScale+ MPSoC aids in simplifying the design process while reducing system latency and power consumption, providing a highly efficient solution for demanding applications. Additionally, the iCan PicoPop® supports advanced functionalities through its integration of programmable logic, multi-core processing, and high-speed connectivity options, making it ideal for developing next-generation applications in video processing and other complex avionics functions. Its modular design also allows for easy customization, enabling developers to tailor the system to meet specific performance and functionality needs, ensuring optimal adaptability for intricate aerospace environments. Overall, the iCan PicoPop® demonstrates a remarkable blend of high-performance computing capabilities and adaptable configurations, making it a valuable asset in the development of high-tech avionics solutions designed to withstand rigorous operational demands in aviation and defense.
aiData is designed to streamline the data pipeline for developing models for Advanced Driver-Assistance Systems and Automated Driving solutions. This automated system provides a comprehensive method of managing and processing data, from collection through curation, annotation, and validation. It significantly reduces the time required for data processing by automating many labor-intensive tasks, enabling teams to focus more on development rather than data preparation. The aiData platform includes sophisticated tools for recording, managing, and annotating data, ensuring accuracy and traceability through all stages of the MLOps workflow. It supports the creation of high-quality training datasets, essential for developing reliable and effective AI models. The platform's capabilities extend beyond basic data processing by offering advanced features such as versioning and metrics analysis, allowing users to track data changes over time and evaluate dataset quality before training. The aiData Recorder feature ensures high-quality data collection tailored to diverse sensor configurations, while the Auto Annotator quickly processes data for a variety of objects using AI algorithms, delivering superior precision levels. These features are complemented by aiData Metrics, which provide valuable insights into dataset completeness and adequacy in covering expected operational domains. With seamless on-premise or cloud deployment options, aiData empowers global automotive teams to collaborate efficiently, offering all necessary tools for a complete data management lifecycle. Its integration versatility supports a wide array of applications, helping improve the speed and effectiveness of deploying ADAS models.
ISPido offers a comprehensive set of IP cores focused on high-resolution image signal processing and tuning across multiple devices and platforms, including CPU, GPU, VPU, FPGA, and ASIC technologies. Its flexibility is a standout feature, accommodating ultra-low power devices as well as systems exceeding 8K resolution. Designed for devices where power efficiency and high-quality image processing are paramount, ISPido adapts to a range of hardware architectures to deliver optimal image quality and processing capabilities. The IP has been widely adopted in various applications, making it a cornerstone for industries requiring advanced image calibration and processing capabilities.
The pPLL02F Family from Perceptia Devices offers a collection of versatile all-digital Fractional-N PLLs. These PLLs are compact yet powerful, perfect for driving moderate-speed microprocessors or other logic blocks in SoC designs. Boasting a low jitter below 18 picoseconds RMS, they are available across several process nodes ranging from 5nm to 40nm, making them highly adaptable. This family supports a variety of clocking applications, delivering output frequencies up to 2GHz and integrating seamlessly with complex SoC systems due to their small die footprint and power-efficient designs.
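A fractional-N PLL synthesizes f_out = f_ref × (N + F / 2^k), letting a fixed reference hit arbitrary output frequencies. The sketch below plugs in hypothetical divider settings; the reference frequency and fraction width are assumptions, not datasheet values:

```python
# Fractional-N synthesis: f_out = f_ref * (N + F / 2**k).
f_ref = 25e6                                   # assumed 25 MHz reference
n_int, frac, frac_bits = 76, 13_421_773, 26    # effective divide ratio ~76.2

f_out = f_ref * (n_int + frac / 2**frac_bits)
print(f"{f_out / 1e9:.4f} GHz")  # ~1.9050 GHz, inside the family's 2 GHz output range
```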
Specially engineered for the automotive industry, the NA Class IP by Nuclei complies with the stringent ISO26262 functional safety standards. This processor is crafted to handle complex automotive applications, offering flexibility and rigorous safety protocols necessary for mission-critical transportation technologies. Incorporating a range of functional safety features, the NA Class IP is equipped to ensure not only performance but also reliability and safety in high-stakes vehicular environments.
Spectral CustomIP encompasses an expansive suite of specialized memory architectures, tailored for diverse integrated circuit applications. Known for breadth in memory compiler designs, Spectral offers solutions like Binary and Ternary CAMs, various Multi-Ported memories, Low Voltage SRAMs, and advanced cache configurations. These bespoke designs integrate either foundry-standard or custom-designed bit cells providing robust performance across varied operational scenarios. The CustomIP products are engineered for low dynamic power usage and high density, utilizing Spectral’s Memory Development Platform. Available in source code form, these solutions offer users the flexibility to modify designs, adapt them for new technologies, or extend capabilities—facilitating seamless integration within standard CMOS processes or more advanced SOI and embedded Flash processes. Spectral's proprietary SpectralTrak technology enhances CustomIP with precise environmental monitoring, ensuring operational integrity through real-time Process, Voltage, and Temperature adjustments. With options like advanced compiler features, multi-banked architectures, and standalone or compiler instances, Spectral CustomIP suits businesses striving to distinguish their IC offerings with unique, high-performance memory solutions.
TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as the QUIRE, which allows extremely precise computations across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for enhancing field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable-precision computing tasks, facilitating the transition towards next-generation arithmetic. TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
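The value of a quire-style exact accumulator is easiest to see on a dot product with cancellation, where per-step float32 rounding silently drops terms. In this sketch Python's Fraction serves as a software stand-in for QUIRE's exact accumulation:

```python
import struct
from fractions import Fraction

def f32(x: float) -> float:
    """Round to nearest float32, mimicking a 32-bit FPU after every operation."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Dot product with cancellation: the small terms vanish under float32 rounding.
a = [1e8, 1.0, 1.0, -1e8]
b = [1.0, 1.0, 1.0, 1.0]

acc = f32(0.0)
for x, y in zip(a, b):
    acc = f32(acc + f32(x * y))
print(acc)           # 0.0 -- both 1.0 terms were absorbed by rounding

# Exact accumulation (stand-in for the QUIRE): round once, at the very end.
exact = sum(Fraction(x) * Fraction(y) for x, y in zip(a, b))
print(float(exact))  # 2.0
```

A hardware quire does the same thing with a wide fixed-point register, deferring all rounding to a single final conversion regardless of vector length.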
Tachyum's Prodigy Universal Processor marks a significant milestone as it combines the functionalities of Central Processing Units (CPUs), General-Purpose Graphics Processing Units (GPGPUs), and Tensor Processing Units (TPUs) into a single cohesive architecture. This groundbreaking design is tailored to meet the escalating demands of artificial intelligence, high-performance computing, and hyperscale data centers by offering unparalleled performance, energy efficiency, and high utilization rates. The Prodigy processor not only tackles common data center challenges like elevated power consumption and stagnating processor performance but also offers a robust solution to enhance server utilization and reduce the carbon footprint of massive computational installations. Notably, it thrives on a simplified programming model grounded in coherent multiprocessor architecture, thereby enabling seamless execution of an array of AI disciplines like Explainable AI, Bio AI, and deep machine learning within a single hardware platform.
The Trifecta-GPU series by RADX is a flagship family of COTS PXIe/CPCIe GPU modules that leverage the power of NVIDIA RTX A2000 Embedded GPUs. Aimed at high-complexity applications in modular test and measurement (T&M) and electronic warfare (EW), these modules provide robust compute acceleration. With peak performance metrics reaching 8.3 FP32 TFLOPS, the Trifecta-GPU modules are engineered to handle demanding tasks in signal processing and machine/deep learning inference. These products are designed for compatibility with MATLAB, Python, and C/C++, ensuring ease of integration across different programming environments. Additionally, the Trifecta-GPU configuration supports PCIe Gen 4 x8 interfaces alongside miniDP outputs, catering to high-resolution multi-monitor setups.
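Given 8.3 FP32 TFLOPS behind a PCIe Gen 4 x8 link (~16 GB/s usable, an approximation), a quick roofline-style break-even shows why data should stay resident on the module:

```python
# Arithmetic intensity needed before the PCIe link, not the GPU, limits throughput.
peak_flops = 8.3e12        # FP32 peak from the product description
pcie_bytes_per_s = 16e9    # PCIe Gen 4 x8, ~16 GB/s usable (assumption)

break_even = peak_flops / pcie_bytes_per_s
print(f"{break_even:.0f} FLOPs per byte moved over PCIe")   # ~519
```

Kernels below that intensity are link-bound, which is why streaming signal-processing and inference pipelines keep intermediate buffers in GPU memory rather than round-tripping over the bus.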
The RISC-V Processor Core developed by Fraunhofer IPMS is a versatile processor designed for the flexible demands of modern digital systems. It adheres to the open RISC-V architecture, ensuring a customizable and extendable computing platform. This processor core is ideal for applications requiring low-power consumption without sacrificing processing power, making it suitable for IoT devices and embedded systems. Built with a focus on energy efficiency and speed, the RISC-V core is capable of executing complex operations at rapid speeds, making it a reliable choice for time-sensitive tasks and high-performance computations. It supports a wide range of data processing capabilities, delivering optimized performance for various applications, from consumer electronics to advanced automotive systems. With its open-source foundation, this processor core allows for extensive customization, fostering innovation and adaptability in design processes. By supporting seamless integration into various system architectures, the RISC-V core ensures compatibility and scalability, crucial for modern technological advancements.
The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.
The SiFive Performance series represents a leap forward in computing efficiency, featuring RISC-V processors designed for peak performance and high throughput across various applications. These 64-bit out-of-order cores come equipped with up to 256-bit vectors, optimizing them for data center workloads, web services, multimedia processing, and consumer electronics like smart devices. The Performance series balances cutting-edge energy efficiency with robust core capabilities, making it suited for AI workloads while minimizing power and spatial requirements. The SiFive Performance family’s design includes flexible core configurations that empower users to mix and match performance levels according to their specific needs. This approach results in streamlined and powerful processors suitable for a broad spectrum of applications, ranging from mobile devices to complex data center infrastructure. SiFive’s utilization of vector engines enhances AI applications, offering significant computational capability without imposing additional hardware demands. With the ability to expand up to 256 cores and integrate vector computing capabilities, SiFive's Performance series provides a high degree of scalability. Users benefit from enhanced performance density, allowing for the development of next-generation applications that require high levels of processing power within limited physical constraints. This range is an excellent choice for developers looking for efficient, scalable solutions tailored to diverse operational requirements.
The silicon IP Platform for Low-Power IoT by Low Power Futures integrates pre-validated, configurable building blocks tailored for IoT device creation. It provides a turnkey solution to accelerate product development, incorporating options to employ both Arm and RISC-V processors. With a focus on reducing energy consumption, the platform is prepared for various applications, ensuring a seamless transition for products from conception to market. The platform is crucial for developing smart IoT solutions that require secure and reliable wireless communications across industries like healthcare, smart home, and industrial automation.
The NI Class RISC-V CPU IP caters to communication, video processing, and AI applications, providing a balanced architecture for intensive data handling and processing capabilities. With a focus on high efficiency and flexibility, this processor supports advanced data crunching and networking applications, ensuring that systems run smoothly and efficiently even when managing complex algorithms. The NI Class upholds Nuclei's commitment to providing versatile solutions in the evolving tech landscape.
The Neural Network Accelerator by Gyrus AI is an advanced compute solution specially optimized for neural network applications. It features native graph processing capabilities that significantly enhance the computational efficiency of AI models. This IP delivers 30 TOPS/W, offering exceptional performance while requiring significantly fewer clock cycles than comparable systems.

Moreover, the architecture consumes 10-20 times less power, benefiting from a low-memory-usage configuration. This efficiency is further highlighted by the IP's ability to achieve an 80% utilization rate across various model structures, which translates into a die area 8 to 10 times smaller than conventional designs.

Gyrus AI's Neural Network Accelerator also comes with software tools tailored to run neural networks on the platform, making it a practical choice for edge computing applications. It not only supports large-scale AI computations but also minimizes power consumption and space requirements, making it ideal for high-performance environments.